diff --git a/website/src/pages/en/about.mdx b/website/src/pages/en/about.mdx index 02b29895881f..833b097673d2 100644 --- a/website/src/pages/en/about.mdx +++ b/website/src/pages/en/about.mdx @@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block ## The Graph Provides a Solution -The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API. Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. ### How The Graph Functions -Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. #### Specifics -- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. +- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph. -- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. +- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -- When creating a subgraph, you need to write a subgraph manifest. +- When creating a Subgraph, you need to write a Subgraph manifest. -- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph. -The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. +The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions. ![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) @@ -56,12 +56,12 @@ The flow follows these steps: 1. A dapp adds data to Ethereum through a transaction on a smart contract. 2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. 
Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain. +4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. 5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. ## Next Steps -The following sections provide a more in-depth look at subgraphs, their deployment and data querying. +The following sections provide a more in-depth look at Subgraphs, their deployment and data querying. -Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. +Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/src/pages/en/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/en/archived/arbitrum/arbitrum-faq.mdx index 4fd8f08572e7..437e769c3ec0 100644 --- a/website/src/pages/en/archived/arbitrum/arbitrum-faq.mdx +++ b/website/src/pages/en/archived/arbitrum/arbitrum-faq.mdx @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. 
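To make step 5 of the data flow in `about.mdx` above concrete, here is a minimal sketch of a dapp posting a GraphQL query to a Subgraph's endpoint over HTTP. The endpoint URL and the `transfers` entity and its fields are hypothetical placeholders for illustration, not values taken from these docs.

```typescript
// Sketch of step 5: a dapp posting a GraphQL query to a Subgraph endpoint.
// The endpoint URL and the `transfers` entity/fields are hypothetical placeholders.
const endpoint = "https://example.com/subgraphs/name/example/my-subgraph";

const query = `
  {
    transfers(first: 5, orderBy: timestamp, orderDirection: desc) {
      id
      from
      to
      amount
    }
  }
`;

async function fetchTransfers(): Promise<void> {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data, errors } = await response.json();
  if (errors) throw new Error(JSON.stringify(errors));
  // These entities are the ones created or updated by the Subgraph's mapping handlers.
  console.log(data.transfers);
}

fetchTransfers().catch(console.error);
```

The same POST-a-query pattern applies in the GraphQL playground on a Subgraph's Explorer page, which is the easiest place to try queries before wiring them into a dapp.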
@@ -38,7 +38,7 @@ To take advantage of using The Graph on L2, use this dropdown switcher to toggle ![Dropdown switcher to toggle Arbitrum](/img/arbitrum-screenshot-toggle.png) -## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? +## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support. @@ -50,9 +50,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Are existing subgraphs on Ethereum working? +## Are existing Subgraphs on Ethereum working? -All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. +All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly. ## Does GRT have a new smart contract deployed on Arbitrum? diff --git a/website/src/pages/en/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/en/archived/arbitrum/l2-transfer-tools-faq.mdx index 639a29894883..27ebaeb8405a 100644 --- a/website/src/pages/en/archived/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/src/pages/en/archived/arbitrum/l2-transfer-tools-faq.mdx @@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con The L2 Transfer Tools use Arbitrum’s native mechanism to send messages from L1 to L2. This mechanism is called a “retryable ticket” and is used by all native token bridges, including the Arbitrum GRT bridge. You can read more about retryable tickets in the [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging). -When you transfer your assets (subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). +When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. 
When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum). -This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there help you. +This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you. ### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? @@ -36,43 +36,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent ## Subgraph Transfer -### How do I transfer my subgraph? +### How do I transfer my Subgraph? -To transfer your subgraph, you will need to complete the following steps: +To transfer your Subgraph, you will need to complete the following steps: 1. Initiate the transfer on Ethereum mainnet 2. Wait 20 minutes for confirmation -3. Confirm subgraph transfer on Arbitrum\* +3. Confirm Subgraph transfer on Arbitrum\* -4. Finish publishing subgraph on Arbitrum +4. Finish publishing Subgraph on Arbitrum 5. Update Query URL (recommended) -\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). +\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum.
If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol). ### Where should I initiate my transfer from? -You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any subgraph details page. Click the "Transfer Subgraph" button in the subgraph details page to start the transfer. +You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer. -### How long do I need to wait until my subgraph is transferred +### How long do I need to wait until my Subgraph is transferred The transfer time takes approximately 20 minutes. The Arbitrum bridge is working in the background to complete the bridge transfer automatically. In some cases, gas costs may spike and you will need to confirm the transaction again. -### Will my subgraph still be discoverable after I transfer it to L2? +### Will my Subgraph still be discoverable after I transfer it to L2? -Your subgraph will only be discoverable on the network it is published to. For example, if your subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 subgraph will appear as deprecated. +Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network.  After the transfer, the L1 Subgraph will appear as deprecated. -### Does my subgraph need to be published to transfer it? +### Does my Subgraph need to be published to transfer it? -To take advantage of the subgraph transfer tool, your subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the subgraph. If your subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. +To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal. -### What happens to the Ethereum mainnet version of my subgraph after I transfer to Arbitrum? +### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum? 
-After transferring your subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. +After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated. ### After I transfer, do I also need to re-publish on Arbitrum? @@ -80,21 +80,21 @@ After the 20 minute transfer window, you will need to confirm the transfer with ### Will my endpoint experience downtime while re-publishing? -It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2. +It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2. ### Is publishing and versioning the same on L2 as Ethereum Ethereum mainnet? -Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph. +Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph. -### Will my subgraph's curation move with my subgraph? +### Will my Subgraph's curation move with my Subgraph? -If you've chosen auto-migrating signal, 100% of your own curation will move with your subgraph to Arbitrum One. All of the subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 subgraph. +If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph. -Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same subgraph. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. -### Can I move my subgraph back to Ethereum mainnet after I transfer? +### Can I move my Subgraph back to Ethereum mainnet after I transfer? -Once transferred, your Ethereum mainnet version of this subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. +Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One. ### Why do I need bridged ETH to complete my transfer? 
@@ -206,19 +206,19 @@ To transfer your curation, you will need to complete the following steps: \*If necessary - i.e. you are using a contract address. -### How will I know if the subgraph I curated has moved to L2? +### How will I know if the Subgraph I curated has moved to L2? -When viewing the subgraph details page, a banner will notify you that this subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the subgraph details page of any subgraph that has moved. +When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved. ### What if I do not wish to move my curation to L2? -When a subgraph is deprecated you have the option to withdraw your signal. Similarly, if a subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. +When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2. ### How do I know my curation successfully transferred? Signal details will be accessible via Explorer approximately 20 minutes after the L2 transfer tool is initiated. -### Can I transfer my curation on more than one subgraph at a time? +### Can I transfer my curation on more than one Subgraph at a time? There is no bulk transfer option at this time. @@ -266,7 +266,7 @@ It will take approximately 20 minutes for the L2 transfer tool to complete trans ### Do I have to index on Arbitrum before I transfer my stake? -You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to subgraphs on L2, index them, and present POIs. +You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs. ### Can Delegators move their delegation before I move my indexing stake? diff --git a/website/src/pages/en/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/en/archived/arbitrum/l2-transfer-tools-guide.mdx index 549618bfd7c3..4a34da9bad0e 100644 --- a/website/src/pages/en/archived/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/src/pages/en/archived/arbitrum/l2-transfer-tools-guide.mdx @@ -6,53 +6,53 @@ The Graph has made it easy to move to L2 on Arbitrum One. For each protocol part Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them. -## How to transfer your subgraph to Arbitrum (L2) +## How to transfer your Subgraph to Arbitrum (L2) -## Benefits of transferring your subgraphs +## Benefits of transferring your Subgraphs The Graph's community and core devs have [been preparing](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) to move to Arbitrum over the past year. Arbitrum, a layer 2 or "L2" blockchain, inherits the security from Ethereum but provides drastically lower gas fees. 
-When you publish or upgrade your subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your subgraphs to Arbitrum, any future updates to your subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your subgraph, increasing the rewards for Indexers on your subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. +When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2. -## Understanding what happens with signal, your L1 subgraph and query URLs +## Understanding what happens with signal, your L1 Subgraph and query URLs -Transferring a subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the subgraph to L2. The "transfer" will deprecate the subgraph on mainnet and send the information to re-create the subgraph on L2 using the bridge. It will also include the subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. +Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer. -When you choose to transfer the subgraph, this will convert all of the subgraph's curation signal to GRT. This is equivalent to "deprecating" the subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the subgraph, where they will be used to mint signal on your behalf. +When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf. -Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same subgraph. If a subgraph owner does not transfer their subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. +Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. 
If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation. -As soon as the subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the subgraph. However, there will be Indexers that will 1) keep serving transferred subgraphs for 24 hours, and 2) immediately start indexing the subgraph on L2. Since these Indexers already have the subgraph indexed, there should be no need to wait for the subgraph to sync, and it will be possible to query the L2 subgraph almost immediately. +As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately. -Queries to the L2 subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. +Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible. ## Choosing your L2 wallet -When you published your subgraph on mainnet, you used a connected wallet to create the subgraph, and this wallet owns the NFT that represents this subgraph and allows you to publish updates. +When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates. -When transferring the subgraph to Arbitrum, you can choose a different wallet that will own this subgraph NFT on L2. +When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2. If you're using a "regular" wallet like MetaMask (an Externally Owned Account or EOA, i.e. a wallet that is not a smart contract), then this is optional and it is recommended to keep the same owner address as in L1. -If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your subgraph. +If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. 
If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph. -**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the subgraph will be lost and cannot be recovered.** +**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.** ## Preparing for the transfer: bridging some ETH -Transferring the subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. +Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Since gas fees on Arbitrum are lower, you should only need a small amount. It is recommended that you start at a low threshold (0.e.g. 01 ETH) for your transaction to be approved. 
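Before the "bridging some ETH" step described above, it can help to verify that the wallet which will execute the L2 transactions actually holds ETH on Arbitrum One. Below is a rough sketch using a plain `eth_getBalance` JSON-RPC call; the RPC URL shown is a commonly used public endpoint (any Arbitrum One provider works) and the wallet address is a placeholder to swap for your own EOA.

```typescript
// Sketch: check that your executing wallet already has some ETH on Arbitrum One.
// The RPC URL and wallet address are placeholders; use your own provider and EOA.
const rpcUrl = "https://arb1.arbitrum.io/rpc";
const wallet = "0x0000000000000000000000000000000000000000"; // your Arbitrum wallet

async function checkArbitrumBalance(): Promise<void> {
  const response = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getBalance", // standard JSON-RPC method, returns the balance in wei (hex)
      params: [wallet, "latest"],
    }),
  });
  const { result } = await response.json();
  const balanceEth = Number(BigInt(result)) / 1e18; // wei -> ETH, approximate
  console.log(`ETH on Arbitrum One: ${balanceEth}`);
  if (balanceEth < 0.01) {
    console.warn("Balance is low - consider bridging a bit more ETH before starting the transfer.");
  }
}

checkArbitrumBalance().catch(console.error);
```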
-## Finding the subgraph Transfer Tool +## Finding the Subgraph Transfer Tool -You can find the L2 Transfer Tool when you're looking at your subgraph's page on Subgraph Studio: +You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio: ![transfer tool](/img/L2-transfer-tool1.png) -It is also available on Explorer if you're connected with the wallet that owns a subgraph and on that subgraph's page on Explorer: +It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer: ![Transferring to L2](/img/transferToL2.png) @@ -60,19 +60,19 @@ Clicking on the Transfer to L2 button will open the transfer tool where you can ## Step 1: Starting the transfer -Before starting the transfer, you must decide which address will own the subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommend having some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). +Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above). -Also please note transferring the subgraph requires having a nonzero amount of signal on the subgraph with the same account that owns the subgraph; if you haven't signaled on the subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). +Also please note transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice). -After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 subgraph (see "Understanding what happens with signal, your L1 subgraph and query URLs" above for more details on what goes on behind the scenes). +After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes). -If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum.
+If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retry-able tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum. ![Start the transfer to L2](/img/startTransferL2.png) -## Step 2: Waiting for the subgraph to get to L2 +## Step 2: Waiting for the Subgraph to get to L2 -After you start the transfer, the message that sends your L1 subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). +After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs). Once this wait time is over, Arbitrum will attempt to auto-execute the transfer on the L2 contracts. @@ -80,7 +80,7 @@ Once this wait time is over, Arbitrum will attempt to auto-execute the transfer ## Step 3: Confirming the transfer -In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your subgraph to L2 will be pending and require a retry within 7 days. +In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days. If this is the case, you will need to connect using an L2 wallet that has some ETH on Arbitrum, switch your wallet network to Arbitrum, and click on "Confirm Transfer" to retry the transaction. @@ -88,33 +88,33 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Step 4: Finishing the transfer on L2 -At this point, your subgraph and GRT have been received on Arbitrum, but the subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." +At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph." -![Publish the subgraph](/img/publishSubgraphL2TransferTools.png) +![Publish the Subgraph](/img/publishSubgraphL2TransferTools.png) -![Wait for the subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) +![Wait for the Subgraph to be published](/img/waitForSubgraphToPublishL2TransferTools.png) -This will publish the subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. 
+This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1. ## Step 5: Updating the query URL -Your subgraph has been successfully transferred to Arbitrum! To query the subgraph, the new URL will be : +Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be: `https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]` -Note that the subgraph ID on Arbitrum will be a different than the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the subgraph has been synced on L2. +Note that the Subgraph ID on Arbitrum will be different than the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2. ## How to transfer your curation to Arbitrum (L2) -## Understanding what happens to curation on subgraph transfers to L2 +## Understanding what happens to curation on Subgraph transfers to L2 -When the owner of a subgraph transfers a subgraph to Arbitrum, all of the subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a subgraph version or deployment but that follows the latest version of a subgraph. +When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph. -This conversion from signal to GRT is the same as what would happen if the subgraph owner deprecated the subgraph in L1. When the subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles subgraph upgrades and auto-migrated signal). Each Curator on that subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the subgraph. +This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph. -A fraction of these GRT corresponding to the subgraph owner is sent to L2 together with the subgraph. +A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph. -At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same subgraph on L2, where it can be used to mint new curation signal.
There is no rush to do this as the GRT can be help indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. +At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it. ## Choosing your L2 wallet @@ -130,9 +130,9 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough. -If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph. +If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph. -When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. +When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool. ![Transfer signal](/img/transferSignalL2TransferTools.png) @@ -162,4 +162,4 @@ If this is the case, you will need to connect using an L2 wallet that has some E ## Withdrawing your curation on L1 -If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. +If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address. diff --git a/website/src/pages/en/archived/sunrise.mdx b/website/src/pages/en/archived/sunrise.mdx index eb18a93c506c..71262f22e7d8 100644 --- a/website/src/pages/en/archived/sunrise.mdx +++ b/website/src/pages/en/archived/sunrise.mdx @@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ ## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs.
+This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs. ### What happened to the hosted service? -The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. +The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service. -During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. +During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs. ### Was Subgraph Studio impacted by this upgrade? No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### Why were subgraphs published to Arbitrum, did it start indexing a different network? +### Why were Subgraphs published to Arbitrum, did it start indexing a different network? -The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/) +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/) ## About the Upgrade Indexer > The upgrade Indexer is currently active. -The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. +The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed. ### What does the upgrade Indexer do? -- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published. - It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/). -- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. +- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? 
-Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs. ### What does the upgrade indexer mean for existing Indexers? Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. -However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. +However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. ### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards. -It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs. -### How does this affect subgraph developers? +### How does this affect Subgraph developers? -Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. +Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade. ### How does the upgrade Indexer benefit data consumers? 
@@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -### When will the upgrade Indexer stop supporting a subgraph? +### When will the upgrade Indexer stop supporting a Subgraph? -The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. +The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. +Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days. -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. +Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/src/pages/en/indexing/chain-integration-overview.mdx b/website/src/pages/en/indexing/chain-integration-overview.mdx index 77141e82b34a..33619b03c483 100644 --- a/website/src/pages/en/indexing/chain-integration-overview.mdx +++ b/website/src/pages/en/indexing/chain-integration-overview.mdx @@ -36,7 +36,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub ### 2. What happens if Firehose & Substreams support comes after the network is supported on mainnet? -This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. +This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. ### 3. How much time will the process of reaching full protocol support take? diff --git a/website/src/pages/en/indexing/new-chain-integration.mdx b/website/src/pages/en/indexing/new-chain-integration.mdx index faa4b2f117b8..a52a1463ab6c 100644 --- a/website/src/pages/en/indexing/new-chain-integration.mdx +++ b/website/src/pages/en/indexing/new-chain-integration.mdx @@ -2,7 +2,7 @@ title: New Chain Integration --- -Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. 
Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: +Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: 1. **EVM JSON-RPC** 2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. @@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through ## EVM considerations - Difference between JSON-RPC & Firehose -While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. +While the JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. -- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes. -> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) ## Graph Node Configuration -Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph. 1. 
[Clone Graph Node](https://github.com/graphprotocol/graph-node)
2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC or Firehose compliant URL

@@ -66,4 +66,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your

## Substreams-powered Subgraphs

-For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
diff --git a/website/src/pages/en/indexing/overview.mdx b/website/src/pages/en/indexing/overview.mdx
index 582bd314f525..40e28090f8d8 100644
--- a/website/src/pages/en/indexing/overview.mdx
+++ b/website/src/pages/en/indexing/overview.mdx
@@ -7,7 +7,7 @@ Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) i

GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards for delegated stake from Delegators, to contribute to the network.

-Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing.
+Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing.

## FAQ

@@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT.

**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity.

-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network.
+**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network.

### How are indexing rewards distributed?

-Indexing rewards come from protocol inflation which is set to 3% annual issuance.
They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** +Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.** Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack. ### What is a proof of indexing (POI)? -POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block. +POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block. ### When are indexing rewards distributed? @@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: +1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%. -### How do Indexers know which subgraphs to index? +### How do Indexers know which Subgraphs to index? 
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network: +Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network: -- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up. +- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query voluming is ramping up. -- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand. +- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand. -- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply. +- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply. -- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards. +- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards. ### What are the hardware requirements? -- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded. +- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded. - **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests. -- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. -- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. +- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second. +- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic. | Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | | --- | :-: | :-: | :-: | :-: | :-: | @@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making ## Infrastructure -At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. +At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network. -- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. +- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions. -- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. +- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API. -- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com. - **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway. -- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations. 
+- **Indexer agent** - Facilitates the Indexer's interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations.

- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.

@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer

| Port | Purpose | Routes | CLI Argument | Environment Variable |
| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | | 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | | 8040 | Prometheus metrics | /metrics | --metrics-port | - | @@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7600 | GraphQL HTTP server
(for paid Subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | | 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`. ### Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. +[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint. #### Getting started from source @@ -365,9 +365,9 @@ docker-compose up To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components: -- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. +- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each. -- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. +- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways. - **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules. @@ -525,7 +525,7 @@ graph indexer status #### Indexer management using Indexer CLI -The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution. 
+The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer. The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.

#### Usage

@@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar

- `graph indexer rules set [options] ...` - Set one or more indexing rules.

-- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed.

- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.

@@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported

#### Indexing rules

-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
+Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.

-For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
+For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
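To make the threshold example above concrete, here is a minimal sketch of how such a rule could be set with the Indexer CLI commands listed earlier. The `global` identifier refers to the global defaults mentioned above; the threshold value is illustrative, and exact option handling may vary between indexer-cli versions, so verify against `graph indexer rules --help` in your installation.

```bash
# Minimal sketch (illustrative values): apply a global default rule that uses
# threshold-based decisions and picks up any deployment with more than 5 GRT
# of allocated stake.
graph indexer rules set global decisionBasis rules minStake 5

# Inspect the resulting rules; the Indexer agent evaluates these thresholds
# against network data for each Subgraph deployment.
graph indexer rules get all
```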
Data model: @@ -679,7 +679,7 @@ graph indexer actions execute approve Note that supported action types for allocation management have different input requirements: -- `Allocate` - allocate stake to a specific subgraph deployment +- `Allocate` - allocate stake to a specific Subgraph deployment - required action params: - deploymentID @@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input - poi - force (forces using the provided POI even if it doesn’t match what the graph-node provides) -- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment +- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment - required action params: - allocationID @@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input #### Cost models -Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. +Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers. #### Agora @@ -782,7 +782,7 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi 6. Call `stake()` to stake GRT in the protocol. -7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. +7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address. 8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks. @@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st After being created by an Indexer a healthy allocation goes through two states. -- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. 
A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. +- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules. - **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)). -Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. +Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically. diff --git a/website/src/pages/en/indexing/tap.mdx b/website/src/pages/en/indexing/tap.mdx index 3bab672ab211..e81b7af5421c 100644 --- a/website/src/pages/en/indexing/tap.mdx +++ b/website/src/pages/en/indexing/tap.mdx @@ -63,10 +63,10 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. -- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) -- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) +- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) -> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually. +> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually. ## Migration Guide @@ -128,18 +128,18 @@ query_url = "" status_url = "" [subgraphs.network] -# Query URL for the Graph Network subgraph. +# Query URL for the Graph Network Subgraph. 
query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" [subgraphs.escrow] -# Query URL for the Escrow subgraph. +# Query URL for the Escrow Subgraph. query_url = "" # Optional, deployment to look for in the local `graph-node`, if locally indexed. -# Locally indexing the subgraph is recommended. +# Locally indexing the Subgraph is recommended. # NOTE: Use `query_url` or `deployment_id` only deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" diff --git a/website/src/pages/en/indexing/tooling/graph-node.mdx b/website/src/pages/en/indexing/tooling/graph-node.mdx index 0e8664f688c4..9ae620ae7200 100644 --- a/website/src/pages/en/indexing/tooling/graph-node.mdx +++ b/website/src/pages/en/indexing/tooling/graph-node.mdx @@ -2,31 +2,31 @@ title: Graph Node --- -Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node). ## Graph Node -[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query. +[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query. Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node). ### PostgreSQL database -The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache. +The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache. ### Network clients In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple. -While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. 
Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). +While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)). **Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/). ### IPFS Nodes -Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. +Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com. ### Prometheus metrics server @@ -79,8 +79,8 @@ When it is running Graph Node exposes the following ports: | Port | Purpose | Routes | CLI Argument | Environment Variable | | --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | | 8020 | JSON-RPC
(for managing deployments) | / | --admin-port | - | | 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | | 8040 | Prometheus metrics | /metrics | --metrics-port | - | @@ -89,7 +89,7 @@ When it is running Graph Node exposes the following ports: ## Advanced Graph Node configuration -At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed. +At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed. This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables. @@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https: #### Multiple Graph Nodes -Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). +Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules). > Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding. #### Deployment rules -Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. +Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision. 
Example deployment rule configuration:

@@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ]
 match = { network = [ "xdai", "poa-core" ] }
 indexers = [ "index_node_other_0" ]
 [[deployment.rule]]
-# There's no 'match', so any subgraph matches
+# There's no 'match', so any Subgraph matches
 shards = [ "sharda", "shardb" ]
 indexers = [
 "index_node_community_0",
@@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r

For most use cases, a single Postgres database is sufficient to support a graph-node instance. When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard.

-Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed.
+Shards can be used to split Subgraph deployments across multiple databases, and can also be used with replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed.

Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore.

-> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs.
+> It is generally better to make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs.

In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) is an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them.
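For a quick spot check of those connection-pool metrics without a full Grafana setup, the Prometheus endpoint can be scraped directly. The sketch below assumes the default metrics port 8040 shown in the port tables above; adjust the host and port to match your own deployment.

```bash
# Spot-check connection pool pressure by reading Graph Node's Prometheus metrics.
# Assumes the metrics server is reachable on the default port 8040 (see the port
# tables above); sustained non-trivial wait times suggest too few connections.
curl -s http://localhost:8040/metrics \
  | grep -E 'store_connection_(wait_time_ms|checkout_count)'
```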
@@ -188,7 +188,7 @@ ingestor = "block_ingestor_node" #### Supporting multiple networks -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: - Multiple networks - Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). @@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may ### Managing Graph Node -Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs. +Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs. #### Logging -Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. +Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). @@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` -### Working with subgraphs +### Working with Subgraphs #### Indexing status API -Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. +Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more. The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). @@ -263,7 +263,7 @@ There are three separate parts of the indexing process: - Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store) - Writing the resulting data to the store -These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph. +These stages are pipelined (i.e. 
they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph. Common causes of indexing slowness: @@ -276,24 +276,24 @@ Common causes of indexing slowness: - The provider itself falling behind the chain head - Slowness in fetching new receipts at the chain head from the provider -Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. +Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance. -#### Failed subgraphs +#### Failed Subgraphs -During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: +During indexing Subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure: - Deterministic failures: these are failures which will not be resolved with retries - Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time. -In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required. +In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the Subgraph code is required. -> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. +> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository. #### Block and call cache -Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph. +Graph Node caches certain data in the store in order to save refetching from the provider. 
Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph. -However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. +However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider. If a block cache inconsistency is suspected, such as a tx receipt missing event: @@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event: #### Querying issues and errors -Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. +Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process. However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users. @@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat ##### Analysing queries -Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible. +Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible. In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue. @@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the Once a table has been determined to be account-like, running `graphman stats account-like .` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .
` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity`in large numbers, taking several seconds. In that case, the optimization needs to be turned off again. -For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. +For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load. -#### Removing subgraphs +#### Removing Subgraphs > This is new functionality, which will be available in Graph Node 0.29.x -At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). +At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop). diff --git a/website/src/pages/en/indexing/tooling/graphcast.mdx b/website/src/pages/en/indexing/tooling/graphcast.mdx index 4072877a1257..d1795e9be577 100644 --- a/website/src/pages/en/indexing/tooling/graphcast.mdx +++ b/website/src/pages/en/indexing/tooling/graphcast.mdx @@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases: -- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). -- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers. -- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc. -- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc. +- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)). +- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers. +- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc. +- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc. - Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc. 
### Learn More diff --git a/website/src/pages/en/resources/benefits.mdx b/website/src/pages/en/resources/benefits.mdx index 2e1a0834591c..00a32f92a1a3 100644 --- a/website/src/pages/en/resources/benefits.mdx +++ b/website/src/pages/en/resources/benefits.mdx @@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries. -Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. +Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet. -Curating signal on a subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a subgraph, and later withdrawn—with potential to earn returns in the process). +Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process). ## No Setup Costs & Greater Operational Efficiency @@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally. -Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/). +Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/). diff --git a/website/src/pages/en/resources/glossary.mdx b/website/src/pages/en/resources/glossary.mdx index ffcd4bca2eed..4c5ad55cd0d3 100644 --- a/website/src/pages/en/resources/glossary.mdx +++ b/website/src/pages/en/resources/glossary.mdx @@ -4,51 +4,51 @@ title: Glossary - **The Graph**: A decentralized protocol for indexing and querying data. -- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer. +- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer. -- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. +- **Endpoint**: A URL that can be used to query a Subgraph. 
The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone. - **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. - 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network. + 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network. - 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. + 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually. - **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. - **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake. -- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. 
+- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph. -- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. +- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned. -- **Data Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a Subgraph. -- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. +- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network. -- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. +- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example. - **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day. -- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: +- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses: - 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. + 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. 
If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. -- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. +- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs. - **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. @@ -56,28 +56,28 @@ title: Glossary - **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned. -- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT. +- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT. - **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT. - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. +- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. 
It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. -- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol. +- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol. - **Graph CLI**: A command line interface tool for building and deploying to The Graph. - **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again. -- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake. +- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake. -- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings. -- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). +- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2). diff --git a/website/src/pages/en/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/en/resources/migration-guides/assemblyscript-migration-guide.mdx index 85f6903a6c69..576cbe911be6 100644 --- a/website/src/pages/en/resources/migration-guides/assemblyscript-migration-guide.mdx +++ b/website/src/pages/en/resources/migration-guides/assemblyscript-migration-guide.mdx @@ -2,13 +2,13 @@ title: AssemblyScript Migration Guide --- -Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 +Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉 -That will enable subgraph developers to use newer features of the AS language and standard library. +That will enable Subgraph developers to use newer features of the AS language and standard library. This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂 -> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest. +> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest. 
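As a rough, hedged sketch of what the upgrade can look like in practice (assuming an npm-managed project; the `0.0.6` value shown for `apiVersion` is an assumption, so check the release notes for the exact value your versions require):

```bash
# Bump graph-cli and graph-ts to the 0.22.0 threshold mentioned above
npm install --save-dev @graphprotocol/graph-cli@^0.22.0
npm install @graphprotocol/graph-ts@^0.22.0

# Confirm which apiVersion your mappings declare in subgraph.yaml
grep -n 'apiVersion' subgraph.yaml   # e.g. apiVersion: 0.0.6
```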
## Features

@@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null
maybeValue.aMethod()
```

-If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler.
+If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early if statement with a return in your Subgraph handler.

### Variable Shadowing

@@ -132,7 +132,7 @@ You'll need to rename your duplicate variables if you had variable shadowing.

### Null Comparisons

-By doing the upgrade on your subgraph, sometimes you might get errors like these:
+By doing the upgrade on your Subgraph, sometimes you might get errors like these:

```typescript
ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.

@@ -330,7 +330,7 @@ let wrapper = new Wrapper(y)
wrapper.n = wrapper.n + x // doesn't give compile time errors as it should
```

-We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it.
+We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check before it.

```typescript
let wrapper = new Wrapper(y)

@@ -352,7 +352,7 @@ value.x = 10
value.y = 'content'
```

-It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this:
+It will compile but break at runtime; that happens because the value hasn't been initialized, so make sure your Subgraph has initialized its values, like this:

```typescript
var value = new Type() // initialized

diff --git a/website/src/pages/en/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/en/resources/migration-guides/graphql-validations-migration-guide.mdx
index 29fed533ef8c..ebed96df1002 100644
--- a/website/src/pages/en/resources/migration-guides/graphql-validations-migration-guide.mdx
+++ b/website/src/pages/en/resources/migration-guides/graphql-validations-migration-guide.mdx
@@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide.

You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries.

-> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
+> Not all Subgraphs will need to be migrated. If you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
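One possible way to test queries against the `api-next` endpoint mentioned above is a plain `curl` request; this is only a sketch, the `_meta` query is just a safe stand-in for one of your own operations, and `$GITHUB_USER`/`$SUBGRAPH_NAME` are placeholders for your own values:

```bash
# Send a simple GraphQL query (the built-in _meta field) to the validation endpoint
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "{ _meta { block { number } } }"}' \
  "https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME"

# Any validation problems are reported back in the standard GraphQL "errors" array.
```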
## Migration CLI tool diff --git a/website/src/pages/en/resources/roles/curating.mdx b/website/src/pages/en/resources/roles/curating.mdx index 1cc05bb7b62f..a228ebfb3267 100644 --- a/website/src/pages/en/resources/roles/curating.mdx +++ b/website/src/pages/en/resources/roles/curating.mdx @@ -2,37 +2,37 @@ title: Curating --- -Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index. +Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index. ## What Does Signaling Mean for The Graph Network? -Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed. +Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives. -Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. +Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). 
Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). -![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) ## How to Signal -Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) +Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/) -A curator can choose to signal on a specific subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that subgraph. Both are valid strategies and come with their own pros and cons. +A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. 
Both are valid strategies and come with their own pros and cons. -Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred. +Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred. Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. +> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time. Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax). -Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled. +Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled. -However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph. +However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph. ## Risks 1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. -3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). -4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. 
As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. - - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax. +2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned. +3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/). +4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. + - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. + - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax. ## Curation FAQs ### 1. What % of query fees do Curators earn? -By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. +By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance. -### 2. How do I decide which subgraphs are high quality to signal on? +### 2. How do I decide which Subgraphs are high quality to signal on? -Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: +Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. 
As a result: -- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future -- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. +- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on. -### 3. What’s the cost of updating a subgraph? +### 3. What’s the cost of updating a Subgraph? -Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas. +Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading Subgraphs is an onchain action that costs gas. -### 4. How often can I update my subgraph? +### 4. How often can I update my Subgraph? -It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details. +It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details. ### 5. Can I sell my curation shares? diff --git a/website/src/pages/en/resources/subgraph-studio-faq.mdx b/website/src/pages/en/resources/subgraph-studio-faq.mdx index 8761f7a31bf6..c2d4037bd099 100644 --- a/website/src/pages/en/resources/subgraph-studio-faq.mdx +++ b/website/src/pages/en/resources/subgraph-studio-faq.mdx @@ -4,7 +4,7 @@ title: Subgraph Studio FAQs ## 1. What is Subgraph Studio? -[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys. +[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys. ## 2. How do I create an API Key? @@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th After creating an API Key, in the Security section, you can define the domains that can query a specific API Key. -## 5. Can I transfer my subgraph to another owner? +## 5. Can I transfer my Subgraph to another owner? -Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'. +Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'. -Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred. +Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred. -## 6. 
How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use? +## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use? -You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. +You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio. -Remember that you can create an API key and query any subgraph published to the network, even if you build a subgraph yourself. These queries via the new API key, are paid queries as any other on the network. +Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key, are paid queries as any other on the network. diff --git a/website/src/pages/en/resources/tokenomics.mdx b/website/src/pages/en/resources/tokenomics.mdx index 87483e2c1fcf..9b62872b41d5 100644 --- a/website/src/pages/en/resources/tokenomics.mdx +++ b/website/src/pages/en/resources/tokenomics.mdx @@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s ## Overview -The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. +The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph. ## Specifics @@ -24,9 +24,9 @@ There are four primary network participants: 1. Delegators - Delegate GRT to Indexers & secure the network -2. Curators - Find the best subgraphs for Indexers +2. Curators - Find the best Subgraphs for Indexers -3. Developers - Build & query subgraphs +3. Developers - Build & query Subgraphs 4. Indexers - Backbone of blockchain data @@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth ## Delegators (Passively earn GRT) -Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. +Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. 
Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually. For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually. @@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head ## Curators (Earn GRT) -Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed. +Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed. -Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. +Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation. -Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT. +Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT. ## Developers -Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. +Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants. -### Creating a subgraph +### Creating a Subgraph -Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. +Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers. -Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. +Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network. -### Querying an existing subgraph +### Querying an existing Subgraph -Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph. 
+Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph. Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol. @@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th ## Indexers (Earn GRT) -Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs. +Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs. Indexers can earn GRT rewards in two ways: -1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). +1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). -2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. +2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately. -Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph. +Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph. In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve. -Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. +Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. 
If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network. The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors. ## Token Supply: Burning & Issuance -The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. +The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network. -The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data. +The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and a 1% of query fees for blockchain data. ![Total burned GRT](/img/total-burned-grt.jpeg) diff --git a/website/src/pages/en/sps/introduction.mdx b/website/src/pages/en/sps/introduction.mdx index b11c99dfb8e5..366c126ac60a 100644 --- a/website/src/pages/en/sps/introduction.mdx +++ b/website/src/pages/en/sps/introduction.mdx @@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs sidebarTitle: Introduction --- -Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. +Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data. ## Overview -Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. +Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. ### Specifics There are two methods of enabling this technology: -1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. +1. 
**Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph. -2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities. +2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities. -You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. +You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node. ### Additional Resources diff --git a/website/src/pages/en/sps/sps-faq.mdx b/website/src/pages/en/sps/sps-faq.mdx index abc1f3906686..250c466d5929 100644 --- a/website/src/pages/en/sps/sps-faq.mdx +++ b/website/src/pages/en/sps/sps-faq.mdx @@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams. -## What are Substreams-powered subgraphs? +## What are Substreams-powered Subgraphs? -[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities. +[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities. -If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API. +If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API. -## How are Substreams-powered subgraphs different from subgraphs? +## How are Substreams-powered Subgraphs different from Subgraphs? 
Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain. -By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. +By contrast, substreams-powered Subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times. -## What are the benefits of using Substreams-powered subgraphs? +## What are the benefits of using Substreams-powered Subgraphs? -Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. +Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka. ## What are the benefits of Substreams? @@ -35,7 +35,7 @@ There are many benefits to using Substreams, including: - High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery). -- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, subgraphs, flat files, Google Sheets. +- Sink anywhere: Sink your data to anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets. - Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks. @@ -63,17 +63,17 @@ There are many benefits to using Firehose, including: - Leverages flat files: Blockchain data is extracted into flat files, the cheapest and most optimized computing resource available. -## Where can developers access more information about Substreams-powered subgraphs and Substreams? +## Where can developers access more information about Substreams-powered Subgraphs and Substreams? The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules. -The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. 
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph. The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. ## What is the role of Rust modules in Substreams? -Rust modules are the equivalent of the AssemblyScript mappers in subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. +Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data. See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details. @@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used. -As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a subgraph, and be queried by consumers. +As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual's modules, link them together, to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers. ## How can you build and deploy a Substreams-powered Subgraph? After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/). -## Where can I find examples of Substreams and Substreams-powered subgraphs? +## Where can I find examples of Substreams and Substreams-powered Subgraphs? -You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered subgraphs. +You can visit [this Github repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs. -## What do Substreams and Substreams-powered subgraphs mean for The Graph Network? +## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network? The integration promises many benefits, including extremely high-performance indexing and greater composability by leveraging community modules and building on them. diff --git a/website/src/pages/en/sps/triggers.mdx b/website/src/pages/en/sps/triggers.mdx index 816d42cb5f12..66687aa21889 100644 --- a/website/src/pages/en/sps/triggers.mdx +++ b/website/src/pages/en/sps/triggers.mdx @@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL. 
## Overview -Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. +Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer. -By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework. +By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework. ### Defining `handleTransactions` -The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. +The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created. ```tsx export function handleTransactions(bytes: Uint8Array): void { @@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file: 1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object 2. Looping over the transactions -3. Create a new subgraph entity for every transaction +3. Create a new Subgraph entity for every transaction -To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/). +To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/). ### Additional Resources diff --git a/website/src/pages/en/sps/tutorial.mdx b/website/src/pages/en/sps/tutorial.mdx index 4a265e5d01d7..bb0731e49937 100644 --- a/website/src/pages/en/sps/tutorial.mdx +++ b/website/src/pages/en/sps/tutorial.mdx @@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana' sidebarTitle: Tutorial --- -Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. +Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. ## Get Started @@ -53,7 +53,7 @@ params: # Modify the param fields to meet your needs ### Step 2: Generate the Subgraph Manifest -Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: +Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container: ```bash substreams codegen subgraph @@ -80,7 +80,7 @@ dataSources: ### Step 3: Define Entities in `schema.graphql` -Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. +Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file. Here is an example: @@ -100,7 +100,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: +The example below demonstrates how to extract to Subgraph entities the non-derived transfers associated to the Orca account id: ```ts import { Protobuf } from 'as-proto/assembly' @@ -139,11 +139,11 @@ To generate Protobuf objects in AssemblyScript, run the following command: npm run protogen ``` -This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler. ### Conclusion -Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. +Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case. ### Video Tutorial diff --git a/website/src/pages/en/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/en/subgraphs/best-practices/avoid-eth-calls.mdx index 1d4178f234d2..004accd0b69d 100644 --- a/website/src/pages/en/subgraphs/best-practices/avoid-eth-calls.mdx +++ b/website/src/pages/en/subgraphs/best-practices/avoid-eth-calls.mdx @@ -5,15 +5,15 @@ sidebarTitle: 'Avoiding eth_calls' ## TLDR -`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. +`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`. ## Why Avoiding `eth_calls` Is a Best Practice -Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed. +Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed. ### What Does an eth_call Look Like? -`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. 
For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: +`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need: ```yaml event Transfer(address indexed from, address indexed to, uint256 value); @@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void { } ``` -This is functional, however is not ideal as it slows down our subgraph’s indexing. +This is functional, however is not ideal as it slows down our Subgraph’s indexing. ## How to Eliminate `eth_calls` @@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo); ``` -With this update, the subgraph can directly index the required data without external calls: +With this update, the Subgraph can directly index the required data without external calls: ```typescript import { Address } from '@graphprotocol/graph-ts' @@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call. -Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0. +Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0. ## Conclusion -You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/en/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/en/subgraphs/best-practices/derivedfrom.mdx index 7b0bb1f712fe..7fdf08258327 100644 --- a/website/src/pages/en/subgraphs/best-practices/derivedfrom.mdx +++ b/website/src/pages/en/subgraphs/best-practices/derivedfrom.mdx @@ -5,7 +5,7 @@ sidebarTitle: 'Arrays with @derivedFrom' ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. ## How to Use the `@derivedFrom` Directive @@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema. comments: [Comment!]! 
@derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient. ### Example Use Case for `@derivedFrom` @@ -60,16 +60,16 @@ type Comment @entity { Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. -This will not only make our subgraph more efficient, but it will also unlock three features: +This will not only make our Subgraph more efficient, but it will also unlock three features: 1. We can query the `Post` and see all of its comments. 2. We can do a reverse lookup and query any `Comment` and see which post it comes from. -3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings. ## Conclusion -Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). diff --git a/website/src/pages/en/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/en/subgraphs/best-practices/grafting-hotfix.mdx index 3991197e7379..9e0ac694aaf1 100644 --- a/website/src/pages/en/subgraphs/best-practices/grafting-hotfix.mdx +++ b/website/src/pages/en/subgraphs/best-practices/grafting-hotfix.mdx @@ -5,22 +5,22 @@ sidebarTitle: 'Grafting and Hotfixing' ## TLDR -Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. +Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones. ### Overview -This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. 
By preserving historical data, grafting minimizes downtime and ensures continuity in data services. ## Benefits of Grafting for Hotfixes 1. **Rapid Deployment** - - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. - - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. 2. **Data Preservation** - - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records. - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. 3. **Efficiency** @@ -31,33 +31,33 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati 1. **Initial Deployment Without Grafting** - - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. - - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes. 2. **Implementing the Hotfix with Grafting** - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. - - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. - - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. - - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph. + - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible. 3. **Post-Hotfix Actions** - - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. - - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance. > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. - - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph. 4. **Important Considerations** - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. 
- **Tip**: Use the block number of the last correctly processed event. - - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. - - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. - - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features. ## Example: Deploying a Hotfix with Grafting -Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. +Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. 1. **Failed Subgraph Manifest (subgraph.yaml)** @@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing features: - grafting graft: - base: QmBaseDeploymentID # Deployment ID of the failed subgraph + base: QmBaseDeploymentID # Deployment ID of the failed Subgraph block: 6000000 # Last successfully indexed block ``` **Explanation:** -- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. - **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. - **Grafting Configuration**: - - **base**: Deployment ID of the failed subgraph. + - **base**: Deployment ID of the failed Subgraph. - **block**: Block number where grafting should begin. 3. **Deployment Steps** @@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. - **Deploy the Subgraph**: - Authenticate with the Graph CLI. - - Deploy the new subgraph using `graph deploy`. + - Deploy the new Subgraph using `graph deploy`. 4. **Post-Deployment** - - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point. - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. @@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. -- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. 
+- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. - **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. -- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. +- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability. ### Risk Management @@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec ## Conclusion -Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: +Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to: - **Quickly Recover** from critical errors without re-indexing. - **Preserve Historical Data**, maintaining continuity for applications and users. - **Ensure Service Availability** by minimizing downtime during critical fixes. -However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. ## Additional Resources - **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting - **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. -By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. +By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. 
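As a rough sketch of the hotfix deployment steps described earlier (the Subgraph slug below is a placeholder, and the exact `graph deploy` flags depend on your Graph CLI version and deploy target), the loop usually looks like this:

```bash
# Hypothetical slug; authenticate the Graph CLI first (flags vary by graph-cli version).
graph codegen && graph build      # compile the grafted hotfix version
graph deploy my-subgraph-hotfix   # publish it so indexing resumes from the graft block
# Once the hotfix is stable, remove the `graft` and `features` entries from
# subgraph.yaml, bump the version, and deploy a clean, non-grafted build.
```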
## Subgraph Best Practices 1-6 diff --git a/website/src/pages/en/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/en/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx index 042d904b6ef0..2e2b27a6cbb2 100644 --- a/website/src/pages/en/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx +++ b/website/src/pages/en/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx @@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend ### Reasons to Not Use Bytes as IDs 1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. +2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. 3. Indexing and querying performance improvements are not desired. ### Concatenating With Bytes as IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance. Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. @@ -172,7 +172,7 @@ Query Response: ## Conclusion -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). diff --git a/website/src/pages/en/subgraphs/best-practices/pruning.mdx b/website/src/pages/en/subgraphs/best-practices/pruning.mdx index 85fc9cc61baa..ea55b1c97363 100644 --- a/website/src/pages/en/subgraphs/best-practices/pruning.mdx +++ b/website/src/pages/en/subgraphs/best-practices/pruning.mdx @@ -5,7 +5,7 @@ sidebarTitle: 'Pruning with indexerHints' ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. +[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. 
Using `indexerHints` is an easy way to prune a Subgraph. ## How to Prune a Subgraph With `indexerHints` @@ -13,11 +13,11 @@ Add a section called `indexerHints` in the manifest. `indexerHints` has three `prune` options: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. +- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0. - `prune: `: Sets a custom limit on the number of historical blocks to retain. - `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired. -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`: ```yaml specVersion: 1.0.0 @@ -39,7 +39,7 @@ dataSources: ## Conclusion -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/en/subgraphs/best-practices/timeseries.mdx b/website/src/pages/en/subgraphs/best-practices/timeseries.mdx index 756e0ff78b8e..d745dd5695ec 100644 --- a/website/src/pages/en/subgraphs/best-practices/timeseries.mdx +++ b/website/src/pages/en/subgraphs/best-practices/timeseries.mdx @@ -5,7 +5,7 @@ sidebarTitle: 'Timeseries and Aggregations' ## TLDR -Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. +Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance. ## Overview @@ -176,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar ### Conclusion -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: +Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach: - Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. - Simplifies Development: Eliminates the need for manual aggregation logic in mappings. - Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. +By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs. ## Subgraph Best Practices 1-6 diff --git a/website/src/pages/en/subgraphs/billing.mdx b/website/src/pages/en/subgraphs/billing.mdx index df5883975dca..4d0c01d71cff 100644 --- a/website/src/pages/en/subgraphs/billing.mdx +++ b/website/src/pages/en/subgraphs/billing.mdx @@ -4,7 +4,7 @@ title: Billing ## Querying Plans -There are two plans to use when querying subgraphs on The Graph Network. +There are two plans to use when querying Subgraphs on The Graph Network. - **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. diff --git a/website/src/pages/en/subgraphs/cookbook/arweave.mdx b/website/src/pages/en/subgraphs/cookbook/arweave.mdx index 8924a9f56fa5..6a0534c8d5ed 100644 --- a/website/src/pages/en/subgraphs/cookbook/arweave.mdx +++ b/website/src/pages/en/subgraphs/cookbook/arweave.mdx @@ -2,7 +2,7 @@ title: Building Subgraphs on Arweave --- -> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs! +> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs! In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain. @@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are To be able to build and deploy Arweave Subgraphs, you need two packages: -1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. -2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. +1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`. +2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`. ## Subgraph's components -There are three components of a subgraph: +There are three components of a Subgraph: ### 1. Manifest - `subgraph.yaml` @@ -40,22 +40,22 @@ Defines the data sources of interest, and how they should be processed. Arweave Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body. -The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). ### 3. 
AssemblyScript Mappings - `mapping.ts` This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed. -During subgraph development there are two key commands: +During Subgraph development there are two key commands: ``` $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ## Subgraph Manifest Definition -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph: ```yaml specVersion: 0.0.5 @@ -82,7 +82,7 @@ dataSources: - handler: handleTx # the function name in the mapping file ``` -- Arweave subgraphs introduce a new kind of data source (`arweave`) +- Arweave Subgraphs introduce a new kind of data source (`arweave`) - The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet` - Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet @@ -99,7 +99,7 @@ Arweave data sources support two types of handlers: ## Schema Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ## AssemblyScript Mappings @@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi ## Deploying an Arweave Subgraph in Subgraph Studio -Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. +Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command. ```bash graph deploy --access-token @@ -160,25 +160,25 @@ graph deploy --access-token ## Querying an Arweave Subgraph -The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. 
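As a quick sanity check after deployment, you can send a request directly to the endpoint. The URL below is only a placeholder (use the query URL shown for your deployment in Subgraph Studio); the `_meta` field is exposed by graph-node alongside your own entities and reports indexing progress:

```bash
# Placeholder endpoint; substitute the query URL for your own deployment.
curl -s -X POST 'https://api.studio.thegraph.com/query/<ID>/<SUBGRAPH_NAME>/<VERSION>' \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ _meta { block { number } hasIndexingErrors } }"}'
```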
## Example Subgraphs -Here is an example subgraph for reference: +Here is an example Subgraph for reference: -- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) +- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions) ## FAQ -### Can a subgraph index Arweave and other chains? +### Can a Subgraph index Arweave and other chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. ### Can I index the stored files on Arweave? Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions). -### Can I identify Bundlr bundles in my subgraph? +### Can I identify Bundlr bundles in my Subgraph? This is not currently supported. @@ -188,7 +188,7 @@ The source.owner can be the user's public key or account address. ### What is the current encryption format? -Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). +Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/). The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`: diff --git a/website/src/pages/en/subgraphs/cookbook/enums.mdx b/website/src/pages/en/subgraphs/cookbook/enums.mdx index a10970c1539f..9f55ae07c54b 100644 --- a/website/src/pages/en/subgraphs/cookbook/enums.mdx +++ b/website/src/pages/en/subgraphs/cookbook/enums.mdx @@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define ### Example of Enums in Your Schema -If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. +If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned. You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity. @@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab > Note: The following guide uses the CryptoCoven NFT smart contract. 
-To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema: +To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema: ```gql # Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint) @@ -80,7 +80,7 @@ enum Marketplace { ## Using Enums for NFT Marketplaces -Once defined, enums can be used throughout your subgraph to categorize transactions or events. +Once defined, enums can be used throughout your Subgraph to categorize transactions or events. For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum. diff --git a/website/src/pages/en/subgraphs/cookbook/grafting.mdx b/website/src/pages/en/subgraphs/cookbook/grafting.mdx index 57d5169830a7..0a0d7b3bb370 100644 --- a/website/src/pages/en/subgraphs/cookbook/grafting.mdx +++ b/website/src/pages/en/subgraphs/cookbook/grafting.mdx @@ -2,13 +2,13 @@ title: Replace a Contract and Keep its History With Grafting --- -In this guide, you will learn how to build and deploy new subgraphs by grafting existing subgraphs. +In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs. ## What is Grafting? -Grafting reuses the data from an existing subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. Also, it can be used when adding a feature to a subgraph that takes long to index from scratch. +Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -22,35 +22,35 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network ### Why Is This Important? 
-Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio. ### Best Practices -**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected. +**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected. -**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. +**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data. By adhering to these guidelines, you minimize risks and ensure a smoother migration process. ## Building an Existing Subgraph -Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided: - [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial) -> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). +> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit). ## Subgraph Manifest Definition -The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use: +The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use: ```yaml specVersion: 0.0.4 @@ -85,27 +85,27 @@ dataSources: ## Grafting Manifest Definition -Grafting requires adding two new items to the original subgraph manifest: +Grafting requires adding two new items to the original Subgraph manifest: ```yaml --- features: - grafting # feature name graft: - base: Qm... # subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 5956000 # block number ``` - `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). -- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. 
+- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on. -The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting +The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting ## Deploying the Base Subgraph -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` -2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo -3. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example` +2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo +3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground ```graphql { @@ -138,16 +138,16 @@ It returns something like this: } ``` -Once you have verified the subgraph is indexing properly, you can quickly update the subgraph with grafting. +Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting. ## Deploying the Grafting Subgraph The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio. -3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo -4. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement` +2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio. +3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo +4. Once finished, verify the Subgraph is indexing properly. 
If you run the following command in The Graph Playground ```graphql { @@ -185,9 +185,9 @@ It should return the following: } ``` -You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph. +You can see that the `graft-replacement` Subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph. -Congrats! You have successfully grafted a subgraph onto another subgraph. +Congrats! You have successfully grafted a Subgraph onto another Subgraph. ## Additional Resources diff --git a/website/src/pages/en/subgraphs/cookbook/near.mdx b/website/src/pages/en/subgraphs/cookbook/near.mdx index 6060eb27e761..698a0ac3486c 100644 --- a/website/src/pages/en/subgraphs/cookbook/near.mdx +++ b/website/src/pages/en/subgraphs/cookbook/near.mdx @@ -2,17 +2,17 @@ title: Building Subgraphs on NEAR --- -This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). +This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/). ## What is NEAR? [NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. -## What are NEAR subgraphs? +## What are NEAR Subgraphs? -The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts. +The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts. -Subgraphs are event-based, which means that they listen for and then process onchain events. 
There are currently two types of handlers supported for NEAR subgraphs: +Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs: - Block handlers: these are run on every new block - Receipt handlers: run every time a message is executed at a specified account @@ -23,32 +23,32 @@ Subgraphs are event-based, which means that they listen for and then process onc ## Building a NEAR Subgraph -`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs. +`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs. -`@graphprotocol/graph-ts` is a library of subgraph-specific types. +`@graphprotocol/graph-ts` is a library of Subgraph-specific types. -NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. +NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`. -> Building a NEAR subgraph is very similar to building a subgraph that indexes Ethereum. +> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum. -There are three aspects of subgraph definition: +There are three aspects of Subgraph definition: -**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. +**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source. -**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). +**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema). **AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. -During subgraph development there are two key commands: +During Subgraph development there are two key commands: ```bash $ graph codegen # generates types from the schema file identified in the manifest -$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder +$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder ``` ### Subgraph Manifest Definition -The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph: +The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. 
See below for an example Subgraph manifest for a NEAR Subgraph: ```yaml specVersion: 0.0.2 @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR subgraphs introduce a new `kind` of data source (`near`) +- NEAR Subgraphs introduce a new `kind` of data source (`near`) - The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least prefix or suffix must be specified, they will match the any account starting or ending with the list of values respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted. @@ -92,7 +92,7 @@ NEAR data sources support two types of handlers: ### Schema Definition -Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). +Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema). ### AssemblyScript Mappings @@ -165,31 +165,31 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution. This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph -Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). +Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released). Subgraph Studio and the upgrade Indexer on The Graph Network currently supports indexing NEAR mainnet and testnet in beta, with the following network names: - `near-mainnet` - `near-testnet` -More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). +More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/). 
-As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph". +As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph". -Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command: +Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command: ```sh -$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) -$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash +$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI) +$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash ``` -The node configuration will depend on where the subgraph is being deployed. +The node configuration will depend on where the Subgraph is being deployed. ### Subgraph Studio @@ -204,7 +204,7 @@ graph deploy graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 ``` -Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself: +Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself: ```graphql { @@ -228,11 +228,11 @@ We will provide more information on running the above components soon. ## Querying a NEAR Subgraph -The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. +The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information. ## Example Subgraphs -Here are some example subgraphs for reference: +Here are some example Subgraphs for reference: [NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) @@ -242,13 +242,13 @@ Here are some example subgraphs for reference: ### How does the beta work? -NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR subgraphs, and keep you up to date on the latest developments! +NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments! -### Can a subgraph index both NEAR and EVM chains? +### Can a Subgraph index both NEAR and EVM chains? -No, a subgraph can only support data sources from one chain/network. +No, a Subgraph can only support data sources from one chain/network. -### Can subgraphs react to more specific triggers? 
+### Can Subgraphs react to more specific triggers? Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support. @@ -262,21 +262,21 @@ accounts: - mintbase1.near ``` -### Can NEAR subgraphs make view calls to NEAR accounts during mappings? +### Can NEAR Subgraphs make view calls to NEAR accounts during mappings? This is not supported. We are evaluating whether this functionality is required for indexing. -### Can I use data source templates in my NEAR subgraph? +### Can I use data source templates in my NEAR Subgraph? This is not currently supported. We are evaluating whether this functionality is required for indexing. -### Ethereum subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR subgraph? +### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph? -Pending functionality is not yet supported for NEAR subgraphs. In the interim, you can deploy a new version to a different "named" subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" subgraph, which will use the same underlying deployment ID, so the main subgraph will be instantly synced. +Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced. -### My question hasn't been answered, where can I get more help building NEAR subgraphs? +### My question hasn't been answered, where can I get more help building NEAR Subgraphs? -If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. +If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com. ## References diff --git a/website/src/pages/en/subgraphs/cookbook/polymarket.mdx b/website/src/pages/en/subgraphs/cookbook/polymarket.mdx index 2edab84a377b..74efe387b0d7 100644 --- a/website/src/pages/en/subgraphs/cookbook/polymarket.mdx +++ b/website/src/pages/en/subgraphs/cookbook/polymarket.mdx @@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph sidebarTitle: Query Polymarket Data --- -Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. +Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains. 
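As a quick first request, every Subgraph (Polymarket’s included) exposes a `_meta` field, so a minimal sanity-check query like the sketch below can be used to confirm the endpoint is reachable and to see the latest indexed block before moving on to Polymarket-specific entities:

```graphql
{
  _meta {
    block {
      number
    }
    hasIndexingErrors
  }
}
```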
## Polymarket Subgraph on Graph Explorer -You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. +You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query. ![Polymarket Playground](/img/Polymarket-playground.png) ## How to use the Visual Query Editor -The visual query editor helps you test sample queries from your subgraph. +The visual query editor helps you test sample queries from your Subgraph. You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want. @@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on ## Polymarket's GraphQL Schema -The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). +The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql). ### Polymarket Subgraph Endpoint @@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra 1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet 2. Go to https://thegraph.com/studio/apikeys/ to create an API key -You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. +You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket. 100k queries per month are free which is perfect for your side project! @@ -143,6 +143,6 @@ axios(graphQLRequest) ### Additional resources -For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/). -To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/). +To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/). diff --git a/website/src/pages/en/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/en/subgraphs/cookbook/secure-api-keys-nextjs.mdx index fc7e0ff52eb4..e17e594408ff 100644 --- a/website/src/pages/en/subgraphs/cookbook/secure-api-keys-nextjs.mdx +++ b/website/src/pages/en/subgraphs/cookbook/secure-api-keys-nextjs.mdx @@ -4,9 +4,9 @@ title: How to Secure API Keys Using Next.js Server Components ## Overview -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). 
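As an illustrative sketch of how the key is used, a raw HTTP request follows the usual gateway URL pattern, with the API key in the path and the Polymarket Subgraph ID taken from the Explorer link above (the placeholder key and exact endpoint host should be confirmed on the Subgraph’s Explorer page):

```sh
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "{ _meta { block { number } } }"}' \
  "https://gateway-arbitrum.network.thegraph.com/api/<YOUR_API_KEY>/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp"
```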
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a subgraph while also hiding the API key from the frontend. +In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend. ### Caveats @@ -18,7 +18,7 @@ In this cookbook, we will go over how to create a Next.js server component that In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. -### Using client-side rendering to query a subgraph +### Using client-side rendering to query a Subgraph ![Client-side rendering](/img/api-key-client-side-rendering.png) diff --git a/website/src/pages/en/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/en/subgraphs/cookbook/subgraph-debug-forking.mdx index 6610f19da66d..91aa7484d2ec 100644 --- a/website/src/pages/en/subgraphs/cookbook/subgraph-debug-forking.mdx +++ b/website/src/pages/en/subgraphs/cookbook/subgraph-debug-forking.mdx @@ -2,23 +2,23 @@ title: Quick and Easy Subgraph Debugging Using Forks --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging! +As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging! ## Ok, what is it? -**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one). +**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one). -In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_. +In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_. ## What?! How? 
-When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. -In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. +In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state. ## Please, show me some code! -To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. +To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract. Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever: @@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. The usual way to attempt a fix is: 1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't). -2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). +2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node). 3. Wait for it to sync-up. 4. If it breaks again go back to 1, otherwise: Hooray! It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._ -Using **subgraph forking** we can essentially eliminate this step. Here is how it looks: +Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks: 0. Spin-up a local Graph Node with the **_appropriate fork-base_** set. 1. Make a change in the mappings source, which you believe will solve the issue. -2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**. +2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**. 3. If it breaks again, go back to 1, otherwise: Hooray! Now, you may have 2 questions: @@ -69,18 +69,18 @@ Now, you may have 2 questions: And I answer: -1. 
`fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the subgraph's store. +1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store. 2. Forking is easy, no need to sweat: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! +Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork! So, here is what I do: -1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). +1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/). ``` $ cargo run -p graph-node --release -- \ @@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \ ``` 2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex. -3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: +3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash $ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020 ``` 4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working. -5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) +5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho) diff --git a/website/src/pages/en/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/en/subgraphs/cookbook/subgraph-uncrashable.mdx index 0cc91a0fa2c3..a08e2a7ad8c9 100644 --- a/website/src/pages/en/subgraphs/cookbook/subgraph-uncrashable.mdx +++ b/website/src/pages/en/subgraphs/cookbook/subgraph-uncrashable.mdx @@ -2,23 +2,23 @@ title: Safe Subgraph Code Generator --- -[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. 
It ensures that all interactions with entities in your subgraph are completely safe and consistent. +[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent. ## Why integrate with Subgraph Uncrashable? -- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity. +- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity. -- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. +- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic. -- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. **Key Features** -- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. +- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification. - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. +- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. @@ -26,4 +26,4 @@ Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen graph codegen -u [options] [] ``` -Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs. 
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs. diff --git a/website/src/pages/en/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/en/subgraphs/cookbook/transfer-to-the-graph.mdx index 0c8cd7d869ee..6a846e70b194 100644 --- a/website/src/pages/en/subgraphs/cookbook/transfer-to-the-graph.mdx +++ b/website/src/pages/en/subgraphs/cookbook/transfer-to-the-graph.mdx @@ -2,13 +2,13 @@ title: Tranfer to The Graph --- -Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). +Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/). ## Benefits of Switching to The Graph -- Use the same subgraph that your apps already use with zero-downtime migration. +- Use the same Subgraph that your apps already use with zero-downtime migration. - Increase reliability from a global network supported by 100+ Indexers. -- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team. +- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team. ## Upgrade Your Subgraph to The Graph in 3 Easy Steps @@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n ### Create a Subgraph in Subgraph Studio - Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". -> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly. +> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly. ### Install the Graph CLI⁠ @@ -37,7 +37,7 @@ Using [npm](https://www.npmjs.com/): npm install -g @graphprotocol/graph-cli@latest ``` -Use the following command to create a subgraph in Studio using the CLI: +Use the following command to create a Subgraph in Studio using the CLI: ```sh graph init --product subgraph-studio @@ -53,7 +53,7 @@ graph auth ## 2. Deploy Your Subgraph to Studio -If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph. +If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph. In The Graph CLI, run the following command: @@ -62,7 +62,7 @@ graph deploy --ipfs-hash ``` -> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). +> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1). ## 3. Publish Your Subgraph to The Graph Network @@ -70,17 +70,17 @@ graph deploy --ipfs-hash ### Query Your Subgraph -> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph. 
+> To attract about 3 Indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
 
-You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
 
 #### Example
 
-[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
 
 ![Query URL](/img/cryptopunks-screenshot-transfer.png)
 
-The query URL for this subgraph is:
+The query URL for this Subgraph is:
 
 ```sh
 https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK
@@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the
 
 ### Monitor Subgraph Status
 
-Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
 
 ### Additional Resources
 
-- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/).
-- To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/en/subgraphs/developing/creating/advanced.mdx b/website/src/pages/en/subgraphs/developing/creating/advanced.mdx
index 14f8e608abdf..82ccd6f0b45a 100644
--- a/website/src/pages/en/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/en/subgraphs/developing/creating/advanced.mdx
@@ -4,9 +4,9 @@ title: Advanced Subgraph Features
 
 ## Overview
 
-Add and implement advanced subgraph features to enhanced your subgraph's built.
+Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: +Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below: | Feature | Name | | ---------------------------------------------------- | ---------------- | @@ -14,7 +14,7 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | -For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: +For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be: ```yaml specVersion: 0.0.4 @@ -25,7 +25,7 @@ features: dataSources: ... ``` -> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used. +> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used. ## Timeseries and Aggregations @@ -33,9 +33,9 @@ Prerequisites: - Subgraph specVersion must be ≥1.1.0. -Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more. +Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more. -This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. +This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL. ### Example Schema @@ -97,11 +97,11 @@ Aggregation entities are automatically calculated on the basis of the specified ## Non-fatal errors -Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. +Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. 
This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic. -> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio. +> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio. -Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest: +Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest: ```yaml specVersion: 0.0.4 @@ -111,7 +111,7 @@ features: ... ``` -The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example: +The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example: ```graphql foos(first: 100, subgraphError: allow) { @@ -123,7 +123,7 @@ _meta { } ``` -If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: +If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response: ```graphql "data": { @@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a ## IPFS/Arweave File Data Sources -File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. +File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave. > This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. @@ -290,7 +290,7 @@ Example: import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' -//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. +//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { let token = Token.load(event.params.tokenId.toString()) @@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity. -> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file +> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file Congratulations, you are using file data sources! 
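For context, the `TokenMetadataTemplate` imported in the handler above corresponds to a templated data source of `kind: file/ipfs` declared in the manifest. A minimal sketch of such a declaration (file paths, `apiVersion`, and the ABI entry are illustrative) might look like this:

```yaml
templates:
  - name: TokenMetadata
    kind: file/ipfs
    mapping:
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      handler: handleMetadata
      entities:
        - TokenMetadata
      abis:
        - name: Token
          file: ./abis/Token.json
```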
-#### Deploying your subgraphs +#### Deploying your Subgraphs -You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0. +You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0. #### Limitations -File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: +File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific: - Entities created by File Data Sources are immutable, and cannot be updated - File Data Source handlers cannot access entities from other file data sources - Entities associated with File Data Sources cannot be accessed by chain-based handlers -> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph! +> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph! Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future. @@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra > **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. +- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. ### How Topic Filters Work -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments. - The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. 
@@ -401,7 +401,7 @@ In this example: #### Configuration in Subgraphs -Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured: +Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured: ```yaml eventHandlers: @@ -436,7 +436,7 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver. -- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. +- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`. #### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses @@ -452,17 +452,17 @@ In this configuration: - `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender. - `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver. -- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. +- The Subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses. ## Declared eth_call > Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node. -Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. +Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel. This feature does the following: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. +- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency. - Allows faster data fetching, resulting in quicker query responses and a better user experience. - Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. @@ -474,7 +474,7 @@ This feature does the following: #### Scenario without Declarative `eth_calls` -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. Traditionally, these calls might be made sequentially: @@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds #### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. 2. 
Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing. #### Example Configuration in Subgraph Manifest Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. -`Subgraph.yaml` using `event.address`: +`subgraph.yaml` using `event.address`: ```yaml eventHandlers: @@ -524,7 +524,7 @@ Details for the example above: - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. -`Subgraph.yaml` using `event.params` +`subgraph.yaml` using `event.params` ```yaml calls: @@ -535,22 +535,22 @@ calls: > **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network). -When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. +When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. -A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: +A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level: ```yaml description: ... graft: - base: Qm... # Subgraph ID of base subgraph + base: Qm... # Subgraph ID of base Subgraph block: 7345624 # Block number ``` -When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph. +When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. 
The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph. -Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. +Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied. -The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways: +The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways: - It adds or removes entity types - It removes attributes from entity types @@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o - It adds or removes interfaces - It changes for which entity types an interface is implemented -> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest. +> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest. diff --git a/website/src/pages/en/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/en/subgraphs/developing/creating/assemblyscript-mappings.mdx index 2ac894695fe1..cd81dc118f28 100644 --- a/website/src/pages/en/subgraphs/developing/creating/assemblyscript-mappings.mdx +++ b/website/src/pages/en/subgraphs/developing/creating/assemblyscript-mappings.mdx @@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled. -In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: +In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events: ```javascript import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity' @@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil ## Code Generation -In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources. 
+In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources. This is done with @@ -80,7 +80,7 @@ This is done with graph codegen [--output-dir ] [] ``` -but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: +but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same: ```sh # Yarn @@ -90,7 +90,7 @@ yarn codegen npm run codegen ``` -This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. +This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with. ```javascript import { @@ -102,12 +102,12 @@ import { } from '../generated/Gravity/Gravity' ``` -In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with +In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with ```javascript import { Gravatar } from '../generated/schema' ``` -> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph. +> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph. -Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. +Code generation does not check your mapping code in `src/mapping.ts`. 
If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find. diff --git a/website/src/pages/en/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/en/subgraphs/developing/creating/graph-ts/api.mdx index d9099127799c..c36983307869 100644 --- a/website/src/pages/en/subgraphs/developing/creating/graph-ts/api.mdx +++ b/website/src/pages/en/subgraphs/developing/creating/graph-ts/api.mdx @@ -2,12 +2,12 @@ title: AssemblyScript API --- -> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). +> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/). -Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: +Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box: - The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) -- Code generated from subgraph files by `graph codegen` +- Code generated from Subgraph files by `graph codegen` You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). @@ -27,7 +27,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs: ### Versions -The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. +The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph. | Version | Release notes | | :-: | --- | @@ -217,7 +217,7 @@ import { store } from '@graphprotocol/graph-ts' The `store` API allows to load, save and remove entities from and to the Graph Node store. -Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. +Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities. #### Creating entities @@ -276,8 +276,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco The store API facilitates the retrieval of entities that were created or updated in the current block. 
A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists. -- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. -- For some subgraphs, these missed lookups can contribute significantly to the indexing time. +- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some Subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -374,11 +374,11 @@ The Ethereum API provides access to smart contracts, public state variables, con #### Support for Ethereum Types -As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. +As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder. -With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them. +With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them. -The following example illustrates this. Given a subgraph schema like +The following example illustrates this. Given a Subgraph schema like ```graphql type Transfer @entity { @@ -477,7 +477,7 @@ class Log { #### Access to Smart Contract State -The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block. +The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block. A common pattern is to access the contract from which an event originates. This is achieved with the following code: @@ -500,7 +500,7 @@ export function handleTransfer(event: TransferEvent) { As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically. -Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address. +Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address. 
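For context, here is a minimal sketch of that pattern — binding the generated `ERC20Contract` class from the example above to an explicit address rather than `event.address`. The hard-coded address and the helper name are placeholders for illustration, not part of the original example:

```typescript
import { Address } from '@graphprotocol/graph-ts'
// Generated by `graph codegen` from the ERC20 ABI referenced in subgraph.yaml
import { ERC20Contract } from '../generated/ERC20Contract/ERC20Contract'

// Placeholder address of another contract the Subgraph cares about.
const OTHER_TOKEN = '0x0000000000000000000000000000000000000000'

export function readOtherTokenSymbol(): string {
  // Bind the generated class to any valid address, not just the emitting contract.
  let token = ERC20Contract.bind(Address.fromString(OTHER_TOKEN))

  // The call runs against contract state as of the block currently being indexed.
  return token.symbol()
}
```

If the bound contract might not implement the function, the `try_`-prefixed variants covered in the next section let the mapping handle a revert instead of aborting.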
#### Handling Reverted Calls @@ -576,7 +576,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false import { log } from '@graphprotocol/graph-ts' ``` -The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. +The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. The `log` API includes the following functions: @@ -584,7 +584,7 @@ The `log` API includes the following functions: - `log.info(fmt: string, args: Array): void` - logs an informational message. - `log.warning(fmt: string, args: Array): void` - logs a warning. - `log.error(fmt: string, args: Array): void` - logs an error message. -- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph. +- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph. The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. @@ -715,7 +715,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. -On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed. +On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed. ### Crypto API @@ -830,7 +830,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to ### DataSourceContext in Manifest -The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. +The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Here is a YAML example illustrating the usage of various types in the `context` section: @@ -881,4 +881,4 @@ dataSources: - `List`: Specifies a list of items. Each item needs to specify its type and data. - `BigInt`: Specifies a large integer value. Must be quoted due to its large size. -This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs. 
+This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs. diff --git a/website/src/pages/en/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/en/subgraphs/developing/creating/graph-ts/common-issues.mdx index f8d0c9c004c2..65e8e3d4a8a3 100644 --- a/website/src/pages/en/subgraphs/developing/creating/graph-ts/common-issues.mdx +++ b/website/src/pages/en/subgraphs/developing/creating/graph-ts/common-issues.mdx @@ -2,7 +2,7 @@ title: Common AssemblyScript Issues --- -There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: +There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues: - `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object. - Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s). diff --git a/website/src/pages/en/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/en/subgraphs/developing/creating/install-the-cli.mdx index bf6332298f06..c9d6966ef5fe 100644 --- a/website/src/pages/en/subgraphs/developing/creating/install-the-cli.mdx +++ b/website/src/pages/en/subgraphs/developing/creating/install-the-cli.mdx @@ -2,11 +2,11 @@ title: Install the Graph CLI --- -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). +> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/). ## Overview -The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. +The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. 
It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network. ## Getting Started @@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. +The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started. ## Create a Subgraph ### From an Existing Contract -The following command creates a subgraph that indexes all events of an existing contract: +The following command creates a Subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,25 +51,25 @@ graph init \ - If any of the optional arguments are missing, it guides you through an interactive form. -- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. +- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page. ### From an Example Subgraph -The following command initializes a new project from an example subgraph: +The following command initializes a new project from an example Subgraph: ```sh graph init --from-example=example-subgraph ``` -- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. +- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. +- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. ### Add New `dataSources` to an Existing Subgraph -`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. +`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them. -Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: +Recent versions of the Graph CLI supports adding new `dataSources` to an existing Subgraph through the `graph add` command: ```sh graph add
[] @@ -101,5 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files: - If you are building your own project, you will likely have access to your most current ABIs. -- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. -- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail. +- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile. +- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail. diff --git a/website/src/pages/en/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/en/subgraphs/developing/creating/ql-schema.mdx index 27562f970620..7e0f889447c5 100644 --- a/website/src/pages/en/subgraphs/developing/creating/ql-schema.mdx +++ b/website/src/pages/en/subgraphs/developing/creating/ql-schema.mdx @@ -4,7 +4,7 @@ title: The Graph QL Schema ## Overview -The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. +The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language. > Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section. @@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar Before defining entities, it is important to take a step back and think about how your data is structured and linked. -- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform. +- All queries will be made against the data model defined in the Subgraph schema. As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform. - It may be useful to imagine entities as "objects containing data", rather than as events or functions. - You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type. - Each type that should be an entity is required to be annotated with an `@entity` directive. @@ -141,7 +141,7 @@ type TokenBalance @entity { Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. 
For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived. -For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical. +For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical. #### Example @@ -160,7 +160,7 @@ type TokenBalance @entity { } ``` -Here is an example of how to write a mapping for a subgraph with reverse lookups: +Here is an example of how to write a mapping for a Subgraph with reverse lookups: ```typescript let token = new Token(event.address) // Create Token @@ -231,7 +231,7 @@ query usersWithOrganizations { } ``` -This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query. +This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query. ### Adding comments to the schema @@ -287,7 +287,7 @@ query { } ``` -> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest. +> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest. ## Languages supported diff --git a/website/src/pages/en/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/en/subgraphs/developing/creating/starting-your-subgraph.mdx index 810ca65ddb85..180a343470b1 100644 --- a/website/src/pages/en/subgraphs/developing/creating/starting-your-subgraph.mdx +++ b/website/src/pages/en/subgraphs/developing/creating/starting-your-subgraph.mdx @@ -4,21 +4,21 @@ title: Starting Your Subgraph ## Overview -The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. +The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs. -When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. +When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL. -Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs. 
+Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs. ### Start Building -Start the process and build a subgraph that matches your needs: +Start the process and build a Subgraph that matches your needs: 1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure -2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component +2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component 3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema 4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings -5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features +5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/). @@ -26,7 +26,7 @@ Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/ | :-: | --- | | 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | | 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | | 0.0.9 | Supports `endBlock` feature | | 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | | 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | diff --git a/website/src/pages/en/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/en/subgraphs/developing/creating/subgraph-manifest.mdx index 5ab207de31aa..0de2f721353e 100644 --- a/website/src/pages/en/subgraphs/developing/creating/subgraph-manifest.mdx +++ b/website/src/pages/en/subgraphs/developing/creating/subgraph-manifest.mdx @@ -4,19 +4,19 @@ title: Subgraph Manifest ## Overview -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. 
-The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) ### Subgraph Capabilities -A single subgraph can: +A single Subgraph can: - Index data from multiple smart contracts (but not multiple networks). @@ -24,9 +24,9 @@ A single subgraph can: - Add an entry for each contract that requires indexing to the `dataSources` array. -The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). -For the example subgraph listed above, `subgraph.yaml` is: +For the example Subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -79,47 +79,47 @@ dataSources: ## Subgraph Entries -> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). +> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/). The important entries to update for the manifest are: -- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. +- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases. -- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio. +- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio. -- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer. +- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer. - `features`: a list of all used [feature](#experimental-features) names. -- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. +- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section. -- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. 
+- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts. - `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created. - `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`. -- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development. +- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development. - `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file. - `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings. -- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. +- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store. -- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. +- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store. -- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. +- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract. -A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. 
+A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array. ## Event Handlers -Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. +Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic. ### Defining an Event Handler -An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. +An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected. ```yaml dataSources: @@ -149,11 +149,11 @@ dataSources: ## Call Handlers -While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. +While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured. Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract. -> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network. +> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a Subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. 
These are far more performant than call handlers, and are supported on every evm network. ### Defining a Call Handler @@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han ### Mapping Function -Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: +Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument: ```typescript import { CreateGravatarCall } from '../generated/Gravity/Gravity' @@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a ## Block Handlers -In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter. +In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this a Subgraph can run a function after every block or after blocks that match a pre-defined filter. ### Supported Filters @@ -218,7 +218,7 @@ filter: _The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._ -> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. +> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a Subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing. The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type. @@ -261,7 +261,7 @@ blockHandlers: every: 10 ``` -The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals. +The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals. #### Once Filter @@ -276,7 +276,7 @@ blockHandlers: kind: once ``` -The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. +The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing. 
```ts export function handleOnce(block: ethereum.Block): void { @@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void { ### Mapping Function -The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities. +The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities. ```typescript import { ethereum } from '@graphprotocol/graph-ts' @@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them. -To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. +To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false. ```yaml eventHandlers: @@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ ## Start Blocks -The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. +The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created. ```yaml dataSources: @@ -488,13 +488,13 @@ dataSources: ## Indexer Hints -The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. +The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning. > This feature is available from `specVersion: 1.0.0` ### Prune -`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: +`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include: 1. `"never"`: No pruning of historical data; retains the entire history. 2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. @@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde prune: auto ``` -> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities. 
+> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities. History as of a given block is required for: -- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history -- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block -- Rewinding the subgraph back to that block +- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history +- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block +- Rewinding the Subgraph back to that block If historical data as of the block has been pruned, the above capabilities will not be available. > Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data. -For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings: +For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings: To retain a specific amount of historical data: @@ -539,7 +539,7 @@ indexerHints: | :-: | --- | | 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | | 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs | | 0.0.9 | Supports `endBlock` feature | | 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). | | 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). | diff --git a/website/src/pages/en/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/en/subgraphs/developing/creating/unit-testing-framework.mdx index 2133c1d4b5c9..fdfd2116b563 100644 --- a/website/src/pages/en/subgraphs/developing/creating/unit-testing-framework.mdx +++ b/website/src/pages/en/subgraphs/developing/creating/unit-testing-framework.mdx @@ -2,12 +2,12 @@ title: Unit Testing Framework --- -Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs. +Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). 
Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs. ## Benefits of Using Matchstick - It's written in Rust and optimized for high performance. -- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more. +- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more. ## Getting Started @@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra ### Using Matchstick -To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). +To use **Matchstick** in your Subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified). ### CLI options @@ -113,7 +113,7 @@ graph test path/to/file.test.ts ```sh -c, --coverage Run the tests in coverage mode --d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph) +-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph) -f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image. -h, --help Show usage information -l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes) @@ -145,13 +145,13 @@ libsFolder: path/to/libs manifestPath: path/to/subgraph.yaml ``` -### Demo subgraph +### Demo Subgraph You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph) ### Video tutorials -Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) +Also you can check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h) ## Tests structure @@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im There we go - we've created our first test! 👏 -Now in order to run our tests you simply need to run the following in your subgraph root folder: +Now in order to run our tests you simply need to run the following in your Subgraph root folder: `graph test Gravity` @@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => { ## Test Coverage -Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. +Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests. 
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it have been properly mocked. @@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as ## Additional Resources -For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). +For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_). ## Feedback diff --git a/website/src/pages/en/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/en/subgraphs/developing/deploying/multiple-networks.mdx index 1c324352d372..3b2b1bbc70ae 100644 --- a/website/src/pages/en/subgraphs/developing/deploying/multiple-networks.mdx +++ b/website/src/pages/en/subgraphs/developing/deploying/multiple-networks.mdx @@ -3,11 +3,11 @@ title: Deploying a Subgraph to Multiple Networks sidebarTitle: Deploying to Multiple Networks --- -This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/). +This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## Deploying the subgraph to multiple networks +## Deploying the Subgraph to multiple networks -In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. +In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. ### Using `graph-cli` @@ -21,7 +21,7 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development. > Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. @@ -55,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit > Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. 
If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. -Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: +Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: ```yaml # ... @@ -97,7 +97,7 @@ yarn build --network sepolia yarn build --network sepolia --network-file path/to/config ``` -The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this: ```yaml # ... @@ -128,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). -To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: +To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: ```json { @@ -180,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional } ``` -To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: +To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands: ```sh # Mainnet: @@ -194,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e **Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. 
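Returning to the Mustache/Handlebars approach described just above: the templated manifest (often named something like `subgraph.template.yaml`, though the name is a convention rather than a requirement) replaces the hard-coded values with placeholders, roughly along these lines:

```yaml
# subgraph.template.yaml (sketch): {{network}} and {{address}} are rendered
# from the per-network config files by mustache/handlebars at build time
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: '{{network}}'
    source:
      address: '{{address}}'
      abi: Gravity
```

Each of the two `package.json` commands mentioned above would then render this template with the matching config file, for example `mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml`, before building and deploying.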
-## Subgraph Studio subgraph archive policy +## Subgraph Studio Subgraph archive policy -A subgraph version in Studio is archived if and only if it meets the following criteria: +A Subgraph version in Studio is archived if and only if it meets the following criteria: - The version is not published to the network (or pending publish) - The version was created 45 or more days ago -- The subgraph hasn't been queried in 30 days +- The Subgraph hasn't been queried in 30 days -In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. +In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived. -Every subgraph affected with this policy has an option to bring the version in question back. +Every Subgraph affected with this policy has an option to bring the version in question back. -## Checking subgraph health +## Checking Subgraph health -If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. +If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. -Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: +Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph: ```graphql { @@ -239,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of } ``` -This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error. 
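To make that comparison concrete, a response to the status query might look roughly like the following (field shapes follow the index-node schema linked above; the block numbers are invented). In this sketch the Subgraph is healthy and two blocks behind the chain head:

```json
{
  "data": {
    "indexingStatusForCurrentVersion": {
      "synced": true,
      "health": "healthy",
      "fatalError": null,
      "chains": [
        {
          "chainHeadBlock": { "number": "21000010" },
          "latestBlock": { "number": "21000008" }
        }
      ]
    }
  }
}
```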
diff --git a/website/src/pages/en/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/en/subgraphs/developing/deploying/using-subgraph-studio.mdx index f709251b39ed..77d10212c770 100644 --- a/website/src/pages/en/subgraphs/developing/deploying/using-subgraph-studio.mdx +++ b/website/src/pages/en/subgraphs/developing/deploying/using-subgraph-studio.mdx @@ -2,23 +2,23 @@ title: Deploying Using Subgraph Studio --- -Learn how to deploy your subgraph to Subgraph Studio. +Learn how to deploy your Subgraph to Subgraph Studio. -> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain. +> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain. ## Subgraph Studio Overview In [Subgraph Studio](https://thegraph.com/studio/), you can do the following: -- View a list of subgraphs you've created -- Manage, view details, and visualize the status of a specific subgraph -- Create and manage your API keys for specific subgraphs +- View a list of Subgraphs you've created +- Manage, view details, and visualize the status of a specific Subgraph +- Create and manage your API keys for specific Subgraphs - Restrict your API keys to specific domains and allow only certain Indexers to query with them -- Create your subgraph -- Deploy your subgraph using The Graph CLI -- Test your subgraph in the playground environment -- Integrate your subgraph in staging using the development query URL -- Publish your subgraph to The Graph Network +- Create your Subgraph +- Deploy your Subgraph using The Graph CLI +- Test your Subgraph in the playground environment +- Integrate your Subgraph in staging using the development query URL +- Publish your Subgraph to The Graph Network - Manage your billing ## Install The Graph CLI @@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli 1. Open [Subgraph Studio](https://thegraph.com/studio/). 2. Connect your wallet to sign in. - You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe. -3. After you sign in, your unique deploy key will be displayed on your subgraph details page. - - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. +3. After you sign in, your unique deploy key will be displayed on your Subgraph details page. + - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised. -> Important: You need an API key to query subgraphs +> Important: You need an API key to query Subgraphs ### How to Create a Subgraph in Subgraph Studio @@ -57,25 +57,25 @@ npm install -g @graphprotocol/graph-cli ### Subgraph Compatibility with The Graph Network -To be supported by Indexers on The Graph Network, subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. +To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). 
For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo. ## Initialize Your Subgraph -Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: +Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command: ```bash graph init <SUBGRAPH_SLUG> ``` -You can find the `<SUBGRAPH_SLUG>` value on your subgraph details page in Subgraph Studio, see image below: +You can find the `<SUBGRAPH_SLUG>` value on your Subgraph details page in Subgraph Studio, see image below: ![Subgraph Studio - Slug](/img/doc-subgraph-slug.png) -After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected. +After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected. ## Graph Auth -Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page. +Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page. Then, use the following command to authenticate from the CLI: @@ -85,11 +85,11 @@ graph auth <DEPLOY_KEY> ## Deploying a Subgraph -Once you are ready, you can deploy your subgraph to Subgraph Studio. +Once you are ready, you can deploy your Subgraph to Subgraph Studio. -> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network. +> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network. -Use the following CLI command to deploy your subgraph: +Use the following CLI command to deploy your Subgraph: ```bash graph deploy <SUBGRAPH_SLUG> @@ -102,30 +102,30 @@ After running this command, the CLI will ask for a version label. ## Testing Your Subgraph -After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. +After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready. -Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph. +Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph. ## Publish Your Subgraph -In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
+In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). ## Versioning Your Subgraph with the CLI -If you want to update your subgraph, you can do the following: +If you want to update your Subgraph, you can do the following: - You can deploy a new version to Studio using the CLI (it will only be private at this point). - Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer). -- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index. +- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index. -You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment. +You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment. -> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). +> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/). ## Automatic Archiving of Subgraph Versions -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio. +Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio. -> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived. +> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived. 
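Because each new deployment archives the previous version, it helps to give every deployment an explicit, meaningful version label. Depending on your `graph-cli` version, the label can be supplied interactively (as described above) or passed as a flag; the slug and label below are placeholders, and `graph deploy --help` lists the exact options your installation supports:

```bash
# <SUBGRAPH_SLUG> and v0.0.2 are placeholders; flag availability depends on your graph-cli version
graph deploy <SUBGRAPH_SLUG> --version-label v0.0.2
```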
![Subgraph Studio - Unarchive](/img/Unarchive.png) diff --git a/website/src/pages/en/subgraphs/developing/developer-faq.mdx b/website/src/pages/en/subgraphs/developing/developer-faq.mdx index 8dbe6d23ad39..e45141294523 100644 --- a/website/src/pages/en/subgraphs/developing/developer-faq.mdx +++ b/website/src/pages/en/subgraphs/developing/developer-faq.mdx @@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o ## Subgraph Related -### 1. What is a subgraph? +### 1. What is a Subgraph? -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. +A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query. -### 2. What is the first step to create a subgraph? +### 2. What is the first step to create a Subgraph? -To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). +To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -### 3. Can I still create a subgraph if my smart contracts don't have events? +### 3. Can I still create a Subgraph if my smart contracts don't have events? -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data. -If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. +If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -### 4. Can I change the GitHub account associated with my subgraph? +### 4. Can I change the GitHub account associated with my Subgraph? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. +No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph. -### 5. How do I update a subgraph on mainnet? +### 5. How do I update a Subgraph on mainnet? -You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. 
This will create a new version of your subgraph that Curators can start signaling on. +You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on. -### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? +### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying? -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. +You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. -### 7. How do I call a contract function or access a public state variable from my subgraph mappings? +### 7. How do I call a contract function or access a public state variable from my Subgraph mappings? Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state). -### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? +### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings? Not currently, as mappings are written in AssemblyScript. @@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p ### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. ### 10. How are templates different from data sources? -Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. +Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates). -### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? +### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. 
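To illustrate the answer above, a manifest with two contracts is simply a `dataSources` list with two entries. The names, network, and addresses below are placeholders, and each entry still needs its usual `mapping` block:

```yaml
dataSources:
  - kind: ethereum/contract
    name: ContractA
    network: mainnet
    source:
      address: '0x1111111111111111111111111111111111111111' # placeholder
      abi: ContractA
    # ...mapping block (ABIs, entities, event handlers) for ContractA
  - kind: ethereum/contract
    name: ContractB
    network: mainnet
    source:
      address: '0x2222222222222222222222222222222222222222' # placeholder
      abi: ContractB
    # ...mapping block for ContractB
```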
@@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -### 15. Can I delete my subgraph? +### 15. Can I delete my Subgraph? -Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph. +Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph. ## Network Related @@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks) -### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... } ### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. ## Miscellaneous diff --git a/website/src/pages/en/subgraphs/developing/introduction.mdx b/website/src/pages/en/subgraphs/developing/introduction.mdx index c7938efb9c8e..2e313eed8924 100644 --- a/website/src/pages/en/subgraphs/developing/introduction.mdx +++ b/website/src/pages/en/subgraphs/developing/introduction.mdx @@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin On The Graph, you can: -1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). -2. Use GraphQL to query existing subgraphs. +1. 
Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing Subgraphs. ### What is GraphQL? -- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. ### Developer Actions -- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. -- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. -- Deploy, publish and signal your subgraphs within The Graph Network. +- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your Subgraphs within The Graph Network. -### What are subgraphs? +### What are Subgraphs? -A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. +A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. -Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. +Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics. diff --git a/website/src/pages/en/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/en/subgraphs/developing/managing/deleting-a-subgraph.mdx index f400044bb8fb..2d71b35a5d86 100644 --- a/website/src/pages/en/subgraphs/developing/managing/deleting-a-subgraph.mdx +++ b/website/src/pages/en/subgraphs/developing/managing/deleting-a-subgraph.mdx @@ -2,28 +2,28 @@ title: Deleting a Subgraph --- -Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/). +Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/). -> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. +> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it. ## Step-by-Step -1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). +1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/). 2. Click on the three-dots to the right of the "publish" button. -3. Click on the option to "delete this subgraph": +3. Click on the option to "delete this Subgraph": ![Delete-subgraph](/img/Delete-subgraph.png) -4. Depending on the subgraph's status, you will be prompted with various options. +4. Depending on the Subgraph's status, you will be prompted with various options. - - If the subgraph is not published, simply click “delete” and confirm. - - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. 
If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. + - If the Subgraph is not published, simply click “delete” and confirm. + - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required. -> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner. +> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner. ### Important Reminders -- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. -- Curators will not be able to signal on the subgraph anymore. -- Curators that already signaled on the subgraph can withdraw their signal at an average share price. -- Deleted subgraphs will show an error message. +- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal. +- Curators will not be able to signal on the Subgraph anymore. +- Curators that already signaled on the Subgraph can withdraw their signal at an average share price. +- Deleted Subgraphs will show an error message. diff --git a/website/src/pages/en/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/en/subgraphs/developing/managing/transferring-a-subgraph.mdx index c8eed0425d03..328646055e06 100644 --- a/website/src/pages/en/subgraphs/developing/managing/transferring-a-subgraph.mdx +++ b/website/src/pages/en/subgraphs/developing/managing/transferring-a-subgraph.mdx @@ -2,18 +2,18 @@ title: Transferring a Subgraph --- -Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. +Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. ## Reminders -- Whoever owns the NFT controls the subgraph. -- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. -- You can easily move control of a subgraph to a multi-sig. -- A community member can create a subgraph on behalf of a DAO. +- Whoever owns the NFT controls the Subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network. +- You can easily move control of a Subgraph to a multi-sig. +- A community member can create a Subgraph on behalf of a DAO. ## View Your Subgraph as an NFT -To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: +To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: ``` https://opensea.io/your-wallet-address @@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres ## Step-by-Step -To transfer ownership of a subgraph, do the following: +To transfer ownership of a Subgraph, do the following: 1. Use the UI built into Subgraph Studio: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) -2. 
Choose the address that you would like to transfer the subgraph to: +2. Choose the address that you would like to transfer the Subgraph to: ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) diff --git a/website/src/pages/en/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/en/subgraphs/developing/publishing/publishing-a-subgraph.mdx index 32ed9b96d5f3..f894f3d37dc2 100644 --- a/website/src/pages/en/subgraphs/developing/publishing/publishing-a-subgraph.mdx +++ b/website/src/pages/en/subgraphs/developing/publishing/publishing-a-subgraph.mdx @@ -3,9 +3,9 @@ title: Publishing a Subgraph to the Decentralized Network sidebarTitle: Publishing to the Decentralized Network --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. +Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network. -When you publish a subgraph to the decentralized network, you make it available for: +When you publish a Subgraph to the decentralized network, you make it available for: - [Curators](/resources/roles/curating/) to begin curating it. - [Indexers](/indexing/overview/) to begin indexing it. @@ -18,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/). 1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard 2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +All published versions of an existing Subgraph can: - Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/). -- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published. +- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published. -### Updating metadata for a published subgraph +### Updating metadata for a published Subgraph -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. +- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. - Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. - It's important to note that this process will not create a new version since your deployment has not changed. ## Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 1. Open the `graph-cli`. 2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. 
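Put together, the commands quoted in step 2 above amount to a short shell session, run from the Subgraph's root folder:

```sh
# Generate types, build the Subgraph, then start the interactive publish flow
graph codegen && graph build
graph publish
```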
![cli-ui](/img/cli-ui.png) ### Customizing your deployment -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags: ``` USAGE @@ -62,33 +62,33 @@ FLAGS ``` -## Adding signal to your subgraph +## Adding signal to your Subgraph -Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph. +Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph. -- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. +- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled. -- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). +- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). - Specific supported networks can be checked [here](/supported-networks/). -> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers. +> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers. -> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph. +> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional indexers to index your Subgraph. -The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. +The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. +When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -Indexers can find subgraphs to index based on curation signals they see in Graph Explorer. +Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer. 
-![Explorer subgraphs](/img/explorer-subgraphs.png) +![Explorer Subgraphs](/img/explorer-subgraphs.png) -Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published. +Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published. ![Curation Pool](/img/curate-own-subgraph-tx.png) -Alternatively, you can add GRT signal to a published subgraph from Graph Explorer. +Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer. ![Signal from Explorer](/img/signal-from-explorer.png) diff --git a/website/src/pages/en/subgraphs/developing/subgraphs.mdx b/website/src/pages/en/subgraphs/developing/subgraphs.mdx index 951ec74234d1..b5a75a88e94f 100644 --- a/website/src/pages/en/subgraphs/developing/subgraphs.mdx +++ b/website/src/pages/en/subgraphs/developing/subgraphs.mdx @@ -4,83 +4,83 @@ title: Subgraphs ## What is a Subgraph? -A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. +A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL. ### Subgraph Capabilities - **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3. -- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/). -- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). +- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/). +- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer). ## Inside a Subgraph -The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. +The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -The **subgraph definition** consists of the following files: +The **Subgraph definition** consists of the following files: -- `subgraph.yaml`: Contains the subgraph manifest +- `subgraph.yaml`: Contains the Subgraph manifest -- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL +- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL - `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema -To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/). +To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/). 
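To make the three files above concrete, a minimal `schema.graphql` might define a single entity along these lines (the `Gravatar` entity and its fields are purely illustrative; your schema will reflect your own contracts):

```graphql
type Gravatar @entity {
  id: ID!              # unique identifier, typically derived from event data
  owner: Bytes!        # address that owns the gravatar
  displayName: String!
  imageUrl: String!
}
```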
## Subgraph Lifecycle -Here is a general overview of a subgraph’s lifecycle: +Here is a general overview of a Subgraph’s lifecycle: ![Subgraph Lifecycle](/img/subgraph-lifecycle.png) ## Subgraph Development -1. [Create a subgraph](/developing/creating-a-subgraph/) -2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/) -3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) -4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) -5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) +1. [Create a Subgraph](/developing/creating-a-subgraph/) +2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/) +3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio) +4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) +5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph) ### Build locally -Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs. +Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs. ### Deploy to Subgraph Studio -Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: +Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -- Use its staging environment to index the deployed subgraph and make it available for review. -- Verify that your subgraph doesn't have any indexing errors and works as expected. +- Use its staging environment to index the deployed Subgraph and make it available for review. +- Verify that your Subgraph doesn't have any indexing errors and works as expected. ### Publish to the Network -When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. +When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network. -- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers. -- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. -- Published subgraphs have associated metadata, which provides other network participants with useful context and information. +- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers. 
+- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT. +- Published Subgraphs have associated metadata, which provides other network participants with useful context and information. ### Add Curation Signal for Indexing -Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. +Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph. #### What is signal? -- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. -- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume. ### Querying & Application Development Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/). -Learn more about [querying subgraphs](/subgraphs/querying/introduction/). +Learn more about [querying Subgraphs](/subgraphs/querying/introduction/). ### Updating Subgraphs -To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. +To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. -- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying. ### Deleting & Transferring Subgraphs -If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). +If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. 
Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/). diff --git a/website/src/pages/en/subgraphs/explorer.mdx b/website/src/pages/en/subgraphs/explorer.mdx index f29f2a3602d9..499fcede88d3 100644 --- a/website/src/pages/en/subgraphs/explorer.mdx +++ b/website/src/pages/en/subgraphs/explorer.mdx @@ -2,11 +2,11 @@ title: Graph Explorer --- -Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). +Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer). ## Overview -Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. +Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile. ## Inside Explorer @@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi ### Subgraphs Page -After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: +After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following: -- Your own finished subgraphs +- Your own finished Subgraphs - Subgraphs published by others -- The exact subgraph you want (based on the date created, signal amount, or name). +- The exact Subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you will be able to do the following: +When you click into a Subgraph, you will be able to do the following: - Test queries in the playground and be able to leverage network details to make informed decisions. -- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality. - - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. + - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. 
![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, you can do the following: +On each Subgraph’s dedicated page, you can do the following: -- Signal/Un-signal on subgraphs +- Signal/Un-signal on Subgraphs - View more details such as charts, current deployment ID, and other metadata -- Switch versions to explore past iterations of the subgraph -- Query subgraphs via GraphQL -- Test subgraphs in the playground -- View the Indexers that are indexing on a certain subgraph +- Switch versions to explore past iterations of the Subgraph +- Query Subgraphs via GraphQL +- Test Subgraphs in the playground +- View the Indexers that are indexing on a certain Subgraph - Subgraph stats (allocations, Curators, etc) -- View the entity who published the subgraph +- View the entity who published the Subgraph ![Explorer Image 3](/img/Explorer-Signal-Unsignal.png) @@ -53,7 +53,7 @@ On this page, you can see the following: - Indexers who collected the most query fees - Indexers with the highest estimated APR -Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph. +Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph. ### Participants Page @@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every ![Explorer Image 4](/img/Indexer-Pane.png) -Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. +Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs. -In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards. **Specifics** @@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s - Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. - Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. - Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing. - Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. - Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. @@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici #### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed. -- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. - - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. +- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on. - The bonding curve incentivizes Curators to curate the highest quality data sources. In the The Curator table listed below you can see: @@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ A few key details to note: -- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -178,15 +178,15 @@ In this section, you can view the following: ### Subgraphs Tab -In the Subgraphs tab, you’ll see your published subgraphs. +In the Subgraphs tab, you’ll see your published Subgraphs. -> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. +> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. 
You will also find charts where you can see and analyze your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: @@ -223,13 +223,13 @@ Keep in mind that this chart is horizontally scrollable, so if you scroll all th ### Curating Tab -In the Curation tab, you’ll find all the subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. +In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed on. Within this tab, you’ll find an overview of: -- All the subgraphs you're curating on with signal details -- Share totals per subgraph -- Query rewards per subgraph +- All the Subgraphs you're curating on with signal details +- Share totals per Subgraph +- Query rewards per Subgraph - Updated at date details ![Explorer Image 14](/img/Curation-Stats.png) diff --git a/website/src/pages/en/subgraphs/querying/best-practices.mdx b/website/src/pages/en/subgraphs/querying/best-practices.mdx index ff5f381e2993..ab02b27cbc03 100644 --- a/website/src/pages/en/subgraphs/querying/best-practices.mdx +++ b/website/src/pages/en/subgraphs/querying/best-practices.mdx @@ -4,7 +4,7 @@ title: Querying Best Practices The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language. -Learn the essential GraphQL language rules and best practices to optimize your subgraph. +Learn the essential GraphQL language rules and best practices to optimize your Subgraph. --- @@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set ` ### Use a single query to request multiple records -By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` +By default, Subgraphs have a singular entity for one record. 
For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}` Example of inefficient querying: diff --git a/website/src/pages/en/subgraphs/querying/from-an-application.mdx b/website/src/pages/en/subgraphs/querying/from-an-application.mdx index 0995fc8cbb7c..44677d78dcdf 100644 --- a/website/src/pages/en/subgraphs/querying/from-an-application.mdx +++ b/website/src/pages/en/subgraphs/querying/from-an-application.mdx @@ -11,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d ### Subgraph Studio Endpoint -After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: +After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this: ``` https://api.studio.thegraph.com/query/// @@ -21,13 +21,13 @@ https://api.studio.thegraph.com/query/// ### The Graph Network Endpoint -After publishing your subgraph to the network, you will receive an endpoint that looks like this: : +After publishing your Subgraph to the network, you will receive an endpoint that looks like this: : ``` https://gateway.thegraph.com/api//subgraphs/id/ ``` -> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data. +> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data. ## Using Popular GraphQL Clients @@ -35,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/ The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as: -- Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query +- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) - [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md) - Fully typed result @@ -44,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq ### Fetch Data with Graph Client -Let's look at how to fetch data from a subgraph with `graph-client`: +Let's look at how to fetch data from a Subgraph with `graph-client`: #### Step 1 @@ -169,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on ### Fetch Data with Apollo Client -Let's look at how to fetch data from a subgraph with Apollo client: +Let's look at how to fetch data from a Subgraph with Apollo client: #### Step 1 @@ -258,7 +258,7 @@ client ### Fetch data with URQL -Let's look at how to fetch data from a subgraph with URQL: +Let's look at how to fetch data from a Subgraph with URQL: #### Step 1 diff --git a/website/src/pages/en/subgraphs/querying/graph-client/README.md b/website/src/pages/en/subgraphs/querying/graph-client/README.md index 416cadc13c6f..47b6e5e55185 100644 --- a/website/src/pages/en/subgraphs/querying/graph-client/README.md +++ b/website/src/pages/en/subgraphs/querying/graph-client/README.md @@ -373,7 +373,7 @@ sources: #### Automatic Pagination -With most subgraphs, the number of records you can fetch is 
limited. In this case, you have to send multiple requests with pagination. +With most Subgraphs, the number of records you can fetch is limited. In this case, you have to send multiple requests with pagination. ```graphql query { diff --git a/website/src/pages/en/subgraphs/querying/graphql-api.mdx b/website/src/pages/en/subgraphs/querying/graphql-api.mdx index b3003ece651a..e10201771989 100644 --- a/website/src/pages/en/subgraphs/querying/graphql-api.mdx +++ b/website/src/pages/en/subgraphs/querying/graphql-api.mdx @@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph. ## What is GraphQL? -[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs. -To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/). +To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/). ## Queries with GraphQL -In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. +In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. > Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. @@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. -This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). +This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). ```graphql { @@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application` ### Fulltext Search Queries -Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph. +Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph. Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field. 
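+For example, if the Subgraph schema defined a fulltext search field named `bandSearch` over a `Band` entity (an illustrative assumption — use the fulltext fields declared in your own schema), a query combining terms with the `&` (and) operator might look like this:
+
+```graphql
+{
+  bandSearch(text: "breaks & electro & detroit") {
+    id
+    name
+    description
+  }
+}
+```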
@@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). > Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. @@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en ### Subgraph Metadata -All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows: +All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows: ```graphQL { @@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s } ``` -If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block. +If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block. `deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file. @@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde - hash: the hash of the block - number: the block number -- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks) +- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks) -`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block +`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block diff --git a/website/src/pages/en/subgraphs/querying/introduction.mdx b/website/src/pages/en/subgraphs/querying/introduction.mdx index da62e42f0785..8094339d889d 100644 --- a/website/src/pages/en/subgraphs/querying/introduction.mdx +++ b/website/src/pages/en/subgraphs/querying/introduction.mdx @@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex ## Overview -When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph. 
+When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph. ## Specifics -Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner. +Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner. ![Query Subgraph Button](/img/query-button-screenshot.png) @@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/). -> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities. +> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities. > Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead. diff --git a/website/src/pages/en/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/en/subgraphs/querying/managing-api-keys.mdx index 6964b1a7ad9b..aed3d10422e1 100644 --- a/website/src/pages/en/subgraphs/querying/managing-api-keys.mdx +++ b/website/src/pages/en/subgraphs/querying/managing-api-keys.mdx @@ -4,11 +4,11 @@ title: Managing API keys ## Overview -API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. +API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application. ### Create and Manage API Keys -Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs. +Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs. The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries. @@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page: - Amount of GRT spent 2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. 
Specifically, you can: - View and manage the domain names authorized to use your API key - - Assign subgraphs that can be queried with your API key + - Assign Subgraphs that can be queried with your API key diff --git a/website/src/pages/en/subgraphs/querying/python.mdx b/website/src/pages/en/subgraphs/querying/python.mdx index 0937e4f7862d..ed0d078a4175 100644 --- a/website/src/pages/en/subgraphs/querying/python.mdx +++ b/website/src/pages/en/subgraphs/querying/python.mdx @@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds sidebarTitle: Python (Subgrounds) --- -Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! +Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis! Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations. @@ -17,14 +17,14 @@ pip install --upgrade subgrounds python -m pip install --upgrade subgrounds ``` -Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). +Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame). ```python from subgrounds import Subgrounds sg = Subgrounds() -# Load the subgraph +# Load the Subgraph aave_v2 = sg.load_subgraph( "https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum") diff --git a/website/src/pages/en/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/en/subgraphs/querying/subgraph-id-vs-deployment-id.mdx index 103e470e14da..17258dd13ea1 100644 --- a/website/src/pages/en/subgraphs/querying/subgraph-id-vs-deployment-id.mdx +++ b/website/src/pages/en/subgraphs/querying/subgraph-id-vs-deployment-id.mdx @@ -2,17 +2,17 @@ title: Subgraph ID vs Deployment ID --- -A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID. +A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID. -When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph. +When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph. 
Here are some key differences between the two IDs: ![](/img/subgraph-id-vs-deployment-id.png) ## Deployment ID -The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). +The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api). -When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published. +When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the Subgraph is published. Example endpoint that uses Deployment ID: @@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID: ## Subgraph ID -The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats. +The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats. -Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. +Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes. 
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW` diff --git a/website/src/pages/en/subgraphs/quick-start.mdx b/website/src/pages/en/subgraphs/quick-start.mdx index 176b59d4133d..aec575871f73 100644 --- a/website/src/pages/en/subgraphs/quick-start.mdx +++ b/website/src/pages/en/subgraphs/quick-start.mdx @@ -2,7 +2,7 @@ title: Quick Start --- -Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. +Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph. ## Prerequisites @@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/ ## How to Build a Subgraph -### 1. Create a subgraph in Subgraph Studio +### 1. Create a Subgraph in Subgraph Studio Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. -Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. +Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys. -Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". +Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name". ### 2. Install the Graph CLI @@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -### 3. Initialize your subgraph +### 3. Initialize your Subgraph -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events. +The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events. -The following command initializes your subgraph from an existing contract: +The following command initializes your Subgraph from an existing contract: ```sh graph init @@ -51,48 +51,48 @@ graph init If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI. -When you initialize your subgraph, the CLI will ask you for the following information: +When you initialize your Subgraph, the CLI will ask you for the following information: -- **Protocol**: Choose the protocol your subgraph will be indexing data from. -- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- **Directory**: Choose a directory to create your subgraph in. -- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- **Protocol**: Choose the protocol your Subgraph will be indexing data from. +- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph. +- **Directory**: Choose a directory to create your Subgraph in. +- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from. 
- **Contract address**: Locate the smart contract address you’d like to query data from. - **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file. -- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. - **Contract Name**: Input the name of your contract. -- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event. - **Add another contract** (optional): You can add another contract. -See the following screenshot for an example for what to expect when initializing your subgraph: +See the following screenshot for an example for what to expect when initializing your Subgraph: ![Subgraph command](/img/CLI-Example.png) -### 4. Edit your subgraph +### 4. Edit your Subgraph -The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. +The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph. -When making changes to the subgraph, you will mainly work with three files: +When making changes to the Subgraph, you will mainly work with three files: -- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. -- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph. - AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). +For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -### 5. Deploy your subgraph +### 5. Deploy your Subgraph > Remember, deploying is not the same as publishing. -When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes. +When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. 
A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.

-Once your subgraph is written, run the following commands:
+Once your Subgraph is written, run the following commands:

```sh
graph codegen && graph build
```

-Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio.
+Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio.

![Deploy key](/img/subgraph-studio-deploy-key.jpg)

@@ -105,37 +105,37 @@ Authenticate and deploy your subgraph. The deploy key can be found on the subgra

The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.

-### 6. Review your subgraph
+### 6. Review your Subgraph

-If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
+If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:

- Run a sample query.
-- Analyze your subgraph in the dashboard to check information.
-- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this:
+- Analyze your Subgraph in the dashboard to check information.
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this:

![Subgraph logs](/img/subgraph-logs-image.png)

-### 7. Publish your subgraph to The Graph Network
+### 7. Publish your Subgraph to The Graph Network

-When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:
+When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:

-- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
-- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it.
+- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
+- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate.

-> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph.
+> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.

#### Publishing with Subgraph Studio

-To publish your subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard.
-![Publish a subgraph on Subgraph Studio](/img/publish-sub-transfer.png) +![Publish a Subgraph on Subgraph Studio](/img/publish-sub-transfer.png) -Select the network to which you would like to publish your subgraph. +Select the network to which you would like to publish your Subgraph. #### Publishing from the CLI -As of version 0.73.0, you can also publish your subgraph with the Graph CLI. +As of version 0.73.0, you can also publish your Subgraph with the Graph CLI. Open the `graph-cli`. @@ -151,32 +151,32 @@ Use the following commands: graph publish ``` -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice. ![cli-ui](/img/cli-ui.png) To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/). -#### Adding signal to your subgraph +#### Adding signal to your Subgraph -1. To attract Indexers to query your subgraph, you should add GRT curation signal to it. +1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it. - - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph. 2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. - - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks. To learn more about curation, read [Curating](/resources/roles/curating/). -To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: +To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option: ![Subgraph publish](/img/studio-publish-modal.png) -### 8. Query your subgraph +### 8. Query your Subgraph -You now have access to 100,000 free queries per month with your subgraph on The Graph Network! +You now have access to 100,000 free queries per month with your Subgraph on The Graph Network! -You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. +You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). +For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/). diff --git a/website/src/pages/en/substreams/developing/dev-container.mdx b/website/src/pages/en/substreams/developing/dev-container.mdx index bd4acf16eec7..339ddb159c87 100644 --- a/website/src/pages/en/substreams/developing/dev-container.mdx +++ b/website/src/pages/en/substreams/developing/dev-container.mdx @@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container. It's a tool to help you build your first project. 
You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file). -Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling. +Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling. ## Prerequisites @@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea You can configure your project to query data either through a Subgraph or directly from an SQL database: -- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). +- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph). - **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink). ## Deployment Options diff --git a/website/src/pages/en/substreams/developing/sinks.mdx b/website/src/pages/en/substreams/developing/sinks.mdx index c270193e0c15..48c246201e8f 100644 --- a/website/src/pages/en/substreams/developing/sinks.mdx +++ b/website/src/pages/en/substreams/developing/sinks.mdx @@ -8,7 +8,7 @@ Choose a sink that meets your project's needs. Once you find a package that fits your needs, you can choose how you want to consume the data. -Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph. +Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph. ## Sinks diff --git a/website/src/pages/en/substreams/developing/solana/transactions.mdx b/website/src/pages/en/substreams/developing/solana/transactions.mdx index c22bd0f50611..1542ae22dab7 100644 --- a/website/src/pages/en/substreams/developing/solana/transactions.mdx +++ b/website/src/pages/en/substreams/developing/solana/transactions.mdx @@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi ## Step 3: Load the Data -To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink. +To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink. ### Subgraph 1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions. -2. 
Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. +2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`. 3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`. ### SQL diff --git a/website/src/pages/en/substreams/introduction.mdx b/website/src/pages/en/substreams/introduction.mdx index e11174ee07c8..0bd1ea21c9f6 100644 --- a/website/src/pages/en/substreams/introduction.mdx +++ b/website/src/pages/en/substreams/introduction.mdx @@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh ## Substreams Benefits -- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing. +- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing. - **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. - **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections. - **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database. diff --git a/website/src/pages/en/substreams/publishing.mdx b/website/src/pages/en/substreams/publishing.mdx index 3d1a3863c882..b11247db128d 100644 --- a/website/src/pages/en/substreams/publishing.mdx +++ b/website/src/pages/en/substreams/publishing.mdx @@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s ### What is a package? -A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs. ## Publish a Package diff --git a/website/src/pages/en/supported-networks.mdx b/website/src/pages/en/supported-networks.mdx index 6bc2c9d802b0..3c5e73ea011f 100644 --- a/website/src/pages/en/supported-networks.mdx +++ b/website/src/pages/en/supported-networks.mdx @@ -12,11 +12,11 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. - Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. -- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. +- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). 
## Running Graph Node locally

If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration.

-Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support.
+Graph Node can also index other protocols via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support.
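+For reference, a locally run Graph Node is typically started with flags along these lines (a minimal sketch based on the graph-node README — the Postgres URL, network name, RPC URL, and IPFS address are placeholders, and you should confirm the exact flags against the release you are running):
+
+```sh
+cargo run -p graph-node --release -- \
+  --postgres-url postgresql://<USERNAME>:<PASSWORD>@localhost:5432/graph-node \
+  --ethereum-rpc <NETWORK_NAME>:<RPC_URL> \
+  --ipfs 127.0.0.1:5001
+```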