diff --git a/website/pages/ar/about.mdx b/website/pages/ar/about.mdx index 7ac49dc47560..7660b0dfd54b 100644 --- a/website/pages/ar/about.mdx +++ b/website/pages/ar/about.mdx @@ -2,46 +2,66 @@ title: حول The Graph --- -هذه الصفحة ستشرح The Graph وكيف يمكنك أن تبدأ. - ## What is The Graph? -The Graph is a decentralized protocol for indexing and querying blockchain data. The Graph makes it possible to query data that is difficult to query directly. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. + +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -المشاريع ذات العقود الذكية المعقدة مثل [ Uniswap ](https://uniswap.org/) و NFTs مثل [ Bored Ape Yacht Club ](https://boredapeyachtclub.com/) تقوم بتخزين البيانات على Ethereum blockchain ، مما يجعل من الصعب قراءة أي شيء بشكل مباشر عدا البيانات الأساسية من blockchain. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. 
This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +### How The Graph Functions -**إن فهرسة بيانات الـ blockchain أمر صعب.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## كيف يعمل The Graph +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graph يفهرس بيانات الإيثيريوم بناء على أوصاف الـ subgraph ، والمعروفة باسم subgraph manifest. حيث أن وصف الـ subgraph يحدد العقود الذكية ذات الأهمية لـ subgraph ، ويحدد الأحداث في تلك العقود التي يجب الانتباه إليها ، وكيفية عمل map لبيانات الحدث إلى البيانات التي سيخزنها The Graph في قاعدة البيانات الخاصة به. +- When creating a subgraph, you need to write a subgraph manifest. -بمجرد كتابة `subgraph manifest` ، يمكنك استخدام Graph CLI لتخزين التعريف في IPFS وإخبار المفهرس ببدء فهرسة البيانات لذلك الـ subgraph.
+- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) تدفق البيانات يتبع الخطوات التالية: -1. A dapp adds data to Ethereum through a transaction on a smart contract. -2. العقد الذكي يصدر حدثا واحدا أو أكثر أثناء معالجة الإجراء. -3. يقوم الـ Graph Node بمسح الـ Ethereum باستمرار بحثا عن الكتل الجديدة وبيانات الـ subgraph الخاص بك. -4. يعثر الـ Graph Node على أحداث الـ Ethereum لـ subgraph الخاص بك في هذه الكتل ويقوم بتشغيل mapping handlers التي قدمتها. الـ mapping عبارة عن وحدة WASM والتي تقوم بإنشاء أو تحديث البيانات التي يخزنها Graph Node استجابة لأحداث الـ Ethereum. -5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +1. A dapp adds data to Ethereum through a transaction on a smart contract. +2. العقد الذكي يصدر حدثا واحدا أو أكثر أثناء معالجة الإجراء. +3. يقوم الـ Graph Node بمسح الـ Ethereum باستمرار بحثا عن الكتل الجديدة وبيانات الـ subgraph الخاص بك. +4. يعثر الـ Graph Node على أحداث الـ Ethereum لـ subgraph الخاص بك في هذه الكتل ويقوم بتشغيل mapping handlers التي قدمتها. الـ mapping عبارة عن وحدة WASM والتي تقوم بإنشاء أو تحديث البيانات التي يخزنها Graph Node استجابة لأحداث الـ Ethereum. +5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. ## الخطوات التالية -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. 
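To give a concrete feel for what querying in that playground looks like, here is a minimal GraphQL request. It assumes a schema with a `Gravatar` entity similar to the example subgraph referenced later in these docs, so the entity and field names are illustrative rather than universal:

```graphql
{
  gravatars(first: 5) {
    id
    owner
    displayName
    imageUrl
  }
}
```

The same query shape works against any subgraph's GraphQL endpoint; only the entity and field names change with each schema.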
diff --git a/website/pages/ar/arbitrum/arbitrum-faq.mdx b/website/pages/ar/arbitrum/arbitrum-faq.mdx index 98346d82a41d..2cf8402a7718 100644 --- a/website/pages/ar/arbitrum/arbitrum-faq.mdx +++ b/website/pages/ar/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: الأسئلة الشائعة حول Arbitrum Click [here](#billing-on-arbitrum-faqs) if you would like to skip to the Arbitrum Billing FAQs. -## لماذا يقوم The Graph بتطبيق حل L2؟ +## Why did The Graph implement an L2 Solution? -By scaling The Graph on L2, network participants can expect: +By scaling The Graph on L2, network participants can now benefit from: - Upwards of 26x savings on gas fees @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can expect: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers could open and close allocations to index a greater number of subgraphs with greater frequency, developers could deploy and update subgraphs with greater ease, Delegators could delegate GRT with increased frequency, and Curators could add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -41,27 +41,21 @@ Once you have GRT on Arbitrum, you can add it to your billing balance. ## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? -There is no immediate action required, however, network participants are encouraged to begin moving to Arbitrum to take advantage of the benefits of L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -Core developer teams are working to create L2 transfer tools that will make it significantly easier to move delegation, curation, and subgraphs to Arbitrum. Network participants can expect L2 transfer tools to be available by summer of 2023. +All indexing rewards are now entirely on Arbitrum. -اعتبارًا من 10 أبريل 2023 ، تم سك 5٪ من جميع مكافآت الفهرسة على Arbitrum. مع زيادة المشاركة في الشبكة ، وموافقة المجلس عليها ، ستتحول مكافآت الفهرسة تدريجياً من Ethereum إلى Arbitrum ، وستنتقل في النهاية بالكامل إلى Arbitrum. - -## إذا كنت أرغب في المشاركة في اشبكة L2 ، فماذا أفعل؟ - -Please help [test the network](https://testnet.thegraph.com/explorer) on L2 and report feedback about your experience in [Discord](https://discord.gg/graphprotocol). - -## هل توجد أي مخاطر مرتبطة بتوسيع الشبكة إلى L2؟ +## Were there any risks associated with scaling the network to L2? 
All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## هل ستستمر ال subgraphs الموجودة على Ethereum في العمل؟ +## Are existing subgraphs on Ethereum working? -نعم ، ستعمل عقود شبكة The Graph بالتوازي على كل من Ethereum و Arbitrum حتى الانتقال بشكل كامل إلى Arbitrum في وقت لاحق. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## هل سيكون لدى GRT عقد ذكي جديد يتم نشره على Arbitrum؟ +## Does GRT have a new smart contract deployed on Arbitrum? Yes, GRT has an additional [smart contract on Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). However, the Ethereum mainnet [GRT contract](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) will remain operational. diff --git a/website/pages/ar/billing.mdx b/website/pages/ar/billing.mdx index 68ee9ca693bd..42aa104673bb 100644 --- a/website/pages/ar/billing.mdx +++ b/website/pages/ar/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. انقر على زر "توصيل المحفظة" في الزاوية اليمنى العليا من الصفحة. ستتم إعادة توجيهك إلى صفحة اختيار المحفظة. حدد محفظتك وانقر على "توصيل". 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. 
You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. 
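To make the query-estimation guidance above concrete with a purely illustrative calculation (the traffic figures are assumptions, not measurements): a site with 5,000 daily visits whose most active page issues 10 queries when it opens would budget roughly 5,000 × 10 × 30 ≈ 1.5M queries per month, which falls within the suggested 1M-2M starting range for small to medium sized applications.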
diff --git a/website/pages/ar/chain-integration-overview.mdx b/website/pages/ar/chain-integration-overview.mdx index 501143bfb88d..8087395eb67b 100644 --- a/website/pages/ar/chain-integration-overview.mdx +++ b/website/pages/ar/chain-integration-overview.mdx @@ -6,12 +6,12 @@ title: نظرة عامة حول عملية التكامل مع الشبكة ## المرحلة الأولى: التكامل التقني -- تعمل الفرق على تكامل نقطة الغراف وفايرهوز بالنسبة للسلاسل الغير مبنية على آلة الإيثيريوم الإفتراضية. إليك الطريقة(https://thegraph. com/docs/en/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - تستهل الفرق عملية التكامل مع البروتوكول من خلال إنشاء موضوع في المنتدى هنا(https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (الفئة الفرعية "مصادر البيانات الجديدة" تحت قسم "الحوكمة واقتراحات تحسين الغراف"). استخدام قالب المنتدى الافتراضي إلزامي. ## المرحلة الثانية: التحقق من صحة التكامل -- تتعاون الفرق مع المطورين الأساسيين، ومؤسسة الغراف، ومشغلي واجهات المستخدم الرسومية وبوابات الشبكة مثل سبغراف استوديو(https://thegraph.com/studio/) لضمان عملية تكامل سلسة. يتضمن ذلك توفير بنية تحتية للواجهة الخلفية، مثل إجراء الإستدعاء عن بعد -للترميز الكائني لجافاسكريبت- الخاص بالسلسلة المتكاملة أو نقاط نهاية فايرهوز. الفرق الراغبة في تجنب الإستضافة الذاتية مثل هذه البنية التحتية يمكنهم الإستفادة من مشغلي النقاط (المفهرسين) في مجتمع الغراف للقيام بذلك، والذي يمكن للمؤسسة المساعدة من خلاله. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - مفهرسو الغراف يختبرون التكامل على شبكة إختبار الغراف. - يقوم المطورون الأساسيون والمفهرسون بمراقبة استقرار، وأداء، وحتمية البيانات. @@ -38,7 +38,7 @@ Ready to shape the future of The Graph Network? [Start your proposal](https://gi هذا سيؤثر فقط على دعم البروتوكول لمكافآت الفهرسة على الغرافات الفرعية المدعومة من سبستريمز. تنفيذ الفايرهوز الجديد سيحتاج إلى الفحص على شبكة الاختبار، وفقًا للمنهجية الموضحة للمرحلة الثانية في هذا المقترح لتحسين الغراف. وعلى نحو مماثل، وعلى افتراض أن التنفيذ فعال وموثوق به، سيتتطالب إنشاء طلب سحب على [مصفوفة دعم الميزات] (https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) ("مصادر بيانات سبستريمز" ميزة للغراف الفرعي)، بالإضافة إلى مقترح جديد لتحسين الغراف، لدعم البروتوكول لمكافآت الفهرسة. يمكن لأي شخص إنشاء طلب السحب ومقترح تحسين الغراف؛ وسوف تساعد المؤسسة في الحصول على موافقة المجلس. -### 3. كم من الوقت ستستغرق هذه العملية؟ +### 3. How much time will the process of reaching full protocol support take? يُتوقع أن يستغرق الوصول إلى الشبكة الرئيسية عدة أسابيع، وذلك يعتمد على وقت تطوير التكامل، وما إذا كانت هناك حاجة إلى بحوث إضافية، واختبارات وإصلاحات الأخطاء، وكذلك توقيت عملية الحوكمة التي تتطلب ملاحظات المجتمع كما هو الحال دائمًا. @@ -46,4 +46,4 @@ Ready to shape the future of The Graph Network? [Start your proposal](https://gi ### 4. كيف سيتم التعامل مع الأولويات؟ -كما في السؤال الثالث، سيتوقف ذلك على الجهوزية بشكل عام وعلى قدرات أصحاب الحصص المشاركين. على سبيل المثال، قد تستغرق سلسلة جديدة مع تطبيق فايرهوز جديد تمامًا وقتاً أطول من عمليات التكامل التي تم فحصها بالفعل أو التي قطعت شوطاً أطول في عملية الحوكمة. 
وينطبق هذا بشكل خاص على السلاسل المدعومة مسبقاً على الخدمة المستضافة (https://thegraph.com/hosted-service) أو تلك التي تعتمد على تقنيات تم اختبارها بالفعل. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/ar/cookbook/arweave.mdx b/website/pages/ar/cookbook/arweave.mdx index 9a7bfaab0270..e2b25f673dfc 100644 --- a/website/pages/ar/cookbook/arweave.mdx +++ b/website/pages/ar/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Schema definition describes the structure of the resulting subgraph database and تمت كتابة المعالجات الخاصة بمعالجة الأحداث بـ[ أسيمبلي سكريبت ](https://www.assemblyscript.org/). -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/ar/cookbook/base-testnet.mdx b/website/pages/ar/cookbook/base-testnet.mdx index a32276dd1875..31ef39f972df 100644 --- a/website/pages/ar/cookbook/base-testnet.mdx +++ b/website/pages/ar/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ Your subgraph slug is an identifier for your subgraph. The CLI tool will walk yo The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph. - (AssemblyScript Mappings (mapping.ts هذا هو الكود الذي يترجم البيانات من مصادر البيانات الخاصة بك إلى الكيانات المحددة في المخطط. -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). diff --git a/website/pages/ar/cookbook/cosmos.mdx b/website/pages/ar/cookbook/cosmos.mdx index 0ed45e614eee..49a2e8c52602 100644 --- a/website/pages/ar/cookbook/cosmos.mdx +++ b/website/pages/ar/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and تمت كتابة المعالجات(handlers) الخاصة بمعالجة الأحداث بـ[ AssemblyScript ](https://www.assemblyscript.org/). -Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). 
```tsx class Block { diff --git a/website/pages/ar/cookbook/grafting.mdx b/website/pages/ar/cookbook/grafting.mdx index 548091ac5b7d..08c347c50a63 100644 --- a/website/pages/ar/cookbook/grafting.mdx +++ b/website/pages/ar/cookbook/grafting.mdx @@ -22,7 +22,7 @@ For more information, you can check: - [تطعيم(Grafting)](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic usecase. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ In this tutorial, we will be covering a basic usecase. We will replace an existi ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. ## Grafting Manifest Definition @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. ## مصادر إضافية -If you want more experience with grafting, here's a few examples for popular contracts: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/ar/cookbook/near.mdx b/website/pages/ar/cookbook/near.mdx index 62cb4fbcecb1..a2b523628969 100644 --- a/website/pages/ar/cookbook/near.mdx +++ b/website/pages/ar/cookbook/near.mdx @@ -27,17 +27,17 @@ title: بناء Subgraphs على NEAR `graphprotocol/graph-ts@` هي مكتبة لأنواع خاصة بـ subgraph. -تطوير NEAR subgraph يتطلب `graph-cli` بإصدار أعلى من `0.23.0` و `graph-ts` بإصدار أعلى من `0.23.0`. 
+تطوير NEAR subgraph يتطلب `graph-cli` بإصدار أعلى من ` 0.23.0 ` و `graph-ts` بإصدار أعلى من ` 0.23.0 `. > Building a NEAR subgraph is very similar to building a subgraph that indexes Ethereum. هناك ثلاثة جوانب لتعريف الـ subgraph: -**subgraph.yaml:** الـ subgraph manifest ، وتحديد مصادر البيانات ذات الأهمية ، وكيف يجب أن تتم معالجتها.علما أن NEAR هو `نوع` جديد لمصدر البيانات. +**subgraph.yaml:** الـ subgraph manifest ، وتحديد مصادر البيانات ذات الأهمية ، وكيف يجب أن تتم معالجتها.علما أن NEAR هو ` نوع ` جديد لمصدر البيانات. **schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph#the-graphql-schema). -**AssemblyScript Mappings:** [AssemblyScript code](/developing/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. During subgraph development there are two key commands: @@ -98,7 +98,7 @@ Schema definition describes the structure of the resulting subgraph database and تمت كتابة المعالجات(handlers) الخاصة بمعالجة الأحداث بـ[ AssemblyScript ](https://www.assemblyscript.org/). -NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/assemblyscript-api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -163,11 +163,11 @@ class ReceiptWithOutcome { These types are passed to block & receipt handlers: - معالجات الكتلة ستتلقى`Block` -- معالجات الاستلام ستتلقى`ReceiptWithOutcome` +- معالجات الاستلام ستتلقى` ReceiptWithOutcome ` -Otherwise, the rest of the [AssemblyScript API](/developing/assemblyscript-api) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/assemblyscript-api#json-api) to allow developers to easily process these logs. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## نشر NEAR Subgraph diff --git a/website/pages/ar/cookbook/subgraph-debug-forking.mdx b/website/pages/ar/cookbook/subgraph-debug-forking.mdx index 6c87d43045c5..44a8bfa28c2c 100644 --- a/website/pages/ar/cookbook/subgraph-debug-forking.mdx +++ b/website/pages/ar/cookbook/subgraph-debug-forking.mdx @@ -69,14 +69,14 @@ Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph St وأنا أجيب: -1. `fork-base` هو عنوان URL "الأساسي" ،فمثلا عند إلحاق _subgraph id_ ، يكون عنوان URL الناتج (`/`) هو GraphQL endpoint صالح لمخزن الـ subgraph. +1. ` fork-base ` هو عنوان URL "الأساسي" ،فمثلا عند إلحاق _subgraph id_ ، يكون عنوان URL الناتج (`/`) هو GraphQL endpoint صالح لمخزن الـ subgraph. 2. 
الـتفريع سهل ، فلا داعي للقلق: ```bash $ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020 ``` -أيضًا ، لا تنس تعيين حقل `dataSources.source.startBlock` في subgraph manifest لرقم الكتلة(block) التي بها المشكلة، حتى تتمكن من تخطي فهرسة الكتل الغير ضرورية والاستفادة من التفريع! +أيضًا ، لا تنس تعيين حقل ` dataSources.source.startBlock ` في subgraph manifest لرقم الكتلة(block) التي بها المشكلة، حتى تتمكن من تخطي فهرسة الكتل الغير ضرورية والاستفادة من التفريع! لذلك ، هذا ما أفعله: @@ -90,7 +90,7 @@ $ cargo run -p graph-node --release -- \ --fork-base https://api.thegraph.com/subgraphs/id/ ``` -2. بعد فحص دقيق ، لاحظت أن هناك عدم تطابق في تمثيلات الـ `id` المستخدمة عند فهرسة `Gravatar` في المعالجين الخاصين بي. بينما `handleNewGravatar` يحول (`event.params.id.toHex()`) إلى سداسي ، `handleUpdatedGravatar` يستخدم int32 (`event.params.id.toI32()`) مما يجعل `handleUpdatedGravatar` قلقا من "Gravatar not found!". أنا أجعلهم كلاهما يحولان `id` إلى سداسي. +2. بعد فحص دقيق ، لاحظت أن هناك عدم تطابق في تمثيلات الـ ` id ` المستخدمة عند فهرسة ` Gravatar ` في المعالجين الخاصين بي. بينما ` handleNewGravatar ` يحول (`event.params.id.toHex()`) إلى سداسي ، `handleUpdatedGravatar` يستخدم int32 (`event.params.id.toI32()`) مما يجعل ` handleUpdatedGravatar ` قلقا من "Gravatar not found!". أنا أجعلهم كلاهما يحولان ` id ` إلى سداسي. 3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`: ```bash diff --git a/website/pages/ar/cookbook/subgraph-uncrashable.mdx b/website/pages/ar/cookbook/subgraph-uncrashable.mdx index 989310a3f9a0..0cc91a0fa2c3 100644 --- a/website/pages/ar/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/ar/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Safe Subgraph Code Generator - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. These logs can be viewed in the The Graph's hosted service under the 'Logs' section. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. diff --git a/website/pages/ar/cookbook/upgrading-a-subgraph.mdx b/website/pages/ar/cookbook/upgrading-a-subgraph.mdx index 4181a6b18255..b69433a19c5e 100644 --- a/website/pages/ar/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/ar/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ Make sure **Update Subgraph Details in Explorer** is checked and click on **Save ## Deprecating a Subgraph on The Graph Network -Follow the steps [here](/managing/deprecating-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. 
## Querying a Subgraph + Billing on The Graph Network diff --git a/website/pages/ar/deploying/multiple-networks.mdx b/website/pages/ar/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..dc2b8e533430 --- /dev/null +++ b/website/pages/ar/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## Deploying the subgraph to multiple networks + +In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +This is what your networks config file should look like: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Now we can run one of the following commands: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' 
+ abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Using subgraph.yaml template + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +and + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... + "mustache": "^3.1.0" + } +} +``` + +To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Sepolia: +yarn prepare:sepolia && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. + +## Subgraph Studio subgraph archive policy + +A subgraph version in Studio is archived if and only if it meets the following criteria: + +- The version is not published to the network (or pending publish) +- The version was created 45 or more days ago +- The subgraph hasn't been queried in 30 days + +In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. + +Every subgraph affected with this policy has an option to bring the version in question back. 
+ +## Checking subgraph health + +If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/ar/developing/creating-a-subgraph.mdx b/website/pages/ar/developing/creating-a-subgraph.mdx index cdee1d940b29..c8d84b5f1c5e 100644 --- a/website/pages/ar/developing/creating-a-subgraph.mdx +++ b/website/pages/ar/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: إنشاء subgraph --- -A subgraph extracts data from a blockchain, processing it and storing it so that it can be easily queried via GraphQL. +This detailed guide provides instructions to successfully create a subgraph. -![Defining a Subgraph](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -يتكون تعريف Subgraph من عدة ملفات: +![Defining a Subgraph](/img/defining-a-subgraph.png) -- `Subgraph.yaml`ملف YAML يحتوي على Subgraph manifest +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: مخطط GraphQL يحدد البيانات المخزنة في Subgraph وكيفية الاستعلام عنها عبر GraphQL +## Getting Started -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) كود يترجم من بيانات الحدث إلى الكيانات المعرفة في مخططك (مثل`mapping.ts` في هذا الدرس) +### قم بتثبيت Graph CLI -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 
-Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## قم بتثبيت Graph CLI +On your local machine, run one of the following commands: -تمت كتابة Graph CLI بلغة JavaScript ، وستحتاج إلى تثبيت إما `yarn` أو `npm` لاستخدامها ؛ ومن المفترض أن يكون لديك yarn كالتالي. +#### Using [npm](https://www.npmjs.com/) -بمجرد حصولك على `yarn` ، قم بتثبيت Graph CLI عن طريق تشغيل +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**التثبيت بواسطة yarn:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**التثبيت بواسطة npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## من عقد موجود +### From an existing contract -الأمر التالي ينشئ subgraph يفهرس كل الأحداث للعقد الموجود. إنه يحاول جلب ABI للعقد من Etherscan ويعود إلى طلب مسار ملف محلي. إذا كانت أي من arguments الاختيارية مفقودة ، فسيأخذك عبر نموذج تفاعلي. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -`` هو ID لـ subgraph الخاص بك في Subgraph Studio ، ويمكن العثور عليه في صفحة تفاصيل الـ subgraph. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## من مثال Subgraph +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -الوضع الثاني الذي يدعمه `graph init` هو إنشاء مشروع جديد من مثال subgraph. الأمر التالي يقوم بهذا: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. 
The following sections will go over the files that make up the subgraph manifest for this example. +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Add New dataSources To An Existing Subgraph +## Add new `dataSources` to an existing subgraph -Since `v0.31.0` the `graph-cli` supports adding new dataSources to an existing subgraph through the `graph add` command. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
[] @@ -78,22 +88,45 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -The `add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option), and will create a new `dataSource` in the same way that `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: + + - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- The contract `address` will be written to the `networks.json` for the relevant network. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: +## Components of a subgraph -- If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. -- If `false`: a new entity & event handler should be created with `${dataSourceName}{EventName}`. +### The Subgraph Manifest -The contract `address` will be written to the `networks.json` for the relevant network. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **Note:** When using the interactive cli, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. +The **subgraph definition** consists of the following files: -## The Subgraph Manifest +- `subgraph.yaml`: Contains the subgraph manifest -Subgraph manifest `subgraph.yaml` تحدد العقود الذكية لفهارس الـ subgraph الخاص بك ، والأحداث من هذه العقود التي يجب الانتباه إليها ، وكيفية عمل map لبيانات الأحداث للكيانات التي تخزنها Graph Node وتسمح بالاستعلام عنها. يمكن العثور على المواصفات الكاملة لـ subgraph manifests [ هنا ](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -بالنسبة لمثال الـ subgraph ،يكون الـ `subgraph.yaml`: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
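To give a feel for the mappings component described above, here is a minimal sketch of a handler from `mapping.ts` for the Gravatar example subgraph. The import paths and event fields are assumptions based on the bindings that `graph codegen` generates for that example and will differ for other contracts:

```typescript
// Generated bindings for the Gravity contract events and the subgraph schema
// (hypothetical paths; `graph codegen` creates them from the manifest and schema).
import { NewGravatar } from '../generated/Gravity/Gravity'
import { Gravatar } from '../generated/schema'

export function handleNewGravatar(event: NewGravatar): void {
  // Create a new entity keyed by the event's id parameter.
  let gravatar = new Gravatar(event.params.id.toHex())
  gravatar.owner = event.params.owner
  gravatar.displayName = event.params.displayName
  gravatar.imageUrl = event.params.imageUrl
  // Persist the entity so it can be queried via GraphQL.
  gravatar.save()
}
```

Each handler receives a typed event object and writes entities to the store; Graph Node calls it whenever a matching event is indexed.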
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ dataSources: يتم ترتيب المشغلات (triggers) لمصدر البيانات داخل الكتلة باستخدام العملية التالية: -1. يتم ترتيب triggers الأحداث والاستدعاءات أولا من خلال فهرس الإجراء داخل الكتلة. -2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. -3. يتم تشغيل مشغلات الكتلة بعد مشغلات الحدث والاستدعاء، بالترتيب المحدد في الـ manifest. +1. يتم ترتيب triggers الأحداث والاستدعاءات أولا من خلال فهرس الإجراء داخل الكتلة. +2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. +3. يتم تشغيل مشغلات الكتلة بعد مشغلات الحدث والاستدعاء، بالترتيب المحدد في الـ manifest. قواعد الترتيب هذه عرضة للتغيير. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
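As a sketch of how several calls can be declared so that they run in parallel, the snippet below follows the `Contract[address].function(arguments)` form described above. The `Token` ABI, event parameters, and labels are hypothetical and would need to match your own manifest:

```yaml
calls:
  # Both calls are declared up front, so Graph Node can execute them
  # concurrently before the handler runs instead of one after another.
  holderBalance: Token[event.params.token].balanceOf(event.params.holder)
  tokenSupply: Token[event.params.token].totalSupply()
```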
@@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| الاصدار | ملاحظات الإصدار | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| الاصدار | ملاحظات الإصدار | +|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### الحصول على ABIs @@ -442,16 +475,16 @@ For some entity types the `id` is constructed from the id's of two other entitie We support the following scalars in our GraphQL API: -| النوع | الوصف | -| --- | --- | -| `Bytes` | مصفوفة Byte ، ممثلة كسلسلة سداسية عشرية. يشيع استخدامها في Ethereum hashes وعناوينه. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| النوع | الوصف | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | مصفوفة Byte ، ممثلة كسلسلة سداسية عشرية. يشيع استخدامها في Ethereum hashes وعناوينه. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enums @@ -593,7 +626,7 @@ This more elaborate way of storing many-to-many relationships will result in les #### إضافة تعليقات إلى المخطط (schema) -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **ملاحظة:** مصدر البيانات الجديد سيعالج فقط الاستدعاءات والأحداث للكتلة التي تم إنشاؤها فيه وجميع الكتل التالية ، ولكنه لن يعالج البيانات التاريخية ، أي البيانات الموجودة في الكتل السابقة. -> +> > إذا كانت الكتل السابقة تحتوي على بيانات ذات صلة بمصدر البيانات الجديد ، فمن الأفضل فهرسة تلك البيانات من خلال قراءة الحالة الحالية للعقد وإنشاء كيانات تمثل تلك الحالة في وقت إنشاء مصدر البيانات الجديد. ### سياق مصدر البيانات @@ -930,7 +963,7 @@ dataSources: ``` > **ملاحظة:** يمكن البحث عن كتلة إنشاء العقد بسرعة على Etherscan: -> +> > 1. ابحث عن العقد بإدخال عنوانه في شريط البحث. > 2. انقر فوق hash إجراء الإنشاء في قسم `Contract Creator`. > 3. قم بتحميل صفحة تفاصيل الإجراء(transaction) حيث ستجد كتلة البدء لذلك العقد. @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. 
``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### Create a new handler to process files -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). The CID of the file as a readable string can be accessed via the `dataSource` as follows: diff --git a/website/pages/ar/developing/developer-faqs.mdx b/website/pages/ar/developing/developer-faqs.mdx index 1758e9f909b6..dbb6b7909e08 100644 --- a/website/pages/ar/developing/developer-faqs.mdx +++ b/website/pages/ar/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: الأسئلة الشائعة للمطورين --- -## 1. What is a subgraph? +This page summarizes some of the most common questions for developers building on The Graph. -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using the Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available to be queried by subgraph consumers. +## Subgraph Related -## 2. Can I delete my subgraph? +### 1. What is a subgraph? -لا يمكن حذف ال Subgraph بمجرد إنشائها. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Can I change my subgraph name? +### 2. What is the first step to create a subgraph? -لا. بمجرد إنشاء ال Subgraph ، لا يمكن تغيير الاسم. تأكد من التفكير بعناية قبل إنشاء ال Subgraph الخاص بك حتى يسهل البحث عنه والتعرف عليه من خلال ال Dapps الأخرى. +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Can I change the GitHub account associated with my subgraph? +### 3. Can I still create a subgraph if my smart contracts don't have events? -لا. بمجرد إنشاء ال Subgraph ، لا يمكن تغيير حساب GitHub المرتبط. 
تأكد من التفكير بعناية قبل إنشاء ال Subgraph الخاص بك. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. Am I still able to create a subgraph if my smart contracts don't have events? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are by far the fastest way to retrieve useful data. +### 4. Can I change the GitHub account associated with my subgraph? -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended, as performance will be significantly slower. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. How do I update a subgraph on mainnet? -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. How are templates different from data sources? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) upfront you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +يجب عليك إعادة نشر ال الفرعيةرسم بياني ، ولكن إذا لم يتغير الفرعيةرسم بياني (ID (IPFS hash ، فلن يضطر إلى المزامنة من البداية. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +ضمن ال Subgraph ، تتم معالجة الأحداث دائمًا بالترتيب الذي تظهر به في الكتل ، بغض النظر عما إذا كان ذلك عبر عقود متعددة أم لا. + +### 10. How are templates different from data sources? 
+
+Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address.

Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates).

-## 8. How do I make sure I'm using the latest version of graph-node for my local deployments?
+### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?

-يمكنك تشغيل الأمر التالي:
+Yes. In the `graph init` command itself, you can add multiple dataSources by entering contracts one after the other.

-```sh
-docker pull graphprotocol/graph-node:latest
-```
+You can also use the `graph add` command to add a new dataSource.

-**ملاحظة:** سيستخدم docker / docker-compose دائما أي إصدار من graph-node تم سحبه في المرة الأولى التي قمت بتشغيلها ، لذلك من المهم القيام بذلك للتأكد من أنك محدث بأحدث إصدار من graph-node.
+### 12. In what order are the event, block, and call handlers triggered for a data source?

-## 9. How do I call a contract function or access a public state variable from my subgraph mappings?
+Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first, then call handlers, each type respecting the order in which they are defined in the manifest. Block handlers run after event and call handlers, in the order they are defined in the manifest. These ordering rules are subject to change.

-Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state).
+When new dynamic data sources are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers have been processed, and will repeat in the same sequence whenever triggered.

-## 10. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`?
+### 13. How do I make sure I'm using the latest version of graph-node for my local deployments?

-Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource.
+يمكنك تشغيل الأمر التالي:

-## 11. I want to contribute or add a GitHub issue. Where can I find the open source repositories?
+```sh
+docker pull graphprotocol/graph-node:latest
+```

-- [graph-node](https://github.com/graphprotocol/graph-node)
-- [graph-tooling](https://github.com/graphprotocol/graph-tooling)
-- [graph-docs](https://github.com/graphprotocol/docs)
-- [graph-client](https://github.com/graphprotocol/graph-client)
+> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node.

-## 12. What is the recommended way to build "autogenerated" ids for an entity when handling events?
+### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events?
إذا تم إنشاء كيان واحد فقط أثناء الحدث ولم يكن هناك أي شيء متاح بشكل أفضل ، فسيكون hash الإجراء + فهرس السجل فريدا. يمكنك إبهامها عن طريق تحويلها إلى Bytes ثم تمريرها عبر `crypto.keccak256` ولكن هذا لن يجعلها فريدة من نوعها. -## 13. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 15. Can I delete my subgraph? -ضمن ال Subgraph ، تتم معالجة الأحداث دائمًا بالترتيب الذي تظهر به في الكتل ، بغض النظر عما إذا كان ذلك عبر عقود متعددة أم لا. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +You can find the list of the supported networks [here](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? نعم. يمكنك القيام بذلك عن طريق استيراد `graph-ts` كما في المثال أدناه: @@ -78,23 +99,21 @@ Yes. On `graph init` command itself you can add multiple datasources by entering ()dataSource.address ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Can I import ethers.js or other JS libraries into my subgraph mappings? - -ليس حاليًا ، حيث تتم كتابة ال mappings في AssemblyScript. أحد الحلول البديلة الممكنة لذلك هو تخزين البيانات الأولية في الكيانات وتنفيذ المنطق الذي يتطلب مكتبات JS على ال client. +## Indexing & Querying Related -## 17. Is it possible to specify what block to start indexing on? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? نعم! 
جرب الأمر التالي ، مع استبدال "Organization / subgraphName" بالمؤسسة واسم الـ subgraph الخاص بك: @@ -102,19 +121,7 @@ Yes, you should take a look at the optional start block feature to start indexin curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. What networks are supported by The Graph? - -You can find the list of the supported networks [here](/developing/supported-networks). - -## 21. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? - -يجب عليك إعادة نشر ال الفرعيةرسم بياني ، ولكن إذا لم يتغير الفرعيةرسم بياني (ID (IPFS hash ، فلن يضطر إلى المزامنة من البداية. - -## 22. Is this possible to use Apollo Federation on top of graph-node? - -لم يتم دعم Federation بعد ، على الرغم من أننا نريد دعمه في المستقبل. و في الوقت الحالي ، الذي يمكنك القيام به هو استخدام schema stitching ، إما على client أو عبر خدمة البروكسي. - -## 23. Is there a limit to how many objects The Graph can return per query? +### 22. Is there a limit to how many objects The Graph can return per query? By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: @@ -122,24 +129,19 @@ By default, query responses are limited to 100 items per collection. If you want someCollection(first: 1000, skip: ) { ... } ``` -## 24. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 25. Where do I go to find my current subgraph on the hosted service? - -Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? - -The Graph will never charge for the hosted service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The hosted service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. - -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? 
+Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/ar/developing/graph-ts/api.mdx b/website/pages/ar/developing/graph-ts/api.mdx index 15fa02b9b2ba..544f2118f489 100644 --- a/website/pages/ar/developing/graph-ts/api.mdx +++ b/website/pages/ar/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -هذه الصفحة توثق APIs المضمنة التي يمكن استخدامها عند كتابة subgraph mappings. يتوفر نوعان من APIs خارج الصندوق: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## مرجع API @@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. 
-| الاصدار | ملاحظات الإصدار | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| الاصدار | ملاحظات الإصدار | +| :-----: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### الأنواع المضمنة (Built-in) @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### تحميل الكيانات من المخزن @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Looking up entities created withing a block As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a Transaction from some on-chain event, and a later handler wants to access this transaction if it exists. In the case where the transaction does not exist, the subgraph will have to go to the database just to find out that the entity does not exist; if the subgraph author already knows that the entity must have been created in the same block, using loadInBlock avoids this database roundtrip. For some subgraphs, these missed lookups can contribute significantly to the indexing time. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. 
+- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Any other contract that is part of the subgraph can be imported from the generat #### معالجة الاستدعاءات المعادة -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### تشفير/فك تشفير ABI diff --git a/website/pages/ar/developing/supported-networks.mdx b/website/pages/ar/developing/supported-networks.mdx index 96e737b0d743..c2e7677ae4fb 100644 --- a/website/pages/ar/developing/supported-networks.mdx +++ b/website/pages/ar/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/ar/developing/unit-testing-framework.mdx b/website/pages/ar/developing/unit-testing-framework.mdx index 6d433b1a593f..d1075dc5572a 100644 --- a/website/pages/ar/developing/unit-testing-framework.mdx +++ b/website/pages/ar/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ The log output includes the test run duration. Here's an example: > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -This means you have used `console.log` in your code, which is not supported by AssemblyScript. 
Please consider using the [Logging API](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version. diff --git a/website/pages/ar/glossary.mdx b/website/pages/ar/glossary.mdx index a94cb5d4be55..ddc833205c79 100644 --- a/website/pages/ar/glossary.mdx +++ b/website/pages/ar/glossary.mdx @@ -10,11 +10,9 @@ title: قائمة المصطلحات - **نقطة النهاية (Endpoint)**: عنوان URL يمكن استخدامه للاستعلام عن سبغراف. نقطة الاختبار لـ سبغراف استوديو هي: `https://api.studio.thegraph.com/query///` ونقطة نهاية مستكشف الغراف هي: `https://gateway.thegraph.com/api//subgraphs/id/` تُستخدم نقطة نهاية مستكشف الغراف للاستعلام عن سبغرافات على شبكة الغراف اللامركزية. -- **غراف فرعي (Subgraph)**: واجهة برمجة تطبيقات مفتوحة تستخلص البيانات من سلسلة الكتل، ومعالجتها، وتخزينها ليكون من السهل الاستعلام عنها من خلال لغة استعلام GraphQL. يمكن للمطورين بناء ونشر الغرافات الفرعية على شبكة الغراف اللامركزية. بعد ذلك، يمكن للمفهرسين البدء في فهرسة الغرافات الفرعية لتكون متاحة للاستعلام من قبل أي كان. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **الخدمة المستضافة (Hosted Service)**: هي خدمة مؤقتة تعمل كبنية تحتية لبناء واستعلام الغرافات الفرعية، حيث تقوم شبكة الغراف اللامركزية بتحسين تكاليف الخدمة وجودة الخدمة وتجربة المطور. - -- **Indexers**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. @@ -24,17 +22,17 @@ title: قائمة المصطلحات - **Indexer's Self Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. 
+- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegators**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curators**: Network participants that identify high-quality subgraphs, and “curate” them (i.e., signal GRT on them) in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. Indexers earn indexing rewards proportional to the signal on a subgraph. We see a correlation between the amount of GRT signalled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. -- **مستهلك الغراف الفرعي**: أي تطبيق أو مستخدم يستعلم عن غراف فرعي معين. +- **Data Consumer**: Any application or user that queries a subgraph. - **مطور السوبغراف**: هو المطور الذي يقوم ببناء ونشر السوبغراف على شبكة الغراف اللامركزية. @@ -46,11 +44,11 @@ title: قائمة المصطلحات 1. **Active**: An allocation is considered active when it is created on-chain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. 
- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: قائمة المصطلحات - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. 
It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. @@ -78,10 +76,6 @@ title: قائمة المصطلحات - **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. -- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network. - -- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. diff --git a/website/pages/ar/index.json b/website/pages/ar/index.json index 005c09a0cf30..f09f7bdca0b3 100644 --- a/website/pages/ar/index.json +++ b/website/pages/ar/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "إنشاء الـ Subgraph", "description": "استخدم Studio لإنشاء subgraphs" - }, - "migrateFromHostedService": { - "title": "Upgrade from the hosted service", - "description": "Upgrading subgraphs to The Graph Network" } }, "networkRoles": { diff --git a/website/pages/ar/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/ar/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..6bdd183f72d5 --- /dev/null +++ b/website/pages/ar/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## Transferring ownership of a subgraph + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. + +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-addres +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2. 
Choose the address that you would like to transfer the subgraph to: + + ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) + +Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: + +![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) + +## Deprecating a subgraph + +Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. + +### Step-by-Step + +To deprecate your subgraph, do the following: + +1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). +2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. +3. Your subgraph will no longer appear in searches on Graph Explorer. + +**Please note the following:** + +- The owner's wallet should call the `deprecateSubgraph` function. +- Curators will not be able to signal on the subgraph anymore. +- Curators that already signaled on the subgraph can withdraw their signal at an average share price. +- Deprecated subgraphs will show an error message. + +> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/ar/mips-faqs.mdx b/website/pages/ar/mips-faqs.mdx index dfbc9049c656..596c948ea4fa 100644 --- a/website/pages/ar/mips-faqs.mdx +++ b/website/pages/ar/mips-faqs.mdx @@ -6,10 +6,6 @@ title: MIPs FAQs > Note: the MIPs program is closed as of May 2023. Thank you to all the Indexers who participated! -It's an exciting time to be participating in The Graph ecosystem! During [Graph Day 2022](https://thegraph.com/graph-day/2022/) Yaniv Tal announced the [sunsetting of the hosted service](https://thegraph.com/blog/sunsetting-hosted-service/), a moment The Graph ecosystem has been working towards for many years. - -To support the sunsetting of the hosted service and the migration of all of it's activity to the decentralized network, The Graph Foundation has announced the [Migration Infrastructure Providers (MIPs) program](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program). - The MIPs program is an incentivization program for Indexers to support them with resources to index chains beyond Ethereum mainnet and help The Graph protocol expand the decentralized network into a multi-chain infrastructure layer. The MIPs program has allocated 0.75% of the GRT supply (75M GRT), with 0.5% to reward Indexers who contribute to bootstrapping the network and 0.25% allocated to Network Grants for subgraph developers using multi-chain subgraphs. 
diff --git a/website/pages/ar/network/benefits.mdx b/website/pages/ar/network/benefits.mdx index d4a42c2e21f9..c0ddbdb9be2d 100644 --- a/website/pages/ar/network/benefits.mdx +++ b/website/pages/ar/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Cost Comparison | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $0+ | $0 per month | -| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | -| Cost per query | $0 | $0 | -| البنية الأساسية | Centralized | Decentralized | -| Geographic redundancy | $750+ per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $750+ | $0 | +| Cost Comparison | Self Hosted | The Graph Network | +|:----------------------------:|:---------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $0+ | $0 per month | +| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | +| Cost per query | $0 | $0 | +| البنية الأساسية | Centralized | Decentralized | +| Geographic redundancy | $750+ per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Cost Comparison | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $500 per month | $120 per month | -| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~3,000,000 | -| Cost per query | $0 | $0.00004 | -| البنية الأساسية | Centralized | Decentralized | -| Engineering expense | $200 per hour | Included | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $1,650+ | $120 | +| Cost Comparison | Self Hosted | The Graph Network | +|:----------------------------:|:------------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $500 per month | $120 per month | +| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~3,000,000 | +| Cost per query | $0 | $0.00004 | +| البنية الأساسية | Centralized | Decentralized | +| Engineering expense | $200 per hour | Included | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Cost Comparison | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Monthly server cost\* | $1100 per month, per node | $0 | -| Query costs | $4000 | $1,200 per month | -| Number of nodes needed | 10 | Not applicable | -| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~30,000,000 | -| Cost per query | $0 
| $0.00004 | -| البنية الأساسية | Centralized | Decentralized | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $11,000+ | $1,200 | +| Cost Comparison | Self Hosted | The Graph Network | +|:----------------------------:|:-------------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $1100 per month, per node | $0 | +| Query costs | $4000 | $1,200 per month | +| Number of nodes needed | 10 | Not applicable | +| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~30,000,000 | +| Cost per query | $0 | $0.00004 | +| البنية الأساسية | Centralized | Decentralized | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $11,000+ | $1,200 | \*including costs for backup: $50-$100 per month diff --git a/website/pages/ar/network/curating.mdx b/website/pages/ar/network/curating.mdx index 09b06f9e3476..43260bbe0aab 100644 --- a/website/pages/ar/network/curating.mdx +++ b/website/pages/ar/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. 
If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Signaling on a specific version is especially useful when one subgraph is used b Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## المخاطر 1. سوق الاستعلام يعتبر حديثا في The Graph وهناك خطر من أن يكون٪ APY الخاص بك أقل مما تتوقع بسبب ديناميكيات السوق الناشئة. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. يمكن أن يفشل ال subgraph بسبب خطأ. ال subgraph الفاشل لا يمكنه إنشاء رسوم استعلام. نتيجة لذلك ، سيتعين عليك الانتظار حتى يصلح المطور الخطأ وينشر إصدارا جديدا. - إذا كنت مشتركا في أحدث إصدار من subgraph ، فسيتم ترحيل migrate أسهمك تلقائيا إلى هذا الإصدار الجديد. هذا سيتحمل ضريبة تنسيق بنسبة 0.5٪. @@ -65,7 +65,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dApp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. 
It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: -- يمكن للمنسقين استخدام فهمهم للشبكة لمحاولة التنبؤ كيف لل subgraph أن يولد حجم استعلام أعلى أو أقل في المستقبل +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. What’s the cost of updating a subgraph? @@ -78,50 +78,14 @@ It’s suggested that you don’t update your subgraphs too frequently. See the ### 5. هل يمكنني بيع أسهم التنسيق الخاصة بي؟ -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## منحنى الترابط 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. 
- -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![سعر السهم](/img/price-per-share.png) - -نتيجة لذلك ، يرتفع السعر بثبات ، مما يعني أنه سيكون شراء السهم أكثر تكلفة مع مرور الوقت. فيما يلي مثال لما نعنيه ، راجع منحنى الترابط أدناه: - -![منحنى الترابط Bonding curve](/img/bonding-curve.png) - -ضع في اعتبارك أن لدينا منسقان يشتركان في Subgraph واحد: - -- المنسق (أ) هو أول من أشار إلى ال Subgraphs. من خلال إضافة 120000 GRT إلى المنحنى ، سيكون من الممكن صك 2000 سهم. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- لأن كلا من المنسقين يحتفظان بنصف إجمالي اسهم التنسيق ، فإنهم سيحصلان على قدر متساوي من عوائد المنسقين. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- سيحصل المنسق المتبقي على جميع عوائد المنسق لهذ ال subgraphs. وإذا قام بحرق حصته للحصول علىGRT ، فإنه سيحصل على 120.000 GRT. -- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signaling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. - لازلت مشوشا؟ راجع فيديو دليل التنسيق أدناه: diff --git a/website/pages/ar/network/delegating.mdx b/website/pages/ar/network/delegating.mdx index 18571df08c11..a216b61a0fa5 100644 --- a/website/pages/ar/network/delegating.mdx +++ b/website/pages/ar/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegating --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## دليل المفوض -This guide will explain how to be an effective Delegator in the Graph Network. 
Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,60 +34,84 @@ There are three sections in this guide: Delegators cannot be slashed for bad behavior, but there is a tax on Delegators to disincentivize poor decision-making that could harm the integrity of the network. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### فترة إلغاء التفويض Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely. -
لاحظ 0.5٪ رسوم التفويض ، بالإضافة إلى فترة 28 يوما لإلغاء التفويض.
+
+ لاحظ 0.5٪ رسوم التفويض ، بالإضافة إلى فترة 28 يوما لإلغاء التفويض. +
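+
+> As a purely illustrative calculation (the return figure here is an assumption, not a protocol parameter): delegating 1,000 GRT burns 5 GRT up front through the 0.5% delegation tax. If the remaining 995 GRT earned an effective return of about 10% per year (roughly 0.27 GRT per day), it would take around 18 days of rewards just to recover those 5 GRT. Running this kind of back-of-the-envelope check is a useful sanity test before committing to the 28-day unbonding period.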
### اختيار مفهرس جدير بالثقة مع عائد جيد للمفوضين -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. + +#### Delegation Parameters -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
*المفهرس الأعلى يمنح المفوضين 90٪ من المكافآت. والمتوسط يمنح المفوضين 20٪. والأدنى يعطي المفوضين ~ 83٪.*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant. +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. -As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. +As you can see, in order to choose the right Indexer, you must consider multiple things. -### حساب العائد المتوقع للمفوضين +- It is highly recommend that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently. +- Many Indexers are very active in Discord and will be happy to answer your questions. +- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. -A Delegator must consider a lot of factors when determining the return. These include: +## Calculating Delegators Expected Return -- يمكن للمفوض إلقاء نظرة على قدرة المفهرسين على استخدام التوكن المفوضة المتاحة لهم. إذا لم يقم المفهرس بتخصيص جميع التوكن المتاحة ، فإنه لا يكسب أقصى ربح يمكن أن يحققه لنفسه أو للمفوضين. -- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days. +A Delegator must consider the following factors to determine a return: + +- Consider an Indexer's ability to use the Delegated tokens available to them. + - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Pay attention to the first few days of delegating. + - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low. ### النظر في اقتطاع رسوم الاستعلام query fee cut واقتطاع رسوم الفهرسة indexing fee cut -كما هو موضح في الأقسام أعلاه ، يجب عليك اختيار مفهرس يتسم بالشفافية والصدق بشأن اقتطاع رسوم الاستعلام Query Fee Cut واقتطاع رسوم الفهرسة Indexing Fee Cuts. يجب على المفوض أيضا إلقاء نظرة على بارامتارات Cooldown time لمعرفة مقدار الوقت المتاح لديهم. بعد الانتهاء من ذلك ، من السهل إلى حد ما حساب مقدار المكافآت التي يحصل عليها المفوضون. الصيغة هي: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. 
You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![صورة التفويض 3](/img/Delegation-Reward-Formula.png) ### النظر في أسهم تفويض المفهرس -باستخدام هذه الصيغة ، يمكننا أن نرى أنه من الممكن فعليا للمفهرس الذي يعرض 20٪ فقط للمفوضين ، أن يمنح المفوضين عائدا أفضل من المفهرس الذي يعطي 90٪ للمفوضين. +Delegators should consider the proportion of the Delegation Pool they own. -![شارك الصيغة](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![شارك الصيغة](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### النظر في سعة التفويض (delegation capacity) -Another thing to consider is the delegation capacity. Currently, the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -85,16 +119,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### MetaMask "Pending Transaction" Bug -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. + +#### مثال -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. 
-For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will only be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## Video guide for the network UI +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/ar/network/developing.mdx b/website/pages/ar/network/developing.mdx index 638f2b5af282..640dbb33d81e 100644 --- a/website/pages/ar/network/developing.mdx +++ b/website/pages/ar/network/developing.mdx @@ -2,52 +2,88 @@ title: Developing --- -Developers are the demand side of The Graph ecosystem. Developers build subgraphs and publish them to The Graph Network. Then, they query live subgraphs with GraphQL in order to power their applications. +To start coding right away, go to [Developer Quick Start](/quick-start/). + +## نظره عامة + +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. + +On The Graph, you can: + +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. + +### What is GraphQL? + +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? 
+ +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `subgraph.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). +- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). ## دورة حياة الـ Subgraph -Subgraphs deployed to the network have a defined lifecycle. +Here is a general overview of a subgraph’s lifecycle: -### Build locally +![دورة حياة الـ Subgraph](/img/subgraph-lifecycle.png) -As with all subgraph development, it starts with local development and testing. Developers can use the same local setup whether they are building for The Graph Network, the hosted service or a local Graph Node, leveraging `graph-cli` and `graph-ts` to build their subgraph. Developers are encouraged to use tools such as [Matchstick](https://github.com/LimeChain/matchstick) for unit testing to improve the robustness of their subgraphs. +### Build locally -> There are certain constraints on The Graph Network, in terms of feature and network support. Only subgraphs on [supported networks](/developing/supported-networks) will earn indexing rewards, and subgraphs which fetch data from IPFS are also not eligible. +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Deploy to Subgraph Studio -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. - -### Publish to the Network +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -When the developer is happy with their subgraph, they can publish it to The Graph Network. This is an on-chain action, which registers the subgraph so that it is discoverable by Indexers. Published subgraphs have a corresponding NFT, which is then easily transferable. The published subgraph has associated metadata, which provides other network participants with useful context and information. +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. -### Signal to Encourage Indexing +### Publish to the Network -Published subgraphs are unlikely to be picked up by Indexers without the addition of signal. 
Signal is locked GRT associated with a given subgraph, which indicates to Indexers that a given subgraph will receive query volume, and also contributes to the indexing rewards available for processing it. Subgraph developers will generally add signal to their subgraph, in order to encourage indexing. Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Querying & Application Development +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -Once a subgraph has been processed by Indexers and is available for querying, developers can start to use the subgraph in their applications. Developers query subgraphs via a gateway, which forwards their queries to an Indexer who has processed the subgraph, paying query fees in GRT. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Updating Subgraphs +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### Querying & Application Development -Once the Subgraph Developer is ready to update, they can initiate a transaction to point their subgraph at the new version. Updating the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Deprecating Subgraphs +Learn more about [querying subgraphs](/querying/querying-the-graph/). 
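+
+As a rough sketch only, assuming a subgraph whose schema happens to define a `Token` entity (the entity and field names below are placeholders, not part of any particular subgraph), a query might look like this:
+
+```graphql
+{
+  # Placeholder entity: replace with whatever entities the subgraph's schema defines
+  tokens(first: 5, orderBy: totalSupply, orderDirection: desc) {
+    id
+    symbol
+    totalSupply
+  }
+}
+```
+
+Each subgraph exposes its own schema, so the available entities and fields differ; the playground on a subgraph's page in Graph Explorer or Subgraph Studio is a convenient place to experiment with queries like this.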
-At some point a developer may decide that they no longer need a published subgraph. At that point they may deprecate the subgraph, which returns any signalled GRT to the Curators. +### Updating Subgraphs -### Diverse Developer Roles +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Some developers will engage with the full subgraph lifecycle on the network, publishing, querying and iterating on their own subgraphs. Some may be focused on subgraph development, building open APIs which others can build on. Some may be application focused, querying subgraphs deployed by others. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Developers and Network Economics +### Deprecating & Transferring Subgraphs -Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is updated. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/ar/network/explorer.mdx b/website/pages/ar/network/explorer.mdx index 4c82281ebc72..2024b24bcd1c 100644 --- a/website/pages/ar/network/explorer.mdx +++ b/website/pages/ar/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgraphs -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. +After you just finish deploying and publishing your subgraph in Subgraph Studio, click on the "subgraphs tab” at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![صورة المستكشف 1](/img/Subgraphs-Explorer-Landing.png) -عند النقر على Subgraphs ، يمكنك اختبار الاستعلامات وستكون قادرا على الاستفادة من تفاصيل الشبكة لاتخاذ قرارات صائبة. 
سيمكنك ايضا من الإشارة إلى GRT على Subgraphs الخاص بك أو subgraphs الآخرين لجعل المفهرسين على علم بأهميته وجودته. هذا أمر مهم جدا وذلك لأن الإشارة ل Subgraphs تساعد المفهرسين في اختيار ذلك ال Subgraph لفهرسته ، مما يعني أنه سيظهر على الشبكة لتقديم الاستعلامات. +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![صورة المستكشف 2](/img/Subgraph-Details.png) -في كل صفحة مخصصة ل subgraphs ، تظهر العديد من التفاصيل. وهذا يتضمن +On each subgraph’s dedicated page, you can do the following: - أشر/الغي الإشارة على Subgraphs - اعرض المزيد من التفاصيل مثل المخططات و ال ID الحالي وبيانات التعريف الأخرى @@ -31,26 +45,32 @@ First things first, if you just finished deploying and publishing your subgraph ## المشاركون -Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in-depth review of what each tab means for you. +This section provides a bird' s-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. Indexers ![صورة المستكشف 4](/img/Indexer-Pane.png) -Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. -- اقتطاع رسوم الاستعلام Query Fee Cut - هي النسبة المئوية لخصم رسوم الاستعلام والتي يحتفظ بها المفهرس عند التقسيم مع المفوضين Delegators -- اقتطاع المكافأة الفعالة Effective Reward Cut - هو اقتطاع مكافأة الفهرسة indexing reward cut المطبقة على مجموعة التفويضات. إذا كانت سالبة ، فهذا يعني أن المفهرس يتنازل عن جزء من مكافآته. إذا كانت موجبة، فهذا يعني أن المفهرس يحتفظ ببعض مكافآته -- فترة التهدئة Cooldown المتبقية - هو الوقت المتبقي حتى يتمكن المفهرس من تغيير بارامترات التفويض. يتم إعداد فترات التهدئة من قبل المفهرسين عندما يقومون بتحديث بارامترات التفويض الخاصة بهم -- مملوكة Owned - هذه هي حصة المفهرس المودعة ، والتي قد يتم شطبها بسبب السلوك الضار أو غير الصحيح -- مفوضة Delegated - هي حصة مفوضة من قبل المفوضين والتي يمكن تخصيصها بواسطة المفهرس ، لكن لا يمكن شطبها -- مخصصة Allocated - حصة يقوم المفهرسون بتخصيصها بشكل نشط نحو subgraphs التي يقومون بفهرستها -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. 
If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. -- رسوم الاستعلام Query Fees - هذا هو إجمالي الرسوم التي دفعها المستخدمون للاستعلامات التي يقدمها المفهرس طوال الوقت +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - مكافآت المفهرس Indexer Rewards - هو مجموع مكافآت المفهرس التي حصل عليها المفهرس ومفوضيهم Delegators. تدفع مكافآت المفهرس ب GRT. -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking on the right-hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. To learn more about how to become an Indexer, you can take a look at the [official documentation](/network/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ To learn more about how to become an Indexer, you can take a look at the [offici ### 3. المفوضون Delegators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. 
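+
+> As a purely illustrative example (the numbers are made up): a Curator whose shares represent 5% of all curation shares on a subgraph would be entitled to roughly 5% of the curator portion of the query fees that subgraph generates.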
-يمكن للمنسقين أن يكونوا من أعضاء المجتمع أو من مستخدمي البيانات أو حتى من مطوري ال subgraph والذين يشيرون إلى ال subgraphs الخاصة بهم وذلك عن طريق إيداع توكن GRT في منحنى الترابط. وبإيداع GRT ، يقوم المنسقون بصك أسهم التنسيق في ال subgraph. نتيجة لذلك ، يكون المنسقون مؤهلين لكسب جزء من رسوم الاستعلام التي يُنشئها ال subgraph المشار إليها. يساعد منحنى الترابط المنسقين على تنسيق مصادر البيانات الأعلى جودة. جدول المنسق في هذا القسم سيسمح لك برؤية: +In the The Curator table listed below you can see: - التاريخ الذي بدأ فيه المنسق بالتنسق - عدد ال GRT الذي تم إيداعه @@ -68,34 +92,36 @@ Curators analyze subgraphs to identify which subgraphs are of the highest qualit ![صورة المستكشف 6](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/network/curating) +If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. المفوضون Delegators -يلعب المفوضون دورا رئيسيا في الحفاظ على الأمن واللامركزية في شبكة The Graph. يشاركون في الشبكة عن طريق تفويض (أي ، "Staking") توكن GRT إلى مفهرس واحد أو أكثر. بدون المفوضين، من غير المحتمل أن يربح المفهرسون مكافآت ورسوم مجزية. لذلك ، يسعى المفهرسون إلى جذب المفوضين من خلال منحهم جزءا من مكافآت الفهرسة ورسوم الاستعلام التي يكسبونها. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. -Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! ![صورة المستكشف 7](/img/Delegation-Overview.png) -جدول المفوضين سيسمح لك برؤية المفوضين النشطين في المجتمع ، بالإضافة إلى مقاييس مثل: +In the Delegators table you can see the active Delegators in the community and important metrics: - عدد المفهرسين المفوض إليهم - التفويض الأصلي للمفوض Delegator’s original delegation - المكافآت التي جمعوها والتي لم يسحبوها من البروتوكول - المكافآت التي تم سحبها من البروتوكول - كمية ال GRT التي يمتلكونها حاليا في البروتوكول -- تاريخ آخر تفويض لهم +- The date they last delegated -If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). 
+If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## Network -في قسم الشبكة ، سترى KPIs بالإضافة إلى القدرة على التبديل بين الفترات وتحليل مقاييس الشبكة بشكل مفصل. ستمنحك هذه التفاصيل فكرة عن كيفية أداء الشبكة بمرور الوقت. +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. ### نظره عامة -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - إجمالي حصة الشبكة الحالية - الحصة المقسمة بين المفهرسين ومفوضيهم @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - بارامترات البروتوكول مثل مكافأة التنسيق ومعدل التضخم والمزيد - رسوم ومكافآت الفترة الحالية -بعض التفاصيل الأساسية الجديرة بالذكر: +A few key details to note: -- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![صورة المستكشف 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ In the Epochs section, you can analyze on a per-epoch basis, metrics such as: - الفترة النشطة هي الفترة التي يقوم فيها المفهرسون حاليا بتخصيص الحصص وتحصيل رسوم الاستعلام - فترات التسوية هي تلك الفترات التي يتم فيها تسوية قنوات الحالة state channels. هذا يعني أن المفهرسين يكونون عرضة للشطب إذا فتح المستخدمون اعتراضات ضدهم. - فترات التوزيع هي تلك الفترات التي يتم فيها تسوية قنوات الحالة للفترات ويمكن للمفهرسين المطالبة بخصم رسوم الاستعلام الخاصة بهم. - - الفترات النهائية هي تلك الفترات التي ليس بها خصوم متبقية على رسوم الاستعلام للمطالبة بها من قبل المفهرسين ، وبالتالي يتم الانتهاء منها. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![صورة المستكشف 9](/img/Epoch-Stats.png) ## ملف تعريف المستخدم الخاص بك -Now that we’ve talked about the network stats, let’s move on to your personal profile. 
Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### نظرة عامة على الملف الشخصي -هذا هو المكان الذي يمكنك فيه رؤية الإجراءات الحالية التي اتخذتها. وأيضا هو المكان الذي يمكنك فيه العثور على معلومات ملفك الشخصي والوصف وموقع الويب (إذا قمت بإضافته). +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![صورة المستكشف 10](/img/Profile-Overview.png) ### تبويب ال Subgraphs -إذا قمت بالنقر على تبويب Subgraphs ، فسترى ال subgraphs المنشورة الخاصة بك. لن يشمل ذلك أي subgraphs تم نشرها ب CLI لأغراض الاختبار - لن تظهر ال subgraphs إلا عند نشرها على الشبكة اللامركزية. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![صورة المستكشف 11](/img/Subgraphs-Overview.png) ### تبويب الفهرسة -إذا قمت بالنقر على تبويب الفهرسة "Indexing " ، فستجد جدولا به جميع المخصصات النشطة والتاريخية ل subgraphs ، بالإضافة إلى المخططات التي يمكنك تحليلها ورؤية أدائك السابق كمفهرس. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. هذا القسم سيتضمن أيضا تفاصيل حول صافي مكافآت المفهرس ورسوم الاستعلام الصافي الخاصة بك. سترى المقاييس التالية: @@ -158,7 +189,9 @@ Now that we’ve talked about the network stats, let’s move on to your persona ### تبويب التفويض Delegating Tab -المفوضون مهمون لشبكة the Graph. يجب أن يستخدم المفوض معرفته لاختيار مفهرسا يوفر عائدا على المكافآت. هنا يمكنك العثور على تفاصيل تفويضاتك النشطة والتاريخية ، مع مقاييس المفهرسين الذين قمت بتفويضهم. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. في النصف الأول من الصفحة ، يمكنك رؤية مخطط التفويض الخاص بك ، بالإضافة إلى مخطط المكافآت فقط. إلى اليسار ، يمكنك رؤية KPIs التي تعكس مقاييس التفويض الحالية. diff --git a/website/pages/ar/network/indexing.mdx b/website/pages/ar/network/indexing.mdx index 06055f703f94..d0d1ce70321f 100644 --- a/website/pages/ar/network/indexing.mdx +++ b/website/pages/ar/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap تشتمل العديد من لوحات المعلومات التي أنشأها المجتمع على قيم المكافآت المعلقة ويمكن التحقق منها بسهولة يدويًا باتباع الخطوات التالية: -1. استعلم عن [mainnet الفرعيةرسم بياني ](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) للحصول على IDs لجميع المخصصات النشطة: +1. 
Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql } query indexerAllocations @@ -113,11 +113,11 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that - **كبيرة** - مُعدة لفهرسة جميع ال subgraphs المستخدمة حاليا وأيضا لخدمة طلبات حركة مرور البيانات ذات الصلة. | Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| صغير | 4 | 8 | 1 | 4 | 16 | -| قياسي | 8 | 30 | 1 | 12 | 48 | -| متوسط | 16 | 64 | 2 | 32 | 64 | -| كبير | 72 | 468 | 3.5 | 48 | 184 | +| ----- |:---------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| +| صغير | 4 | 8 | 1 | 4 | 16 | +| قياسي | 8 | 30 | 1 | 12 | 48 | +| متوسط | 16 | 64 | 2 | 32 | 64 | +| كبير | 72 | 468 | 3.5 | 48 | 184 | ### What are some basic security precautions an Indexer should take? @@ -149,20 +149,20 @@ At the center of an Indexer's infrastructure is the Graph Node which monitors th #### Graph Node -| المنفذ | الغرض | المسار | CLI Argument | متغيرات البيئة | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | http-port-- | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | ws-port-- | - | -| 8020 | JSON-RPC
(for managing deployments) | / | admin-port-- | - | -| 8030 | Subgraph indexing status API | /graphql | index-node-port-- | - | -| 8040 | Prometheus metrics | /metrics | metrics-port-- | - | +| المنفذ | الغرض | المسار | CLI Argument | متغيرات البيئة | +| ------ | ------------------------------------------------------- | ------------------------------------------------------------------- | ----------------- | -------------- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | http-port-- | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | ws-port-- | - | +| 8020 | JSON-RPC
(for managing deployments) | / | admin-port-- | - | +| 8030 | Subgraph indexing status API | /graphql | index-node-port-- | - | +| 8040 | Prometheus metrics | /metrics | metrics-port-- | - | #### خدمة المفهرس -| المنفذ | الغرض | المسار | CLI Argument | متغيرات البيئة | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | port-- | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | metrics-port-- | - | +| المنفذ | الغرض | المسار | CLI Argument | متغيرات البيئة | +| ------ | ------------------------------------------------------------ | --------------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(for paid subgraph queries) | /subgraphs/id/...
/status
/channel-messages-inbox | port-- | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | metrics-port-- | - | #### وكيل المفهرس(Indexer Agent) @@ -545,7 +545,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additonal argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Queue allocation action diff --git a/website/pages/ar/network/overview.mdx b/website/pages/ar/network/overview.mdx index 08469cdc547b..c6fdf2fdc81f 100644 --- a/website/pages/ar/network/overview.mdx +++ b/website/pages/ar/network/overview.mdx @@ -2,14 +2,20 @@ title: Network Overview --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## نظره عامة +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![اقتصاد الـ Token](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20 used to allocate resources in the network. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/ar/new-chain-integration.mdx b/website/pages/ar/new-chain-integration.mdx index 5b4925685de2..ec987dfa5b55 100644 --- a/website/pages/ar/new-chain-integration.mdx +++ b/website/pages/ar/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: تكامل الشبكات الجديدة +title: New Chain Integration --- -عقدة الغراف يمكنه حاليًا فهرسة البيانات من أنواع الشبكات التالية: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- إيثيريوم، من خلال استدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية ( EVM JSON-RPC) و [فايرهوز إيثيريوم](https://github.com/streamingfast/firehose-ethereum) -- نير، عبر [نير فايرهوز](https://github.com/streamingfast/near-firehose-indexer) -- كوسموس، عبر [كوسموس فايرهوز](https://github.com/graphprotocol/firehose-cosmos) -- أرويف، عبر [أرويف فايرهوز](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -إذا كنت مهتمًا بأي من تلك السلاسل، فإن التكامل يتطلب ضبط واختبار عقدة الغراف. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -إذا كنت مهتمًا بنوع سلسلة مختلفة، فيجب بناء تكامل جديد مع عقدة الغراف. الطريقة الموصى بها هي تطوير فايرهوز جديد للسلسلة المعنية، ثم دمج ذلك الفايرهوز مع عقدة الغراف. المزيد من المعلومات أدناه. +## Integration Strategies -**1. استدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية** +### 1. EVM JSON-RPC -إذا كانت سلسلة الكتل متوافقة مع آلة الإيثريوم الافتراضية وإذا كان العميل/العقدة يوفر واجهة برمجة التطبيقات القياسية لاستدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية، ، فإنه يمكن لعقدة الغراف فهرسة هذه السلسلة الجديدة. لمزيد من المعلومات، يرجى الاطلاع على [اختبار استدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية] (تكامل*سلسة*جديدة #اختبار*استدعاء*إجراء*عن*بُعد*باستخدام*تمثيل*كائنات*جافا*سكريبت*لآلة*التشغيل*الافتراضية_لإثريوم). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. فايرهوز** +#### اختبار استدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية (EVM JSON-RPC) -بالنسبة لسلاسل الكتل الغير المبنية على آلة الإيثيريوم الافتراضية، يجب على عقدة الغراف استيعاب بيانات سلسلة الكتل عبر استدعاء الإجراءات عن بُعد من جوجل(gRPC) وتعريفات الأنواع المعروفة. يمكن القيام بذلك باستخدام [فايرهوز](فايرهوز/)، وهي تقنية جديدة تم تطويرها بواسطة [ستريمنج فاست](https://www.streamingfast.io/)، وتوفر حلاً لفهرسة سلسلة الكتل والقابلة للتوسع باستخدام نهج قائم على الملفات والتدفق المباشر. يمكنكم التواصل مع [فريق ستريمنج فاست](mailto:integrations@streamingfast.io/) إذا كنتم بحاجة إلى مساعدة في تطوير فايرهوز. 
+For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## الفرق بين استدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية والفايرهوز +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`، ضمن طلب دفعة استدعاء الإجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -في حين أن الاثنين مناسبان للغرافات الفرعية، فإن فايرهوز مطلوب دائمًا للمطورين الراغبين في البناء باستخدام [سبستريمز](سبستريمز/)، مثل بناء [غرافات فرعية مدعومة بسبستريمز](cookbook/substreams-powered-subgraphs/). بالإضافة إلى ذلك، يسمح فايرهوز بتحسين سرعات الفهرسة مقارنةً باستدعاء الإجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت. +### 2. Firehose Integration -قد يفكر المطورون الجدد لسلاسل آلة الإيثيريوم الافتراضة أيضًا في الاستفادة من نهج فايرهوز بناءً على فوائد سبستريمز وقدرات الفهرسة المتوازية الضخمة. إن دعم كليهما يسمح للمطورين بالاختيار بين بناء سبستريمز أو غرافات فرعية للسلسلة الجديدة. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> ملاحظة: أن التكامل القائم على فايرهوز لسلاسل الآلة الإيثيريوم الافتراضية يتطلب من المفهرسين تشغيل عقدة نداء الإجراء عن بعد للأرشيف الخاص بالشبكة لفهرسة الغرافات الفرعية بشكل صحيح. يرجع ذلك إلى عدم قدرة فايرهوز على توفير حالة العقد الذكية التي يمكن الوصول إليها عادةً بطريقةنداء الإجراء عن بعد `eth_call`. (من الجدير بالذكر أن استخدام eth_calls [ليست ممارسة جيدة للمطورين](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. 
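To make the EVM JSON-RPC requirements listed earlier on this page easier to verify, the sketch below probes a candidate RPC endpoint with a few of the required methods before it is wired into Graph Node. The `RPC_URL` value is a placeholder, and the script is only an illustrative sanity check under those assumptions, not part of the official integration process.

```typescript
// Illustrative sketch: probe an EVM RPC endpoint for a few of the JSON-RPC
// methods Graph Node relies on. RPC_URL is a placeholder for the node under test.
const RPC_URL = 'http://localhost:8545'

async function rpc(method: string, params: unknown[] = []): Promise<unknown> {
  const res = await fetch(RPC_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ jsonrpc: '2.0', id: 1, method, params }),
  })
  const { result, error } = await res.json()
  if (error) throw new Error(`${method} failed: ${error.message}`)
  return result
}

async function main(): Promise<void> {
  // Cheap liveness checks for basic method support.
  console.log('net_version:', await rpc('net_version'))

  const latest = (await rpc('eth_getBlockByNumber', ['latest', false])) as { number: string; hash: string }
  console.log('latest block:', latest.number)

  // Confirms log retrieval works over a one-block range.
  const logs = (await rpc('eth_getLogs', [{ fromBlock: latest.number, toBlock: latest.number }])) as unknown[]
  console.log('logs in latest block:', logs.length)

  // Confirms lookup by hash round-trips to the same block.
  const byHash = (await rpc('eth_getBlockByHash', [latest.hash, false])) as { number: string }
  console.log('lookup by hash returned block:', byHash.number)
}

main().catch(console.error)
```

Archive-dependent checks, such as historical `eth_call` with an EIP-1898 block parameter, would be layered on top of the same helper.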
-## اختبار استدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية (EVM JSON-RPC) +#### Specific Firehose Instrumentation for EVM (`geth`) chains -لكي تتمكن عقدة الغراف من جمع البيانات من سلسلة EVM، يجب أن يوفر العقد RPC طرق EVM JSON RPC التالية: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. -- `eth_getLogs` -- `eth_call` \_(للكتل التاريخية، باستخدام EIP-1898 - يتطلب نقطة أرشيف): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`، ضمن طلب دفعة استدعاء الإجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت -- _`trace_filter`_ _(مطلوبة اختياريًا لعقدة الغراف لدعم معالجات الاستدعاء)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### تكوين عقدة الغراف +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**ابدأ بإعداد بيئتك المحلية** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## تكوين عقدة الغراف + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [استنسخ عقدة الغراف](https://github.com/graphprotocol/graph-node) -2. قم بتعديل [هذا السطر](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) لتضمين اسم الشبكة الجديدة والعنوان المتوافق مع استدعاء إجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت لآلة الإيثريوم الافتراضية - > لا تقم بتعديل اسم المتغير البيئي نفسه. يجب أن يظل اسمه `ethereum` حتى لو كان اسم الشبكة مختلفًا. -3. قم بتشغيل عقدة نظام الملفات بين الكواكب (IPFS) أو استخدم العقدة التي يستخدمها الغراف: https://api.thegraph.com/ipfs/ -**اختبر التكامل من خلال نشر الغراف الفرعي محليًا.** +2. 
Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. قم بإنشاء مثالًا بسيطًا للغراف الفرعي. بعض الخيارات المتاحة هي كالتالي: - 1. يُعتبر [غرافيتار](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) المُعد مسبقًا مثالًا جيدًا لعقد ذكي وغراف فرعي كنقطة انطلاقة جيدة - 2. قم بإعداد غراف فرعي محلي من أي عقد ذكي موجود أو بيئة تطوير صلبة [باستخدام هاردهات وملحق الغراف](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. أنشئ غرافك الفرعي في عقدة الغراف: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. انشر غرافك الفرعي إلى عقدة الغراف: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -إذا لم تكن هناك أخطاء يجب أن يقوم عقدة الغراف بمزامنة الغراف الفرعي المنشور. قم بمنحه بعض الوقت لإتمام عملية المزامنة، ثم قم بإرسال بعض استعلامات لغة الإستعلام للغراف (GraphQL) إلى نقطة نهاية واجهة برمجة التطبيقات الموجودة في السجلات. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## تكامل سلسلة جديدة تدعم فايرهوز +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. قم بإنشاء مثالًا بسيطًا للغراف الفرعي. بعض الخيارات المتاحة هي كالتالي: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +إذا لم تكن هناك أخطاء يجب أن يقوم عقدة الغراف بمزامنة الغراف الفرعي المنشور. قم بمنحه بعض الوقت لإتمام عملية المزامنة، ثم قم بإرسال بعض استعلامات لغة الإستعلام للغراف (GraphQL) إلى نقطة نهاية واجهة برمجة التطبيقات الموجودة في السجلات. -يتيح فايرهوز أيضًا إمكانية دمج سلسلة جديدة. يُعتبر هذا حاليًا الخيار الأفضل للسلاسل الغير معتمدة على آلة الإيثريوم الافتراضية ويعتبر متطلبًا لدعم سبستريمز. الوثائق الإضافية تركز على كيفية عمل فايرهوز وإضافة دعم فايرهوز لسلسلة جديدة ودمجها مع عقدة الغراف. يُوصى بالوثائق التالية للمطورين الذين يقومون بذلك: +## Substreams-powered Subgraphs -1. [وثائق عامة عن فايرهوز] (firehose/) -2. [إضافة دعم فايرهوز لسلسلة جديدة](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [دمج غراف نود مع سلسلة جديدة عبر فايرهوز](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. 
These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/ar/operating-graph-node.mdx b/website/pages/ar/operating-graph-node.mdx index ec9595ef8404..ac2816215c96 100644 --- a/website/pages/ar/operating-graph-node.mdx +++ b/website/pages/ar/operating-graph-node.mdx @@ -77,13 +77,13 @@ A complete Kubernetes example configuration can be found in the [indexer reposit When it is running Graph Node exposes the following ports: -| المنفذ | الغرض | المسار | CLI Argument | متغيرات البيئة | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | http-port-- | - | -| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | ws-port-- | - | -| 8020 | JSON-RPC
(for managing deployments) | / | admin-port-- | - | -| 8030 | Subgraph indexing status API | /graphql | index-node-port-- | - | -| 8040 | Prometheus metrics | /metrics | metrics-port-- | - | +| المنفذ | الغرض | المسار | CLI Argument | متغيرات البيئة | +| ------ | ------------------------------------------------------- | ------------------------------------------------------------------- | ----------------- | -------------- | +| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...

/subgraphs/name/.../... | http-port-- | - | +| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...

/subgraphs/name/.../... | ws-port-- | - | +| 8020 | JSON-RPC
(for managing deployments) | / | admin-port-- | - | +| 8030 | Subgraph indexing status API | /graphql | index-node-port-- | - | +| 8040 | Prometheus metrics | /metrics | metrics-port-- | - | > **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. diff --git a/website/pages/ar/querying/graphql-api.mdx b/website/pages/ar/querying/graphql-api.mdx index 2d2efb6008c2..258471c1d9d4 100644 --- a/website/pages/ar/querying/graphql-api.mdx +++ b/website/pages/ar/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## الاستعلامات +## What is GraphQL? -في مخطط الـ subgraph الخاص بك ، يمكنك تعريف أنواع وتسمى `Entities`. لكل نوع من `Entity` ، سيتم إنشاء حقل `entity` و `entities` في المستوى الأعلى من نوع `Query`. لاحظ أنه لا يلزم تضمين `query` أعلى استعلام `graphql` عند استخدام The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Examples @@ -21,7 +29,7 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. } ``` -> **Note:** When querying for a single entity, the `id` field is required, and it must be a string. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Query all `Token` entities: @@ -36,7 +44,10 @@ Query all `Token` entities: ### Sorting -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### مثال @@ -53,7 +64,7 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. -In the following example, we sort the tokens by the name of their owner: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ In the following example, we sort the tokens by the name of their owner: ### Pagination -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. - -Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. 
+When querying a collection, it's best to: -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### Example using `first` @@ -106,7 +118,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect #### Example using `first` and `id_ge` -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtering -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Example using `where` @@ -155,7 +168,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### Example for block filtering -You can also filter entities by the `_change_block(number_gte: Int)` - this filters entities which were updated in or after the specified block. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). @@ -193,7 +206,7 @@ As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/release ##### `AND` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. 
```graphql { @@ -208,7 +221,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ``` > **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ##### `OR` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. 
#### مثال @@ -322,12 +335,12 @@ Fulltext search queries have one required field, `text`, for supplying search te Fulltext search operators: -| رمز | عامل التشغيل | الوصف | -| --- | --- | --- | -| `&` | `And` | لدمج عبارات بحث متعددة في فلتر للكيانات التي تتضمن جميع العبارات المتوفرة | -| | | `أو` | الاستعلامات التي تحتوي على عبارات بحث متعددة مفصولة بواسطة عامل التشغيل or ستعيد جميع الكيانات المتطابقة من أي عبارة متوفرة | -| `<->` | `Follow by` | يحدد المسافة بين كلمتين. | -| `:*` | `Prefix` | يستخدم عبارة البحث prefix للعثور على الكلمات التي تتطابق بادئتها (مطلوب حرفان.) | +| رمز | عامل التشغيل | الوصف | +| ----------- | ------------ | --------------------------------------------------------------------------------------------------------------------------- | +| `&` | `And` | لدمج عبارات بحث متعددة في فلتر للكيانات التي تتضمن جميع العبارات المتوفرة | +| | | `أو` | الاستعلامات التي تحتوي على عبارات بحث متعددة مفصولة بواسطة عامل التشغيل or ستعيد جميع الكيانات المتطابقة من أي عبارة متوفرة | +| `<->` | `Follow by` | يحدد المسافة بين كلمتين. | +| `:*` | `Prefix` | يستخدم عبارة البحث prefix للعثور على الكلمات التي تتطابق بادئتها (مطلوب حرفان.) | #### Examples @@ -376,11 +389,11 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 ## المخطط -The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### Entities diff --git a/website/pages/ar/querying/querying-best-practices.mdx b/website/pages/ar/querying/querying-best-practices.mdx index 1068236fc184..47e1757b7cb2 100644 --- a/website/pages/ar/querying/querying-best-practices.mdx +++ b/website/pages/ar/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: أفضل الممارسات للاستعلام --- -يوفر The Graph طريقة لامركزية للاستعلام عن البيانات من سلاسل الكتل. +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. -يتم عرض بيانات شبكة Graph من خلال GraphQL API ، مما يسهل الاستعلام عن البيانات باستخدام لغة GraphQL. - -ستوجهك هذه الصفحة خلال القواعد الأساسية للغة GraphQL وأفضل ممارسات استعلامات GraphQL. +Learn the essential GraphQL language rules and GraphQL querying best practices. 
--- @@ -16,7 +14,7 @@ title: أفضل الممارسات للاستعلام على عكس REST API ، فإن GraphQL API مبنية على مخطط يحدد الاستعلامات التي يمكن تنفيذها. -على سبيل المثال ، طلب الاستعلام للحصول على توكن باستخدام استعلام `token` سيبدو كما يلي: +على سبيل المثال ، طلب الاستعلام للحصول على توكن باستخدام استعلام ` token ` سيبدو كما يلي: ```graphql query GetToken($id: ID!) { @@ -40,7 +38,7 @@ query GetToken($id: ID!) { تستخدم استعلامات GraphQL لغة GraphQL ، التي تم تحديدها في [المواصفات](https://spec.graphql.org/). -يتكون استعلام `GetToken` أعلاه من أجزاء متعددة للغة (تم استبدالها أدناه بـ placeholders `[...]`): +يتكون استعلام ` GetToken ` أعلاه من أجزاء متعددة للغة (تم استبدالها أدناه بـ placeholders ` [...] `): ```graphql query [operationName]([variableName]: [variableType]) { @@ -54,8 +52,8 @@ query [operationName]([variableName]: [variableType]) { على الرغم من أن قائمة القواعد التي يجب اتباعها طويلة، إلا أن هناك قواعد أساسية يجب أخذها في الاعتبار عند كتابة استعلامات GraphQL: -- يجب استخدام كل `queryName` مرة واحدة فقط لكل عملية. -- يجب استخدام كل `field` مرة واحدة فقط في التحديد (لا يمكننا الاستعلام عن `id` مرتين ضمن `token`) +- يجب استخدام كل ` queryName ` مرة واحدة فقط لكل عملية. +- يجب استخدام كل ` field ` مرة واحدة فقط في التحديد (لا يمكننا الاستعلام عن ` id ` مرتين ضمن ` token `) - Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/network/explorer). - يجب أن يكون أي متغير تم تعيينه لوسيط متطابقًا مع نوعه. - في قائمة المتغيرات المعطاة ، يجب أن يكون كل واحد منها فريدًا. @@ -71,7 +69,7 @@ GraphQL is a language and set of conventions that transport over HTTP. It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). -However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), we recommend you to use our `graph-client` that supports unique features such as: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - التعامل مع ال subgraph عبر السلاسل: الاستعلام من عدة subgraphs عبر استعلام واحد - [تتبع الكتلة التلقائي](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() More GraphQL client alternatives are covered in ["Querying from an Application"](/querying/querying-from-an-application). -Now that we covered the basic rules of GraphQL queries syntax, let's now look at the best practices of GraphQL query writing. - --- ## Best Practices @@ -164,11 +160,11 @@ Doing so brings **many advantages**: - ** يمكن تخزين المتغيرات مؤقتًا ** على مستوى الخادم - ** يمكن تحليل طلبات البحث بشكل ثابت بواسطة الأدوات ** (المزيد حول هذا الموضوع في الأقسام التالية) -**Note: How to include fields conditionally in static queries** +### How to include fields conditionally in static queries -We might want to include the `owner` field only on a particular condition. +You might want to include the `owner` field only on a particular condition. 
-For this, we can leverage the `@include(if:...)` directive as follows: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Note: The opposite directive is `@skip(if: ...)`. +> Note: The opposite directive is `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL became famous for its "Ask for what you want" tagline. For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. -When querying GraphQL APIs, always think of querying only the fields that will be actually used. - -A common cause of over-fetching is collections of entities. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. Queries should therefore almost always set first explicitly, and make sure they only fetch as many entities as they actually need. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- When querying GraphQL APIs, always think of querying only the fields that will be actually used. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. For example, in the following query: @@ -337,8 +332,8 @@ query { Such repeated fields (`id`, `active`, `status`) bring many issues: -- harder to read for more extensive queries -- عند استخدام الأدوات التي تنشئ أنواع TypeScript بناءً على الاستعلامات (_المزيد عن ذلك في القسم الأخير_)، و `newDelate` و `oldDelegate` سينتج عنهما واجهتين مضمنتان متمايزتين. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. A refactored version of the query would be the following: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) but also will result in better TypeScript types generation. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). ### ما يجب فعله وما لا يجب فعله في GraphQL Fragment -**Fragment base must be a type** +### Fragment base must be a type A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. -**How to spread a Fragment** +#### How to spread a Fragment Fragments are defined on specific types and should be used accordingly in queries. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { It is not possible to spread a fragment of type `Vote` here. -**Define Fragment as an atomic business unit of data** +#### Define Fragment as an atomic business unit of data -GraphQL Fragment must be defined based on their usage. +GraphQL `Fragment`s must be defined based on their usage. 
For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. -Here is a rule of thumb for using Fragment: +Here is a rule of thumb for using fragments: -- عند تكرار الحقول من نفس النوع في استعلام ، قم بتجميعها في Fragment -- عند تكرار الحقول متشابهه ولكن غير متطابقة ، قم بإنشاء fragments متعددة ، على سبيل المثال: +- When fields of the same type are repeated in a query, group them in a `Fragment`. +- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## الأدوات الأساسية +## The Essential Tools ### GraphQL web-based explorers @@ -461,8 +456,8 @@ In order to keep up with the mentioned above best practices and syntactic rules, [Setup the "operations-recommended"](https://the-guild.dev/graphql/eslint/docs/configs) config will enforce essential rules such as: -- `@ graphql-eslint / field-on-right-type`: هل يتم استخدام الحقل على النوع المناسب؟ -- `@ graphql-eslint / no-unused variables`: هل يجب أن يبقى المتغير المعطى غير مستخدم؟ +- ` @ graphql-eslint / field-on-right-type `: هل يتم استخدام الحقل على النوع المناسب؟ +- ` @ graphql-eslint / no-unused variables `: هل يجب أن يبقى المتغير المعطى غير مستخدم؟ - و اكثر! This will allow you to **catch errors without even testing queries** on the playground or running them in production! @@ -473,11 +468,11 @@ This will allow you to **catch errors without even testing queries** on the play The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: -- syntax highlighting -- اقتراحات الإكمال التلقائي -- validation against schema -- snippets -- انتقل إلى تعريف ال fragment وأنواع الإدخال +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. @@ -485,9 +480,9 @@ If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketp The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: -- syntax highlighting -- اقتراحات الإكمال التلقائي -- validation against schema -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -More information on this [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) that showcases all the plugin's main features. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/ar/quick-start.mdx b/website/pages/ar/quick-start.mdx index f510c6ba381d..b49e360b956f 100644 --- a/website/pages/ar/quick-start.mdx +++ b/website/pages/ar/quick-start.mdx @@ -2,24 +2,18 @@ title: بداية سريعة --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. 
-تأكد من أن الغراف الفرعي الخاص بك سيقوم بفهرسة البيانات من [الشبكة المدعومة](/developing/supported-networks). - -تم كتابة هذا الدليل على افتراض أن لديك: +## Prerequisites for this guide - محفظة عملات رقمية -- عنوان عقد ذكي على الشبكة التي تختارها - -## 1. Create a subgraph on Subgraph Studio - -انتقل إلى [سبغراف استوديو] (https://thegraph.com/studio) وقم بربط محفظتك. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Install the Graph CLI +### 1. قم بتثبيت Graph CLI -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. On your local machine, run one of the following commands: @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> يمكنك العثور على الأوامر المتعلقة بالغراف الفرعي الخاص بك على صفحة الغراف الفرعي في (سبغراف استوديو) (https://thegraph.com/studio). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +يمكنك العثور على الأوامر المتعلقة بالغراف الفرعي الخاص بك على صفحة الغراف الفرعي في (سبغراف استوديو) (https://thegraph.com/studio). -عند تهيئة غرافك الفرعي، ستطلب منك أداة "واجهة سطر الأوامر" (CLI) المعلومات التالية: +When you initialize your subgraph, the CLI will ask you for the following information: -- البروتوكول: اختر البروتوكول الذي سيفهرس من فهرسة البيانات -- المعرّف الخاص بالغراف الفرعي: قم بإنشاء اسم لغرافك الغرعي. يُعتبر "سبغراف سلوج" معرّف فريد يستخدم لتمييز غرافك الفرعي. 
-- الدليل الذي سيتم إنشاء الغراف الفرعي فيه: اختر الدليل المحلي الذي ترغب في إنشاء الغراف الفرعي فيه -- شبكة الايثيروم(اختيارية): قد تحتاج إلى تحديد الشبكة المتوافقة مع آلة إيثيريوم الإفتراضية التي سيقوم غرافك الفرعي بفهرسة البيانات منها -- Contract address: Locate the smart contract address you’d like to query data from -- واجهة التطبيق الثنائية: إذا لم يتم ملء واجهة التطبيق الثنائية تلقائياً، فستحتاج إلى إدخاله يدوياً كملف JSON -- كتلة البداية: يُقترح إدخال كتلة البداية لتوفير الوقت أثناء قيام غرافك الفرعي بفهرسة بيانات سلاسل الكتل. يمكنك تحديد كتلة البداية من خلال العثور على الكتلة التي تم نشر عقدك فيها. -- Contract Name: input the name of your contract -- فهرسة أحداث العقد ككيانات: يُقترح ضبط هذا الخيار على "صحيح" (True) حيث سيتم إضافة تعيينات تلقائية إلى غرافك الفرعي لكل حدث يتم إصداره -- إضافة عقد آخر (اختياري): يمكنك إضافة عقد آخر +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. يرجى مراجعة الصورة المرفقة كمثال عن ما يمكن توقعه عند تهيئة غرافك الفرعي: أمر الغراف الفرعي(/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -الأوامر السابقة تنشئ هيكل غرافك الفرعي والذي يمكنك استخدامه كنقطة بداية لبناء غرافك الفرعي. عند إجراء تغييرات على الغراف الفرعي، ستعمل بشكل رئيسي مع ثلاثة ملفات: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -للمزيد من المعلومات حول كيفية كتابة غرافك الفرعي، يُرجى الاطلاع على إنشاء غراف فرعي(/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -عند كتابة غرافك الفرعي، قم بتنفيذ الأوامر التالية: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. 
+ +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. عند كتابة غرافك الفرعي، قم بتنفيذ الأوامر التالية: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- قم بالمصادقة وأنشر غرافك الفرعي. يمكن العثور على مفتاح النشر على صفحة الغراف الفرعي في سبغراف استيديو. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. اختبر غرافك الفرعي - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -ستخبرك السجلات إذا كانت هناك أي أخطاء في غرافك الفرعي. ستبدو سجلات الغراف الفرعي الفعّال على النحو التالي: - -![Subgraph logs](/img/subgraph-logs-image.png) - -إذا فشل غرافك الفرعي، فيمكنك الاستعلام عن صحة الغراف الفرعي باستخدام ملعب غرافي GraphiQL Playground. لاحظ أنه يمكنك الاستفادة من الاستعلام أدناه وإدخال معرف النشر الخاص بك لغرافك الفرعي. في هذه الحالة، `Qm...` هو معرف النشر (يمكن العثور عليه في صفحة الغراف الفرعي ضمن **التفاصيل**). سيخبرك الاستعلام أدناه عند فشل الغراف الفرعي حتى تتمكن من إصلاحه بناءً عليه: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -حدد الشبكة التي ترغب في نشر غرافك الفرعي عليها. يُوصى بنشر الغرافات الفرعية على شبكة أربترم ون للاستفادة من [سرعة معاملات أسرع وتكاليف غاز أقل](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. 
At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -لتوفير تكاليف الغاز، يمكنك تنسيق غرافك الفرعي في نفس العملية التي نشرته عن طريق اختيار هذا الزر عند نشر غرافك الفرعي على شبكة الغراف اللامركزية: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -الآن يمكنك الاستعلام عن غرافك الفرعي عن طريق إرسال استعلامات لغة GraphQL إلى عنوان استعلامات غرافك الفرعي URL والذي يمكنك أن تجده عن طريق النقر على زر الاستعلام. +### 8. Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/ar/release-notes/assemblyscript-migration-guide.mdx b/website/pages/ar/release-notes/assemblyscript-migration-guide.mdx index 84e00f13b4e1..9674f1777573 100644 --- a/website/pages/ar/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/ar/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ let a = a + b ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - ستحتاج إلى إعادة تسمية المتغيرات المكررة إذا كان لديك variable shadowing. 
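To make the renaming advice above concrete, here is a minimal sketch of the shadowing pattern rewritten so the result gets its own name; the variable names are illustrative only and not taken from a real mapping.

```typescript
// Before (rejected once variable shadowing is disallowed): `a` is declared twice.
// let a = 10
// let b = 20
// let a = a + b

// After: the result gets its own name instead of redeclaring `a`.
let a = 10
let b = 20
let sum = a + b
```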
- ### مقارانات Null - من خلال إجراء الترقية على ال Subgraph الخاص بك ، قد تحصل أحيانًا على أخطاء مثل هذه: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - لحل المشكلة يمكنك ببساطة تغيير عبارة `if` إلى شيء مثل هذا: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - لإصلاح هذه المشكلة ، يمكنك إنشاء متغير للوصول إلى الخاصية حتى يتمكن المترجم من القيام بعملية التحقق من الـ nullability: ```typescript diff --git a/website/pages/ar/sps/introduction.mdx b/website/pages/ar/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/ar/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. + +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/ar/sps/triggers-example.mdx b/website/pages/ar/sps/triggers-example.mdx new file mode 100644 index 000000000000..bcb29a772f71 --- /dev/null +++ b/website/pages/ar/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## المتطلبات الأساسية + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. 
Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+  const input: protoEvents = Protobuf.decode<protoEvents>(bytes, protoEvents.decode)
+
+  for (let i = 0; i < input.data.length; i++) {
+    const event = input.data[i]
+
+    if (event.transfer != null) {
+      let entity_id: string = `${event.txnId}-${i}`
+      const entity = new MyTransfer(entity_id)
+      entity.amount = event.transfer!.instruction!.amount.toString()
+      entity.source = event.transfer!.accounts!.source
+      entity.designation = event.transfer!.accounts!.destination
+
+      if (event.transfer!.accounts!.signer!.single != null) {
+        entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+      } else if (event.transfer!.accounts!.signer!.multisig != null) {
+        entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+      }
+      entity.save()
+    }
+  }
+}
+```
+
+## Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+
+## Conclusion
+
+You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/pages/ar/sps/triggers.mdx b/website/pages/ar/sps/triggers.mdx
new file mode 100644
index 000000000000..ed19635d4768
--- /dev/null
+++ b/website/pages/ar/sps/triggers.mdx
@@ -0,0 +1,37 @@
+---
+title: Substreams Triggers
+---
+
+Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework.
+
+> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container.
+
+The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+  let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+  if (transactions.length == 0) {
+    log.info('No transactions found', [])
+    return
+  }
+
+  for (let i = 0; i < transactions.length; i++) {
+    // 2.
+    let transaction = transactions[i]
+
+    let entity = new Transaction(transaction.hash) // 3.
+    entity.from = transaction.from
+    entity.to = transaction.to
+    entity.save()
+  }
+}
+```
+
+Here's what you’re seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object
+2. 
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/ar/substreams.mdx b/website/pages/ar/substreams.mdx index cc4cb7918c45..95b647a8428c 100644 --- a/website/pages/ar/substreams.mdx +++ b/website/pages/ar/substreams.mdx @@ -4,9 +4,11 @@ title: سبستريمز ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## How Substreams Works in 4 Steps @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/ar/sunrise.mdx b/website/pages/ar/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/ar/sunrise.mdx +++ b/website/pages/ar/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. 
Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? 
-Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? 
- -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. 
The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/ar/supported-network-requirements.mdx b/website/pages/ar/supported-network-requirements.mdx index 9c820d055399..1f61d9d971ca 100644 --- a/website/pages/ar/supported-network-requirements.mdx +++ b/website/pages/ar/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Network | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| بوليجون | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Network | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVMe preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| بوليجون | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/ar/tap.mdx b/website/pages/ar/tap.mdx new file mode 100644 index 000000000000..d4f8ebe66247 --- /dev/null +++ b/website/pages/ar/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## نظره عامة + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
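+
+To make the flow above easier to follow, the sketch below restates the receipt and RAV lifecycle in plain TypeScript. It is illustrative only: the type and function names are hypothetical, and the real logic lives in the Rust `tap-agent` and `indexer-agent`. It simply mirrors the steps described above, where receipts accumulate per allocation, are aggregated into RAVs, the last RAV is produced when the allocation closes, and a RAV becomes final once its redeem transaction survives any reorgs.
+
+```ts
+// Hypothetical names only: a minimal model of the receipt-to-RAV lifecycle, not the real TAP implementation.
+type Receipt = { allocationId: string; valueGrt: number }
+type Rav = { allocationId: string; totalGrt: number; last: boolean; final: boolean }
+
+const maxAmountWillingToLoseGrt = 20
+
+const sum = (receipts: Receipt[]): number => receipts.reduce((total, r) => total + r.valueGrt, 0)
+
+// tap-agent keeps the value of non-aggregated receipts below the amount you are willing to lose.
+function shouldRequestAggregation(pending: Receipt[]): boolean {
+  return sum(pending) >= maxAmountWillingToLoseGrt
+}
+
+// A newer RAV carries the value of the previous RAV plus the newly aggregated receipts.
+function aggregate(allocationId: string, pending: Receipt[], previous: Rav | null): Rav {
+  const carried = previous ? previous.totalGrt : 0
+  return { allocationId, totalGrt: carried + sum(pending), last: false, final: false }
+}
+
+// When the allocation closes, everything still pending is aggregated and the result is marked `last`.
+function onAllocationClosed(allocationId: string, pending: Receipt[], previous: Rav | null): Rav {
+  return { ...aggregate(allocationId, pending, previous), last: true }
+}
+
+// After indexer-agent sends the redeem transaction: resend if a reorg reverted it, otherwise mark the RAV `final`.
+function onRedeemChecked(rav: Rav, revertedByReorg: boolean): Rav {
+  return revertedByReorg ? rav : { ...rav, final: true }
+}
+```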
+ +## Blockchain Addresses + +### Contracts + +| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| ------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | +| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | +| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | + +### Gateway + +| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| ---------- | --------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | + +### Requirements + +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. + +- [Graph TAP Aribtrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) + +> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployement. As a result, you have to index it manually. + +## Migration Guide + +### Software versions + +| Component | الاصدار | Image Link | +| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- | +| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) | +| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) | +| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) | + +### Steps + +1. **Indexer Agent** + + - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). + - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + +2. **Indexer Service** + + - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless. + +3. **TAP Agent** + + - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + +4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Notes: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/cs/about.mdx b/website/pages/cs/about.mdx index 5e95320d27d4..e29bcf5fe650 100644 --- a/website/pages/cs/about.mdx +++ b/website/pages/cs/about.mdx @@ -2,46 +2,66 @@ title: O Grafu --- -Tato stránka vysvětlí, co je The Graph a jak můžete začít. - ## Co je Graf? -Grafu je decentralizovaný protokol pro indexování a dotazování dat blockchainu. Graf umožňuje dotazovat se na data, která je obtížné dotazovat přímo. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -Projekty se složitými chytrými smlouvami, jako je [Uniswap](https://uniswap.org/), a iniciativy NFT, jako je [Bored Ape Yacht Club](https://boredapeyachtclub.com/), ukládají data do blockchainu Etherea, takže je opravdu obtížné číst cokoli jiného než základní data přímo z blockchainu. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -Můžete si také vytvořit vlastní server, zpracovávat na něm transakce, ukládat je do databáze a nad tím vším vytvořit koncový bod API pro dotazování na data. Tato možnost je však [náročná na zdroje](/network/benefits/), vyžaduje údržbu, představuje jediný bod selhání a porušuje důležité bezpečnostní vlastnosti potřebné pro decentralizaci. +### How The Graph Functions -**Indexování blockchainových dat je opravdu, ale opravdu těžké.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). 
+- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## Jak funguje graf +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -Grafu se učí, co a jak indexovat data Ethereu, m na základě popisů podgrafů, známých jako manifest podgrafu. Popis podgrafu definuje chytré smlouvy, které jsou pro podgraf zajímavé, události v těchto smlouvách, kterým je třeba věnovat pozornost, a způsob mapování dat událostí na data, která Grafu uloží do své databáze. +- When creating a subgraph, you need to write a subgraph manifest. -Jakmile napíšete `manifest podgrafu`, použijete Graph CLI k uložení definice do IPFS a řeknete indexeru, aby začal indexovat data pro tento podgraf. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -Tento diagram podrobněji popisuje tok dat po nasazení podgraf manifestu, který se zabývá transakcemi Ethereum: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![Grafu vysvětlující, jak Graf používá Uzel grafu k doručování dotazů konzumentům dat](/img/graph-dataflow.png) Průběh se řídí těmito kroky: -1. Dapp přidává data do Ethereum prostřednictvím transakce na chytrém kontraktu. -2. Chytrý smlouva vysílá při zpracování transakce jednu nebo více událostí. -3. Uzel grafu neustále vyhledává nové bloky Ethereum a data pro váš podgraf, která mohou obsahovat. -4. Uzel grafu v těchto blocích vyhledá události Etherea pro váš podgraf a spustí vámi zadané mapovací obsluhy. Mapování je modul WASM, který vytváří nebo aktualizuje datové entity, které Uzel grafu ukládá v reakci na události Ethereum. -5. Aplikace dapp se dotazuje grafického uzlu na data indexovaná z blockchainu pomocí [GraphQL endpoint](https://graphql.org/learn/). Uzel Grafu zase překládá dotazy GraphQL na dotazy pro své podkladové datové úložiště, aby tato data načetl, přičemž využívá indexovací schopnosti úložiště. Dapp tato data zobrazuje v bohatém UI pro koncové uživatele, kteří je používají k vydávání nových transakcí na platformě Ethereum. Cyklus se opakuje. +1. Dapp přidává data do Ethereum prostřednictvím transakce na chytrém kontraktu. +2. Chytrý smlouva vysílá při zpracování transakce jednu nebo více událostí. +3. Uzel grafu neustále vyhledává nové bloky Ethereum a data pro váš podgraf, která mohou obsahovat. +4. Uzel grafu v těchto blocích vyhledá události Etherea pro váš podgraf a spustí vámi zadané mapovací obsluhy. Mapování je modul WASM, který vytváří nebo aktualizuje datové entity, které Uzel grafu ukládá v reakci na události Ethereum. +5. Aplikace dapp se dotazuje grafického uzlu na data indexovaná z blockchainu pomocí [GraphQL endpoint](https://graphql.org/learn/). Uzel Grafu zase překládá dotazy GraphQL na dotazy pro své podkladové datové úložiště, aby tato data načetl, přičemž využívá indexovací schopnosti úložiště. Dapp tato data zobrazuje v bohatém UI pro koncové uživatele, kteří je používají k vydávání nových transakcí na platformě Ethereum. Cyklus se opakuje. ## Další kroky -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. 
+The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Než začnete psát vlastní podgraf, můžete se podívat do [Graph Explorer](https://thegraph.com/explorer) a prozkoumat některé z již nasazených podgrafů. Stránka každého podgrafu obsahuje hřiště, které vám umožní dotazovat se na data daného podgrafu pomocí GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/cs/arbitrum/arbitrum-faq.mdx b/website/pages/cs/arbitrum/arbitrum-faq.mdx index 4f9d8f545b6a..486e371b527d 100644 --- a/website/pages/cs/arbitrum/arbitrum-faq.mdx +++ b/website/pages/cs/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum FAQ Pokud chcete přejít na často ptal dotazy k účtování Arbitrum, klikněte na [here](#billing-on-arbitrum-faqs). -## Proč The Graph implementuje řešení L2? +## Why did The Graph implement an L2 Solution? -Škálováním The Graph na L2, sítě účastníci mohou očekávat: +By scaling The Graph on L2, network participants can now benefit from: - Až 26x úspora na poplatcích za plyn @@ -14,7 +14,7 @@ Pokud chcete přejít na často ptal dotazy k účtování Arbitrum, klikněte n - Zabezpečení zděděné po Ethereum -Škálování chytrých smluv protokolu na L2 umožňuje účastníkům sítě interakci častěji při snížených nákladech na plyn. Například, indexéry by mohly otevírat a zavírat alokace pro indexování většího počtu podgrafů s větší frekvencí, vývojáři mohli snadněji zavádět a aktualizovat podgrafy s větší lehkostí, Delegátor by mohli častěji delegovat GRT, Kurátoři by mohli přidávat nebo odebírat signály do většího počtu podgrafů–akcí dříve považovány za příliš nákladné dělat často kvůli nákladům. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. Komunita Graf se v loňském roce rozhodla pokračovat v Arbitrum po výsledku diskuze [GIP-0031] (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -41,27 +41,21 @@ Pro využití výhod používání a Graf na L2 použijte rozevírací přepína ## Jako vývojář podgrafů, Spotřebitel dat, indexer, kurátor, nebo delegátor, co mám nyní udělat? -Není třeba přijímat žádná okamžitá opatření, nicméně vyzýváme účastníky sítě, aby začali přecházet na Arbitrum a využívali výhod L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -Týmy hlavních vývojářů pracují na vytvoření nástrojů pro přenos L2, které usnadní přesun delegování, kurátorství a podgrafů do služby Arbitrum. Účastníci sítě mohou očekávat, že nástroje pro přenos L2 budou k dispozici do léta 2023. +All indexing rewards are now entirely on Arbitrum. -Od 10. dubna 2023 se na Arbitrum razí 5 % všech indexačních odměn. S rostoucí účastí v síti a se souhlasem Rady, odměny za indexování se postupně přesunou z Etherea na Arbitrum a nakonec zcela na Arbitrum. 
- -## Co mám udělat, pokud se chci zapojit do sítě L2? - -Pomozte prosím [otestovat síť](https://testnet.thegraph.com/explorer) na L2 a nahlaste své zkušenosti na [Discord](https://discord.gg/graphprotocol). - -## Existují nějaká rizika spojená s rozšiřováním sítě na L2? +## Were there any risks associated with scaling the network to L2? Všechny chytré smlouvy byly důkladně [auditovány](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Vše bylo důkladně otestováno, a je připraven pohotovostní plán, který zajistí bezpečný a bezproblémový přechod. Podrobnosti naleznete [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Budou stávající subgrafy na Ethereum fungovat i nadále? +## Are existing subgraphs on Ethereum working? -Ano, smlouvy Graf síť budou fungovat paralelně na platformě Ethereum i Arbitrum, dokud se později plně nepřesunou na Arbitrum. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## Bude mít GRT na Arbitrum nasazen nový chytrý kontrakt? +## Does GRT have a new smart contract deployed on Arbitrum? Ano, GRT má další [smart contract na Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). Mainnetový [kontrakt GRT](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) na Ethereum však zůstane v provozu. diff --git a/website/pages/cs/billing.mdx b/website/pages/cs/billing.mdx index 8eb2b1e0bd2e..c308319f4286 100644 --- a/website/pages/cs/billing.mdx +++ b/website/pages/cs/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Klikněte na tlačítko "Připojit peněženku" v pravém horním rohu stránky. Budete přesměrováni na stránku pro výběr peněženky. Vyberte svou peněženku a klikněte na "Připojit". 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. 
@@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ Více informací o získání ETH na Binance se dozvíte [zde](https://www.binan ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? 
Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/cs/chain-integration-overview.mdx b/website/pages/cs/chain-integration-overview.mdx index a54cf6823bf1..673d312e81e1 100644 --- a/website/pages/cs/chain-integration-overview.mdx +++ b/website/pages/cs/chain-integration-overview.mdx @@ -6,12 +6,12 @@ Pro blockchainové týmy, které usilují o [integraci s protokolem The Graph](h ## Fáze 1. Technická integrace -- Týmy pracují na integraci Uzel grafu a Firehose pro řetězce nezaložené na EvM. [Zde je návod](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Týmy zahájí proces integrace protokolu vytvořením vlákna na fóru [zde](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (podkategorie Nové zdroje dat v části Správa a GIP). Použití výchozí šablony Fóra je povinné. ## Fáze 2. Ověřování integrace -- Týmy spolupracují s hlavními vývojáři, Graph Foundation a provozovateli GUIs a síťových bran, jako je [Subgraph Studio](https://thegraph.com/studio/), aby byl zajištěn hladký proces integrace. To zahrnuje poskytnutí nezbytné backendové infrastruktury, jako jsou koncové body JSON RPC nebo Firehose integračního řetězce. Týmy, které se chtějí vyhnout vlastnímu hostování takové infrastruktury, mohou k tomu využít komunitu provozovatelů uzlů (Indexers) Grafu, s čímž jim může pomoci nadace. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graf Indexers testují integraci na testovací síti Grafu. - Vývojáři jádra a indexátoři sledují stabilitu, výkon a determinismus dat. @@ -38,7 +38,7 @@ Tento proces souvisí se službou Datová služba podgrafů a vztahuje se pouze To by mělo vliv pouze na podporu protokolu pro indexování odměn na podgrafech s podsílou. Novou implementaci Firehose by bylo třeba testovat v testnetu podle metodiky popsané pro fázi 2 v tomto GIP. Podobně, za předpokladu, že implementace bude výkonná a spolehlivá, by bylo nutné provést PR na [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) (`Substreams data sources` Subgraph Feature) a také nový GIP pro podporu protokolu pro indexování odměn. PR a GIP může vytvořit kdokoli; nadace by pomohla se schválením Radou. -### 3. Kolik času zabere tento proces? +### 3. How much time will the process of reaching full protocol support take? Očekává se, že doba do uvedení do mainnetu bude trvat několik týdnů a bude se lišit v závislosti na době vývoje integrace, na tom, zda bude zapotřebí další výzkum, testování a opravy chyb, a jako vždy na načasování procesu řízení, který vyžaduje zpětnou vazbu od komunity. @@ -46,4 +46,4 @@ Podpora protokolu pro odměny za indexování závisí na šířce pásma zúča ### 4. Jak budou řešeny priority? -Podobně jako u bodu č. 3 bude záležet na celkové připravenosti a šířce pásma zúčastněných stran. 
Například nový řetězec se zcela novou implementací Firehose může trvat déle než integrace, které již byly testovány v praxi nebo jsou v procesu správy dále. To platí zejména pro řetězce, které byly dříve podporovány na [hostované službě](https://thegraph.com/hosted-service) nebo které se spoléhají na již otestované stacky. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/cs/cookbook/arweave.mdx b/website/pages/cs/cookbook/arweave.mdx index 0e8ac24b2593..cadffd6ff5b6 100644 --- a/website/pages/cs/cookbook/arweave.mdx +++ b/website/pages/cs/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Definice schématu popisuje strukturu výsledné databáze podgrafu a vztahy mez Obslužné programy pro zpracování událostí jsou napsány v jazyce [AssemblyScript](https://www.assemblyscript.org/). -Indexování Arweave zavádí do [AssemblyScript API](/developing/assemblyscript-api/) datové typy specifické pro Arweave. +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/cs/cookbook/base-testnet.mdx b/website/pages/cs/cookbook/base-testnet.mdx index c38c8030cc27..8805638e06a5 100644 --- a/website/pages/cs/cookbook/base-testnet.mdx +++ b/website/pages/cs/cookbook/base-testnet.mdx @@ -70,7 +70,7 @@ Pokud chcete indexovat další data, musíte rozšířit manifest, schéma a map Pro více informací o tom, jak napsat svůj podgraf, se podívejte do části [Creating a Subgraph](/developing/creating-a-subgraph). -### 4. Deploy to Subgraph Studio +### 4. Nasazení do a Studio Podgraf Před nasazením podgrafu se musíte ověřit v Podgraf Studio. To provedete spuštěním následujícího příkazu: diff --git a/website/pages/cs/cookbook/cosmos.mdx b/website/pages/cs/cookbook/cosmos.mdx index c751655dd336..468065532adb 100644 --- a/website/pages/cs/cookbook/cosmos.mdx +++ b/website/pages/cs/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Definice schématu popisuje strukturu výsledné databáze podgrafů a vztahy me Obslužné programy pro zpracování událostí jsou napsány v jazyce [AssemblyScript](https://www.assemblyscript.org/). -Indexování Cosmos zavádí datové typy specifické pro Cosmos do [AssemblyScript API](/developing/assemblyscript-api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { @@ -226,32 +226,32 @@ Koncový bod GraphQL pro podgrafy Cosmos je určen definicí schématu se stáva #### Co je Cosmos Hub? -The [Cosmos Hub blockchain](https://hub.cosmos.network/) is the first blockchain in the [Cosmos](https://cosmos.network/) ecosystem. You can visit the [official documentation](https://docs.cosmos.network/) for more information. +[Cosmos Hub blockchain](https://hub.cosmos.network/) je první blockchain v ekosystému [Cosmos](https://cosmos.network/). Další informace naleznete v [oficiální dokumentaci](https://docs.cosmos.network/). #### Sítě -Cosmos Hub mainnet is `cosmoshub-4`. Cosmos Hub current testnet is `theta-testnet-001`.
Other Cosmos Hub networks, i.e. `cosmoshub-3`, are halted, therefore no data is provided for them. +Hlavní síť Cosmos Hub je `cosmoshub-4`. Současná testovací síť Cosmos Hub je `theta-testnet-001`.
Ostatní sítě Cosmos Hub, jako je `cosmoshub-3`, jsou zastavené, a proto pro ně nejsou poskytována žádná data. ### Osmosis -> Osmosis support in Graph Node and on Subgraph Studio is in beta: please contact the graph team with any questions about building Osmosis subgraphs! +> Podpora Osmosis v uzel grafua v Podgraph Studio je ve fázi beta: s případnými dotazy ohledně vytváření podgrafů Osmosis se obraťte na grafový tým! #### Co je osmosis? -[Osmosis](https://osmosis.zone/) is a decentralized, cross-chain automated market maker (AMM) protocol built on top of the Cosmos SDK. It allows users to create custom liquidity pools and trade IBC-enabled tokens. You can visit the [official documentation](https://docs.osmosis.zone/) for more information. +[Osmosis](https://osmosis.zone/) je decentralizovaný, cross-chain automatizovaný tvůrce trhu (AMM) protokol postavený na Cosmos SDK. Umožňuje uživatelům vytvářet vlastní fondy likvidity a obchodovat s tokeny povolenými IBC. Pro více informací můžete navštívit [oficiální dokumentaci](https://docs.osmosis.zone/). #### Sítě -Osmosis mainnet is `osmosis-1`. Osmosis current testnet is `osmo-test-4`. +Osmosis mainnet je `osmosis-1`. Aktuální testnet Osmosis je `osmo-test-4`. ## Příklady podgrafů -Here are some example subgraphs for reference: +Zde je několik příkladů podgrafů: -[Block Filtering Example](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-block-filtering) +[Příklad blokového filtrování](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-block-filtering) -[Validator Rewards Example](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-validator-rewards) +[Příklad odměn validátoru](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-validator-rewards) -[Validator Delegations Example](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-validator-delegations) +[Příklad delegování validátoru](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-validator-delegations) -[Osmosis Token Swaps Example](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-osmosis-token-swaps) +[Příklad výměny tokenů Osmosis](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-osmosis-token-swaps) diff --git a/website/pages/cs/cookbook/derivedfrom.mdx b/website/pages/cs/cookbook/derivedfrom.mdx index e95a2cbe3069..aaab2a280096 100644 --- a/website/pages/cs/cookbook/derivedfrom.mdx +++ b/website/pages/cs/cookbook/derivedfrom.mdx @@ -1,28 +1,28 @@ --- -title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom +title: Podgraf Doporučený postup 2 - Zlepšení indexování a rychlosti dotazů pomocí @derivedFrom --- ## TLDR -Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly. +Pole ve vašem schématu mohou skutečně zpomalit výkon podgrafu, pokud jejich počet přesáhne tisíce položek. Pokud je to možné, měla by se při použití polí používat direktiva `@derivedFrom`, která zabraňuje vzniku velkých polí, zjednodušuje obslužné programy a snižuje velikost jednotlivých entit, čímž výrazně zvyšuje rychlost indexování a výkon dotazů. 
-## How to Use the `@derivedFrom` Directive +## Jak používat směrnici `@derivedFrom` -You just need to add a `@derivedFrom` directive after your array in your schema. Like this: +Stačí ve schématu za pole přidat směrnici `@derivedFrom`. Takto: ```graphql comments: [Comment!]! @derivedFrom(field: "post") ``` -`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient. +`@derivedFrom` vytváří efektivní vztahy typu one-to-many, které umožňují dynamické přiřazení entity k více souvisejícím entitám na základě pole v související entitě. Tento přístup odstraňuje nutnost ukládat duplicitní data na obou stranách vztahu, čímž se podgraf stává efektivnějším. -### Example Use Case for `@derivedFrom` +### Příklad případu použití pro `@derivedFrom` -An example of a dynamically growing array is a blogging platform where a “Post” can have many “Comments”. +Příkladem dynamicky rostoucího pole je blogovací platforma, kde "příspěvek“ může mít mnoho "komentářů“. -Let’s start with our two entities, `Post` and `Comment` +Začněme s našimi dvěma entitami, `příspěvek` a `Komentář` -Without optimization, you could implement it like this with an array: +Bez optimalizace byste to mohli implementovat takto pomocí pole: ```graphql type Post @entity { @@ -38,9 +38,9 @@ type Comment @entity { } ``` -Arrays like these will effectively store extra Comments data on the Post side of the relationship. +Taková pole budou efektivně ukládat další data komentářů na straně Post vztahu. -Here’s what an optimized version looks like using `@derivedFrom`: +Zde vidíte, jak vypadá optimalizovaná verze s použitím `@derivedFrom`: ```graphql type Post @entity { @@ -57,18 +57,18 @@ type Comment @entity { } ``` -Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded. +Pouhým přidáním direktivy `@derivedFrom` bude toto schéma ukládat "Komentáře“ pouze na straně "Komentáře“ vztahu a nikoli na straně "Příspěvek“ vztahu. Pole se ukládají napříč jednotlivými řádky, což umožňuje jejich výrazné rozšíření. To může vést k obzvláště velkým velikostem, pokud je jejich růst neomezený. -This will not only make our subgraph more efficient, but it will also unlock three features: +Tím se nejen zefektivní náš podgraf, ale také se odemknou tři funkce: -1. We can query the `Post` and see all of its comments. +1. Můžeme se zeptat na `Post` a zobrazit všechny jeho komentáře. -2. We can do a reverse lookup and query any `Comment` and see which post it comes from. +2. Můžeme provést zpětné vyhledávání a dotazovat se na jakýkoli `Komentář` a zjistit, ze kterého příspěvku pochází. -3. We can use [Derived Field Loaders](/developing/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings. +3. Pomocí [Derived Field Loaders](/developing/graph-ts/api/#looking-up-derived-entities) můžeme odemknout možnost přímého přístupu a manipulace s daty z virtuálních vztahů v našich mapováních podgrafů. 
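To make the first two capabilities in the list above concrete, here is a minimal query sketch against the optimized `Post`/`Comment` schema shown earlier. The query field names (`posts`, `comments`) follow The Graph's auto-generated plural fields for those entities; adjust them if your own schema differs.

```graphql
{
  # Forward lookup: each post together with the comments derived onto it via @derivedFrom
  posts(first: 5) {
    id
    comments {
      id
    }
  }
  # Reverse lookup: each comment and the post it belongs to
  comments(first: 5) {
    id
    post {
      id
    }
  }
}
```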
## Závěr -Adopting the `@derivedFrom` directive in subgraphs effectively handles dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Přijetí direktivy `@derivedFrom` v podgraf efektivně zpracovává dynamicky rostoucí pole, což zvyšuje efektivitu indexování a vyhledávání dat. -To learn more detailed strategies to avoid large arrays, read this blog from Kevin Jones: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). +Chcete-li se dozvědět podrobnější strategie, jak se vyhnout velkým polím, přečtěte si tento blog od Kevina Jonese: [Osvědčené postupy při vývoji podgrafů: vyhýbání se velkým polím] (https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). diff --git a/website/pages/cs/cookbook/grafting.mdx b/website/pages/cs/cookbook/grafting.mdx index b68cbe3707c2..637a85c5774e 100644 --- a/website/pages/cs/cookbook/grafting.mdx +++ b/website/pages/cs/cookbook/grafting.mdx @@ -22,15 +22,15 @@ Další informace naleznete na: - [Roubování](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -V tomto tutoriálu se budeme zabývat základním případem použití. Nahradíme stávající smlouvu identickou smlouvou (s novou adresou, ale stejným kódem). Poté naroubujeme stávající podgraf na "základní" podgraf, který sleduje nový kontrakt. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Důležité upozornění k roubování při aktualizaci na síť -> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network +> **Upozornění**: Doporučujeme nepoužívat roubování pro podgrafy publikované v síti grafů ### Proč je to důležité? -Štěpování je výkonná funkce, která umožňuje "naroubovat" jeden podgraf na druhý, čímž efektivně přenese historická data ze stávajícího podgrafu do nové verze. Ačkoli se jedná o účinný způsob, jak zachovat data a ušetřit čas při indexování, roubování může přinést složitosti a potenciální problémy při migraci z hostovaného prostředí do decentralizované sítě. Podgraf není možné naroubovat ze sítě The Graph Network zpět do hostované služby nebo do aplikace Subgraph Studio. +Štěpování je výkonná funkce, která umožňuje "naroubovat" jeden podgraf na druhý, čímž efektivně přenese historická data ze stávajícího podgrafu do nové verze. Podgraf není možné naroubovat ze Sítě grafů zpět do Podgraf Studio. ### Osvědčené postupy @@ -42,7 +42,7 @@ Dodržováním těchto pokynů minimalizujete rizika a zajistíte hladší průb ## Vytvoření existujícího podgrafu -Building subgraphs is an essential part of The Graph, described more in depth [here](/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided: +Vytváření podgrafů je důležitou součástí Grafu, která je podrobněji popsána [zde](/quick-start/). Aby bylo možné sestavit a nasadit existující podgraf použitý v tomto tutoriálu, je k dispozici následující repozitář: - [Příklad repo subgrafu](https://github.com/Shiyasmohd/grafting-tutorial) @@ -80,7 +80,7 @@ dataSources: ``` - Zdroj dat `Lock` je adresa abi a smlouvy, kterou získáme při kompilaci a nasazení smlouvy -- The network should correspond to a indexed network being queried. 
Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - Sekce `mapování` definuje spouštěče, které vás zajímají, a funkce, které by měly být spuštěny v reakci na tyto spouštěče. V tomto případě nasloucháme na událost `Výstup` a po jejím vyslání voláme funkci `obsluhovatVýstup`. ## Definice manifestu roubování @@ -96,14 +96,14 @@ graft: block: 5956000 # block number ``` -- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features). +- `funkce:` je seznam všech použitých [jmen funkcí](/developing/creating-a-subgraph/#experimental-features). - `graft:` je mapa subgrafu `base` a bloku, na který se má roubovat. `block` je číslo bloku, od kterého začít indexovat. Graph zkopíruje data základního subgrafu až k zadanému bloku včetně, a poté pokračuje v indexaci nového subgrafu od tohoto bloku dále. Hodnoty `base` a `block` lze nalézt nasazením dvou podgrafů: jednoho pro základní indexování a druhého s roubováním ## Nasazení základního podgrafu -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example` +1. Přejděte do [Podgraf Studio](https://thegraph.com/studio/) a vytvořte podgraf v testovací síti Sepolia s názvem `graft-example` 2. Následujte pokyny v části `AUTH & DEPLOY` na stránce vašeho subgrafu v adresáři `graft-example` ve vašem repozitáři 3. Po dokončení ověřte, zda se podgraf správně indexuje. Pokud spustíte následující příkaz v The Graph Playground @@ -144,8 +144,8 @@ Jakmile ověříte, že se podgraf správně indexuje, můžete jej rychle aktua Náhradní podgraf.yaml bude mít novou adresu smlouvy. K tomu může dojít při aktualizaci dapp, novém nasazení kontraktu atd. -1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement` -2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio. +1. Přejděte do [Podgraf Studio](https://thegraph.com/studio/) a vytvořte podgraf v testovací síti Sepolia s názvem `graft-replacement` +2. Vytvořte nový manifest. Soubor `subgraph.yaml` pro `graph-replacement` obsahuje jinou adresu kontraktu a nové informace o tom, jak by měl být podgraf nasazen. Tyto informace zahrnují `block` [poslední emitovanou událost](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) od starého kontraktu a `base` starého podgrafu. ID `base` podgrafu je `Deployment ID` vašeho původního `graph-example` subgrafu. To můžete najít v Podgraf Studiu. 3. Postupujte podle pokynů v části `AUTH & DEPLOY` na stránce podgrafu ve složce `graft-replacement` z repozitáře 4. Po dokončení ověřte, zda se podgraf správně indexuje. Pokud spustíte následující příkaz v The Graph Playground @@ -185,18 +185,18 @@ Měla by vrátit následující: } ``` -You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. 
The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph. +Vidíte, že podgraf `graft-replacement` indexuje ze starších dat `graph-example` a novějších dat z nové adresy smlouvy. Původní smlouva emitovala dvě události `Odstoupení`, [Událost 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) a [Událost 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). Nová smlouva emitovala jednu událost `Výběr` poté, [Událost 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). Dvě dříve indexované transakce (Událost 1 a 2) a nová transakce (Událost 3) byly spojeny dohromady v podgrafu `výměna-odvod`. -Congrats! You have successfully grafted a subgraph onto another subgraph. +Gratulujeme! Úspěšně jste naroubovali podgraf na jiný podgraf. ## Další zdroje -Pokud chcete získat více zkušeností s roubováním, zde je několik příkladů oblíbených smluv: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) - [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml), -To become even more of a Graph expert, consider learning about other ways to handle changes in underlying datasources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results +Chcete-li se stát ještě větším odborníkem na graf, zvažte možnost seznámit se s dalšími způsoby zpracování změn v podkladových zdrojích dat. Alternativy jako [Šablony zdroje dat](/developing/creating-a-subgraph/#data-source-templates) mohou dosáhnout podobných výsledků > Poznámka: Mnoho materiálů z tohoto článku bylo převzato z dříve publikovaného [článku Arweave](/cookbook/arweave/) diff --git a/website/pages/cs/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx b/website/pages/cs/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx index 6864eef796ff..12a504471cb7 100644 --- a/website/pages/cs/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx +++ b/website/pages/cs/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx @@ -1,48 +1,48 @@ --- -title: How to Secure API Keys Using Next.js Server Components +title: Jak zabezpečit klíče API pomocí komponent serveru Next.js --- ## Přehled -We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. 
To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). +K řádnému zabezpečení našeho klíče API před odhalením ve frontendu naší aplikace můžeme použít [komponenty serveru Next.js](https://nextjs.org/docs/app/building-your-application/rendering/server-components). Pro další zvýšení zabezpečení našeho klíče API můžeme také [omezit náš klíč API na určité podgrafy nebo domény v Podgraf Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). -In this cookbook, we will go over how to create a Next.js server component that queries a subgraph while also hiding the API key from the frontend. +V této kuchařce probereme, jak vytvořit serverovou komponentu Next.js, která se dotazuje na podgraf a zároveň skrývá klíč API před frontend. -### Caveats +### Upozornění -- Next.js server components do not protect API keys from being drained using denial of service attacks. -- The Graph Network gateways have denial of service detection and mitigation strategies in place, however using server components may weaken these protections. -- Next.js server components introduce centralization risks as the server can go down. +- Součásti serveru Next.js nechrání klíče API před odčerpáním pomocí útoků typu odepření služby. +- Brány Graf síť mají zavedené strategie detekce a zmírňování odepření služby, avšak použití serverových komponent může tyto ochrany oslabit. +- Server komponenty Next.js přinášejí rizika centralizace, protože může dojít k výpadku serveru. -### Why It's Needed +### Proč je to důležité -In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side. +Ve standardní aplikaci React mohou být klíče API obsažené v kódu frontendu vystaveny na straně klienta, což představuje bezpečnostní riziko. Soubory `.env` se sice běžně používají, ale plně klíče nechrání, protože kód Reactu se spouští na straně klienta a vystavuje klíč API v hlavičkách. Serverové komponenty Next.js tento problém řeší tím, že citlivé operace zpracovávají na straně serveru. -### Using client-side rendering to query a subgraph +### Použití vykreslování na straně klienta k dotazování podgrafu ![Client-side rendering](/img/api-key-client-side-rendering.png) ### Požadavky -- An API key from [Subgraph Studio](https://thegraph.com/studio) -- Basic knowledge of Next.js and React. -- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app). +- Klíč API od [Subgraph Studio](https://thegraph.com/studio) +- Základní znalosti Next.js a React. +- Existující projekt Next.js, který používá [App Router](https://nextjs.org/docs/app). -## Step-by-Step Cookbook +## Kuchařka krok za krokem -### Step 1: Set Up Environment Variables +### Krok 1: Nastavení proměnných prostředí -1. In our Next.js project root, create a `.env.local` file. -2. Add our API key: `API_KEY=`. +1. V kořeni našeho projektu Next.js vytvořte soubor `.env.local`. +2. Přidejte náš klíč API: `API_KEY=`. -### Step 2: Create a Server Component +### Krok 2: Vytvoření součásti serveru -1. In our `components` directory, create a new file, `ServerComponent.js`. -2. Use the provided example code to set up the server component. 
+1. V adresáři `components` vytvořte nový soubor `ServerComponent.js`. +2. K nastavení komponenty serveru použijte přiložený ukázkový kód. -### Step 3: Implement Server-Side API Request +### Krok 3: Implementace požadavku API na straně serveru -In `ServerComponent.js`, add the following code: +Do souboru `ServerComponent.js` přidejte následující kód: ```javascript const API_KEY = process.env.API_KEY @@ -95,10 +95,10 @@ export default async function ServerComponent() { } ``` -### Step 4: Use the Server Component +### Krok 4: Použití komponenty serveru -1. In our page file (e.g., `pages/index.js`), import `ServerComponent`. -2. Render the component: +1. V našem souboru stránky (např. `pages/index.js`) importujte `ServerComponent`. +2. Vykreslení komponenty: ```javascript import ServerComponent from './components/ServerComponent' @@ -112,12 +112,12 @@ export default function Home() { } ``` -### Step 5: Run and Test Our Dapp +### Krok 5: Spusťte a otestujte náš Dapp -Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key. +Spusťte naši aplikaci Next.js pomocí `npm run dev`. Ověřte, že serverová komponenta načítá data bez vystavení klíče API. ![Server-side rendering](/img/api-key-server-side-rendering.png) ### Závěr -By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/cookbook/upgrading-a-subgraph/#securing-your-api-key) to increase your API key security even further. +Použitím serverových komponent Next.js jsme efektivně skryli klíč API před klientskou stranou, čímž jsme zvýšili bezpečnost naší aplikace. Tato metoda zajišťuje, že citlivé operace jsou zpracovávány na straně serveru, mimo potenciální zranitelnosti na straně klienta. Nakonec nezapomeňte prozkoumat [další opatření pro zabezpečení klíče API](/cookbook/upgrading-a-subgraph/#securing-your-api-key), abyste ještě více zvýšili zabezpečení svého klíče API. diff --git a/website/pages/cs/cookbook/immutable-entities-bytes-as-ids.mdx b/website/pages/cs/cookbook/immutable-entities-bytes-as-ids.mdx index 378e73ac83b8..69bc0cbd8715 100644 --- a/website/pages/cs/cookbook/immutable-entities-bytes-as-ids.mdx +++ b/website/pages/cs/cookbook/immutable-entities-bytes-as-ids.mdx @@ -1,14 +1,14 @@ --- -title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs +title: Osvědčený postup 3 - Zlepšení indexování a výkonu dotazů pomocí neměnných entit a bytů jako ID --- ## TLDR -Using Immutable Entities and Bytes for IDs in our `schema.graphql` file [significantly improves ](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/) indexing speed and query performance. +Použití neměnných entit a bytů pro ID v našem souboru `schema.graphql` [výrazně zlepšuje ](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/) rychlost indexování a výkonnost dotazů. -## Immutable Entities +## Nezměnitelné entity -To make an entity immutable, we simply add `(immutable: true)` to an entity. +Aby byla entita neměnná, jednoduše k ní přidáme `(immutable: true)`. 
```graphql type Transfer @entity(immutable: true) { @@ -19,21 +19,21 @@ type Transfer @entity(immutable: true) { } ``` -By making the `Transfer` entity immutable, graph-node is able to process the entity more efficiently, improving indexing speeds and query responsiveness. +Tím, že je entita `Transfer` neměnná, je grafový uzel schopen ji zpracovávat efektivněji, což zvyšuje rychlost indexování a odezvu dotazů. -Immutable Entities structures will not change in the future. An ideal entity to become an Immutable Entity would be an entity that is directly logging on-chain event data, such as a `Transfer` event being logged as a `Transfer` entity. +Struktury neměnných entit se v budoucnu nezmění. Ideální entitou, která by se měla stát nezměnitelnou entitou, by byla entita, která přímo zaznamenává data událostí v řetězci, například událost `Převod` by byla zaznamenána jako entita `Převod`. -### Under the hood +### Pod kapotou -Mutable entities have a 'block range' indicating their validity. Updating these entities requires the graph node to adjust the block range of previous versions, increasing database workload. Queries also need filtering to find only live entities. Immutable entities are faster because they are all live and since they won't change, no checks or updates are required while writing, and no filtering is required during queries. +Mutabilní entity mají "rozsah bloku", který udává jejich platnost. Aktualizace těchto entit vyžaduje, aby uzel grafu upravil rozsah bloků předchozích verzí, což zvyšuje zatížení databáze. Dotazy je také třeba filtrovat, aby byly nalezeny pouze živé entity. Neměnné entity jsou rychlejší, protože jsou všechny živé, a protože se nebudou měnit, nejsou při zápisu nutné žádné kontroly ani aktualizace a při dotazech není nutné žádné filtrování. -### When not to use Immutable Entities +### Kdy nepoužívat nezměnitelné entity -If you have a field like `status` that needs to be modified over time, then you should not make the entity immutable. Otherwise, you should use immutable entities whenever possible. +Pokud máte pole, jako je `status`, které je třeba v průběhu času měnit, neměli byste entitu učinit neměnnou. Jinak byste měli používat neměnné entity, kdykoli je to možné. -## Bytes as IDs +## Bajty jako IDs -Every entity requires an ID. In the previous example, we can see that the ID is already of the Bytes type. +Každá entita vyžaduje ID. V předchozím příkladu vidíme, že ID je již typu Bytes. ```graphql type Transfer @entity(immutable: true) { @@ -44,19 +44,19 @@ type Transfer @entity(immutable: true) { } ``` -While other types for IDs are possible, such as String and Int8, it is recommended to use the Bytes type for all IDs due to character strings taking twice as much space as Byte strings to store binary data, and comparisons of UTF-8 character strings must take the locale into account which is much more expensive than the bytewise comparison used to compare Byte strings. +I když jsou možné i jiné typy ID, například String a Int8, doporučuje se pro všechna ID používat typ Bytes, protože pro uložení binárních dat zabírají znakové řetězce dvakrát více místa než řetězce Byte a při porovnávání znakových řetězců UTF-8 se musí brát v úvahu locale, což je mnohem dražší než bytewise porovnávání používané pro porovnávání řetězců Byte. -### Reasons to Not Use Bytes as IDs +### Důvody, proč nepoužívat bajty jako IDs -1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used. -2. 
If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used. -3. Indexing and querying performance improvements are not desired. +1. Pokud musí být IDs entit čitelné pro člověka, například automaticky doplňované číselné IDs nebo čitelné řetězce, neměly by být použity bajty pro IDs. +2. Při integraci dat podgrafu s jiným datovým modelem, který nepoužívá bajty jako IDs, by se bajty jako IDs neměly používat. +3. Zlepšení výkonu indexování a dotazování není žádoucí. -### Concatenating With Bytes as IDs +### Konkatenace s byty jako IDs -It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance. +V mnoha podgrafech se běžně používá spojování řetězců ke spojení dvou vlastností události do jediného ID, například pomocí `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. Protože se však tímto způsobem vrací řetězec, značně to zhoršuje indexování podgrafů a výkonnost dotazování. -Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant. +Místo toho bychom měli použít metodu `concatI32()` pro spojování vlastností událostí. Výsledkem této strategie je ID `Bytes`, které je mnohem výkonnější. ```typescript export function handleTransfer(event: TransferEvent): void { @@ -73,11 +73,11 @@ export function handleTransfer(event: TransferEvent): void { } ``` -### Sorting With Bytes as IDs +### Třídění s bajty jako ID -Sorting using Bytes as IDs is not optimal as seen in this example query and response. +Třídění pomocí bajtů jako IDs není optimální, jak je vidět v tomto příkladu dotazu a odpovědi. -Query: +Dotaz: ```graphql { @@ -90,7 +90,7 @@ Query: } ``` -Query response: +Odpověď na dotaz: ```json { @@ -119,9 +119,9 @@ Query response: } ``` -The IDs are returned as hex. +ID jsou vrácena v hex. -To improve sorting, we should create another field on the entity that is a BigInt. +Abychom zlepšili třídění, měli bychom v entitě vytvořit další pole, které bude BigInt. ```graphql type Transfer @entity { @@ -133,9 +133,9 @@ type Transfer @entity { } ``` -This will allow for sorting to be optimized sequentially. +To umožní postupnou optimalizaci třídění. -Query: +Dotaz: ```graphql { @@ -146,7 +146,7 @@ Query: } ``` -Query Response: +Odpověď na dotaz: ```json { @@ -171,6 +171,6 @@ Query Response: ## Závěr -Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. +Bylo prokázáno, že použití neměnných entit i bytů jako ID výrazně zvyšuje efektivitu podgrafů. Testy konkrétně ukázaly až 28% nárůst výkonu dotazů a až 48% zrychlení indexace. -Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). 
+Více informací o používání nezměnitelných entit a bytů jako ID najdete v tomto příspěvku na blogu Davida Lutterkorta, softwarového inženýra ve společnosti Edge & Node: [Dvě jednoduchá vylepšení výkonu podgrafu](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). diff --git a/website/pages/cs/cookbook/near.mdx b/website/pages/cs/cookbook/near.mdx index 2ad4a1deec4d..dbf25376f038 100644 --- a/website/pages/cs/cookbook/near.mdx +++ b/website/pages/cs/cookbook/near.mdx @@ -6,7 +6,7 @@ Tato příručka je úvodem do vytváření subgrafů indexujících chytré kon ## Co je NEAR? -[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information. +[NEAR](https://near.org/) je platforma pro chytré smlouvy, která slouží k vytváření decentralizovaných aplikací. Další informace najdete v [oficiální dokumentaci](https://docs.near.org/concepts/basics/protocol). ## Co jsou podgrafy NEAR? @@ -17,7 +17,7 @@ Podgrafy jsou založeny na událostech, což znamená, že naslouchají událost - Obsluhy bloků: jsou spouštěny při každém novém bloku. - Obsluhy příjmu: spouštějí se pokaždé, když je zpráva provedena na zadaném účtu. -[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): +[Z dokumentace NEAR](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt): > Příjemka je jediným objektem, který lze v systému použít. Když na platformě NEAR hovoříme o "zpracování transakce", znamená to v určitém okamžiku "použití účtenky". @@ -37,7 +37,7 @@ Definice podgrafů má tři aspekty: **schema.graphql:** soubor se schématem, který definuje, jaká data jsou uložena pro váš podgraf, a jak je možné je dotazovat pomocí GraphQL. Požadavky na podgrafy NEAR jsou pokryty [existující dokumentací](/developing/creating-a-subgraph#the-graphql-schema). -**Mapování v jazyce AssemblyScript:** [Kód jazyka AssemblyScript](/developing/assemblyscript-api), který převádí data událostí na entity definované ve vašem schématu. Podpora NEAR zavádí datové typy specifické pro NEAR a nové funkce pro parsování JSON. +**Mapování AssemblyScript:** [Kód AssemblyScript](/developing/graph-ts/api), který převádí data událostí na entity definované ve vašem schématu. Podpora NEAR zavádí datové typy specifické pro NEAR a nové funkce pro parsování JSON. Při vývoji podgrafů existují dva klíčové příkazy: @@ -71,8 +71,8 @@ dataSources: ``` - Podgrafy NEAR představují nový `druh` zdroje dat (`near`) -- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` -- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. +- `Síť` by měla odpovídat síti v hostitelském uzlu Graf. V Podgraf Studio je hlavní síť NEAR `near-mainnet` a testovací síť NEAR je `near-testnet` +- Zdroje dat NEAR zavádějí volitelné pole `source.account`, které je čitelným ID odpovídajícím [účtu NEAR](https://docs.near.org/concepts/protocol/account-model). Může to být účet nebo podúčet. - NEAR datové zdroje představují alternativní volitelné pole `source.accounts`, které obsahuje volitelné přípony a předpony. 
Musí být specifikována alespoň jedna z předpony nebo přípony, které odpovídají jakémukoli účtu začínajícímu nebo končícímu uvedenými hodnotami. Příklad níže by odpovídal: `[app|good].*[morning.near|morning.testnet]`. Pokud je potřeba pouze seznam předpon nebo přípon, druhé pole lze vynechat. ```yaml @@ -88,7 +88,7 @@ accounts: Zdroje dat NEAR podporují dva typy zpracovatelů: - `blockHandlers`: spustí se na každém novém bloku NEAR. Není vyžadován žádný `source.account`. -- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources). +- `receiptHandlers`: spustí se na každé příjemce, kde je `účet zdroje dat` příjemcem. Všimněte si, že se zpracovávají pouze přesné shody ([podúčty](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) musí být přidány jako nezávislé zdroje dat). ### Definice schématu @@ -98,7 +98,7 @@ Definice schématu popisuje strukturu výsledné databáze podgrafů a vztahy me Obslužné programy pro zpracování událostí jsou napsány v jazyce [AssemblyScript](https://www.assemblyscript.org/). -Indexování NEAR zavádí do rozhraní [AssemblyScript API](/developing/assemblyscript-api) datové typy specifické pro NEAR. +Indexování NEAR zavádí do [API AssemblyScript](/developing/graph-ts/api) datové typy specifické pro NEAR. ```typescript @@ -165,9 +165,9 @@ Tyto typy jsou předány do block & obsluha účtenek: - Obsluhy bloků obdrží `Block` - Obsluhy příjmu obdrží `ReceiptWithOutcome` -Jinak je zbytek [AssemblyScript API](/developing/assemblyscript-api) dostupný vývojářům podgrafů NEAR během provádění mapování. +V opačném případě mají vývojáři podgrafů NEAR během provádění mapování k dispozici zbytek [AssemblyScript API](/developing/graph-ts/api). -To zahrnuje novou funkci parsování JSON - záznamy na NEAR jsou často vysílány ve formě zřetězených JSON. Nová funkce `json.fromString(...)` je k dispozici jako součást [JSON API](/developing/assemblyscript-api#json-api), které umožňuje vývojářům snadno zpracovávat tyto záznamy. +To zahrnuje novou funkci pro parsování JSON - log na NEAR jsou často emitovány jako serializované JSONs. Nová funkce `json.fromString(...)` je k dispozici jako součást [JSON API](/developing/graph-ts/api#json-api), která umožňuje vývojářům snadno zpracovávat tyto log. ## Nasazení podgrafu NEAR @@ -232,7 +232,7 @@ Koncový bod GraphQL pro podgrafy NEAR je určen definicí schématu se stávaj ## Příklady podgrafů -Here are some example subgraphs for reference: +Zde je několik příkladů podgrafů: [NEAR bloky](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks) diff --git a/website/pages/cs/cookbook/pruning.mdx b/website/pages/cs/cookbook/pruning.mdx index 7533d0070737..417a8c7aa81f 100644 --- a/website/pages/cs/cookbook/pruning.mdx +++ b/website/pages/cs/cookbook/pruning.mdx @@ -1,22 +1,22 @@ --- -title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning +title: Doporučený postup 1 - Zlepšení rychlosti dotazu pomocí ořezávání podgrafů --- ## TLDR -[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph. 
+[Pruning](/developing/creating-a-subgraph/#prune) odstraní archivní entity z databáze podgrafu až do daného bloku a odstranění nepoužívaných entit z databáze podgrafu zlepší výkonnost dotazu podgrafu, často výrazně. Použití `indexerHints` je snadný způsob, jak podgraf ořezat. -## How to Prune a Subgraph With `indexerHints` +## Jak prořezat podgraf pomocí `indexerHints` -Add a section called `indexerHints` in the manifest. +Přidejte do manifestu sekci `indexerHints`. -`indexerHints` has three `prune` options: +`indexerHints` má tři možnosti `prune`: -- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0. -- `prune: `: Sets a custom limit on the number of historical blocks to retain. -- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/querying/graphql-api/#time-travel-queries) are desired. +- `prune: auto`: Udržuje minimální potřebnou historii nastavenou indexátorem, čímž optimalizuje výkon dotazu. Toto je obecně doporučené nastavení a je výchozí pro všechny podgrafy vytvořené pomocí `graph-cli` >= 0.66.0. +- `prune: `: Nastaví vlastní omezení počtu historických bloků, které se mají zachovat. +- `prune: never`: Je výchozí, pokud není k dispozici sekce `indexerHints`. `prune: never` by mělo být vybráno, pokud jsou požadovány [Dotazy na cestování časem](/querying/graphql-api/#time-travel-queries). -We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`: +Aktualizací souboru `subgraph.yaml` můžeme do podgrafů přidat `indexerHints`: ```yaml specVersion: 1.0.0 @@ -30,12 +30,12 @@ dataSources: network: mainnet ``` -## Important Considerations +## Důležité úvahy -- If [Time Travel Queries](/querying/graphql-api/#time-travel-queries) are desired as well as pruning, pruning must be performed accurately to retain Time Travel Query functionality. Due to this, it is generally not recommended to use `indexerHints: prune: auto` with Time Travel Queries. Instead, prune using `indexerHints: prune: ` to accurately prune to a block height that preserves the historical data required by Time Travel Queries, or use `prune: never` to maintain all data. +- Pokud jsou kromě ořezávání požadovány i [dotazy na cestování v čase](/querying/graphql-api/#time-travel-queries), musí být ořezávání provedeno přesně, aby byla zachována funkčnost dotazů na cestování v čase. Z tohoto důvodu se obecně nedoporučuje používat `indexerHints: prune: auto` s Time Travel Queries. Místo toho proveďte ořezávání pomocí `indexerHints: prune: ` pro přesné ořezání na výšku bloku, která zachovává historická data požadovaná dotazy Time Travel, nebo použijte `prune: never` pro zachování všech dat. -- It is not possible to [graft](/cookbook/grafting/) at a block height that has been pruned. If grafting is routinely performed and pruning is desired, it is recommended to use `indexerHints: prune: ` that will accurately retain a set number of blocks (e.g., enough for six months). +- Není možné [roubovat](/cookbook/grafting/) na výšku bloku, který byl prořezán. Pokud se roubování provádí běžně a je požadováno prořezání, doporučuje se použít `indexerHints: prune: ` který přesně zachová stanovený počet bloků (např. dostatečný počet na šest měsíců). 
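For context on the trade-off described above, this is a hedged sketch of the kind of [Time Travel Query](/querying/graphql-api/#time-travel-queries) that only succeeds if the requested block height is still inside the retained history (i.e. with `prune: never` or a large enough `prune: <number>`). The `transfers` entity and its fields are hypothetical placeholders, not part of the manifest shown earlier.

```graphql
{
  # Query entity state as of a past block; this fails if that block has been pruned away.
  transfers(first: 5, block: { number: 19000000 }) {
    id
  }
}
```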
## Závěr -Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. +Ořezávání pomocí `indexerHints` je osvědčeným postupem pro vývoj podgrafů, který nabízí významné zlepšení výkonu dotazů. diff --git a/website/pages/cs/cookbook/subgraph-debug-forking.mdx b/website/pages/cs/cookbook/subgraph-debug-forking.mdx index e0e3e2a69641..522c273a7e36 100644 --- a/website/pages/cs/cookbook/subgraph-debug-forking.mdx +++ b/website/pages/cs/cookbook/subgraph-debug-forking.mdx @@ -2,7 +2,7 @@ title: Rychlé a snadné ladění podgrafů pomocí vidliček --- -As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging! +Stejně jako u mnoha systémů zpracovávajících velké množství dat může indexerům grafu (Graph Nodes) trvat poměrně dlouho, než synchronizují váš podgraf s cílovým blockchainem. Nesoulad mezi rychlými změnami za účelem ladění a dlouhými čekacími dobami potřebnými pro indexaci je extrémně kontraproduktivní a jsme si toho dobře vědomi. To je důvod, proč představujeme **rozvětvování podgrafů**, vyvinutý společností [LimeChain](https://limechain.tech/), a v tomto článku Ukážu vám, jak lze tuto funkci použít k podstatnému zrychlení ladění podgrafů! ## Ok, co to je? @@ -12,9 +12,9 @@ V kontextu ladění vám ** vidličkování podgrafů** umožňuje ladit neúsp ## Co?! Jak? -When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_. +Když nasadíte podgraf do vzdáleného uzlu Graf pro indexování a ten selže v bloku _X_, dobrou zprávou je, že uzel Graf bude stále obsluhovat dotazy GraphQL pomocí svého úložiště, které je synchronizováno s blokem _X_. To je skvělé! To znamená, že můžeme využít tohoto "aktuálního" úložiště k opravě chyb vznikajících při indexování bloku _X_. -In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state. +Stručně řečeno, _rozvětvíme neúspěšný podgraf_ ze vzdáleného uzlu grafu, u kterého je zaručeno, že podgraf bude indexován až do bloku *X*, abychom lokálně nasazenému podgrafu laděnému v bloku _X_ poskytli aktuální pohled na stav indexování. ## Ukažte mi prosím nějaký kód! @@ -44,12 +44,12 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void { } ``` -Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error. +Oops, jak nešťastné, když jsem nasadil můj perfektně vypadající podgraf do [Podgraf Studio](https://thegraph.com/studio/), selhalo to s chybou _"Gravatar nenalezen!"_. Obvyklý způsob, jak se pokusit o opravu, je: 1. 
Proveďte změnu ve zdroji mapování, která podle vás problém vyřeší (zatímco já vím, že ne).
-2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+2. Znovu nasaďte podgraf do [Subgraph Studio](https://thegraph.com/studio/) (nebo jiného vzdáleného uzlu Graf).
3. Počkejte na synchronizaci.
4. Pokud se opět rozbije, vraťte se na 1, jinak: Hurá!

@@ -59,7 +59,7 @@ Pomocí **vidličkování podgrafů** můžeme tento krok v podstatě eliminovat

0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
1. Proveďte změnu ve zdroji mapování, která podle vás problém vyřeší.
-2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**.
+2. Nasazení do místního uzlu Graf, **_rozvětvení selhávajícího podgrafu_** a **_zahájení od problematického bloku_**.
3. Pokud se opět rozbije, vraťte se na 1, jinak: Hurá!

Nyní můžete mít 2 otázky:

@@ -80,7 +80,7 @@ Nezapomeňte také nastavit pole `dataSources.source.startBlock` v manifestu pod

Takže to dělám takhle:

-1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+1. Spustím místní uzel Graf ([zde je návod, jak to udělat](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) s volbou `fork-base` nastavenou na: `https://api.thegraph.com/subgraphs/id/`, protože budu forkovat podgraf, ten chybný, který jsem nasadil dříve, z [Podgraf Studio](https://thegraph.com/studio/).

```
$ cargo run -p graph-node --release -- \
@@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \
```

2. Po pečlivém prozkoumání si všímám, že existuje nesoulad v reprezentacích `id`, které se používají při indexaci `Gravatar` v mých dvou obslužných funkcích. Zatímco `handleNewGravatar` ho převede na hex (`event.params.id.toHex()`), `handleUpdatedGravatar` používá int32 (`event.params.id.toI32()`), což způsobuje, že `handleUpdatedGravatar` selže s chybou "Gravatar nenalezen!". Udělám, aby obě převedly `id` na hex.
-3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
+3. Po provedení změn jsem nasadil svůj podgraf do místního uzlu Graf, **_rozvětvil selhávající podgraf_** a nastavil `dataSources.source.startBlock` na `6190343` v `subgraph.yaml`:

```bash
$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
```

-4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working.
-5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
+4. Zkontroluji protokoly vytvořené místním uzlem Graf a hurá, zdá se, že vše funguje.
+5. Nasadím svůj nyní již bezchybný podgraf do vzdáleného uzlu Graf a žiji šťastně až do smrti!
(bez brambor) diff --git a/website/pages/cs/cookbook/subgraph-uncrashable.mdx b/website/pages/cs/cookbook/subgraph-uncrashable.mdx index 1c2b3d9e4dad..13c979d18853 100644 --- a/website/pages/cs/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/cs/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Generátor kódu bezpečného podgrafu - Framework také obsahuje způsob (prostřednictvím konfiguračního souboru), jak vytvořit vlastní, ale bezpečné funkce setteru pro skupiny proměnných entit. Tímto způsobem není možné, aby uživatel načetl/použil zastaralou entitu grafu, a také není možné zapomenout uložit nebo nastavit proměnnou, kterou funkce vyžaduje. -- Varovné protokoly se zaznamenávají jako protokoly označující místa, kde došlo k porušení logiky podgrafu, aby bylo možné problém opravit a zajistit přesnost dat. Tyto protokoly lze zobrazit v hostované službě The Graph v části 'Logs' sekce. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Podgraf Uncrashable lze spustit jako volitelný příznak pomocí příkazu Graph CLI codegen. diff --git a/website/pages/cs/cookbook/upgrading-a-subgraph.mdx b/website/pages/cs/cookbook/upgrading-a-subgraph.mdx index 31aee1c8fe1a..79ce1bf1da17 100644 --- a/website/pages/cs/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/cs/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ Ujistěte se, že je zaškrtnuto políčko **Aktualizovat podrobnosti subgrafu v ## Odepsání subgrafu v síti Graph -Postupujte podle pokynů [zde](/managing/deprecating-a-subgraph), abyste svůj podgraf vyřadili a odstranili jej ze sítě The Graph Network. +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## Dotazování podgrafu + fakturace v síti graph diff --git a/website/pages/cs/deploying/multiple-networks.mdx b/website/pages/cs/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..0c53c9686fb4 --- /dev/null +++ b/website/pages/cs/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## Nasazení podgrafu do více sítí + +V některých případech budete chtít nasadit stejný podgraf do více sítí, aniž byste museli duplikovat celý jeho kód. Hlavním problémem, který s tím souvisí, je skutečnost, že smluvní adresy v těchto sítích jsou různé. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. 
+ +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Takto by měl vypadat konfigurační soubor sítě: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Nyní můžeme spustit jeden z následujících příkazů: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Použití šablony subgraph.yaml + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +a + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... 
+  "scripts": {
+    ...
+    "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml",
+    "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml"
+  },
+  "devDependencies": {
+    ...
+    "mustache": "^3.1.0"
+  }
+}
+```
+
+To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands:
+
+```sh
+# Mainnet:
+yarn prepare:mainnet && yarn deploy
+
+# Sepolia:
+yarn prepare:sepolia && yarn deploy
+```
+
+A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759).
+
+**Note:** This approach can also be applied to more complex situations where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs are also generated from templates.
+
+## Zásady archivace subgrafů Subgraph Studio
+
+A subgraph version in Studio is archived if and only if it meets the following criteria:
+
+- The version is not published to the network (or pending publish)
+- The version was created 45 or more days ago
+- The subgraph hasn't been queried in 30 days
+
+In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
+
+Každý podgraf ovlivněný touto zásadou má možnost vrátit danou verzi zpět.
+
+## Kontrola stavu podgrafů
+
+Pokud se podgraf úspěšně synchronizuje, je to dobré znamení, že bude dobře fungovat navždy. Nové spouštěče v síti však mohou způsobit, že se podgraf dostane do neověřeného chybového stavu, nebo může začít zaostávat kvůli problémům s výkonem či operátory uzlů.
+
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph:
+
+```graphql
+{
+  indexingStatusForCurrentVersion(subgraphName: "org/subgraph") {
+    synced
+    health
+    fatalError {
+      message
+      block {
+        number
+        hash
+      }
+      handler
+    }
+    chains {
+      chainHeadBlock {
+        number
+      }
+      latestBlock {
+        number
+      }
+    }
+  }
+}
+```
+
+This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
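If you prefer to hit this status endpoint from the command line, a quick sketch using `curl` (reusing the `org/subgraph` placeholder from the query above, with only a subset of the fields) could look like this:

```sh
curl -X POST \
  -d '{ "query": "{ indexingStatusForCurrentVersion(subgraphName: \"org/subgraph\") { synced health chains { chainHeadBlock { number } latestBlock { number } } } }" }' \
  https://api.thegraph.com/index-node/graphql
```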
diff --git a/website/pages/cs/deploying/subgraph-studio.mdx b/website/pages/cs/deploying/subgraph-studio.mdx index f612377ac534..384f839fd330 100644 --- a/website/pages/cs/deploying/subgraph-studio.mdx +++ b/website/pages/cs/deploying/subgraph-studio.mdx @@ -1,12 +1,12 @@ --- -title: How to Use Subgraph Studio +title: Jak používat Podgraf Studio --- Vítejte na svém novém odpalovacím zařízení 👩🏽‍🚀 -Subgraph Studio is your place to build and create subgraphs, add metadata, and publish them to the new decentralized Explorer (more on that [here](/network/explorer)). +Podgraf Studio je místem, kde můžete sestavovat a vytvářet podgrafy, přidávat metadata a publikovat je v novém decentralizovaném Průzkumníku (více o něm [zde](/network/explorer)). -What you can do in Subgraph Studio: +Co můžete dělat v aplikaci Podgraf Studio: - Vytvoření podgrafu prostřednictvím UI Studio - Nasazení podgrafu pomocí CLI @@ -15,7 +15,7 @@ What you can do in Subgraph Studio: - Integrujte jej do staging pomocí dotazu URL - Vytváření a správa klíčů API pro konkrétní podgrafy -Here in Subgraph Studio, you have full control over your subgraphs. Not only can you test your subgraphs before you publish them, but you can also restrict your API keys to specific domains and only allow certain Indexers to query from their API keys. +V Podgraf Studio máte nad svými podgrafy plnou kontrolu. Nejenže můžete své podgrafy před zveřejněním otestovat, ale můžete také omezit klíče API na konkrétní domény a povolit dotazování z jejich klíčů API pouze určitým indexerům. Dotazování podgrafů generuje poplatky za dotazy, které se používají k odměňování [Indexerů](/network/indexing) v síti Graf. Pokud jste vývojářem aplikací nebo podgrafů, Studio vám umožní vytvářet lepší subgrafy, které budou sloužit k dotazování vašemu nebo vaší komunity. Studio se skládá z 5 hlavních částí: @@ -27,7 +27,7 @@ Dotazování podgrafů generuje poplatky za dotazy, které se používají k odm ## Jak si vytvořit účet -1. Sign in with your wallet - you can do this via MetaMask, WalletConnect, Coinbase Wallet or Safe. +1. Přihlaste se pomocí své peněženky - můžete tak učinit prostřednictvím MetaMask, WalletConnect, Coinbase Wallet nebo Safe. 1. Po přihlášení se na domovské stránce účtu zobrazí váš jedinečný klíč pro nasazení. Ten vám umožní buď publikovat vaše podgrafy, nebo spravovat vaše klíče API + fakturaci. Budete mít jedinečný deploy klíč, který lze znovu vygenerovat, pokud se domníváte, že byl ohrožen. ## Jak vytvořit podgraf v Podgraf Studio @@ -36,7 +36,7 @@ Dotazování podgrafů generuje poplatky za dotazy, které se používají k odm ## Kompatibilita podgrafů se sítí grafů -In order to be supported by Indexers on The Graph Network, subgraphs must: +Aby mohly být podgrafy podporovány indexátory v síti grafů, musí: - Index [podporované sítě](/developing/supported-networks) - Nesmí používat žádnou z následujících funkcí: @@ -50,7 +50,7 @@ Další funkce & sítě budou do síť grafů přidávány postupně. ![Životní cyklus podgrafů](/img/subgraph-lifecycle.png) -After you have created your subgraph, you will be able to deploy it using the [CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), or command-line interface. Deploying a subgraph with the CLI will push the subgraph to the Studio where you’ll be able to test subgraphs using the playground. This will eventually allow you to publish to the Graph Network. 
For more information on CLI setup, [check this out](/developing/defining-a-subgraph#install-the-graph-cli) (psst, make sure you have your deploy key on hand). Remember, deploying is **not the same as** publishing. When you deploy a subgraph, you just push it to the Studio where you’re able to test it. Versus, when you publish a subgraph, you are publishing it on-chain. +Po vytvoření podgrafu jej budete moci nasadit pomocí [CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) neboli příkazového řádku. Nasazení podgrafu pomocí CLI přesune podgraf do Studio, kde budete moci testovat podgrafy pomocí hřiště. To vám nakonec umožní publikovat do sítě Graf. Další informace o nastavení CLI najdete [v tomto článku](/developing/defining-a-subgraph#install-the-graph-cli) (psst, ujistěte se, že máte po ruce deploy klíč). Nezapomeňte, že nasazení **není totéž jako** publikování. Při nasazení dílčího grafu jej pouze odešlete do Studio, kde jej můžete otestovat. Oproti tomu, když publikujete podgraf, publikujete jej v řetězci. ## Testování Podgrafu v Podgraf Studio @@ -60,13 +60,13 @@ Pokud chcete subgraf otestovat před jeho publikováním v síti, můžete tak u Dostali jste se až sem - gratulujeme! -In order to publish your subgraph successfully, you’ll need to go through the following steps outlined in this [section](/publishing/publishing-a-subgraph/). +Abyste mohli svůj podgraf úspěšně publikovat, musíte provést následující kroky popsané v této [sekci](/publishing/publishing-a-subgraph/). Podívejte se také na níže uvedený videopřehled: -Remember, while you’re going through your publishing flow, you’ll be able to push to either Arbitrum One or Arbitrum Sepolia. If you’re a first-time subgraph developer, we highly suggest you start with publishing to Arbitrum Sepolia, which is free to do. This will allow you to see how the subgraph will work in Graph Explorer and will allow you to test curation elements. +Nezapomeňte, že během publikačního toku budete moci tlačit na Arbitrum One nebo Arbitrum Sepolia. Pokud vyvíjíte podgrafy poprvé, důrazně doporučujeme začít s publikováním do Arbitrum Sepolia, které je zdarma. To vám umožní zjistit, jak bude subgraf fungovat v Průzkumníku grafů, a umožní vám to otestovat kurátorské prvky. Indexátoři musí předkládat povinné záznamy Proof of Indexing od určitého bloku hash. Protože zveřejnění podgrafu je akce prováděná v řetězci, nezapomeňte, že provedení transakce může trvat až několik minut. Jakákoli adresa, kterou použijete k publikování kontraktu, bude jediná, která bude moci publikovat budoucí verze. Vybírejte proto moudře! @@ -76,14 +76,14 @@ Podgrafy s kurátorským signál jsou zobrazeny indexátorům, aby mohly být in ## Verzování podgrafu pomocí CLI -Developers might want to update their subgraph, for a variety of reasons. When this is the case, you can deploy a new version of your subgraph to the Studio using the CLI (it will only be private at this point) and if you are happy with it, you can publish this new deployment to Graph Explorer. This will create a new version of your subgraph that curators can start signaling on and Indexers will be able to index this new version. +Vývojáři mohou chtít aktualizovat svůj podgraf z různých důvodů. V takovém případě můžete pomocí CLI nasadit novou verzi podgrafu do Studio (v tomto okamžiku bude pouze soukromá), a pokud jste s ní spokojeni, můžete toto nové nasazení publikovat v Graf Explorer. 
Tím se vytvoří nová verze vašeho podgrafu, kterou mohou kurátoři začít signalizovat, a indexátory budou moci tuto novou verzi indexovat. -Up until recently, developers were forced to deploy and publish a new version of their subgraph to the Explorer to update the metadata of their subgraphs. Now, developers can update the metadata of their subgraphs **without having to publish a new version**. Developers can update their subgraph details in the Studio (under the profile picture, name, description, etc) by checking an option called **Update Details** in Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. +Až donedávna byli vývojáři nuceni nasadit a publikovat novou verzi svého podgrafu v Průzkumníku, aby mohli aktualizovat metadata svých podgrafů. Nyní mohou vývojáři aktualizovat metadata svých podgrafů **bez nutnosti publikovat novou verzi**. Vývojáři mohou aktualizovat podrobnosti o svých podgrafech ve Studio (pod profilovým obrázkem, názvem, popisem atd.) zaškrtnutím možnosti nazvané **Aktualizovat podrobnosti** v Průzkumníku grafů. Pokud je tato možnost zaškrtnuta, bude vygenerována řetězová transakce, která aktualizuje podrobnosti subgrafu v Průzkumníku, aniž by bylo nutné publikovat novou verzi s novým nasazením. Upozorňujeme, že s publikováním nové verze podgrafu v síti jsou spojeny náklady. Kromě transakčních poplatků musí vývojáři financovat také část kurátorské daně za automaticky migrující signál. Novou verzi podgrafu nelze publikovat, pokud na ni kurátoři nesignalizovali. Více informací o rizicích kurátorství najdete [zde](/network/curating). ### Automatická archivace verzí podgrafů -Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in the Studio UI. Please note that previous versions of non-published subgraphs deployed to the Studio will be automatically archived. +Při každém nasazení nové verze podgrafu v Podgraf Studio se předchozí verze archivuje. Archivované verze nebudou indexovány/synchronizovány, a proto se na ně nelze dotazovat. Archivovanou verzi podgrafu můžete zrušit v UI Studio. Upozorňujeme, že předchozí verze nepublikovaných podgrafů nasazených do Studio budou archivovány automaticky. ![Podraf Studio - Unarchive](/img/Unarchive.png) diff --git a/website/pages/cs/developing/creating-a-subgraph.mdx b/website/pages/cs/developing/creating-a-subgraph.mdx index 3fc67304eabd..3ca487e7fd09 100644 --- a/website/pages/cs/developing/creating-a-subgraph.mdx +++ b/website/pages/cs/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: Vytvoření podgraf --- -Podgraf získává data z blockchain, zpracovává je a ukládá tak, aby se na ně dalo snadno dotazovat prostřednictvím jazyka GraphQL. +This detailed guide provides instructions to successfully create a subgraph. -![Definování podgrafu](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. 
-Definice podgraf se skládá z několika souborů: +![Definování podgrafu](/img/defining-a-subgraph.png) -- `subgraph.yaml`: soubor YAML obsahující manifest podgraf +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: schéma GraphQL, které definuje, jaká data jsou uložena pro váš podgraf a jak se na ně dotazovat prostřednictvím jazyka GraphQL +## Začínáme -- `Mapování skriptů sestavy`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) kód, který převádí data událostí na entity definované ve vašem schématu (např. `mapping.ts` v tomto tutoriálu) +### Instalace Graf CLI -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## Instalace Graf CLI +V místním počítači spusťte jeden z následujících příkazů: -Graf CLI je napsáno v jazyce JavaScript a k jeho použití je třeba nainstalovat buď `yarn`, nebo `npm`; v následujícím se předpokládá, že máte yarn. +#### Using [npm](https://www.npmjs.com/) -Jakmile budete mít `yarn`, nainstalujte Graf CLI spuštěním příkazu +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Instalace pomocí yarn:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**Instalace pomocí npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## Ze stávající smlouvy +### From an existing contract -Následující příkaz vytvoří podgraf, který indexuje všechny události existující smlouvy. 
Pokusí se načíst ABI smlouvy z Etherscan a vrátí se k požadavku na cestu k místnímu souboru. Pokud některý z nepovinných argumentů chybí, projde příkaz interaktivním formulářem. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -`` je ID vašeho podgraf ve Studio podgraph, najdete ho na stránce s podrobnostmi o podgrafu. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## Z příkladu podgraf +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -Druhý režim `graf init` podporuje vytvoření nového projektu z příkladového podgraf. To provede následující příkaz: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Přidání nových zdrojů dat do existujícího podgraf +## Add new `dataSources` to an existing subgraph -Od verze `v0.31.0` podporuje `graf-cli` přidávání nových zdrojů dat do existujícího podgrafu pomocí příkazu `graf add`. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
[] @@ -78,22 +88,45 @@ Možnosti: --network-file Cesta ke konfiguračnímu souboru sítě (výchozí: "./networks.json") ``` -Příkaz `add` načte ABI z Etherscan (pokud není zadána cesta k ABI pomocí volby `--abi`) a vytvoří nový `dataSource` stejným způsobem jako příkaz `graph init` vytvoří `dataSource` `--from-contract`, přičemž odpovídajícím způsobem aktualizuje schéma a mapování. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- Volba `--merge-entities` určuje, jak chce vývojář řešit konflikty názvů `entity` a `event`: + + - Pokud `true`: nový `dataSource` by měl používat stávající `eventHandlers` & `entity`. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- Smlouva `adresa` bude zapsána do souboru `networks.json` pro příslušnou síť. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -Volba `--merge-entities` určuje, jak chce vývojář řešit konflikty názvů `entity` a `event`: +## Components of a subgraph -- Pokud `true`: nový `dataSource` by měl používat stávající `eventHandlers` & `entity`. -- Pokud `false`: měla by být vytvořena nová entita & obsluha události s `${dataSourceName}{EventName}`. +### Manifest podgrafu -Smlouva `adresa` bude zapsána do souboru `networks.json` pro příslušnou síť. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **Poznámka:** Při použití interaktivního klienta budete po úspěšném spuštění `graf init` vyzváni k přidání nového `dataSource`. +The **subgraph definition** consists of the following files: -## Manifest podgrafu +- `subgraph.yaml`: Contains the subgraph manifest -Manifest podgrafu `subgraph.yaml` definuje inteligentní smlouvy, které váš podgraf indexuje, kterým událostem z těchto smluv má věnovat pozornost a jak mapovat data událostí na entity, které Graf uzel ukládá a umožňuje dotazovat. Úplnou specifikaci manifestů podgrafu naleznete [zde](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -Pro příklad podgraf `subgraph.yaml` je: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). + +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ Jeden subgraf může indexovat data z více inteligentní smluv. 
Do pole `dataSo Spouštěče pro zdroj dat v rámci bloku jsou seřazeny podle následujícího postupu: -1. Spouštěče událostí a volání jsou nejprve seřazeny podle indexu transakce v rámci bloku. -2. Spouštěče událostí a volání v rámci jedné transakce jsou seřazeny podle konvence: nejprve spouštěče událostí a poté spouštěče volání, přičemž každý typ dodržuje pořadí, v jakém jsou definovány v manifestu. -3. Spouštěče bloků jsou spuštěny po spouštěčích událostí a volání, v pořadí, v jakém jsou definovány v manifestu. +1. Spouštěče událostí a volání jsou nejprve seřazeny podle indexu transakce v rámci bloku. +2. Spouštěče událostí a volání v rámci jedné transakce jsou seřazeny podle konvence: nejprve spouštěče událostí a poté spouštěče volání, přičemž každý typ dodržuje pořadí, v jakém jsou definovány v manifestu. +3. Spouštěče bloků jsou spuštěny po spouštěčích událostí a volání, v pořadí, v jakém jsou definovány v manifestu. Tato pravidla objednávání se mohou změnit. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
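To make the bullet points above concrete, here is a minimal sketch of how such a declared call might sit under an event handler entry in the manifest; the event signature and handler name are illustrative placeholders, and only the `calls` entry mirrors the label and call quoted above:

```yaml
eventHandlers:
  - event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24)
    handler: handleSwap
    calls:
      global0X128: Pool[event.address].feeGrowthGlobal0X128()
```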
@@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| Verze | Poznámky vydání | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | +| Verze | Poznámky vydání | +|:-----:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | | 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### Získání ABI @@ -442,16 +475,16 @@ U některých typů entit je `id` vytvořeno z id dvou jiných entit; to je mož V našem GraphQL API podporujeme následující skaláry: -| Typ | Popis | -| --- | --- | -| `Bajtů` | Pole bajtů reprezentované jako hexadecimální řetězec. Běžně se používá pro hashe a adresy Ethereum. | -| `Řetězec` | Skalár pro hodnoty `řetězce`. Nulové znaky nejsou podporovány a jsou automaticky odstraněny. | -| `Boolean` | Skalár pro hodnoty `boolean`. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | Celé číslo se znaménkem o velikosti 8 bajtů, známé také jako 64bitové celé číslo se znaménkem, může uchovávat hodnoty v rozsahu od -9 223 372 036 854 775 808 do 9 223 372 036 854 775 807. Přednostně se používá k reprezentaci `i64` z ethereum. | -| `BigInt` | Velká celá čísla. Používá se pro typy `uint32`, `int64`, `uint64`, ..., `uint256` společnosti Ethereum. Poznámka: Vše pod `uint32`, jako například `int32`, `uint24` nebo `int8`, je reprezentováno jako `i32`. | -| `BigDecimal` | `BigDecimal` Desetinná čísla s vysokou přesností reprezentovaná jako signifikand a exponent. Rozsah exponentu je od -6143 do +6144. Zaokrouhleno na 34 významných číslic. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| Typ | Popis | +| ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bajtů` | Pole bajtů reprezentované jako hexadecimální řetězec. Běžně se používá pro hashe a adresy Ethereum. | +| `Řetězec` | Skalár pro hodnoty `řetězce`. Nulové znaky nejsou podporovány a jsou automaticky odstraněny. | +| `Boolean` | Skalár pro hodnoty `boolean`. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | Celé číslo se znaménkem o velikosti 8 bajtů, známé také jako 64bitové celé číslo se znaménkem, může uchovávat hodnoty v rozsahu od -9 223 372 036 854 775 808 do 9 223 372 036 854 775 807. Přednostně se používá k reprezentaci `i64` z ethereum. | +| `BigInt` | Velká celá čísla. Používá se pro typy `uint32`, `int64`, `uint64`, ..., `uint256` společnosti Ethereum. Poznámka: Vše pod `uint32`, jako například `int32`, `uint24` nebo `int8`, je reprezentováno jako `i32`. | +| `BigDecimal` | `BigDecimal` Desetinná čísla s vysokou přesností reprezentovaná jako signifikand a exponent. Rozsah exponentu je od -6143 do +6144. Zaokrouhleno na 34 významných číslic. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enums @@ -593,7 +626,7 @@ Tento propracovanější způsob ukládání vztahů mnoho-více vede k menším #### Přidání komentářů do schématu -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Poznámka:** Nový zdroj dat bude zpracovávat pouze volání a události pro blok, ve kterém byl vytvořen, a všechny následující bloky, ale nebude zpracovávat historická data, tj. data obsažená v předchozích blocích. -> +> > Pokud předchozí bloky obsahují data relevantní pro nový zdroj dat, je nejlepší tato data indexovat načtením aktuálního stavu smlouvy a vytvořením entit reprezentujících tento stav v době vytvoření nového zdroje dat. ### Kontext zdroje dat @@ -930,7 +963,7 @@ dataSources: ``` > **Poznámka:** Blok pro vytvoření smlouvy lze rychle vyhledat v Etherscan: -> +> > 1. Vyhledejte smlouvu zadáním její adresy do vyhledávacího řádku. > 2. Klikněte na hash transakce vytvoření v sekci `Tvůrce smlouvy`. > 3. Načtěte stránku s podrobnostmi o transakci, kde najdete počáteční blok pro danou smlouvu. @@ -945,9 +978,9 @@ Nastavení `indexerHints` v manifestu podgrafu poskytuje směrnice pro indexáto `indexerHints.prune`: Definuje zachování historických blokových dat pro podgraf. Mezi možnosti patří: -1. `"nikdy"`: Žádné ořezávání historických dat; zachovává celou historii. -2. `"auto"`: Zachovává minimální potřebnou historii nastavenou indexátorem, čímž optimalizuje výkon dotazu. -3. Konkrétní číslo: Nastaví vlastní limit počtu historických bloků, které se mají zachovat. +1. `"nikdy"`: Žádné ořezávání historických dat; zachovává celou historii. +2. `"auto"`: Zachovává minimální potřebnou historii nastavenou indexátorem, čímž optimalizuje výkon dotazu. +3. Konkrétní číslo: Nastaví vlastní limit počtu historických bloků, které se mají zachovat. 
``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1230,7 +1240,7 @@ Počínaje `specVersion` `0.0.4` musí být funkce podgrafů explicitně deklaro | [Fulltextové vyhledávání](#defining-fulltext-search-fields) | `fullTextSearch` | | [Štěpování](#grafting-onto-existing-subgraphs) | `štěpování` | -Pokud například dílčí graf používá funkce **Plnotextové vyhledávání** a **Nefatální chyby**, pole `Vlastnosti` v manifestu by mělo být: +Pokud například dílčí graf používá funkce **Plnotextové vyhledávání** a **Nefatální chyby**, pole ` Vlastnosti ` v manifestu by mělo být: ```yaml specVersion: 0.0.4 @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### Vytvoření nové obslužné pro zpracování souborů -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). CID souboru jako čitelný řetězec lze získat prostřednictvím `dataSource` následujícím způsobem: diff --git a/website/pages/cs/developing/developer-faqs.mdx b/website/pages/cs/developing/developer-faqs.mdx index 828debf0f7a4..43945af1b50c 100644 --- a/website/pages/cs/developing/developer-faqs.mdx +++ b/website/pages/cs/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: FAQs pro vývojáře --- -## 1. Co je to podgraf? +This page summarizes some of the most common questions for developers building on The Graph. -Podgraf je vlastní API postavené na datech blockchainu. Podgrafy jsou dotazovány pomocí dotazovacího jazyka GraphQL a jsou nasazeny do uzlu Graf pomocí Graf CLI. Po nasazení a zveřejnění v decentralizované síti Graf zpracovávají indexery podgrafy a zpřístupňují je k dotazování konzumentům podgrafů. +## Subgraph Related -## 2. Mohu svůj podgraf smazat? +### 1. Co je to podgraf? -Jednou vytvořené podgrafy není možné odstranit. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Mohu změnit název podgrafu? +### 2. What is the first step to create a subgraph? -Ne. 
Jakmile je podgraf vytvořen, nelze jeho název změnit. Před vytvořením podgrafu si to důkladně promyslete, aby byl snadno vyhledatelný a identifikovatelný ostatními dapps. +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Mohu změnit účet GitHub přidružený k mému podgrafu? +### 3. Can I still create a subgraph if my smart contracts don't have events? -Ne. Jakmile je podgraf vytvořen, nelze přidružený účet GitHub změnit. Než vytvoříte podgraf, důkladně si to promyslete. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. Mohu vytvořit podgraf i v případě, že moje chytré smlouvy nemají události? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -Důrazně doporučujeme, abyste své chytré kontrakty strukturovali tak, aby měly události spojené s daty, na která se chcete dotazovat. Obsluhy událostí v podgrafu jsou spouštěny událostmi smlouva a jsou zdaleka nejrychlejším způsobem, jak získat užitečná data. +### 4. Mohu změnit účet GitHub přidružený k mému podgrafu? -Pokud smlouva, se kterými pracujete, neobsahují události, můžete ke spuštění indexování použít obsluhy volání a bloků. To se však nedoporučuje, protože výkon bude výrazně nižší. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. Je možné nasadit jeden podgraf se stejným názvem pro více sítí? +### 5. How do I update a subgraph on mainnet? -Pro více sítí budete potřebovat samostatné názvy. I když nemůžete mít různé podgrafy pod stejným názvem, existují pohodlné způsoby, jak mít jednu kódovou základnu pro více sítí. Více informací o tom najdete v naší dokumentaci: [přemístění podgrafu](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. Jak se liší šablony od zdrojů dat? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Šablony umožňují vytvářet zdroje dat za běhu, zatímco se podgraf indexuje. Může se stát, že vaše smlouva bude vytvářet nové smlouvy, jak s ní budou lidé interagovat, a protože znáte tvar těchto smluv (ABI, události atd.) předem, můžete definovat, jak je chcete indexovat v šabloně, a když se vytvoří, váš podgraf vytvoří dynamický zdroj dat dodáním adresy smlouvy. +Podgraf musíte znovu nasadit, ale pokud se ID podgrafu (hash IPFS) nezmění, nebude se muset synchronizovat od začátku. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? 
+ +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +V rámci podgrafu se události zpracovávají vždy v pořadí, v jakém se objevují v blocích, bez ohledu na to, zda se jedná o více smluv. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Podívejte se do části "Instancování šablony zdroje dat" na: [Šablony datových zdrojů](/developing/creating-a-subgraph#data-source-templates). -## 8. Jak se ujistím, že pro místní nasazení používám nejnovější verzi graph-node? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -Můžete spustit následující příkaz: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**POZNÁMKA:** docker / docker-compose vždy použije tu verzi graf uzlu, která byla stažena při prvním spuštění, takže je důležité to udělat, abyste se ujistili, že máte nejnovější verzi graf uzlu. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. Jak mohu z mapování podgrafů zavolat smluvní funkci nebo přistupovat k veřejné stavové proměnné? +Obsluhy událostí a volání jsou nejprve seřazeny podle indexu transakce v rámci bloku. Obsluhy událostí a volání v rámci téže transakce jsou seřazeny podle konvence: nejprve obsluhy událostí, pak obsluhy volání, přičemž každý typ dodržuje pořadí, v jakém jsou definovány v manifestu. Obsluhy bloků se spouštějí po obsluhách událostí a volání v pořadí, v jakém jsou definovány v manifestu. I tato pravidla řazení se mohou měnit. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +Při vytváření nových dynamických zdrojů dat se obslužné rutiny definované pro dynamické zdroje dat začnou zpracovávat až po zpracování všech existujících obslužných rutin zdrojů dat a budou se opakovat ve stejném pořadí, kdykoli budou spuštěny. -## 10. Je možné vytvořit podgraf pomocí `graph init` z `graph-cli` se dvěma smlouvami? Nebo mám po spuštění `graph init` ručně přidat další datový zdroj v `subgraph.yaml`? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Ano. V samotném příkazu `graph init` můžete přidat více datových zdrojů zadáním smluv za sebou. Pro přidání nového datového zdroje můžete také použít příkaz `graph add`. +Můžete spustit následující příkaz: -## 11. Chci přispět nebo přidat problém na GitHub. Kde najdu repozitáře s otevřeným zdrojovým kódem? 
+```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. Jaký je doporučený způsob vytváření "automaticky generovaných" ids pro entity při zpracování událostí? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? Pokud je během události vytvořena pouze jedna entita a pokud není k dispozici nic lepšího, pak by hash transakce + index protokolu byly jedinečné. Můžete je obfuskovat tak, že je převedete na bajty a pak je proženete přes `crypto.keccak256`, ale tím se jejich jedinečnost nezvýší. -## 13. Je možné při poslechu více smluv zvolit pořadí smlouvy, ve kterém se mají události poslouchat? +### 15. Can I delete my subgraph? -V rámci podgrafu se události zpracovávají vždy v pořadí, v jakém se objevují v blocích, bez ohledu na to, zda se jedná o více smluv. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +Seznam podporovaných sítí najdete [zde](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Ano, můžete to provést importováním `graph-ts` podle níže uvedeného příkladu: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Mohu do mapování podgrafů importovat ethers.js nebo jiné JS knihovny? - -V současné době ne, protože mapování jsou zapsána v AssemblyScript. Jedním z možných alternativních řešení je ukládat surová data do entit a logiku, která vyžaduje knihovny JS, provádět na klientovi. +## Indexing & Querying Related -## 17. Je možné určit, od kterého bloku se má indexování spustit? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Existují nějaké tipy, jak zvýšit výkon indexování? Synchronizace mého podgrafu trvá velmi dlouho +### 20. What are some tips to increase the performance of indexing? 
My subgraph is taking a very long time to sync -Ano, měli byste se podívat na volitelnou funkci start bloku, která umožňuje zahájit indexování od bloku, ve kterém byla smlouva nasazena: [Start bloky](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Existuje způsob, jak se přímo zeptat podgrafu a zjistit poslední číslo bloku, který indexoval? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Ano! Vyzkoušejte následující příkaz, přičemž "organization/subgraphName" nahraďte názvem organizace, pod kterou je publikován, a názvem vašeho podgrafu: @@ -102,44 +121,27 @@ Ano! Vyzkoušejte následující příkaz, přičemž "organization/subgraphName curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. Jaké sítě podporuje Graf? - -Seznam podporovaných sítí najdete [zde](/developing/supported-networks). - -## 21. Je možné duplikovat podgraf do jiného účtu nebo koncového bodu, aniž by bylo nutné provést nové nasazení? - -Podgraf musíte znovu nasadit, ale pokud se ID podgrafu (hash IPFS) nezmění, nebude se muset synchronizovat od začátku. - -## 22. Je možné použít Apollo Federation nad graph-node? +### 22. Is there a limit to how many objects The Graph can return per query? -Federace zatím není podporována, i když ji chceme v budoucnu podporovat. V současné době můžete použít sešívání schémat, a to buď na klientovi, nebo prostřednictvím služby proxy. - -## 23. Je nějak omezeno, kolik objektů může Graf vrátit na jeden dotaz? - -Ve výchozím nastavení jsou odpovědi na dotazy omezeny na 100 položek na kolekci. Pokud chcete získat více, můžete jít až na 1000 položek na kolekci a nad tuto hranici můžete stránkovat pomocí: +By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -## 24. Pokud můj frontend dapp používá pro dotazování The Graph, musím svůj dotazovací klíč zapsat přímo do frontend? Co když budeme za uživatele platit poplatky za dotazování - způsobí zlomyslní uživatelé, že naše poplatky za dotazování budou velmi vysoké? - -V současné době je doporučeným přístupem pro dapp přidání klíče do frontendu a jeho zpřístupnění koncovým uživatelům. Přitom můžete tento klíč omezit na název hostitele, například _yourdapp.io_ a podgraf. Bránu v současné době provozuje Edge & Node. Součástí odpovědnosti brány je monitorování zneužití a blokování provozu od škodlivých klientů. - -## 25. Kde najdu svůj aktuální podgraf v hostované službě? - -Přejděte do hostované služby, abyste našli podgrafy, které jste vy nebo jiní uživatelé nasadili do hostované služby. Najdete ji [zde](https://thegraph.com/hosted-service). - -## 26. Začne hostovaná služba účtovat poplatky za dotazy? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Graf nikdy nebude účtovat poplatky za hostovanou službu. 
Graf je decentralizovaný protokol a zpoplatnění centralizované služby není v souladu s hodnotami Graf. Hostovaná služba byla vždy dočasným krokem, který měl pomoci dostat se k decentralizované síti. Vývojáři budou mít dostatek času přejít na decentralizovanou síť, jak jim to bude vyhovovat. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 27. Jak mohu aktualizovat podgraf v síti mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. V jakém pořadí se spouštějí obsluhy událostí, bloků a volání pro zdroj dat? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Obsluhy událostí a volání jsou nejprve seřazeny podle indexu transakce v rámci bloku. Obsluhy událostí a volání v rámci téže transakce jsou seřazeny podle konvence: nejprve obsluhy událostí, pak obsluhy volání, přičemž každý typ dodržuje pořadí, v jakém jsou definovány v manifestu. Obsluhy bloků se spouštějí po obsluhách událostí a volání v pořadí, v jakém jsou definovány v manifestu. I tato pravidla řazení se mohou měnit. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -Při vytváření nových dynamických zdrojů dat se obslužné rutiny definované pro dynamické zdroje dat začnou zpracovávat až po zpracování všech existujících obslužných rutin zdrojů dat a budou se opakovat ve stejném pořadí, kdykoli budou spuštěny. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/cs/developing/graph-ts/api.mdx b/website/pages/cs/developing/graph-ts/api.mdx index d812f220a91b..624e5b95d141 100644 --- a/website/pages/cs/developing/graph-ts/api.mdx +++ b/website/pages/cs/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> Poznámka: pokud jste vytvořili subgraf před verzí `graph-cli`/`graph-ts` `0.22.0`, používáte starší verzi jazyka AssemblyScript, doporučujeme se podívat do [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -Tato stránka dokumentuje, jaké vestavěné API lze použít při psaní mapování podgrafů. Dva druhy API jsou k dispozici hned po vybalení: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- kód generovaný ze souborů podgrafů pomocí `graph codegen`. 
+- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -Jako závislosti je možné přidat i další knihovny, pokud jsou kompatibilní s [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Vzhledem k tomu, že mapování je psáno v tomto jazyce, je [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) dobrým zdrojem informací o funkcích jazyka a standardních knihoven. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## Reference API @@ -27,16 +29,16 @@ Knihovna `@graphprotocol/graph-ts` poskytuje následující API: `apiVersion` v manifestu podgrafu určuje verzi mapovacího API, kterou pro daný podgraf používá uzel Graf. -| Verze | Poznámky vydání | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Přidá ověření existence polí ve schéma při ukládání entity. | -| 0.0.7 | Přidání tříd `TransactionReceipt` a `Log` do typů Ethereum\<0/Přidání pole `receipt` do objektu Ethereum událost | -| 0.0.6 | Přidáno pole `nonce` do objektu Ethereum Transaction
Přidáno `baseFeePerGas` do objektu Ethereum bloku | +| Verze | Poznámky vydání | +| :---: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Přidá ověření existence polí ve schéma při ukládání entity. | +| 0.0.7 | Přidání tříd `TransactionReceipt` a `Log` do typů Ethereum<0/Přidání pole `receipt` do objektu Ethereum událost | +| 0.0.6 | Přidáno pole `nonce` do objektu Ethereum Transaction
Přidáno `baseFeePerGas` do objektu Ethereum bloku | | 0.0.5 | AssemblyScript povýšen na verzi 0.19.10 (obsahuje rozbíjející změny, viz [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` přejmenováno na `ethereum.transaction.gasLimit` | -| 0.0.4 | Přidání pole `functionSignature` do objektu Ethereum SmartContractCall | -| 0.0.3 | Do objektu Ethereum Call přidáno pole `from`
`etherem.call.address` přejmenováno na `ethereum.call.to` | -| 0.0.2 | Přidání pole `input` do objektu Ethereum Transackce | +| 0.0.4 | Přidání pole `functionSignature` do objektu Ethereum SmartContractCall | +| 0.0.3 | Do objektu Ethereum Call přidáno pole `from`
`etherem.call.address` přejmenováno na `ethereum.call.to` | +| 0.0.2 | Přidání pole `input` do objektu Ethereum Transackce | ### Vestavěné typy @@ -145,7 +147,7 @@ _Math_ - `x.notEqual(y: BigInt): bool` –lze zapsat jako `x != y`. - `x.lt(y: BigInt): bool` – lze zapsat jako `x < y`. - `x.le(y: BigInt): bool` – lze zapsat jako `x <= y`. -- `x.gt(y: BigInt): bool` – lze zapsat jako `x > y`. +- `x.gt(y: BigInt): bool` – lze zapsat jako `x > y`. - `x.ge(y: BigInt): bool` – lze zapsat jako `x >= y`. - `x.neg(): BigInt` – lze zapsat jako `-x`. - `x.divDecimal(y: BigDecimal): BigDecimal` – dělí desetinným číslem, čímž získá desetinný výsledek. @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { Pokud se při zpracování řetězce vyskytne událost `Transfer`, je předána obsluze události `handleTransfer` pomocí vygenerovaného typu `Transfer` (zde alias `TransferEvent`, aby nedošlo ke konfliktu názvů s typem entity). Tento typ umožňuje přístup k datům, jako je nadřazená transakce události a její parametr. -Každá entita musí mít jedinečné ID, aby nedocházelo ke kolizím s jinými entitami. Je poměrně běžné, že parametry událostí obsahují jedinečný identifikátor, který lze použít. Poznámka: Použití hashe transakce jako ID předpokládá, že žádné jiné události ve stejné transakci nevytvářejí entity s tímto hashem jako ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Načítání entity z úložiště @@ -268,15 +272,18 @@ if (transfer == null) { // Použijte entitu Transfer jako dříve ``` -Protože entita ještě nemusí v ukládat existovat, metoda `load` vrátí hodnotu typu `Transfer | null`. Proto může být nutné před použitím hodnoty zkontrolovat, zda se nejedná o případ `null`. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Poznámka:** Načtení entit je nutné pouze v případě, že změny provedené v mapování závisí na předchozích datech entity. Dva způsoby aktualizace existujících entit naleznete v následující části. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Vyhledávání entit vytvořených v rámci bloku Od verzí `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 a `@graphprotocol/graph-cli` v0.49.0 je metoda `loadInBlock` dostupná pro všechny typy entit. -API úložiště usnadňuje načítání entit, které byly vytvořeny nebo aktualizovány v aktuálním bloku. Typickou situací je, že jeden obslužný program vytvoří transakci z nějaké události v řetězci a pozdější obslužný program chce k této transakci přistupovat, pokud existuje. V případě, že transakce neexistuje, bude muset podgraf jít do databáze, jen aby zjistil, že entita neexistuje; pokud autor podgrafu již ví, že entita musela být vytvořena v tomtéž bloku, použitím funkce loadInBlock se této okružní cestě do databáze vyhne. U některých podgrafů mohou tato zmeškaná vyhledávání významně přispět k prodloužení doby indexace. +The store API facilitates the retrieval of entities that were created or updated in the current block. 
A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Jakákoli jiná smlouva, která je součástí podgrafu, může být importován #### Zpracování vrácených volání -Pokud se metody vaší smlouvy určené pouze pro čtení mohou vrátit, měli byste to řešit voláním vygenerované metody smlouvy s předponou `try_`. Například kontrakt Gravity vystavuje metodu `gravatarToOwner`. Tento kód by byl schopen zvládnout revert v této metodě: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Všimněte si, že uzel Graf připojený ke klientovi Geth nebo Infura nemusí detekovat všechny reverty, pokud na to spoléháte, doporučujeme použít uzel Graf připojený ke klientovi Parity. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Kódování/dekódování ABI diff --git a/website/pages/cs/developing/supported-networks.mdx b/website/pages/cs/developing/supported-networks.mdx index b9addda0b59e..cc2b778d6cad 100644 --- a/website/pages/cs/developing/supported-networks.mdx +++ b/website/pages/cs/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - Úplný seznam funkcí podporovaných v decentralizované síti najdete na [této stránce](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). 
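> Note: The supported-networks notes above mention per-network identifiers such as `gnosis`, and the FAQ earlier in this diff points to `dataSource.network()` for telling networks apart inside handlers. The following is only a minimal illustrative sketch of that pattern — the helper name, the `gnosis` branch, and the log messages are assumptions made for the example, not part of the documented changes:

```typescript
import { dataSource, log } from '@graphprotocol/graph-ts'

// Illustrative helper: branch mapping logic on the network declared in the subgraph manifest.
function logCurrentNetwork(): void {
  let network = dataSource.network() // e.g. "mainnet", "sepolia" or "gnosis"

  if (network == 'gnosis') {
    log.info('Indexing against Gnosis Chain', [])
  } else {
    log.info('Indexing against {}', [network])
  }
}
```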
diff --git a/website/pages/cs/developing/unit-testing-framework.mdx b/website/pages/cs/developing/unit-testing-framework.mdx index d72560d965f6..11e4593d926d 100644 --- a/website/pages/cs/developing/unit-testing-framework.mdx +++ b/website/pages/cs/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ Výstup protokolu obsahuje dobu trvání test. Zde je příklad: > Kritické: Nelze vytvořit WasmInstance z platného modulu s kontextem: neznámý import: wasi_snapshot_preview1::fd_write nebyl definován -To znamená, že jste ve svém kódu použili `console.log`, což není podporováno jazykem AssemblyScript. Zvažte prosím použití [Logging API](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) Neshoda v argumentech je způsobena neshodou v `graph-ts` a `matchstick-as`. Nejlepší způsob, jak opravit problémy, jako je tento, je aktualizovat vše na nejnovější vydanou verzi. diff --git a/website/pages/cs/glossary.mdx b/website/pages/cs/glossary.mdx index b1e74d28b440..8ebc7be68ab5 100644 --- a/website/pages/cs/glossary.mdx +++ b/website/pages/cs/glossary.mdx @@ -10,11 +10,9 @@ title: Glosář - **Koncový bod**: URL, které lze použít k dotazu na podgraf. Testovací koncový bod pro Podgraf Studio je `https://api.studio.thegraph.com/query///` a koncový bod Graf Exploreru je `https://gateway.thegraph.com/api//subgraphs/id/`. Koncový bod Graf Explorer se používá k dotazování podgrafů v decentralizované síti Graf. -- **Podgraf**: Otevřené API, které získává data z blockchainu, zpracovává je a ukládá tak, aby bylo možné se na ně snadno dotazovat prostřednictvím GraphQL. Vývojáři mohou vytvářet, nasazovat a publikovat podgrafy v síti Graf Poté mohou indexátoři začít indexovat podgrafy, aby je kdokoli mohl vyhledávat. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hostovaná služba**: Dočasná lešenářská služba pro vytváření a dotazování podgrafů v době, kdy decentralizovaná síť Graf dozrává v oblasti nákladů na služby, kvality služeb a zkušeností vývojářů. - -- **Indexery**: Účastníci sítě, kteří provozují indexovací uzly pro indexování dat z blockchainů a obsluhu dotazů GraphQL. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Příjmy indexátorů**: Indexátoři jsou v GRT odměňováni dvěma složkami: slevami z poplatků za dotazy a odměnami za indexování. 
@@ -24,17 +22,17 @@ title: Glosář - **Vlastní vklad indexátora**: Částka GRT, kterou indexátoři vkládají, aby se mohli účastnit decentralizované sítě. Minimum je 100,000 GRT a horní hranice není stanovena. -- **Upgrade indexeru**: Dočasný indexer určený jako záložní pro dotazy na podgrafy, které nejsou obsluhovány jinými indexery v síti. Zajišťuje bezproblémový přechod pro podgrafy, které se upgradují z hostované služby na Síť Graf. Upgrade Indexer není konkurenční vůči ostatním Indexerům. Podporuje řadu blokových řetězců, které byly dříve dostupné pouze v hostované službě. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegátoři**: Účastníci sítě, kteří vlastní GRT a delegují své GRT na indexátory. To umožňuje Indexerům zvýšit svůj podíl v podgrafech v síti. Delegáti na oplátku dostávají část odměn za indexování, které indexátoři dostávají za zpracování podgrafů. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegační daň**: 0.5% poplatek, který platí delegáti, když delegují GRT na indexátory. GRT použitý k úhradě poplatku se spálí. -- **Kurátoři**: Účastníci sítě, kteří identifikují vysoce kvalitní podgrafy a "kurátorují" je (tj. signalizují na nich GRT) výměnou za kurátorské podíly. Když indexátoři požadují poplatky za dotaz na podgraf, 10% se rozdělí kurátorům tohoto podgrafu. Indexátoři získávají indexační odměny úměrné signálu na podgrafu. Vidíme korelaci mezi množstvím signalizovaných GRT a počtem indexátorů indexujících podgraf. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: Kurátoři platí 1% poplatek, když signalizují GRT na podgraf GRT použitý k zaplacení poplatku se spálí. -- **Podgraf Spotřebitel**: Jakákoli aplikace nebo uživatel, který se dotazuje na podgraf. +- **Data Consumer**: Any application or user that queries a subgraph. - **Vývojář podgrafů**: Vývojář, který vytváří a nasazuje subgraf do decentralizované sítě Grafu. @@ -46,11 +44,11 @@ title: Glosář 1. **Aktivní**: Alokace je považována za aktivní, když je vytvořena v řetězci. Tomu se říká otevření alokace a signalizuje síti, že indexátor aktivně indexuje a obsluhuje dotazy pro daný podgraf. Aktivní alokace získávají odměny za indexování úměrné signálu na podgrafu a množství alokovaného GRT. - 2. **Zavřeno**: Indexátor si může nárokovat odměny za indexaci daného podgrafu předložením aktuálního a platného dokladu o indexaci (POI). Tomuto postupu se říká uzavření přídělu. Alokace musí být otevřena minimálně jednu epochu, aby mohla být uzavřena. Maximální doba přidělení je 28 epoch. Pokud indexátor ponechá alokaci otevřenou déle než 28 epoch, je tato alokace označována jako zastaralá. Když je alokace ve stavu **uzavřeno**, může rybář stále otevřít spor a napadnout indexátor za podávání falešných dat. + 2. 
**Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **Podgraf Studio**: Výkonná aplikace pro vytváření, nasazování a publikování podgrafů. -- **Rybáři**: Úloha v rámci sítě Grafu, kterou zastávají účastníci, kteří sledují přesnost a integritu dat poskytovaných indexátory. Pokud Rybář identifikuje odpověď na dotaz nebo POI, o které se domnívá, že je nesprávná, může iniciovat spor s Indexátorem. Pokud spor rozhodne ve prospěch Rybáře, je Indexátor vyřazen. Konkrétně indexátor přijde o 2.5 % svého vlastního podílu na GRT. Z této částky je 50% přiznáno Rybáři jako odměna za jeho bdělost a zbývajících 50% je staženo z oběhu (spáleno). Tento mechanismus je navržen tak, aby Rybáře motivoval k tomu, aby pomáhali udržovat spolehlivost sítě tím, že zajistí, aby Indexátoři nesli odpovědnost za data, která poskytují. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Rozhodčí**: Rozhodci jsou účastníci sítě jmenovaní v rámci procesu řízení. Úkolem arbitra je rozhodovat o výsledku sporů týkajících se indexace a dotazů. Jejich cílem je maximalizovat užitečnost a spolehlivost sítě Graf. @@ -62,11 +60,11 @@ title: Glosář - **GRT**: Token pracovního nástroje Grafu. GRT poskytuje účastníkům sítě ekonomické pobídky za přispívání do sítě. -- **POI nebo Doklad o indexování**: Když indexátor uzavře svůj příděl a chce si nárokovat své naběhlé odměny za indexování na daném podgrafu, musí předložit platný a aktuální doklad o indexování (POI). Rybáři mohou POI poskytnuté indexátorem zpochybnit. Spor vyřešený ve prospěch lovce bude mít za následek snížení indexátoru. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Uzel grafu**: Uzel grafu je komponenta, která indexuje podgrafy a zpřístupňuje výsledná data pro dotazování prostřednictvím rozhraní GraphQL API. Jako takový je ústředním prvkem zásobníku indexátoru a správná činnost Uzel grafu je pro úspěšný provoz indexátoru klíčová. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Agent indexátoru**: Agent indexeru je součástí zásobníku indexeru. Usnadňuje interakce indexeru v řetězci, včetně registrace v síti, správy rozmístění podgrafů do jeho grafových uzlů a správy alokací. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **Klient grafu**: Knihovna pro decentralizované vytváření dapps na bázi GraphQL. @@ -78,10 +76,6 @@ title: Glosář - **Nástroje pro přenos L2**: Chytré smlouvy a UI, které umožňují účastníkům sítě převádět aktiva související se sítí z mainnetu Ethereum do Arbitrum One. Účastníci sítě mohou převádět delegované GRT, podgrafy, kurátorské podíly a vlastní podíl Indexera. -- **_Vylepšit_ podgrafu do Sítě grafů**: Proces přesunu podgrafu z hostované služby do Sítě grafů. - -- **_Aktualizace_ podgrafu**: Proces vydání nové verze podgrafu s aktualizacemi manifestu, schématu nebo mapování podgrafu. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrace**: Proces sdílení kurátorů, při kterém se přechází ze staré verze podgrafu na novou verzi podgrafu (např. při aktualizaci verze v0.0.1 na verzi v0.0.2). - -- **Okno aktualizace**: Odpočet, kdy mohou uživatelé hostovaných služeb aktualizovat své podgrafy na síť The Graph Network, začíná 11, dubna a končí 12, června 2024. diff --git a/website/pages/cs/index.json b/website/pages/cs/index.json index 62910320a112..a1bae4af6a25 100644 --- a/website/pages/cs/index.json +++ b/website/pages/cs/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "Vytvoření podgrafu", "description": "Vytváření podgrafů pomocí Studio" - }, - "migrateFromHostedService": { - "title": "Upgrade z hostované služby", - "description": "Aktualizace podgrafů do sítě grafů" } }, "networkRoles": { diff --git a/website/pages/cs/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/cs/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..fcf45e562390 --- /dev/null +++ b/website/pages/cs/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## Převod vlastnictví podgrafu + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. + +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-addres +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2. 
Choose the address that you would like to transfer the subgraph to:
+
+   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png)
+
+Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea:
+
+![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png)
+
+## Deprecating a subgraph
+
+Although you cannot delete a subgraph, you can deprecate it on Graph Explorer.
+
+### Step-by-Step
+
+To deprecate your subgraph, do the following:
+
+1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract).
+2. Call `deprecateSubgraph` with your `SubgraphID` as your argument.
+3. Your subgraph will no longer appear in searches on Graph Explorer.
+
+**Please note the following:**
+
+- The owner's wallet should call the `deprecateSubgraph` function.
+- Kurátoři již nebudou moci signalizovat na podgrafu.
+- Curators that already signaled on the subgraph can withdraw their signal at an average share price.
+- Deprecated subgraphs will show an error message.
+
+> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively.
diff --git a/website/pages/cs/mips-faqs.mdx b/website/pages/cs/mips-faqs.mdx
index f826d4fdc367..214716156a5c 100644
--- a/website/pages/cs/mips-faqs.mdx
+++ b/website/pages/cs/mips-faqs.mdx
@@ -6,10 +6,6 @@ title: MIPs FAQs
 
 > Poznámka: program MIPs je od května 2023 uzavřen. Děkujeme všem indexátorům, kteří se programu zúčastnili!
 
-Účast v ekosystému Grafu je vzrušující! Během [Dne Grafu 2022](https://thegraph.com/graph-day/2022/) Yaniv Tal oznámil [ukončení hostované služby](https://thegraph.com/blog/sunsetting-hosted-service/), což je okamžik, na kterém ekosystém Graf pracoval mnoho let.
-
-Nadace The Graph Foundation vyhlásila program [Migration Infrastructure Providers (MIPs)](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program), který má podpořit ukončení hostované služby a migraci všech jejích aktivit do decentralizované sítě.
-
 Program MIPs je motivační program pro indexátory, který je podporuje zdroji pro indexování řetězců mimo mainnet Ethereum a pomáhá protokolu The Graph rozšířit decentralizovanou síť na infrastrukturní vrstvu s více řetězci.
 
 Program MIPs vyčlenil 0.75% zásoby GRT (75M GRT), přičemž 0.5% je určeno na odměnu indexátorům, kteří přispívají k zavádění sítě, a 0.25% na síťové granty pro vývojáře podgrafů využívajících víceřetězcové podgrafy.
@@ -96,11 +92,11 @@ Procento, které má být rozděleno na konci programu, bude podléhat nároku.
 
 ### 13. Budou mít všichni členové týmů s více než jedním členem role MIPs Discord?
 
-Ano
+Ano
 
 ### 14. Je možné použít uzamčené tokeny z programu Kurátor grafů k účasti v testnetu MIPs?
 
-An o
+Ano
 
 ### 15. Bude během programu MIPs existovat lhůta pro zpochybnění neplatných POI? 
diff --git a/website/pages/cs/network/benefits.mdx b/website/pages/cs/network/benefits.mdx index 6f786e54adfe..620a8109b0be 100644 --- a/website/pages/cs/network/benefits.mdx +++ b/website/pages/cs/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Srovnání nákladů | Vlastní hostitel | The Graph Network | -| :-: | :-: | :-: | -| Měsíční náklady na server\* | $350 měsíčně | $0 | -| Náklady na dotazování | $0+ | $0 per month | -| Inženýrský čas | $400 měsíčně | Žádné, zabudované do sítě s globálně distribuovanými indexery | -| Dotazy za měsíc | Omezeno na infra schopnosti | 100,000 (Free Plan) | -| Náklady na jeden dotaz | $0 | $0 | -| Infrastruktura | Centralizovaný | Decentralizované | -| Geografická redundancy | $750+ Usd za další uzel | Zahrnuto | -| Provozuschopnost | Různé | 99.9%+ | -| Celkové měsíční náklady | $750+ | $0 | +| Srovnání nákladů | Vlastní hostitel | The Graph Network | +|:-----------------------------:|:---------------------------------------:|:-------------------------------------------------------------:| +| Měsíční náklady na server\* | $350 měsíčně | $0 | +| Náklady na dotazování | $0+ | $0 per month | +| Inženýrský čas | $400 měsíčně | Žádné, zabudované do sítě s globálně distribuovanými indexery | +| Dotazy za měsíc | Omezeno na infra schopnosti | 100,000 (Free Plan) | +| Náklady na jeden dotaz | $0 | $0 | +| Infrastruktura | Centralizovaný | Decentralizované | +| Geografická redundancy | $750+ Usd za další uzel | Zahrnuto | +| Provozuschopnost | Různé | 99.9%+ | +| Celkové měsíční náklady | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Srovnání nákladů | Vlastní hostitel | The Graph Network | -| :-: | :-: | :-: | -| Měsíční náklady na server\* | $350 měsíčně | $0 | -| Náklady na dotazování | $500 měsíčně | $120 per month | -| Inženýrský čas | $800 měsíčně | Žádné, zabudované do sítě s globálně distribuovanými indexery | -| Dotazy za měsíc | Omezeno na infra schopnosti | ~3,000,000 | -| Náklady na jeden dotaz | $0 | $0.00004 | -| Infrastruktura | Centralizovaný | Decentralizované | -| Výdaje inženýrskou | $200 za hodinu | Zahrnuto | -| Geografická redundancy | $1,200 celkových nákladů na další uzel | Zahrnuto | -| Provozuschopnost | Různé | 99.9%+ | -| Celkové měsíční náklady | $1,650+ | $120 | +| Srovnání nákladů | Vlastní hostitel | The Graph Network | +|:-----------------------------:|:------------------------------------------:|:-------------------------------------------------------------:| +| Měsíční náklady na server\* | $350 měsíčně | $0 | +| Náklady na dotazování | $500 měsíčně | $120 per month | +| Inženýrský čas | $800 měsíčně | Žádné, zabudované do sítě s globálně distribuovanými indexery | +| Dotazy za měsíc | Omezeno na infra schopnosti | ~3,000,000 | +| Náklady na jeden dotaz | $0 | $0.00004 | +| Infrastruktura | Centralizovaný | Decentralizované | +| Výdaje inženýrskou | $200 za hodinu | Zahrnuto | +| Geografická redundancy | $1,200 celkových nákladů na další uzel | Zahrnuto | +| Provozuschopnost | Různé | 99.9%+ | +| Celkové měsíční náklady | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Srovnání nákladů | Vlastní hostitel | The Graph Network | -| :-: | :-: | :-: | -| Měsíční náklady na server\* | $1100 měsíčně za uzel | $0 | -| Náklady na dotazování | $4000 | $1,200 per month | -| Počet potřebných uzlů | 10 | Nepoužije se | -| Inženýrský čas | 6$, 000 nebo více měsíčně | Žádné, 
zabudované do sítě s globálně distribuovanými indexery | -| Dotazy za měsíc | Omezeno na infra schopnosti | ~30,000,000 | -| Náklady na jeden dotaz | $0 | $0.00004 | -| Infrastruktura | Centralizovaný | Decentralizované | -| Geografická redundancy | $1,200 celkových nákladů na další uzel | Zahrnuto | -| Provozuschopnost | Různé | 99.9%+ | -| Celkové měsíční náklady | $11,000+ | $1,200 | +| Srovnání nákladů | Vlastní hostitel | The Graph Network | +|:-----------------------------:|:-------------------------------------------:|:-------------------------------------------------------------:| +| Měsíční náklady na server\* | $1100 měsíčně za uzel | $0 | +| Náklady na dotazování | $4000 | $1,200 per month | +| Počet potřebných uzlů | 10 | Nepoužije se | +| Inženýrský čas | 6$, 000 nebo více měsíčně | Žádné, zabudované do sítě s globálně distribuovanými indexery | +| Dotazy za měsíc | Omezeno na infra schopnosti | ~30,000,000 | +| Náklady na jeden dotaz | $0 | $0.00004 | +| Infrastruktura | Centralizovaný | Decentralizované | +| Geografická redundancy | $1,200 celkových nákladů na další uzel | Zahrnuto | +| Provozuschopnost | Různé | 99.9%+ | +| Celkové měsíční náklady | $11,000+ | $1,200 | \*včetně nákladů na zálohování: $50-$100 měsíčně diff --git a/website/pages/cs/network/curating.mdx b/website/pages/cs/network/curating.mdx index 7b10db17c678..82ab291c8d01 100644 --- a/website/pages/cs/network/curating.mdx +++ b/website/pages/cs/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. 
-To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Signalizace na konkrétní verzi je užitečná zejména tehdy, když jeden podg Automatická migrace signálu na nejnovější produkční sestavení může být cenná, protože zajistí, že se poplatky za dotazy budou neustále zvyšovat. Při každém kurátorství se platí 1% kurátorský poplatek. Při každé migraci také zaplatíte 0,5% kurátorskou daň. Vývojáři podgrafu jsou odrazováni od častého publikování nových verzí - musí zaplatit 0.5% kurátorskou daň ze všech automaticky migrovaných kurátorských podílů. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Rizika 1. Trh s dotazy je v Graf ze své podstaty mladý a existuje riziko, že vaše %APY může být nižší, než očekáváte, v důsledku dynamiky rodícího se trhu. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. Podgraf může selhat kvůli chybě. Za neúspěšný podgraf se neúčtují poplatky za dotaz. V důsledku toho budete muset počkat, až vývojář chybu opraví a nasadí novou verzi. - Pokud jste přihlášeni k odběru nejnovější verze podgrafu, vaše sdílené položky se automaticky přemigrují na tuto novou verzi. Při tom bude účtována 0,5% kurátorská daň. @@ -65,7 +65,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th Nalezení kvalitních podgrafů je složitý úkol, ale lze k němu přistupovat mnoha různými způsoby. Jako kurátor chcete hledat důvěryhodné podgrafy, které jsou zdrojem objemu dotazů. 
Důvěryhodný podgraf může být cenný, pokud je úplný, přesný a podporuje datové potřeby dApp. Špatně navržený podgraf může vyžadovat revizi nebo opětovné zveřejnění a může také skončit neúspěchem. Pro kurátory je zásadní, aby přezkoumali architekturu nebo kód podgrafu, aby mohli posoudit, zda je podgraf hodnotný. V důsledku toho: -- Kurátoři mohou využít své znalosti sítě k tomu, aby se pokusili předpovědět, jak může jednotlivý podgraf v budoucnu generovat vyšší nebo nižší objem dotazů +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. Jaké jsou náklady na aktualizaci podgrafu? @@ -78,50 +78,14 @@ Doporučujeme, abyste podgrafy neaktualizovali příliš často. Další podrobn ### 5. Mohu prodat své kurátorské podíly? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. 
For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Křivka lepení 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Cena za akcii](/img/price-per-share.png) - -V důsledku toho se cena lineárně zvyšuje, což znamená, že nákup akcie bude v průběhu času dražší. Zde je příklad toho, co máme na mysli, viz níže uvedená vazební křivka: - -![Křivka lepení](/img/bonding-curve.png) - -Uvažujme, že máme dva kurátory, kteří mintují podíly pro podgraf - -- Kurátor A signalizuje jako první na podgrafu. Přidáním 120,000 GRT do křivky se jim podaří vydolovat 2000 akcií. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Vzhledem k tomu, že oba kurátoři mají polovinu všech kurátorských podílů, dostávali by stejnou částku kurátorských honorářů. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- Zbývající kurátor by nyní obdržel všechny kurátorské honoráře za tento podgraf. Pokud by své podíly spálili a vybrali GRT, získali by 120,000 GRT. -- **TLDR:** Ocenění kurátorských akcií GRT je určeno vazebnou křivkou a může být volatilní. Existuje potenciál pro vznik velkých ztrát. Včasná signalizace znamená, že do každé akcie vložíte méně GRT. V důsledku to znamená, že vyděláte více kurátorských poplatků za GRT než pozdější kurátoři za stejný podgraf. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -V případě Grafu se využívá [Bankorova implementace vzorce vazební křivky](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA). - Stále jste zmateni? Podívejte se na našeho videoprůvodce kurátorstvím níže diff --git a/website/pages/cs/network/delegating.mdx b/website/pages/cs/network/delegating.mdx index d444c8edf2b3..4bbaf6c0ba23 100644 --- a/website/pages/cs/network/delegating.mdx +++ b/website/pages/cs/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegování --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? 
+ +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Průvodce delegáta -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,15 +34,19 @@ Níže jsou uvedena hlavní rizika plynoucí z delegáta v protokolu. Delegáti nemohou být za špatné chování kráceni, ale existuje daň pro delegáty, která má odradit od špatného rozhodování, jež by mohlo poškodit integritu sítě. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### Konec období vázanosti delegací Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
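> Note: The risk section above quantifies the 0.5% delegation tax and suggests working out how many days it takes to earn that tax back. A small TypeScript sketch of that back-of-the-envelope calculation follows; every figure in it, including the assumed reward rate, is illustrative and not a quoted network value:

```typescript
// Back-of-the-envelope sketch; all figures are illustrative.
const delegatedGrt = 1000      // GRT you plan to delegate
const delegationTax = 0.005    // 0.5% tax, burned when you delegate
const assumedYearlyRate = 0.08 // hypothetical effective reward rate for the chosen Indexer

const taxPaid = delegatedGrt * delegationTax                   // 5 GRT
const rewardsPerDay = (delegatedGrt * assumedYearlyRate) / 365 // ~0.22 GRT per day
const breakEvenDays = taxPaid / rewardsPerDay                  // ~23 days

console.log(`Roughly ${Math.ceil(breakEvenDays)} days to earn back the delegation tax`)
```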
![Zrušení vázanosti delegací](/img/Delegation-Unbonding.png) _Všimněte si 0.5% poplatku v UI delegací a 28denní lhůty. @@ -41,47 +55,65 @@ Another thing to consider is how to choose an Indexer wisely. If you choose an I ### Výběr důvěryhodného indexátora se spravedlivou odměnou pro delegáty -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
- ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *Nejlepší indexátor dává delegátům 90 % odměn. Na prostřední dává - delegátům 20 % odměn. Spodní dává delegátům ~83 %.* + ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *Nejlepší indexátor dává delegátům 90 % odměn. Na + prostřední dává delegátům 20 % odměn. Spodní dává delegátům ~83 %.*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant.
+- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects.
+
+As you can see, in order to choose the right Indexer, you must consider multiple things.
 
-As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
+- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently.
+- Many Indexers are very active in Discord and will be happy to answer your questions.
+- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
 
-### Výpočet očekávaného výnosu delegátů
+## Calculating a Delegator's Expected Return
 
-A Delegator must consider a lot of factors when determining the return. These include:
+A Delegator must consider the following factors to determine a return:
 
-- Technický delegát se může také podívat na schopnost indexátoru používat dostupné delegované tokeny. Pokud Indexátor nealokuje všechny dostupné tokeny, nevydělává pro sebe ani pro své Delegáty maximální možný zisk.
-- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days.
+- Consider an Indexer's ability to use the Delegated tokens available to them.
+  - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators.
+- Pay attention to the first few days of delegating.
+  - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low.
 
 ### S ohledem na snížení poplatků za dotaz a indexaci
 
-Jak je popsáno v předchozích částech, měli byste si vybrat indexátor, který je transparentní a poctivý, pokud jde o nastavení snížení poplatků za dotaz a indexování. Delegovatel by se měl také podívat na dobu Cooldown parametrů, aby zjistil, jak velkou má časovou rezervu. Poté je poměrně jednoduché vypočítat výši odměn, které Delegátoři dostávají. Vzorec je následující:
+You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts.
You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Delegování Obrázek 3](/img/Delegation-Reward-Formula.png) ### Zohlednění fondu delegování indexátoru -Další věcí, kterou musí delegát zvážit, je, jakou část fondu delegátů vlastní. Všechny odměny za delegování se rozdělují rovnoměrně, přičemž jednoduché vyvážení fondu se určuje podle částky, kterou delegát do fondu vložil. Delegát tak získá podíl na fondu: +Delegators should consider the proportion of the Delegation Pool they own. -![Sdílet vzorec](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Sdílet vzorec](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Vzhledem ke kapacitě delegace -Další věcí, kterou je třeba zvážit, je kapacita delegování. V současné době je poměr delegování nastaven na 16. To znamená, že pokud indexátor vsadil 1,000,000 GRT, jeho delegační kapacita je 16,000,000 GRT delegovaných tokenů, které může v protokolu použít. Jakékoli delegované tokeny nad toto množství rozředí všechny odměny delegátora. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -89,16 +121,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### Chyba MetaMask "Čekající transakce" -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. + +#### Příklad -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. 
+Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. -For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will only be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## Videoprůvodce UI sítě +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/cs/network/developing.mdx b/website/pages/cs/network/developing.mdx index 6d508e4a3b7a..04dc4759291a 100644 --- a/website/pages/cs/network/developing.mdx +++ b/website/pages/cs/network/developing.mdx @@ -2,52 +2,88 @@ title: Vývoj --- -Vývojáři jsou poptávkovou stranou ekosystému Grafu. Vývojáři vytvářejí podgrafy a publikují je v síti Graf. Poté se dotazují na živé podgrafy pomocí GraphQL, aby mohli využívat své aplikace. +To start coding right away, go to [Developer Quick Start](/quick-start/). + +## Přehled + +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. + +On The Graph, you can: + +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. + +### What is GraphQL? + +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. 
+- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? + +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `subgraph.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). +- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). ## Životní cyklus podgrafů -Podgrafy nasazené do sítě mají definovaný životní cyklus. +Here is a general overview of a subgraph’s lifecycle: -### Stavět lokálně +![Životní cyklus podgrafů](/img/subgraph-lifecycle.png) -Stejně jako při vývoji všech podgrafů se začíná lokálním vývojem a testováním. Vývojáři mohou používat stejné místní nastavení, ať už vytvářejí pro síti Graf, hostovanou službu nebo místní uzel Grafu, a využívat při vytváření podgrafu `graph-cli` a `graph-ts`. Vývojářům se doporučuje používat nástroje, jako je [Matchstick](https://github.com/LimeChain/matchstick), pro testování jednotek, aby zvýšili robustnost svých podgrafů. +### Stavět lokálně -> Síť Graf má určitá omezení, pokud jde o funkce a podporu sítě. Odměny za indexaci získají pouze podgrafy na [podporovaných sítích](/developing/supported-networks) a odměny za indexaci nemohou získat ani podgrafy, které načítají data z IPFS. +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Deploy to Subgraph Studio -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. - -### Publikovat v síti +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -Jakmile je vývojář se svým podgrafem spokojen, může jej zveřejnit v síti Grafu. Jedná se o akci v řetězci, která zaregistruje podgraf tak, aby jej indexery mohly objevit. Zveřejněné podgrafy mají odpovídající NFT, který je pak snadno přenositelný. Zveřejněný podgraf má přiřazená metadata, která poskytují ostatním účastníkům sítě užitečný kontext a informace. +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. -### Signál na podporu indexování +### Publikovat v síti -Publikované podgrafy pravděpodobně nebudou zachyceny indexátory bez přidání signálu. 
Signál je uzamčený GRT spojený s daným podgrafem, který indikuje indexátorům, že daný podgraf obdrží objem dotazů, a také přispívá k indexačním odměnám, které jsou k dispozici pro jeho zpracování. Vývojáři podgrafů obvykle přidávají ke svým podgrafům signál, aby podpořili indexování. Kurátoři třetích stran mohou také signalizovat daný podgraf, pokud se domnívají, že podgraf bude pravděpodobně vytvářet objem dotazů. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Dotazování & Vývoj aplikací +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -Jakmile je podgraf zpracován indexery a je k dispozici pro dotazování, mohou jej vývojáři začít používat ve svých aplikacích. Vývojáři se dotazují na podgrafy prostřednictvím brány, která jejich dotazy předává indexeru, jenž podgraf zpracoval, a platí poplatky za dotazy v GRT. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Updating Subgraphs +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### Dotazování & Vývoj aplikací -Jakmile je vývojář podgrafu připraven k aktualizaci, může iniciovat transakci, která jeho podgraf nasměruje na novou verzi. Aktualizace podgrafu migruje jakýkoli signál na novou verzi (za předpokladu, že uživatel, který signál aplikoval, zvolil "automatickou migraci"), čímž také vzniká migrační daň. Tato migrace signálu by měla přimět indexátory, aby začaly indexovat novou verzi podgrafu, takže by měl být brzy k dispozici pro dotazování. +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Vyřazování podgrafů +Learn more about [querying subgraphs](/querying/querying-the-graph/). 
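As a rough illustration of the query flow described above, the following TypeScript sketch sends a GraphQL query to a published subgraph over HTTP. It is a non-authoritative example: the endpoint is a placeholder (copy the actual query URL and API key for your subgraph from Subgraph Studio), and the `tokens` entity and its fields are hypothetical, depending entirely on your subgraph's schema.

```typescript
// Minimal sketch of querying a published subgraph over HTTP.
// QUERY_URL is a placeholder — use the query URL and API key shown for your subgraph in Subgraph Studio.

const QUERY_URL = 'https://gateway.thegraph.com/api/<YOUR_API_KEY>/subgraphs/id/<SUBGRAPH_ID>';

// Hypothetical query — replace `tokens` and its fields with entities from your own schema.
const query = `
  {
    tokens(first: 5) {
      id
      owner
    }
  }
`;

async function querySubgraph(): Promise<void> {
  const response = await fetch(QUERY_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });

  const { data, errors } = await response.json();
  if (errors) {
    console.error('GraphQL errors:', errors);
    return;
  }
  console.log(data);
}

querySubgraph();
```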
-V určitém okamžiku se vývojář může rozhodnout, že publikovaný podgraf již nepotřebuje. V tu chvíli může podgraf vyřadit, čímž se kurátorům vrátí všechny signalizované GRT. +### Updating Subgraphs -### Různorodé role vývojáře +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Někteří vývojáři se zapojí do celého životního cyklu podgrafů v síti, publikují, dotazují se a iterují své vlastní podgrafy. Někteří se mohou zaměřit na vývoj podgrafů a vytvářet otevřené API, na kterém mohou stavět ostatní. Někteří se mohou zaměřit na aplikace a dotazovat se na podgrafy, které nasadili jiní. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Vývojáři a síťová ekonomika +### Deprecating & Transferring Subgraphs -Vývojáři jsou v síti klíčovým ekonomickým subjektem, který blokuje GRT, aby podpořil indexování, a hlavně se dotazuje na podgrafy, což je hlavní výměna hodnot v síti. Vývojáři podgrafů také spalují GRT, kdykoli je podgraf aktualizován. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/cs/network/explorer.mdx b/website/pages/cs/network/explorer.mdx index 5d12bb618838..e501104f13e9 100644 --- a/website/pages/cs/network/explorer.mdx +++ b/website/pages/cs/network/explorer.mdx @@ -2,21 +2,35 @@ title: Průzkumník grafů --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Podgrafy -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. +After you just finish deploying and publishing your subgraph in Subgraph Studio, click on the "subgraphs tab” at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![Obrázek průzkumníka 1](/img/Subgraphs-Explorer-Landing.png) -Po kliknutí do podgrafu budete moci testovat dotazy na hřišti a využívat podrobnosti o síti k přijímání informovaných rozhodnutí. 
Budete také moci signalizovat GRT na svém vlastním podgrafu nebo podgrafech ostatních, aby si indexátory uvědomily jeho důležitost a kvalitu. To je velmi důležité, protože signalizace na podgrafu motivuje k jeho indexaci, což znamená, že se v síti objeví a nakonec bude sloužit dotazům. +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![Obrázek průzkumníka 2](/img/Subgraph-Details.png) -Na stránce věnované každému podgrafu se objeví několik podrobností. Patří mezi ně: +On each subgraph’s dedicated page, you can do the following: - Signál/nesignál na podgraf - Zobrazit další podrobnosti, například grafy, ID aktuálního nasazení a další metadata @@ -31,26 +45,32 @@ Na stránce věnované každému podgrafu se objeví několik podrobností. Pat ## Účastníci -Na této kartě získáte přehled o všech osobách, které se podílejí na činnostech sítě, jako jsou indexátoři, delegáti a kurátoři. Níže si podrobně rozebereme, co pro vás jednotlivé karty znamenají. +This section provides a bird' s-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. Indexery ![Obrázek průzkumníka 4](/img/Indexer-Pane.png) -Začněme u indexátorů. Základem protokolu jsou indexery, které sázejí na podgrafy, indexují je a obsluhují dotazy všech, kdo podgrafy spotřebovávají. V tabulce Indexers uvidíte parametry delegace indexerů, jejich podíl, kolik vsadili na jednotlivé podgrafy a kolik vydělali na poplatcích za dotazy a odměnách za indexování. Hlubší ponory níže: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. -- Query Fee Cut - % slevy z poplatku za dotaz, které si indexátor ponechá při rozdělení s delegáty -- Efektivní snížení odměny - indexační snížení odměny použité na fond delegací. Pokud je záporná, znamená to, že indexátor odevzdává část svých odměn. Pokud je kladná, znamená to, že si indexátor ponechává část svých odměn -- Cooldown Remaining - doba, která zbývá do doby, kdy indexátor může změnit výše uvedené parametry delegování. Období Cooldown nastavují indexátory při aktualizaci parametrů delegování. -- Owned - Jedná se o uložený podíl indexátora, který může být zkrácen za škodlivé nebo nesprávné chování. -- Delegated - Podíl z delegátů, který může být přidělen indexátor, ale nemůže být zkrácen -- Allocated - Podíl, který indexátory aktivně alokují k indexovaným podgrafy -- Dostupná kapacita delegování - množství delegovaných podílů, které mohou indexátoři ještě obdržet, než dojde k jejich nadměrnému delegování +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. 
+- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Maximální kapacita delegování - maximální množství delegovaných podílů, které může indexátor produktivně přijmout. Nadměrný delegovaný podíl nelze použít pro alokace nebo výpočty odměn. -- Poplatky za dotazy - jedná se o celkové poplatky, které koncoví uživatelé zaplatili za dotazy z indexátoru za celou dobu +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Odměny indexátorů - jedná se o celkové odměny indexátorů, které indexátor a jeho delegáti získali za celou dobu. Odměny indexátorů jsou vypláceny prostřednictvím vydání GRT. -Indexátoři mohou získat jak poplatky za dotazy, tak odměny za indexování. Funkčně k tomu dochází, když účastníci sítě delegují GRT na indexátor. To indexátorům umožňuje získávat poplatky za dotazování a odměny v závislosti na parametrech indexátoru. Parametry indexování se nastavují kliknutím na pravou stranu tabulky nebo vstupem do profilu indexátora a kliknutím na tlačítko "Delegate". +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. Chcete-li se dozvědět více o tom, jak se stát indexátorem, můžete se podívat do [oficiální dokumentace](/network/indexing) nebo do [průvodců pro indexátory akademie graf.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ Chcete-li se dozvědět více o tom, jak se stát indexátorem, můžete se pod ### 2. Kurátoři -Kurátoři analyzují podgrafy, aby určili, které podgrafy jsou nejkvalitnější. Jakmile kurátor najde potenciálně atraktivní podgraf, může jej kurátorovi signalizovat na jeho vazební křivce. Kurátoři tak dávají indexátorům vědět, které podgrafy jsou vysoce kvalitní a měly by být indexovány. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -Kurátory mohou být členové komunity, konzumenti dat nebo dokonce vývojáři podgrafů, kteří signalizují své vlastní podgrafy tím, že vkládají žetony GRT do vazební křivky. 
Vložením GRT kurátoři razí kurátorské podíly podgrafu. V důsledku toho mají kurátoři nárok vydělat část poplatků za dotazy, které signalizovaný podgraf generuje. Vázací křivka motivuje kurátory ke kurátorství datových zdrojů nejvyšší kvality. Tabulka kurátorů v této části vám umožní vidět: +In the The Curator table listed below you can see: - Datum, kdy kurátor zahájil kurátorskou činnost - Počet uložených GRT @@ -68,34 +92,36 @@ Kurátory mohou být členové komunity, konzumenti dat nebo dokonce vývojáři ![Obrázek průzkumníka 6](/img/Curation-Overview.png) -Pokud se chcete o roli kurátora dozvědět více, můžete tak učinit na následujících odkazech [The Graph Academy](https://thegraph.academy/curators/) nebo [oficiální dokumentace.](/network/curating) +If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. Delegáti -Delegáti hrají klíčovou roli při udržování bezpečnosti a decentralizace sítě Graf. Podílejí se na síti tím, že delegují (tj. "sází") tokeny GRT jednomu nebo více indexátorům. Bez delegátů mají indexátoři menší šanci získat významné odměny a poplatky. Proto se indexátoři snaží přilákat delegáty tím, že jim nabízejí část odměn za indexování a poplatků za dotazy, které získají. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. -Delegáti zase vybírají indexátory na základě řady různých proměnných, jako je výkonnost v minulosti, míra odměny za indexaci a snížení poplatků za dotaz. Svou roli může hrát i pověst v rámci komunity! Doporučujeme se s vybranými indexátory spojit prostřednictvím [Discord Grafu](https://discord.gg/graphprotocol) nebo [Fóra Grafu](https://forum.thegraph.com/)! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! ![Obrázek průzkumníka 7](/img/Delegation-Overview.png) -Tabulka Delegáti vám umožní zobrazit aktivní delegáty v komunitě a také metriky, jako jsou: +In the Delegators table you can see the active Delegators in the community and important metrics: - Počet indexátorů, na které deleguje delegát - Původní delegace delegát - Odměny, které nashromáždili, ale z protokolu si je nevyzvedli - Realizované odměny odstranili z protokolu - Celkové množství GRT, které mají v současné době v protokolu -- Datum, kdy byly naposledy delegovány na +- The date they last delegated -Pokud se chcete dozvědět více o tom, jak se stát delegátem, už nemusíte hledat dál! Stačí, když se vydáte na [oficiální dokumentaci](/network/delegating) nebo [Akademii Graf](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). 
## Síť -V sekci Síť uvidíte globální klíčové ukazatele výkonnosti (KPI) a také možnost přepnout na základ epoch a detailněji analyzovat síťové metriky. Tyto podrobnosti vám poskytnou představu o tom, jak síť funguje v průběhu času. +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. ### Přehled -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - Současný celkový podíl v síti - Rozdělení stake mezi indexátory a jejich delegátory @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - Parametry protokolu, jako je odměna za kurátorství, míra inflace a další - Odměny a poplatky současné epochy -Několik klíčových informací, které stojí za zmínku: +A few key details to note: -- **Poplatky za dotazy představují poplatky generované spotřebiteli** a indexátory si je mohou nárokovat (nebo ne) po uplynutí nejméně 7 epoch (viz níže) poté, co byly jejich příděly vůči podgraf uzavřeny a data, která obsluhovali, byla potvrzena spotřebiteli. -- ** Odměny za indexaci představují množství odměn, které indexátoři nárokovali ze síťové emise během epochy.** Ačkoli je emise protokolu pevně daná, odměny jsou vyraženy až poté, co indexátoři uzavřou své alokace vůči podgraf, které indexovali. Proto se počet odměn v jednotlivých epochách mění (tj. během některých epoch mohli indexátoři kolektivně uzavřít alokace, které byly otevřené mnoho dní). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Obrázek průzkumníka 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ V části Epochy můžete na základě jednotlivých epoch analyzovat metriky, j - Aktivní epocha je ta, ve které indexéry právě přidělují podíl a vybírají poplatky za dotazy - Epoch zúčtování jsou ty, ve kterých se zúčtovávají stavové kanály. To znamená, že indexátoři podléhají krácení, pokud proti nim spotřebitelé zahájí spory. - Distribuční epochy jsou epochy, ve kterých se vypořádávají státní kanály pro epochy a indexátoři si mohou nárokovat slevy z poplatků za dotazy. - - Finalizované epochy jsou epochy, u nichž indexátorům nezbývají žádné slevy z poplatků za dotaz, a jsou tedy finalizované. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![Obrázek průzkumníka 9](/img/Epoch-Stats.png) ## Váš uživatelský profil -Nyní, když jsme si řekli něco o statistikách sítě, přejděme k vašemu osobnímu profilu. Váš osobní profil je místem, kde vidíte svou aktivitu v síti, ať už se jí účastníte jakýmkoli způsobem. 
Vaše kryptopeněženka bude fungovat jako váš uživatelský profil a pomocí uživatelského panelu si ji budete moci prohlédnout: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Přehled profilů -Zde se zobrazují všechny aktuální akce, které jste provedli. Zde také najdete informace o svém profilu, popis a webové stránky (pokud jste si je přidali). +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![Obrázek průzkumníka 10](/img/Profile-Overview.png) ### Tab Podgrafy -Pokud kliknete na kartu podgrafy, zobrazí se vaše publikované podgrafy. Nebudou zde zahrnuty žádné podgrafy nasazené pomocí CLI pro účely testování - podgrafy se zobrazí až po jejich zveřejnění v decentralizované síti. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Obrázek průzkumníka 11](/img/Subgraphs-Overview.png) ### Tab Indexování -Pokud kliknete na kartu Indexování, najdete tabulku se všemi aktivními a historickými alokacemi k dílčím grafy a také grafy, které můžete analyzovat a podívat se na svou minulou výkonnost jako indexátor. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Tato část bude také obsahovat podrobnosti o vašich čistých odměnách za indexování a čistých poplatcích za dotazy. Zobrazí se následující metriky: @@ -158,7 +189,9 @@ Tato část bude také obsahovat podrobnosti o vašich čistých odměnách za i ### Tab Delegování -Delegáti jsou pro síť Graf důležití. Delegát musí využít svých znalostí k výběru indexátora, který mu zajistí zdravou návratnost odměn. Zde najdete podrobnosti o svých aktivních a historických delegacích spolu s metrikami Indexátorů, ke kterým jste delegovali. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. V první polovině stránky vidíte graf delegování a také graf odměn. Vlevo vidíte klíčové ukazatele výkonnosti, které odrážejí vaše aktuální metriky delegování. diff --git a/website/pages/cs/network/indexing.mdx b/website/pages/cs/network/indexing.mdx index 7dbb2e7ced77..0d777471fc43 100644 --- a/website/pages/cs/network/indexing.mdx +++ b/website/pages/cs/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Mnoho informačních panelů vytvořených komunitou obsahuje hodnoty čekajících odměn a lze je snadno zkontrolovat ručně podle následujících kroků: -1. Dotazem na podgraf [mainnet](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) získáte ID všech aktivních alokací: +1. 
Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -113,11 +113,11 @@ Indexátory se mohou odlišovat použitím pokročilých technik pro rozhodován - **Large** - Připraveno k indexování všech aktuálně nepoužívaných příbuzných podgrafů. | Nastavení | Postgres
(CPUs) | Postgres
(paměť v GBs) | Postgres
(disk v TBs) | VMs
(CPUs) | VMs
(paměť v GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Malé | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Střední | 16 | 64 | 2 | 32 | 64 | -| Velký | 72 | 468 | 3.5 | 48 | 184 | +| --------- |:--------------------------:|:---------------------------------:|:--------------------------------:|:---------------------:|:----------------------------:| +| Malé | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Střední | 16 | 64 | 2 | 32 | 64 | +| Velký | 72 | 468 | 3.5 | 48 | 184 | ### Jaká jsou základní bezpečnostní opatření, která by měl indexátor přijmout? @@ -149,20 +149,20 @@ Poznámka: Pro podporu agilního škálování se doporučuje oddělit dotazová #### Uzel Graf -| Port | Účel | Trasy | CLI Argument | Proměnná prostředí | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(pro dotazy podgrafy) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(pro odběry podgrafů) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(pro správu nasazení) | / | --admin-port | - | -| 8030 | Stav indexování podgrafů API | /graphql | --index-node-port | - | -| 8040 | Metriky Prometheus | /metrics | --metrics-port | - | +| Port | Účel | Trasy | CLI Argument | Proměnná prostředí | +| ---- | ---------------------------------------------------- | ---------------------------------------------------- | ----------------- | ------------------ | +| 8000 | GraphQL HTTP server
(pro dotazy podgrafy) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(pro odběry podgrafů) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(pro správu nasazení) | / | --admin-port | - | +| 8030 | Stav indexování podgrafů API | /graphql | --index-node-port | - | +| 8040 | Metriky Prometheus | /metrics | --metrics-port | - | #### Služba Indexer -| Port | Účel | Trasy | CLI Argument | Proměnná prostředí | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(pro placené dotazy na podgrafy) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Metriky Prometheus | /metrics | --metrics-port | - | +| Port | Účel | Trasy | CLI Argument | Proměnná prostředí | +| ---- | --------------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(pro placené dotazy na podgrafy) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Metriky Prometheus | /metrics | --metrics-port | - | #### Agent indexátoru @@ -545,7 +545,7 @@ Navrhovaným nástrojem pro interakci s **Indexer Management API** je **Indexer - `možná pravidla indexování grafů [možnosti] ` - Nastaví `decisionBasis` pro nasazení na `rules`, takže agent Indexer bude při rozhodování o indexování tohoto nasazení používat pravidla indexování. -- `Akce indexátoru grafu získají [možnosti] ` - Získá jednu nebo více akcí pomocí `all` nebo ponechá `action-id` prázdné pro získání všech akcí. Přídavný argument `--status` lze použít pro vypsání všech akcí určitého stavu. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Akce přidělení fronty diff --git a/website/pages/cs/network/overview.mdx b/website/pages/cs/network/overview.mdx index 0060dfc506a4..aeb16e0d488e 100644 --- a/website/pages/cs/network/overview.mdx +++ b/website/pages/cs/network/overview.mdx @@ -2,14 +2,20 @@ title: Přehled sítě --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Přehled +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Token Economics](/img/Network-roles@2x.png) -Pro zajištění ekonomické bezpečnosti sítě Graf a integrity dotazovaných dat účastníci sázejí a používají graf tokeny ([GRT](/tokenomics)). GRT je pracovní užitkový token, který má hodnotu ERC-20 a slouží k přidělování zdrojů v síti. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/cs/new-chain-integration.mdx b/website/pages/cs/new-chain-integration.mdx index 1c5466566491..6fd34e39cbac 100644 --- a/website/pages/cs/new-chain-integration.mdx +++ b/website/pages/cs/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integrace nových sítí +title: New Chain Integration --- -Uzel grafu může v současné době indexovat data z následujících typů řetězců: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, prostřednictvím [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, prostřednictvím [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, prostřednictvím [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -Pokud máte zájem o některý z těchto řetězců, je integrace otázkou konfigurace a testování uzlu Graf. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -Pokud máte zájem o jiný typ řetězce, je třeba vytvořit novou integraci s Uzel Graf. Naším doporučeným přístupem je vytvoření nového Firehose pro daný řetězec a následná integrace tohoto Firehose s Uzel Graf. Více informací naleznete níže. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -Pokud je blockchain ekvivalentní EVM a klient/uzel vystavuje standardní EVM JSON-RPC API, měl by být Uzel Grafu schopen indexovat nový řetězec. Další informace naleznete v části [Testování EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Testování EVM JSON-RPC -U řetězců, které nejsou založeny na EvM, musí Uzel Graf přijímat data blockchainu prostřednictvím gRPC a známých definic typů. To lze provést prostřednictvím [Firehose](firehose/), nové technologie vyvinuté společností [StreamingFast](https://www.streamingfast.io/), která poskytuje vysoce škálovatelné řešení indexování blockchainu pomocí přístupu založeného na souborech a streamování. Pokud potřebujete s vývojem Firehose pomoci, obraťte se na tým [StreamingFast](mailto:integrations@streamingfast.io/). +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Rozdíl mezi EVM JSON-RPC a Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -Zatímco pro podgrafy jsou tyto dva typy vhodné, pro vývojáře, kteří chtějí vytvářet pomocí [Substreams](substreams/), jako je vytváření [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/), je vždy vyžadován Firehose. 
Firehose navíc umožňuje vyšší rychlost indexování ve srovnání s JSON-RPC. +### 2. Firehose Integration -Noví integrátoři řetězců EVM mohou také zvážit přístup založený na technologii Firehose vzhledem k výhodám substreamů a jejím masivním možnostem paralelizovaného indexování. Podpora obojího umožňuje vývojářům zvolit si mezi vytvářením substreamů nebo podgrafů pro nový řetězec. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **POZNÁMKA**: Integrace založená na Firehose pro řetězce EVM bude stále vyžadovat, aby indexátory spustily archivační uzel RPC řetězce, aby správně indexovaly podgrafy. Důvodem je neschopnost Firehose poskytovat stav inteligentních kontraktů typicky přístupný metodou `eth_call` RPC. (Stojí za to připomenout, že eth_call je [pro vývojáře není dobrou praxí](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Testování EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -Aby mohl uzel Grafu přijímat data z řetězce EVM, musí uzel RPC zpřístupnit následující metody EVM JSON RPC: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. 
-- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(volitelně vyžadováno pro Uzel Graf, aby podporoval obsluhu volání)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Config uzlu grafu +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Začněte přípravou místního prostředí** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Config uzlu grafu + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Upravte [tento řádek](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) tak, aby obsahoval nový název sítě a URL adresu EVM kompatibilní s JSON RPC - > Samotný název env var neměňte. Musí zůstat `ethereum`, i když je název sítě jiný. -3. Spusťte uzel IPFS nebo použijte ten, který používá Graf: https://api.thegraph.com/ipfs/ -**Testování integrace lokálním nasazením podgrafu** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Vytvořte jednoduchý příklad podgrafu. Některé možnosti jsou uvedeny níže: - 1. Předpřipravený chytrá smlouva [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) a podgraf je dobrým výchozím bodem - 2. Zavedení lokálního podgrafu z jakéhokoli existujícího chytrého kontraktu nebo vývojového prostředí Solidity [pomocí Hardhat s plugin Graph](https://github.com/graphprotocol/hardhat-graph) -3. Upravte výsledný soubor `subgraph.yaml` změnou názvu `dataSources.network` na stejný, který byl dříve předán uzlu Graf. -4. Vytvořte podgraf v uzlu Graf: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Zveřejněte svůj podgraf v uzlu Graf: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. 
It must remain `ethereum` even if the network name is different. -Pokud nedošlo k chybám, měl by uzel Graf synchronizovat nasazený podgraf. Dejte mu čas na synchronizaci a poté odešlete několik dotazů GraphQL na koncový bod API vypsaný v protokolech. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integrace nového řetězce s podporou služby Firehose +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Vytvořte jednoduchý příklad podgrafu. Některé možnosti jsou uvedeny níže: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Pokud nedošlo k chybám, měl by uzel Graf synchronizovat nasazený podgraf. Dejte mu čas na synchronizaci a poté odešlete několik dotazů GraphQL na koncový bod API vypsaný v protokolech. -Integrace nového řetězce je možná také pomocí přístupu Firehose. To je v současné době nejlepší možnost pro řetězce, které nejsou součástí EVM, a požadavek na podporu substreamů. Další dokumentace se zaměřuje na to, jak Firehose funguje, přidání podpory Firehose pro nový řetězec a jeho integraci s Uzel Graf. Doporučená dokumentace pro integrátory: +## Substreams-powered Subgraphs -1. [General docs on Firehose](firehose/) -2. [Přidání podpory Firehose pro nový řetězec](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrace graf uzlu s novým řetězcem přes Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/cs/operating-graph-node.mdx b/website/pages/cs/operating-graph-node.mdx index b6067883a47a..9a720348c8c5 100644 --- a/website/pages/cs/operating-graph-node.mdx +++ b/website/pages/cs/operating-graph-node.mdx @@ -77,13 +77,13 @@ Kompletní příklad konfigurace Kubernetes naleznete v úložišti [indexer](ht Když je Graf Uzel spuštěn, zpřístupňuje následující ports: -| Port | Účel | Trasy | CLI Argument | Proměnná prostředí | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(pro dotazy podgrafy) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(pro odběry podgrafů) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(pro správu nasazení) | / | --admin-port | - | -| 8030 | Stav indexování podgrafů API | /graphql | --index-node-port | - | -| 8040 | Metriky Prometheus | /metrics | --metrics-port | - | +| Port | Účel | Trasy | CLI Argument | Proměnná prostředí | +| ---- | ---------------------------------------------------- | ---------------------------------------------------- | ----------------- | ------------------ | +| 8000 | GraphQL HTTP server
(pro dotazy podgrafy) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(pro odběry podgrafů) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(pro správu nasazení) | / | --admin-port | - | +| 8030 | Stav indexování podgrafů API | /graphql | --index-node-port | - | +| 8040 | Metriky Prometheus | /metrics | --metrics-port | - | > **Důležité**: Dávejte pozor na veřejné vystavování portů - **administrační porty** by měly být uzamčeny. To se týká i koncového bodu JSON-RPC uzlu Graf. diff --git a/website/pages/cs/querying/graphql-api.mdx b/website/pages/cs/querying/graphql-api.mdx index e1c7f3e566f7..16b65fe32b1e 100644 --- a/website/pages/cs/querying/graphql-api.mdx +++ b/website/pages/cs/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Dotazy +## What is GraphQL? -Ve schématu podgrafu definujete typy nazvané `Entity`. Pro každý typ `Entity` bude na nejvyšší úrovni typu `Query` vygenerováno pole `entity` a `entity`. Všimněte si, že `dotaz` nemusí být při použití Grafu zahrnut na vrcholu `graphql` dotazu. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Příklady @@ -21,7 +29,7 @@ Dotaz na jednu entitu `Token` definovanou ve vašem schématu: } ``` -> **Poznámka:** Při dotazování na jednu entitu je pole `id` povinné a musí to být řetězec. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Dotaz na všechny entity `Token`: @@ -36,7 +44,10 @@ Dotaz na všechny entity `Token`: ### Třídění -Při dotazování na kolekci lze parametr `orderBy` použít k seřazení podle určitého atributu. Kromě toho lze pomocí parametru `orderDirection` určit směr řazení, `asc` pro vzestupné nebo `desc` pro sestupné. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### Příklad @@ -53,7 +64,7 @@ Při dotazování na kolekci lze parametr `orderBy` použít k seřazení podle Od verze Uzel grafu [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) lze entity třídit na základě vnořených entit. -V následujícím příkladu seřadíme tokeny podle jména jejich vlastníka: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ V následujícím příkladu seřadíme tokeny podle jména jejich vlastníka: ### Stránkování -Při dotazování na kolekci lze parametr `První` použít pro stránkování od začátku kolekce. Stojí za zmínku, že výchozí řazení je podle ID ve vzestupném alfanumerickém pořadí, nikoli podle času vytvoření. - -Dále lze parametr `skip` použít k přeskočení entit a stránkování, např. `first:100` zobrazí prvních 100 entit a `first:100, skip:100` zobrazí dalších 100 entit. +When querying a collection, it's best to: -Dotazy by se měly vyvarovat používání velmi velkých hodnot `přeskočit`, protože mají obecně nízkou výkonnost. 
Pro získání velkého počtu položek je mnohem lepší procházet entity na základě atributu, jak je uvedeno v posledním příkladu. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### Příklad s použitím `first` @@ -106,7 +118,7 @@ Dotaz na 10 entit `Token`, posunutých o 10 míst od začátku kolekce: #### Příklad s použitím `first` a `id_ge` -Pokud klient potřebuje získat velký počet entit, je mnohem výkonnější založit dotazy na atributu a filtrovat podle něj. Klient by například pomocí tohoto dotazu získal velký počet tokenů: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -Poprvé by odeslal dotaz s `lastID = ""` a při dalších požadavcích by nastavil `lastID` na atribut `id` poslední entity v předchozím požadavku. Tento přístup bude fungovat podstatně lépe než použití rostoucích hodnot `skip`. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtrování -Pomocí parametru `where` můžete v dotazech filtrovat různé vlastnosti. V rámci parametru `kde` můžete filtrovat podle více hodnot. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Příklad s použitím `where` @@ -155,7 +168,7 @@ Pro porovnání hodnot můžete použít přípony jako `_gt`, `_lte`: #### Příklad pro filtrování bloků -Entity můžete filtrovat také pomocí `_change_block(number_gte: Int)` - filtruje entity, které byly aktualizovány v zadaném bloku nebo po něm. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. To může být užitečné, pokud chcete načíst pouze entity, které se změnily například od posledního dotazování. Nebo může být užitečná pro zkoumání nebo ladění změn entit v podgrafu (v kombinaci s blokovým filtrem můžete izolovat pouze entity, které se změnily v určitém bloku). @@ -193,7 +206,7 @@ Od verze Uzel grafu [`v0.30.0`](https://github.com/graphprotocol/graph-node/rele ##### Operátor `AND` -V následujícím příkladu filtrujeme výzvy s `outcome` `succeeded` a `number` větším nebo rovným `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. ```graphql { @@ -208,7 +221,7 @@ V následujícím příkladu filtrujeme výzvy s `outcome` `succeeded` a `number ``` > **Syntaktický cukr:** Výše uvedený dotaz můžete zjednodušit odstraněním operátoru `a` předáním podvýrazu odděleného čárkami. 
-> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ V následujícím příkladu filtrujeme výzvy s `outcome` `succeeded` a `number ##### Operátor `OR` -V následujícím příkladu filtrujeme výzvy s `outcome` `succeeded` nebo `number` větším nebo rovným `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) Můžete se dotazovat na stav entit nejen pro nejnovější blok, což je výchozí nastavení, ale také pro libovolný blok v minulosti. Blok, u kterého má dotaz proběhnout, lze zadat buď číslem bloku, nebo jeho blokovým hashem, a to tak, že do polí toplevel dotazů zahrnete argument `blok`. -Výsledek takového dotazu se v průběhu času nemění, tj. dotaz na určitý minulý blok vrátí stejný výsledek bez ohledu na to, kdy je proveden, s výjimkou toho, že pokud se dotazujete na blok velmi blízko hlavy řetězce, výsledek se může změnit, pokud se ukáže, že tento blok není v hlavním řetězci a řetězec se reorganizuje. Jakmile lze blok považovat za konečný, výsledek dotazu se nezmění. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Všimněte si, že současná implementace stále podléhá určitým omezením, která by mohla tyto záruky porušit. Implementace nemůže vždy zjistit, že daný blokový hash vůbec není v hlavním řetězci, nebo že výsledek dotazu podle blokového hashe na blok, který ještě nelze považovat za finální, může být ovlivněn reorganizací bloku probíhající současně s dotazem. Neovlivňují výsledky dotazů podle blokové hash, pokud je blok finální a je známo, že je v hlavním řetězci. [Toto Problém ](https://github.com/graphprotocol/graph-node/issues/1405) podrobně vysvětluje, jaká jsou tato omezení. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. #### Příklad @@ -322,12 +335,12 @@ Fulltextové vyhledávací dotazy mají jedno povinné pole `text` pro zadání Operátory fulltextového vyhledávání: -| Symbol | Operátor | Popis | -| --- | --- | --- | -| `&` | `a` | Pro kombinaci více vyhledávacích výrazů do filtru pro entity, které obsahují všechny zadané výrazy | -| | | `Nebo` | Dotazy s více hledanými výrazy oddělenými operátorem nebo vrátí všechny entity, které odpovídají některému z uvedených výrazů | -| `<->` | `Follow by` | Zadejte vzdálenost mezi dvěma slovy. 
| -| `:*` | `Prefix` | Pomocí předponového výrazu vyhledejte slova, jejichž předpona se shoduje (vyžadovány 2 znaky) | +| Symbol | Operátor | Popis | +| ----------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------- | +| `&` | `a` | Pro kombinaci více vyhledávacích výrazů do filtru pro entity, které obsahují všechny zadané výrazy | +| | | `Nebo` | Dotazy s více hledanými výrazy oddělenými operátorem nebo vrátí všechny entity, které odpovídají některému z uvedených výrazů | +| `<->` | `Follow by` | Zadejte vzdálenost mezi dvěma slovy. | +| `:*` | `Prefix` | Pomocí předponového výrazu vyhledejte slova, jejichž předpona se shoduje (vyžadovány 2 znaky) | #### Příklady @@ -376,11 +389,11 @@ Uzel grafu implementuje ověření [založené na specifikacích](https://spec.g ## Schema -Schéma datového zdroje - tj. typy entit, hodnoty a vztahy, které jsou k dispozici pro dotazování - jsou definovány pomocí [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -Schéma GraphQL obecně definují kořenové typy pro `dotazy`, `odběry` a `mutace`. Graf podporuje pouze `dotazy`. Kořenový typ `Dotaz` pro váš podgraf je automaticky vygenerován ze schématu GraphQL, které je součástí manifestu podgrafu. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Poznámka:** Naše API nevystavuje mutace, protože se očekává, že vývojáři budou vydávat transakce přímo proti podkladovému blockchainu ze svých aplikací. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### Entities diff --git a/website/pages/cs/querying/querying-best-practices.mdx b/website/pages/cs/querying/querying-best-practices.mdx index f2fb16bcf8b7..ee710d28bc25 100644 --- a/website/pages/cs/querying/querying-best-practices.mdx +++ b/website/pages/cs/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: Osvědčené postupy dotazování --- -Graf poskytuje decentralizovaný způsob dotazování na data z blockchainů. +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. -Data sítě Graf jsou zpřístupněna prostřednictvím GraphQL API, což usnadňuje dotazování na data pomocí jazyka GraphQL. - -Tato stránka vás provede základními pravidly jazyka GraphQL a osvědčenými postupy pro dotazy GraphQL. +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQL je jazyk a sada konvencí, které se přenášejí přes protokol HTTP. To znamená, že se můžete dotazovat na GraphQL API pomocí standardního `fetch` (nativně nebo pomocí `@whatwg-node/fetch` nebo `isomorphic-fetch`). 
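+As a minimal illustration (not part of the official tooling), the sketch below sends a query with plain `fetch`. The endpoint URL and the `tokens` field are placeholders; substitute your own subgraph's query URL and an entity type from your schema:
+
+```tsx
+// Minimal sketch: querying a subgraph endpoint over HTTP with the standard fetch API.
+// The URL below is a placeholder in the Subgraph Studio format; replace it with your own query endpoint.
+const SUBGRAPH_URL = 'https://api.studio.thegraph.com/query/<ID>/<SUBGRAPH_NAME>/<VERSION>'
+
+async function fetchTokens(): Promise<void> {
+  const response = await fetch(SUBGRAPH_URL, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    // GraphQL over HTTP: the query string travels in a JSON body.
+    body: JSON.stringify({ query: '{ tokens(first: 5) { id } }' }),
+  })
+
+  // GraphQL responses contain `data` and, when something went wrong, `errors`.
+  const { data, errors } = await response.json()
+  if (errors) {
+    console.error(errors)
+    return
+  }
+  console.log(data)
+}
+
+fetchTokens()
+```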
-Jak je však uvedeno v části ["Dotazování z aplikace"](/querying/querying-from-an-application), doporučujeme používat našeho `graf-klienta`, který podporuje jedinečné funkce, jako např: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - Manipulace s podgrafy napříč řetězci: Dotazování z více podgrafů v jednom dotazu - [Automatické sledování](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() Další alternativy klienta GraphQL jsou popsány v ["Dotazování z aplikace"](/querying/querying-from-an-application). -Nyní, když jsme se seznámili se základními pravidly syntaxe dotazů GraphQL, se podíváme na osvědčené postupy psaní dotazů GraphQL. - --- ## Osvědčené postupy @@ -164,11 +160,11 @@ To přináší **mnoho výhod**: - **Proměnné lze ukládat do mezipaměti** na úrovni serveru - **Nástroje mohou staticky analyzovat dotazy** (více v následujících kapitolách) -**Poznámka: Jak podmíněně zahrnout pole do statických dotazů** +### How to include fields conditionally in static queries -Pole `vlastník` můžeme chtít zahrnout pouze při splnění určité podmínky. +You might want to include the `owner` field only on a particular condition. -K tomu můžeme využít direktivu `@include(if:...)` takto: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Poznámka: Opačným direktivou je `@skip(if: ...)`. +> Poznámka: Opačným direktivou je `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL se proslavil sloganem „Požádejte o to, co chcete“. Z tohoto důvodu neexistuje způsob, jak v GraphQL získat všechna dostupná pole, aniž byste je museli vypisovat jednotlivě. -Při dotazování na GraphQL vždy myslete na to, abyste dotazovali pouze pole, která budou skutečně použita. - -Častou příčinou nadměrného načítání jsou kolekce entit. Ve výchozím nastavení dotazy načtou 100 entit v kolekci, což je obvykle mnohem více, než kolik se skutečně použije, např. pro zobrazení uživateli. Dotazy by proto měly být téměř vždy nastaveny explicitně jako první a měly by zajistit, aby načítaly pouze tolik entit, kolik skutečně potřebují. To platí nejen pro kolekce nejvyšší úrovně v dotazu, ale ještě více pro vnořené kolekce entit. +- Při dotazování na GraphQL vždy myslete na to, abyste dotazovali pouze pole, která budou skutečně použita. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. Například v následujícím dotazu: @@ -337,8 +332,8 @@ query { Taková opakovaná pole (`id`, `active`, `status`) přinášejí mnoho problémů: -- hůře čitelné pro rozsáhlejší dotazy -- při použití nástrojů, které generují typy TypeScript na základě dotazů (_více o tom v poslední části_), budou `newDelegate` a `oldDelegate` mít za následek dvě samostatné inline rozhraní. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. 
Přepracovaná verze dotazu by byla následující: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -Použití GraphQL `fragment` zlepší čitelnost (zejména v měřítku), ale také povede k lepšímu generování typůTypeScript. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. Při použití nástroje pro generování typů vygeneruje výše uvedený dotaz vhodný typ `DelegateItemFragment` (_viz poslední část "Nástroje"_). ### Co dělat a nedělat s fragmenty GraphQL -**Základem fragmentu musí být typ** +### Základem fragmentu musí být typ Fragment nemůže být založen na nepoužitelném typu, zkrátka **na typu, který nemá pole**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` je **skalární** (nativní "jednoduchý" typ), který nelze použít jako základ fragmentu. -**Jak šířit fragment** +#### Jak šířit fragment Fragmenty jsou definovány na konkrétních typech a podle toho by se měly používat v dotazech. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { Fragment typu `Vote` zde není možné šířit. -**Definice fragmentu jako atomické obchodní jednotky dat** +#### Definice fragmentu jako atomické obchodní jednotky dat -Fragment GraphQL musí být definován na základě jejich použití. +GraphQL `Fragment`s must be defined based on their usage. Pro většinu případů použití stačí definovat jeden fragment pro každý typ (v případě opakovaného použití polí nebo generování typů). -Zde je praktický postup pro použití Fragmentu: +Here is a rule of thumb for using fragments: -- pokud se v dotazu opakují pole stejného typu, seskupte je do fragmentu -- pokud se opakují podobná, ale ne stejná pole, vytvořte více fragmentů, např: +- When fields of the same type are repeated in a query, group them in a `Fragment`. +- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## Základní nástroje +## The Essential Tools ### Weboví průzkumníci GraphQL @@ -473,11 +468,11 @@ To vám umožní **odhalit chyby i bez testování dotazů** na hřišti nebo je Rozšíření [GraphQL VSCode](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) je vynikajícím doplňkem vašeho vývojového pracovního postup: -- zvýraznění syntaxe -- návrhy automatického dokončování -- validace proti schéma -- snippets -- přejít na definici fragmentů a vstupních typů +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types Pokud používáte `graphql-eslint`, je rozšíření [ESLint VSCode](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) nutností pro správnou vizualizaci chyb a varování v kódu. @@ -485,9 +480,9 @@ Pokud používáte `graphql-eslint`, je rozšíření [ESLint VSCode](https://ma Zásuvný modul [JS GraphQL](https://plugins.jetbrains.com/plugin/8097-graphql/) výrazně zlepší vaše zkušenosti při práci s GraphQL tím, že poskytuje: -- zvýraznění syntaxe -- návrhy automatického dokončování -- validace proti schématu -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -Další informace najdete v tomto [článku o WebStormu](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/), který představuje všechny hlavní funkce zásuvného. 
+For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/cs/quick-start.mdx b/website/pages/cs/quick-start.mdx index 458443a8e3dd..c645261ec03c 100644 --- a/website/pages/cs/quick-start.mdx +++ b/website/pages/cs/quick-start.mdx @@ -2,24 +2,18 @@ title: Rychlé Začít --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Ujistěte se, že váš podgraf bude indexovat data z [podporované sítě](/developing/supported-networks). - -Tato příručka je napsána za předpokladu, že máte: +## Prerequisites for this guide - Kryptopeněženka -- Adresa chytrého kontraktu v síti podle vašeho výběru - -## 1. Vytvoření podgrafu v Subgraph Studio - -Přejděte do [Subgraph Studio](https://thegraph.com/studio/) a připojte peněženku. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Nainstalujte Graph CLI +### 1. Nainstalujte Graph CLI -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. V místním počítači spusťte jeden z následujících příkazů: @@ -35,133 +29,161 @@ Použitím [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> Příkazy pro konkrétní podgraf najdete na stránce podgrafu v [Subgraph Studio](https://thegraph.com/studio/). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +Příkazy pro konkrétní podgraf najdete na stránce podgrafu v [Subgraph Studio](https://thegraph.com/studio/). 
-Při inicializaci podgrafu vás nástroj CLI požádá o následující informace: +When you initialize your subgraph, the CLI will ask you for the following information: -- Protokol: vyberte protokol, ze kterého bude váš podgraf indexovat data. -- Slug podgrafu: vytvořte název podgrafu. Váš podgraf slug je identifikátor vašeho podgrafu. -- Adresář pro vytvoření podgrafu: vyberte místní adresář. -- Ethereum síť (nepovinné): možná budete muset zadat, ze které sítě kompatibilní s EVM bude váš subgraf indexovat data. -- Adresa zakázky: Vyhledejte adresu chytré smlouvy, ze které se chcete dotazovat na data. -- ABI: Pokud se ABI nevyplňuje automaticky, je třeba jej zadat ručně jako soubor JSON. -- Počáteční blok: Doporučuje se zadat počáteční blok, abyste ušetřili čas, zatímco váš subgraf indexuje data blockchainu. Počáteční blok můžete vyhledat tak, že najdete blok, ve kterém byl váš kontrakt nasazen. -- Název smlouvy: zadejte název své smlouvy. -- Indexovat události smlouvy jako entity: doporučujeme nastavit tuto hodnotu na true, protože se automaticky přidá mapování do vašeho subgrafu pro každou emitovanou událost -- Přidat další smlouvu(nepovinné): můžete přidat další smlouvu +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. Na následujícím snímku najdete příklad toho, co můžete očekávat při inicializaci podgrafu: ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -Předchozí příkazy vytvořily podgraf lešení, který můžete použít jako výchozí bod pro sestavení podgrafu. Při provádění změn v podgrafu budete pracovat především se třemi soubory: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - Manifest definuje, jaké datové zdroje budou vaše podgrafy indexovat. -- Schéma (`schema.graphql`) - Schéma GraphQL definuje, jaká data chcete z podgrafu získat. -- AssemblyScript Mapování (`mapping.ts`) - Jedná se o kód, který převádí data z vašich datových zdrojů na entity definované ve schématu. +When making changes to the subgraph, you will mainly work with three files: -Další informace o zápisu podgrafu naleznete v části [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. 
Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -Jakmile je podgraf napsán, spusťte následující příkazy: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. Jakmile je podgraf napsán, spusťte následující příkazy: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- Ověřte a nasaďte svůj podgraf. Klíč k nasazení najdete na stránce Subgraph ve Studiu Subgraph. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -Budete vyzváni k zadání štítku verze. Důrazně se doporučuje použít [semver](https://semver.org/) pro označení verzí jako `0.0.1`. Přesto můžete jako verzi zvolit libovolný řetězec, například:`v1`, `version1`, `asdf`. - -## 6. Otestujte svůj podgraf - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -V protokolech se dozvíte, zda se v podgrafu vyskytly nějaké chyby. Protokoly funkčního podgrafu budou vypadat takto: - -![Subgraph logs](/img/subgraph-logs-image.png) - -Pokud podgraf selhává, můžete se na stav podgrafu zeptat pomocí nástroje GraphiQL Playground. Všimněte si, že můžete využít níže uvedený dotaz a zadat ID nasazení vašeho podgrafu. V tomto případě je `Qm...` ID nasazení (které najdete na stránce podgrafu v části **Podrobnosti**). Níže uvedený dotaz vás informuje o selhání podgrafu, takže můžete podle toho provádět ladění: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Vyberte síť, do které chcete podgraf publikovat. Doporučujeme publikovat podgrafy do sítě Arbitrum One, abyste mohli využít výhod [vyšší rychlost transakcí a nižší náklady na plyn](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. 
The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -Pro vyšší kvalitu služeb a silnější redundanci můžete svůj podgraf upravit tak, aby přilákal více indexátorů. V době psaní tohoto článku je doporučeno, abyste svůj podgraf kurátorovali s alespoň 3,000 GRT, abyste zajistili, že 3-5 dalších Indexerů začne obsluhovat dotazy na vašem podgrafu. +### 7. Publish your subgraph to The Graph Network -Abyste ušetřili náklady na benzín, můžete svůj subgraf kurátorovat ve stejné transakci, v níž jste ho publikovali, a to výběrem tohoto tlačítka při publikování subgrafu do decentralizované sítě The Graph: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Přidání signálu do podgrafu + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -Nyní se můžete dotazovat na svůj podgraf odesláním dotazů GraphQL na adresu URL dotazu podgrafu, kterou najdete kliknutím na tlačítko dotazu. +### 8. Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -Další informace o dotazování na data z podgrafu najdete [zde](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). 
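+As an illustrative sketch only, the snippet below posts a query to that URL from TypeScript. `QUERY_URL` is a placeholder for the Query URL shown in Subgraph Studio, and `exampleEntities` stands in for an entity type defined in your own `schema.graphql`:
+
+```tsx
+// Hypothetical example: replace QUERY_URL with your subgraph's Query URL.
+const QUERY_URL = '<YOUR_SUBGRAPH_QUERY_URL>'
+
+async function querySubgraph(): Promise<void> {
+  const response = await fetch(QUERY_URL, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({
+      // Variables keep the query itself static, as recommended in the querying best practices.
+      query: 'query ($first: Int!) { exampleEntities(first: $first) { id } }',
+      variables: { first: 5 },
+    }),
+  })
+
+  const result = await response.json()
+  console.log(JSON.stringify(result, null, 2))
+}
+
+querySubgraph()
+```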
diff --git a/website/pages/cs/release-notes/assemblyscript-migration-guide.mdx b/website/pages/cs/release-notes/assemblyscript-migration-guide.mdx index d1b9eb00bc04..e59516868de6 100644 --- a/website/pages/cs/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/cs/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - Pokud jste použili stínování proměnných, musíte duplicitní proměnné přejmenovat. - ### Nulová srovnání - Při aktualizaci podgrafu může někdy dojít k těmto chybám: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - Pro vyřešení můžete jednoduše změnit příkaz `if` na něco takového: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - Chcete-li tento problém vyřešit, můžete vytvořit proměnnou pro přístup k této vlastnosti, aby překladač mohl provést kouzlo kontroly nulovatelnosti: ```typescript diff --git a/website/pages/cs/sps/introduction.mdx b/website/pages/cs/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/cs/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. + +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/cs/sps/triggers-example.mdx b/website/pages/cs/sps/triggers-example.mdx new file mode 100644 index 000000000000..daf41320ec8d --- /dev/null +++ b/website/pages/cs/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## Požadavky + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. 
+- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+  const input: protoEvents = Protobuf.decode<protoEvents>(bytes, protoEvents.decode)
+
+  for (let i = 0; i < input.data.length; i++) {
+    const event = input.data[i]
+
+    if (event.transfer != null) {
+      let entity_id: string = `${event.txnId}-${i}`
+      const entity = new MyTransfer(entity_id)
+      entity.amount = event.transfer!.instruction!.amount.toString()
+      entity.source = event.transfer!.accounts!.source
+      entity.designation = event.transfer!.accounts!.destination
+
+      if (event.transfer!.accounts!.signer!.single != null) {
+        entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+      } else if (event.transfer!.accounts!.signer!.multisig != null) {
+        entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+      }
+      entity.save()
+    }
+  }
+}
+```
+
+## Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+
+## Závěr
+
+You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/pages/cs/sps/triggers.mdx b/website/pages/cs/sps/triggers.mdx
new file mode 100644
index 000000000000..ed19635d4768
--- /dev/null
+++ b/website/pages/cs/sps/triggers.mdx
@@ -0,0 +1,37 @@
+---
+title: Substreams Triggers
+---
+
+Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework.
+
+> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container.
+
+The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+
+```tsx
+import { log } from '@graphprotocol/graph-ts'
+// Transaction is the entity type generated from schema.graphql; the path assumes the standard codegen layout.
+import { Transaction } from '../generated/schema'
+
+export function handleTransactions(bytes: Uint8Array): void {
+  let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+  if (transactions.length == 0) {
+    log.info('No transactions found', [])
+    return
+  }
+
+  for (let i = 0; i < transactions.length; i++) {
+    // 2.
+    let transaction = transactions[i]
+
+    let entity = new Transaction(transaction.hash) // 3.
+    entity.from = transaction.from
+    entity.to = transaction.to
+    entity.save()
+  }
+}
+```
+
+Here's what you’re seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; it can then be used like any other AssemblyScript object
+2. 
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/cs/substreams.mdx b/website/pages/cs/substreams.mdx index 7cc86d6a0f04..d4cdca190e4a 100644 --- a/website/pages/cs/substreams.mdx +++ b/website/pages/cs/substreams.mdx @@ -4,9 +4,11 @@ title: Substreams ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## Jak funguje Substreams ve 4 krocích @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Rozšiřte své znalosti - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/cs/sunrise.mdx b/website/pages/cs/sunrise.mdx index 157bab9d09e9..75076fb51020 100644 --- a/website/pages/cs/sunrise.mdx +++ b/website/pages/cs/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Časté dotazy po východu slunce + aktualizace na síť Graf --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Poznámka: Východ slunce decentralizovaných dat skončil 12. června 2024. -## Jaký je východ slunce decentralizovaných dat? +## Jaký byl úsvit decentralizovaných dat? -Východ slunce decentralizovaných dat je iniciativa, za kterou stojí společnost Edge & Node. Jejím cílem je umožnit vývojářům podgrafů bezproblémový přechod na decentralizovanou síť Graf. +Úsvit decentralizovaných dat byla iniciativa, kterou vedla společnost Edge & Node. 
Tato iniciativa umožnila vývojářům podgrafů bezproblémově přejít na decentralizovanou síť Graf. -Tento plán vychází z mnoha předchozích změn v ekosystému Graf, včetně vylepšeného indexeru pro obsluhu dotazů na nově publikované podgrafy a možnosti integrovat do Graf nové blockchainové sítě. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### Jaké jsou fáze východu Slunce? +### Co se stalo s hostovanou službou? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +Koncové body dotazů hostované služby již nejsou k dispozici a vývojáři nemohou v hostované službě nasadit nové podgrafy. -## Aktualizace podgrafů do sítě grafů +Během procesu aktualizace mohli vlastníci podgrafů hostovaných služeb aktualizovat své podgrafy na síť Graf. Vývojáři navíc mohli nárokovat automatickou aktualizaci podgrafů. -### Kdy přestanou být podgrafy hostovaných služeb k dispozici? +### Měla tato aktualizace vliv na Podgraf Studio? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +Ne, na Podgraf Studio neměl Sunrise vliv. Podgrafy byly okamžitě k dispozici pro dotazování, a to díky aktualizačnímu indexeru, který využívá stejnou infrastrukturu jako hostovaná služba. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Proč byly podgrafy zveřejněny na Arbitrum, začalo indexovat jinou síť? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Bude můj podgraf hostované služby podporován v síti Graf? - -Ano, nástroj Indexer pro upgrade bude automaticky podporovat všechny podgrafy hostovaných služeb publikované v síti Graf pro bezproblémový upgrade. - -### Jak mohu aktualizovat podgraf hostované služby? - -> Poznámka: Upgrade podgrafu na síť grafů nelze vrátit zpět. - - - -Chcete-li aktualizovat podgraf hostované služby, můžete navštívit ovládací panel podgrafu na adrese [hostovaná služba](https://thegraph.com/hosted-service). - -1. Vyberte podgraf nebo podgrafy, které chcete aktualizovat. -2. Vyberte přijímající peněženku (peněženku, která se stane vlastníkem podgrafu). -3. Klikněte na tlačítko "Upgrade". - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. Once you have generated an API key, you can begin making queries immediately. 
[Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### Jak mohu získat podporu pro proces aktualizace? - -Komunita Graf je zde, aby podporovala vývojáře při přechodu na síť Graf. Připojte se k [serveru Discord](https://discord.gg/vtvv7FP) společnosti The Graph a požádejte o pomoc v kanálu #upgrade-decentralized-network. - -### Jak lze zajistit vysokou kvalitu služeb a redundanci podgrafů v síti Graf? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Členové těchto blockchainových komunit jsou vyzýváni k integraci svého řetězce prostřednictvím [procesu integrace řetězce](/chain-integration-overview/). - -### Jak mohu publikovat nové verze do sítě? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade na nejnovější verzi [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Aktualizace příkazu deploy - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publikování vyžaduje Arbitrum ETH - při upgradu vašeho subgrafu se také uvolní malá částka, která vám usnadní první interakce s protokolem 🧑‍🚀 - -### Používám podgraf vytvořený někým jiným, jak mohu zajistit, aby nedošlo k přerušení mé služby? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### Co se stane, když svůj podgraf neaktualizuji? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### Jak se mohu začít dotazovat na podgrafy v síti grafů? - -Dostupné podgrafy můžete prozkoumat na stránce [Graph Explorer](https://thegraph.com/explorer). [Více informací o dotazování na podgrafy na Graf](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## O Upgrade Indexer -### Co je to upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> Aktualizace Indexer je v současné době aktivní. -### Jaké řetězce podporuje upgrade Indexer? +Upgrade Indexer byl implementován za účelem zlepšení zkušeností s upgradem podgrafů z hostované služby do sit' Graf a podpory nových verzí stávajících podgrafů, které dosud nebyly indexovány. -Upgrade Indexeru podporuje řetězce, které byly dříve dostupné pouze v hostované službě. +### Co dělá upgrade Indexer? -Úplný seznam podporovaných řetěz najdete [zde](/developing/supported-networks/). +- Zavádí řetězce, které ještě nezískaly odměnu za indexaci v síti Graf, a zajišťuje, aby byl po zveřejnění podgrafu co nejrychleji k dispozici indexátor pro obsluhu dotazů. +- Podporuje řetězce, které byly dříve dostupné pouze v hostované službě. Úplný seznam podporovaných řetězců najdete [zde](/developing/supported-networks/). +- Indexátoři, kteří provozují upgrade indexátoru, tak činí jako veřejnou službu pro podporu nových podgrafů a dalších řetězců, kterým chybí indexační odměny, než je Rada grafů schválí. ### Proč Edge & Node spouští aktualizaci Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historicky udržovaly hostovanou službu, a proto již mají synchronizovaná data pro podgrafy hostované služby. ### Co znamená upgrade indexeru pro stávající indexery? -Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. 
This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Řetězce, které byly dříve podporovány pouze v hostované službě, byly vývojářům zpřístupněny v síti Graf nejprve bez odměn za indexování. + +Tato akce však uvolnila poplatky za dotazy pro všechny zájemce o indexování a zvýšila počet podgrafů zveřejněných v síti Graf. V důsledku toho mají indexátoři více příležitostí indexovat a obsluhovat tyto podgrafy výměnou za poplatky za dotazy, a to ještě předtím, než jsou odměny za indexování pro řetězec povoleny. -Upgrade Indexer také poskytuje komunitě Indexer informace o potenciální poptávce po podgraf nových řetězcích v síti grafů. +Upgrade Indexer také poskytuje komunitě Indexer informace o potenciální poptávce po podgrafech a nových řetězcích v síti grafů. ### Co to znamená pro delegáti? -Upgrade Indexer nabízí delegátům velkou příležitost. Jakmile bude více podgrafů upgradováno z hostované služby do sítě Graf, budou mít delegáti prospěch ze zvýšené aktivity v síti. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Bude upgrade Indexeru soutěžit o odměny se stávajícími Indexery? +### Did the upgrade Indexer compete with existing Indexers for rewards? -Ne, indexátor aktualizace přidělí pouze minimální částku na podgraf a nebude vybírat odměny za indexování. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### Jak to ovlivní vývojáře podgrafů? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### Jaký to má přínos pro spotřebitele dat? +### How does the upgrade Indexer benefit data consumers? Aktualizace Indexeru umožňuje podporu blockchainů v síti, které byly dříve dostupné pouze v rámci hostované služby. Tímto se rozšiřuje rozsah a dostupnost dat, která lze v síti dotazovat. -### Jak bude aktualizace Indexer oceňovat dotazy? - -Upgrade Indexer stanoví cenu dotazů podle tržní sazby, aby neovlivňoval trh s poplatky za dotazy. - -### Jaká jsou kritéria pro to, aby nástroj Indexer přestal podporovat podgraf? - -Aktualizační indexátor bude obsluhovat podgraf, dokud nebude dostatečně a úspěšně obsloužen konzistentními dotazy obsluhovanými alespoň třemi dalšími indexátory. 
- -Kromě toho indexátor aktualizace přestane podgraf podporovat, pokud se na něj v posledních 30 dnech nikdo nezeptal. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## O síti grafů - -### Musím provozovat vlastní infrastrukturu? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Jakmile váš podgraf dosáhne dostatečného kurátorského signálu a ostatní indexátory jej začnou podporovat, upgrade indexátoru se postupně sníží a umožní ostatním indexátorům vybírat odměny za indexování a poplatky za dotazy. - -### Měl bych hostovat vlastní indexovací infrastrukturu? - -Provozování infrastruktury pro vlastní projekt je [výrazně náročnější na zdroje](/network/benefits/) ve srovnání s používáním sit' Graf. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -Pokud přesto máte zájem o provozování [Graph Node](https://github.com/graphprotocol/graph-node), zvažte možnost připojit se k síti The Graph Network [jako indexátor](https://thegraph.com/blog/how-to-become-indexer/) a získávat odměny za indexování a poplatky za dotazy tím, že budete poskytovat data na svém podgrafu a dalších. - -### Měl bych používat centralizovaného poskytovatele indexování? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. 
- -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Zde je podrobný přehled výhod Graf oproti centralizovanému hosting: +### How does the upgrade Indexer price queries? -- **Odolnost a redundance**: Decentralizované systémy jsou díky své distribuované povaze ze své podstaty robustnější a odolnější. Data nejsou uložena na jediném serveru nebo místě. Místo toho je obsluhují stovky nezávislých indexérů po celém světě. Tím se snižuje riziko ztráty dat nebo přerušení služby v případě selhání jednoho uzlu, což vede k výjimečné provozuschopnosti (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Kvalita služeb**: Kromě působivé doby provozu se Sit' Graf vyznačuje průměrnou rychlostí dotazů (latence) ~106 ms a vyšší úspěšností dotazů ve srovnání s hostovanými alternativami. Více informací naleznete v [tomto blogu](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Stejně jako jste si vybrali blockchainovou síť kvůli její decentralizované povaze, bezpečnosti a transparentnosti, je volba sit' Graf rozšířením stejných principů. Sladěním své datové infrastruktury s těmito hodnotami zajistíte soudržné, odolné a důvěryhodné vývojové prostředí. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/cs/supported-network-requirements.mdx b/website/pages/cs/supported-network-requirements.mdx index a81118cec231..47555bff589f 100644 --- a/website/pages/cs/supported-network-requirements.mdx +++ b/website/pages/cs/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Síť | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Síť | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVMe preferred)<br />
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/cs/tap.mdx b/website/pages/cs/tap.mdx new file mode 100644 index 000000000000..b9fe9ac7cb3c --- /dev/null +++ b/website/pages/cs/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## Přehled + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
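To make the receipt-to-RAV lifecycle above easier to inspect, here is a rough monitoring sketch against the shared indexer Postgres database (the same database referenced in the configuration further below). The table names `scalar_tap_receipts` and `scalar_tap_ravs` and their columns are assumptions for illustration only and may differ in your deployment; adjust them to whatever schema `indexer-agent` created for you.

```bash
# Hedged sketch: inspect pending TAP receipts and RAVs in the indexer database.
# Table and column names are assumptions for illustration; verify against your schema.
PG_URL="postgres://postgres@postgres:5432/postgres"  # same database used by indexer-agent

# Total value of receipts that have not yet been aggregated into a RAV, per allocation.
# This is the figure that should stay below `max_amount_willing_to_lose_grt`.
psql "$PG_URL" -c \
  "SELECT allocation_id, COUNT(*) AS receipts, SUM(value) AS unaggregated_value
     FROM scalar_tap_receipts
    GROUP BY allocation_id
    ORDER BY unaggregated_value DESC;"

# RAVs that are not yet final, i.e. not redeemed or still inside the reorg window.
psql "$PG_URL" -c \
  "SELECT allocation_id, value_aggregate, last, final
     FROM scalar_tap_ravs
    WHERE final = false;"
```

If the unaggregated value keeps growing for a sender, that is usually a sign that aggregation requests are failing and, once the configured limit is exceeded, queries from that sender will stop being accepted until the fees are aggregated.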
+ +## Blockchain Addresses + +### Contracts + +| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| ------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | +| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | +| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | + +### Gateway + +| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| ---------- | --------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | + +### Požadavky + +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. + +- [Graph TAP Aribtrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) + +> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployement. As a result, you have to index it manually. + +## Migration Guide + +### Software versions + +| Component | Verze | Image Link | +| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- | +| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) | +| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) | +| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) | + +### Steps + +1. **Indexer Agent** + + - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). + - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + +2. **Indexer Service** + + - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless. + +3. **TAP Agent** + + - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + +4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Poznámky: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/de/about.mdx b/website/pages/de/about.mdx index 36c6a49f8fbc..9c21bf00d08f 100644 --- a/website/pages/de/about.mdx +++ b/website/pages/de/about.mdx @@ -2,46 +2,66 @@ title: About The Graph --- -This page will explain what The Graph is and how you can get started. - ## What is The Graph? -The Graph is a decentralized protocol for indexing and querying blockchain data. The Graph makes it possible to query data that is difficult to query directly. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +### How The Graph Functions -**Indexing blockchain data is really, really hard.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). 
+- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## How The Graph Works +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. +- When creating a subgraph, you need to write a subgraph manifest. -Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) The flow follows these steps: -1. A dapp adds data to Ethereum through a transaction on a smart contract. -2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. -5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +1. A dapp adds data to Ethereum through a transaction on a smart contract. +2. The smart contract emits one or more events while processing the transaction. +3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. +4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. 
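As a minimal illustration of step 5, the snippet below asks a locally running Graph Node for indexed data over its GraphQL endpoint using `curl`. The subgraph name `example/my-subgraph` and the `tokens` entity are placeholders for illustration — substitute the name you deployed and a field that actually exists in your schema.

```bash
# Hedged example: query a local Graph Node's GraphQL endpoint.
# "example/my-subgraph" and the `tokens` entity are placeholders.
curl -s -X POST http://localhost:8000/subgraphs/name/example/my-subgraph \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ tokens(first: 5) { id owner } }"}'
```

A dapp would issue the same kind of request from its frontend (or against a query URL on The Graph Network) and render the JSON response in its UI.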
## Next Steps -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/de/arbitrum/arbitrum-faq.mdx b/website/pages/de/arbitrum/arbitrum-faq.mdx index 67fffeeb677c..7e48874081e2 100644 --- a/website/pages/de/arbitrum/arbitrum-faq.mdx +++ b/website/pages/de/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum-FAQ Klicken Sie [hier](#billing-on-arbitrum-faqs), wenn Sie zu den Arbitrum Billing FAQs springen möchten. -## Warum implementiert The Graph eine L2-Lösung? +## Warum hat The Graph eine L2-Lösung eingeführt? -Durch die Skalierung von The Graph auf L2 können die Netzwerkteilnehmer erwarten: +Durch die Skalierung von The Graph auf L2 können die Netzwerkteilnehmer nun von folgenden Vorteilen profitieren: - Bis zu 26-fache Einsparungen bei den Gebühren für Gas @@ -14,26 +14,26 @@ Durch die Skalierung von The Graph auf L2 können die Netzwerkteilnehmer erwarte - Von Ethereum übernommene Sicherheit -Die Skalierung der Smart Contracts des Protokolls auf L2 ermöglicht den Netzwerkteilnehmern eine häufigere Interaktion zu geringeren Kosten in Form von Gasgebühren. Zum Beispiel könnten Indexer Zuweisungen öffnen und schließen, um eine größere Anzahl von Subgraphen mit größerer Häufigkeit zu indexieren, Entwickler könnten Subgraphen mit größerer Leichtigkeit bereitstellen und aktualisieren, Delegatoren könnten GRT mit größerer Häufigkeit delegieren und Kuratoren könnten Signale zu einer größeren Anzahl von Subgraphen hinzufügen oder entfernen - Aktionen, die zuvor als zu kostenintensiv angesehen wurden, um sie häufig auszuführen. +Die Skalierung der Smart Contracts des Protokolls auf L2 ermöglicht den Netzwerkteilnehmern eine häufigere Interaktion zu geringeren Kosten in Form von Gasgebühren. So können Indexer beispielsweise häufiger Zuweisungen öffnen und schließen, um eine größere Anzahl von Subgraphen zu indexieren. Entwickler können Subgraphen leichter bereitstellen und aktualisieren, und Delegatoren können GRT häufiger delegieren. Kuratoren können einer größeren Anzahl von Subgraphen Signale hinzufügen oder entfernen - Aktionen, die bisher aufgrund der Kosten zu kostspielig waren, um sie häufig durchzuführen. -DieThe Graph-Community beschloss letztes Jahr nach dem Ergebnis der [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305)-Diskussion, mit Arbitrum weiterzumachen. +Die The Graph-Community beschloss letztes Jahr nach dem Ergebnis der [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305)-Diskussion, mit Arbitrum weiterzumachen. ## Was muss ich tun, um The Graph auf L2 zu nutzen? -The Graph’s billing system accepts GRT on Arbitrum, and users will need ETH on Arbitrum to pay their gas. 
While The Graph protocol started on Ethereum Mainnet, all activity, including the billing contracts, is now on Arbitrum One. +Das Abrechnungssystem von The Graph akzeptiert GRT auf Arbitrum, und die Nutzer benötigen ETH auf Arbitrum, um ihr Gas zu bezahlen. Während das The Graph-Protokoll auf dem Ethereum Mainnet begann, finden alle Aktivitäten, einschließlich der Abrechnungsverträge, nun auf Arbitrum One statt. -Consequently, to pay for queries, you need GRT on Arbitrum. Here are a few different ways to achieve this: +Um Abfragen zu bezahlen brauchen Sie also GRT auf Arbitrum. Hier sind ein paar Möglichkeiten, dies zu erreichen: -- If you already have GRT on Ethereum, you can bridge it to Arbitrum. You can do this via the GRT bridging option provided in Subgraph Studio or by using one of the following bridges: +- Wenn Sie bereits GRT auf Ethereum haben, können Sie es zu Arbitrum überbrücken. Sie können dieses über GRT-Bridging-Option in Subgraph Studio tun oder eine der folgenden Bridges verwenden: - - [The Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161) + - [Die Arbitrum-Brücke] (https://bridge.arbitrum.io/?l2ChainId=42161) - [TransferTo](https://transferto.xyz/swap) -- If you have other assets on Arbitrum, you can swap them for GRT through a swapping protocol like Uniswap. +- Wenn du andere Vermögenswerte auf Arbitrum hast, kannst du sie über ein Swapping-Protokoll wie Uniswap in GRT tauschen. -- Alternatively, you can acquire GRT directly on Arbitrum through a decentralized exchange. +- Alternativ können Sie GRT auch direkt auf Arbitrum über einen dezentralen Handelsplatz erwerben. -Once you have GRT on Arbitrum, you can add it to your billing balance. +Sobald Sie GRT auf Arbitrum haben, können Sie es zu Ihrem Guthaben hinzufügen. Um die Vorteile von The Graph auf L2 zu nutzen, verwenden Sie diesen Dropdown-Schalter, um zwischen den Ketten umzuschalten. @@ -41,27 +41,21 @@ Um die Vorteile von The Graph auf L2 zu nutzen, verwenden Sie diesen Dropdown-Sc ## Was muss ich als Entwickler von Subgraphen, Datenkonsument, Indexer, Kurator oder Delegator jetzt tun? -Es besteht kein unmittelbarer Handlungsbedarf, jedoch werden die Netzwerkteilnehmer ermutigt, mit der Umstellung auf Arbitrum zu beginnen, um von den Vorteilen von L2 zu profitieren. +Die Netzwerkteilnehmer müssen zu Arbitrum wechseln, um weiterhin am The Graph Netzwerk teilzunehmen. Bitte lesen Sie den [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) für zusätzliche Unterstützung. -Kernentwicklerteams arbeiten an der Erstellung von L2-Transfer-Tools, die die Übertragung von Delegation, Kuration und Subgraphen auf Arbitrum erheblich erleichtern werden. Netzwerkteilnehmer können davon ausgehen, dass L2-Transfer-Tools bis zum Sommer 2023 verfügbar sein werden. - -Ab dem 10. April 2023 werden 5% aller Indexierungs-Rewards auf Arbitrum geprägt. Mit zunehmender Beteiligung des Netzwerks und der Zustimmung des Rates werden die Indexierungsprämien schrittweise von Ethereum auf Arbitrum und schließlich vollständig auf Arbitrum umgestellt. - -## Was muss ich tun, wenn ich am L2-Netz teilnehmen möchte? - -Bitte helfen Sie [test the network](https://testnet.thegraph.com/explorer) auf L2 und berichten Sie über Ihre Erfahrungen in [Discord](https://discord.gg/graphprotocol). +Alle Indexierungsprämien sind jetzt vollständig auf Arbitrum. ## Sind mit der Skalierung des Netzes auf L2 irgendwelche Risiken verbunden? 
-All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). +Alle Smart Contracts wurden gründlich [audited] (https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Alles wurde gründlich getestet, und es gibt einen Notfallplan, um einen sicheren und nahtlosen Übergang zu gewährleisten. Einzelheiten finden Sie [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Werden die bestehenden Subgraphen auf Ethereum weiterhin funktionieren? +## Funktionieren die vorhandenen Subgraphen auf Ethereum? -Ja, die The Graph Netzwerk-Verträge werden parallel sowohl auf Ethereum als auch auf Arbitrum laufen, bis sie zu einem späteren Zeitpunkt vollständig auf Arbitrum umgestellt werden. +Alle Subgraphen sind jetzt auf Arbitrum. Bitte lesen Sie den [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/), um sicherzustellen, dass Ihre Subgraphen reibungslos funktionieren. -## Wird GRT einen neuen Smart Contract auf Arbitrum bereitstellen? +## Verfügt GRT über einen neuen Smart Contract, der auf Arbitrum eingesetzt wird? Ja, GRT hat einen zusätzlichen [Smart Contract auf Arbitrum] (https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). Der Ethereum-Hauptnetz-[GRT-Vertrag](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) wird jedoch weiterhin funktionieren. @@ -83,4 +77,4 @@ Die Brücke wurde [umfangreich geprüft] (https://code4rena.com/contests/2022-10 Das Hinzufügen von GRT zu Ihrem Arbitrum-Abrechnungssaldo kann mit nur einem Klick in [Subgraph Studio] (https://thegraph.com/studio/) erfolgen. Sie können Ihr GRT ganz einfach mit Arbitrum verbinden und Ihre API-Schlüssel in einer einzigen Transaktion füllen. -Visit the [Billing page](/billing/) for more detailed instructions on adding, withdrawing, or acquiring GRT. +Besuchen Sie die [Abrechnungsseite] (/) für detaillierte Anweisungen zum Hinzufügen, Abheben oder Erwerben von GRT. diff --git a/website/pages/de/arbitrum/l2-transfer-tools-faq.mdx b/website/pages/de/arbitrum/l2-transfer-tools-faq.mdx index eb4fda3fc003..48944eebbfbb 100644 --- a/website/pages/de/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/pages/de/arbitrum/l2-transfer-tools-faq.mdx @@ -2,23 +2,23 @@ title: L2-Übertragungs-Tools FAQ --- -## General +## Allgemein ### Was sind L2-Transfer-Tools? -The Graph has made it 26x cheaper for contributors to participate in the network by deploying the protocol to Arbitrum One. The L2 Transfer Tools were created by core devs to make it easy to move to L2. +The Graph hat die Teilnahme am Netzwerk für Mitwirkende um das 26-fache kostengünstiger gemacht, indem das Protokoll auf Arbitrum One bereitgestellt wurde. Die L2-Transfer-Tools wurden von den Kernentwicklern entwickelt, um den Wechsel zu L2 zu erleichtern. -For each network participant, a set of L2 Transfer Tools are available to make the experience seamless when moving to L2, avoiding thawing periods or having to manually withdraw and bridge GRT. +Für jeden Netzwerkteilnehmer stehen eine Reihe von L2-Transfer-Tools zur Verfügung, die einen nahtlosen Übergang zu L2 ermöglichen, ohne dass Auftauzeiten entstehen oder GRT manuell entnommen und überbrückt werden müssen. 
-These tools will require you to follow a specific set of steps depending on what your role is within The Graph and what you are transferring to L2. +Für diese Tools müssen Sie eine Reihe von Schritten befolgen, je nachdem, welche Rolle Sie bei The Graph spielen und was Sie auf L2 übertragen. ### Kann ich dieselbe Wallet verwenden, die ich im Ethereum Mainnet benutze? Wenn Sie eine [EOA](https://ethereum.org/en/developers/docs/accounts/#types-of-account) Wallet verwenden, können Sie dieselbe Adresse verwenden. Wenn Ihr Ethereum Mainnet Wallet ein Kontrakt ist (z.B. ein Multisig), dann müssen Sie eine [Arbitrum Wallet Adresse](/arbitrum/arbitrum-faq/#what-do-i-need-to-do-to-use-the-graph-on-l2) angeben, an die Ihr Transfer gesendet wird. Bitte überprüfen Sie die Adresse sorgfältig, da Überweisungen an eine falsche Adresse zu einem dauerhaften Verlust führen können. Wenn Sie einen Multisig auf L2 verwenden möchten, stellen Sie sicher, dass Sie einen Multisig-Vertrag auf Arbitrum One einsetzen. -Wallets on EVM blockchains like Ethereum and Arbitrum are a pair of keys (public and private), that you create without any need to interact with the blockchain. So any wallet that was created for Ethereum will also work on Arbitrum without having to do anything else. +Wallets auf EVM-Blockchains wie Ethereum und Arbitrum bestehen aus einem Paar von Schlüsseln (öffentlich und privat), die Sie erstellen, ohne mit der Blockchain interagieren zu müssen. Jede Wallet, die für Ethereum erstellt wurde, funktioniert also auch auf Arbitrum, ohne dass Sie etwas anderes tun müssen. -The exception is with smart contract wallets like multisigs: these are smart contracts that are deployed separately on each chain, and get their address when they are deployed. If a multisig was deployed to Ethereum, it won't exist with the same address on Arbitrum. A new multisig must be created first on Arbitrum, and may get a different address. +Die Ausnahme sind Smart-Contract-Wallets wie Multisigs: Das sind Smart Contracts, die auf jeder Kette separat eingesetzt werden und ihre Adresse erhalten, wenn sie eingesetzt werden. Wenn ein Multisig auf Ethereum bereitgestellt wurde, wird er nicht mit der gleichen Adresse auf Arbitrum existieren. Ein neuer Multisig muss zuerst auf Arbitrum erstellt werden und kann eine andere Adresse erhalten. ### Was passiert, wenn ich meinen Transfer nicht innerhalb von 7 Tagen abschließe? @@ -28,7 +28,7 @@ Wenn Sie Ihre Vermögenswerte (Subgraph, Anteil, Delegation oder Kuration) an L2 Dies ist der so genannte "Bestätigungsschritt" in allen Übertragungswerkzeugen - er wird in den meisten Fällen automatisch ausgeführt, da die automatische Ausführung meist erfolgreich ist, aber es ist wichtig, dass Sie sich vergewissern, dass die Übertragung erfolgreich war. Wenn dies nicht gelingt und es innerhalb von 7 Tagen keine erfolgreichen Wiederholungsversuche gibt, verwirft die Arbitrum-Brücke das Ticket, und Ihre Assets (Subgraph, Pfahl, Delegation oder Kuration) gehen verloren und können nicht wiederhergestellt werden. Die Entwickler des Graph-Kerns haben ein Überwachungssystem eingerichtet, um diese Situationen zu erkennen und zu versuchen, die Tickets einzulösen, bevor es zu spät ist, aber es liegt letztendlich in Ihrer Verantwortung, sicherzustellen, dass Ihr Transfer rechtzeitig abgeschlossen wird. 
Wenn Sie Probleme mit der Bestätigung Ihrer Transaktion haben, wenden Sie sich bitte an [dieses Formular] (https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) und die Entwickler des Kerns werden Ihnen helfen. -### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly? +### Ich habe mit der Übertragung meiner Delegation/des Einsatzes/der Kuration begonnen und bin mir nicht sicher, ob sie an L2 weitergeleitet wurde. Wie kann ich bestätigen, dass sie korrekt übertragen wurde? If you don't see a banner on your profile asking you to finish the transfer, then it's likely the transaction made it safely to L2 and no more action is needed. If in doubt, you can check if Explorer shows your delegation, stake or curation on Arbitrum One. @@ -64,7 +64,7 @@ Die Übertragungszeit beträgt etwa 20 Minuten. Die Arbitrum-Brücke arbeitet im ### Wird mein Subgraph noch auffindbar sein, nachdem ich ihn auf L2 übertragen habe? -Ihr Subgraph ist nur in dem Netzwerk auffindbar, in dem er veröffentlicht ist. Wenn Ihr Subgraph zum Beispiel auf Arbitrum One ist, können Sie ihn nur im Explorer auf Arbitrum One finden und nicht auf Ethereum. Bitte vergewissern Sie sich, dass Sie Arbitrum One in der Netzwerkumschaltung oben auf der Seite ausgewählt haben, um sicherzustellen, dass Sie sich im richtigen Netzwerk befinden. Nach der Übertragung wird der L1-Subgraph als veraltet angezeigt. +Ihr Subgraph ist nur in dem Netzwerk auffindbar, in dem er veröffentlicht ist. Wenn Ihr Subgraph zum Beispiel auf Arbitrum One ist, können Sie ihn nur im Explorer auf Arbitrum One finden und nicht auf Ethereum. Bitte vergewissern Sie sich, dass Sie Arbitrum One in der Netzwerkumschaltung oben auf der Seite ausgewählt haben, um sicherzustellen, dass Sie sich im richtigen Netzwerk befinden. Nach der Übertragung wird der L1-Subgraph als veraltet angezeigt. ### Muss mein Subgraph ( Teilgraph ) veröffentlicht werden, um ihn zu übertragen? diff --git a/website/pages/de/billing.mdx b/website/pages/de/billing.mdx index 37f9c840d00b..d9480d139452 100644 --- a/website/pages/de/billing.mdx +++ b/website/pages/de/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -31,11 +31,11 @@ Subgraph users can use The Graph Token (or GRT) to pay for queries on The Graph ### GRT on Arbitrum or Ethereum -The Graph’s billing system accepts GRT on Arbitrum, and users will need ETH on Arbitrum to pay their gas. While The Graph protocol started on Ethereum Mainnet, all activity, including the billing contracts, is now on Arbitrum One. 
+Das Abrechnungssystem von The Graph akzeptiert GRT auf Arbitrum, und die Nutzer benötigen ETH auf Arbitrum, um ihr Gas zu bezahlen. Während das The Graph-Protokoll auf dem Ethereum Mainnet begann, finden alle Aktivitäten, einschließlich der Abrechnungsverträge, nun auf Arbitrum One statt. To pay for queries, you need GRT on Arbitrum. Here are a few different ways to achieve this: -- If you already have GRT on Ethereum, you can bridge it to Arbitrum. You can do this via the GRT bridging option provided in Subgraph Studio or by using one of the following bridges: +- Wenn Sie bereits GRT auf Ethereum haben, können Sie es zu Arbitrum überbrücken. Sie können dieses über GRT-Bridging-Option in Subgraph Studio tun oder eine der folgenden Bridges verwenden: - [The Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161) - [TransferTo](https://transferto.xyz/swap) @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. 
+ - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/de/chain-integration-overview.mdx b/website/pages/de/chain-integration-overview.mdx index 2fe6c2580909..a142f3f817f9 100644 --- a/website/pages/de/chain-integration-overview.mdx +++ b/website/pages/de/chain-integration-overview.mdx @@ -6,12 +6,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 1. Technical Integration -- Teams work on a Graph Node integration and Firehose for non-EVM based chains. [Here's how](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. ## Stage 2. Integration Validation -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON RPC or Firehose endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. 
+- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph Indexers test the integration on The Graph's testnet. - Core developers and Indexers monitor stability, performance, and data determinism. @@ -38,7 +38,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. -### 3. How much time will this process take? +### 3. How much time will the process of reaching full protocol support take? The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. @@ -46,4 +46,4 @@ Protocol support for indexing rewards depends on the stakeholders' bandwidth to ### 4. How will priorities be handled? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/de/cookbook/arweave.mdx b/website/pages/de/cookbook/arweave.mdx index 0d96b778f186..ec3eca650e4f 100644 --- a/website/pages/de/cookbook/arweave.mdx +++ b/website/pages/de/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Schema definition describes the structure of the resulting subgraph database and Die Handler für die Ereignisverarbeitung sind in [AssemblyScript](https://www.assemblyscript.org/) geschrieben. -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). 
```tsx class Block { diff --git a/website/pages/de/cookbook/base-testnet.mdx b/website/pages/de/cookbook/base-testnet.mdx index cd96026d2596..751e806644b3 100644 --- a/website/pages/de/cookbook/base-testnet.mdx +++ b/website/pages/de/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ Your subgraph slug is an identifier for your subgraph. The CLI tool will walk yo The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- Schema (schema.graphql) - Das GraphQL-Schema definiert, welche Daten Sie aus dem Subgraph abrufen möchten. - AssemblyScript Mappings (mapping.ts) - Dies ist der Code, der die Daten aus Ihren Datenquellen in die im Schema definierten Entitäten übersetzt. -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). diff --git a/website/pages/de/cookbook/cosmos.mdx b/website/pages/de/cookbook/cosmos.mdx index 6739350b3958..ebb746818925 100644 --- a/website/pages/de/cookbook/cosmos.mdx +++ b/website/pages/de/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and Die Handler für die Ereignisverarbeitung sind in [AssemblyScript](https://www.assemblyscript.org/) geschrieben. -Die Cosmos-Indizierung führt Cosmos-spezifische Datentypen in die [AssemblyScript-API](/developing/assemblyscript-api/) ein. +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/de/cookbook/grafting.mdx b/website/pages/de/cookbook/grafting.mdx index d6a88a506760..4e2311f14da2 100644 --- a/website/pages/de/cookbook/grafting.mdx +++ b/website/pages/de/cookbook/grafting.mdx @@ -22,7 +22,7 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic usecase. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ In this tutorial, we will be covering a basic usecase. We will replace an existi ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. 
It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. ## Grafting Manifest Definition @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. ## Additional Resources -If you want more experience with grafting, here's a few examples for popular contracts: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/de/cookbook/near.mdx b/website/pages/de/cookbook/near.mdx index e571f7b79a4b..0b5436f36152 100644 --- a/website/pages/de/cookbook/near.mdx +++ b/website/pages/de/cookbook/near.mdx @@ -37,7 +37,7 @@ There are three aspects of subgraph definition: **schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph#the-graphql-schema). -**AssemblyScript Mappings:** [AssemblyScript code](/developing/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. During subgraph development there are two key commands: @@ -98,7 +98,7 @@ Schema definition describes the structure of the resulting subgraph database and Die Handler für die Ereignisverarbeitung sind in [AssemblyScript](https://www.assemblyscript.org/) geschrieben. -NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/assemblyscript-api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/developing/assemblyscript-api) is available to NEAR subgraph developers during mapping execution. 
+Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/assemblyscript-api#json-api) to allow developers to easily process these logs. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph diff --git a/website/pages/de/cookbook/subgraph-uncrashable.mdx b/website/pages/de/cookbook/subgraph-uncrashable.mdx index 989310a3f9a0..0cc91a0fa2c3 100644 --- a/website/pages/de/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/de/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Safe Subgraph Code Generator - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. These logs can be viewed in the The Graph's hosted service under the 'Logs' section. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. diff --git a/website/pages/de/cookbook/upgrading-a-subgraph.mdx b/website/pages/de/cookbook/upgrading-a-subgraph.mdx index f1b757e9942a..1aac0794b687 100644 --- a/website/pages/de/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/de/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ Make sure **Update Subgraph Details in Explorer** is checked and click on **Save ## Deprecating a Subgraph on The Graph Network -Follow the steps [here](/managing/deprecating-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## Querying a Subgraph + Billing on The Graph Network diff --git a/website/pages/de/deploying/multiple-networks.mdx b/website/pages/de/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..dc2b8e533430 --- /dev/null +++ b/website/pages/de/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## Deploying the subgraph to multiple networks + +In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. 
+ +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +This is what your networks config file should look like: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Now we can run one of the following commands: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Using subgraph.yaml template + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." 
+} +``` + +and + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... + "mustache": "^3.1.0" + } +} +``` + +To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Sepolia: +yarn prepare:sepolia && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. + +## Subgraph Studio subgraph archive policy + +A subgraph version in Studio is archived if and only if it meets the following criteria: + +- The version is not published to the network (or pending publish) +- The version was created 45 or more days ago +- The subgraph hasn't been queried in 30 days + +In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. + +Every subgraph affected with this policy has an option to bring the version in question back. + +## Checking subgraph health + +If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). 
Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/de/developing/creating-a-subgraph.mdx b/website/pages/de/developing/creating-a-subgraph.mdx index fec3c46ccc4d..fab793f9905a 100644 --- a/website/pages/de/developing/creating-a-subgraph.mdx +++ b/website/pages/de/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: Creating a Subgraph --- -A subgraph extracts data from a blockchain, processing it and storing it so that it can be easily queried via GraphQL. +This detailed guide provides instructions to successfully create a subgraph. -![Defining a Subgraph](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -The subgraph definition consists of a few files: +![Defining a Subgraph](/img/defining-a-subgraph.png) -- `subgraph.yaml`: a YAML file containing the subgraph manifest +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: a GraphQL schema that defines what data is stored for your subgraph, and how to query it via GraphQL +## Getting Started -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. `mapping.ts` in this tutorial) +### Install the Graph CLI -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. 
-## Install the Graph CLI +Führen Sie einen der folgenden Befehle auf Ihrem lokalen Computer aus: -The Graph CLI is written in JavaScript, and you will need to install either `yarn` or `npm` to use it; it is assumed that you have yarn in what follows. +#### Using [npm](https://www.npmjs.com/) -Once you have `yarn`, install the Graph CLI by running +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Install with yarn:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## From An Existing Contract +### From an existing contract -The following command creates a subgraph that indexes all events of an existing contract. It attempts to fetch the contract ABI from Etherscan and falls back to requesting a local file path. If any of the optional arguments are missing, it takes you through an interactive form. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -The `` is the ID of your subgraph in Subgraph Studio, it can be found on your subgraph details page. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## From An Example Subgraph +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. 
+- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Add New dataSources To An Existing Subgraph +## Add new `dataSources` to an existing subgraph -Since `v0.31.0` the `graph-cli` supports adding new dataSources to an existing subgraph through the `graph add` command. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
[] @@ -78,22 +88,45 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -The `add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option), and will create a new `dataSource` in the same way that `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: + + - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- The contract `address` will be written to the `networks.json` for the relevant network. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: +## Components of a subgraph -- If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. -- If `false`: a new entity & event handler should be created with `${dataSourceName}{EventName}`. +### The Subgraph Manifest -The contract `address` will be written to the `networks.json` for the relevant network. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **Note:** When using the interactive cli, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. +The **subgraph definition** consists of the following files: -## The Subgraph Manifest +- `subgraph.yaml`: Contains the subgraph manifest -The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -For the example subgraph, `subgraph.yaml` is: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ A single subgraph can index data from multiple smart contracts. Add an entry for The triggers for a data source within a block are ordered using the following process: -1. Event and call triggers are first ordered by transaction index within the block. -2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. -3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. +1. Event and call triggers are first ordered by transaction index within the block. +2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. +3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. These ordering rules are subject to change. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. 
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| Version | Release notes | +|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### Getting The ABIs @@ -442,16 +475,16 @@ For some entity types the `id` is constructed from the id's of two other entitie We support the following scalars in our GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. 
| -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| Type | Description | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enums @@ -593,7 +626,7 @@ This more elaborate way of storing many-to-many relationships will result in les #### Adding comments to the schema -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. -> +> > If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. ### Data Source Context @@ -930,7 +963,7 @@ dataSources: ``` > **Note:** The contract creation block can be quickly looked up on Etherscan: -> +> > 1. Search for the contract by entering its address in the search bar. > 2. Click on the creation transaction hash in the `Contract Creator` section. > 3. Load the transaction details page where you'll find the start block for that contract. @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. 
`"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. ``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### Create a new handler to process files -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). The CID of the file as a readable string can be accessed via the `dataSource` as follows: diff --git a/website/pages/de/developing/developer-faqs.mdx b/website/pages/de/developing/developer-faqs.mdx index b4af2c711bc8..c8906615c081 100644 --- a/website/pages/de/developing/developer-faqs.mdx +++ b/website/pages/de/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: Developer FAQs --- -## 1. What is a subgraph? +This page summarizes some of the most common questions for developers building on The Graph. -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using the Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available to be queried by subgraph consumers. +## Subgraph Related -## 2. Can I delete my subgraph? +### 1. What is a subgraph? -It is not possible to delete subgraphs once they are created. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Can I change my subgraph name? +### 2. What is the first step to create a subgraph? -No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. +To successfully create a subgraph, you will need to install The Graph CLI. 
Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Can I change the GitHub account associated with my subgraph? +### 3. Can I still create a subgraph if my smart contracts don't have events? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. Am I still able to create a subgraph if my smart contracts don't have events? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are by far the fastest way to retrieve useful data. +### 4. Can I change the GitHub account associated with my subgraph? -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended, as performance will be significantly slower. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. How do I update a subgraph on mainnet? -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. How are templates different from data sources? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) upfront you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. 
+ +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates). -## 8. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -You can run the following command: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. How do I call a contract function or access a public state variable from my subgraph mappings? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +You can run the following command: -## 11. I want to contribute or add a GitHub issue. Where can I find the open source repositories? 
+```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -## 13. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 15. Can I delete my subgraph? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +You can find the list of the supported networks [here](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Yes. You can do this by importing `graph-ts` as per the example below: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Can I import ethers.js or other JS libraries into my subgraph mappings? - -Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +## Indexing & Querying Related -## 17. Is it possible to specify what block to start indexing on? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? 
My subgraph is taking a very long time to sync -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -102,19 +121,7 @@ Yes! Try the following command, substituting "organization/subgraphName" with th curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. What networks are supported by The Graph? - -You can find the list of the supported networks [here](/developing/supported-networks). - -## 21. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? - -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. - -## 22. Is this possible to use Apollo Federation on top of graph-node? - -Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. - -## 23. Is there a limit to how many objects The Graph can return per query? +### 22. Is there a limit to how many objects The Graph can return per query? By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: @@ -122,24 +129,19 @@ By default, query responses are limited to 100 items per collection. If you want someCollection(first: 1000, skip: ) { ... } ``` -## 24. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 25. Where do I go to find my current subgraph on the hosted service? - -Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? - -The Graph will never charge for the hosted service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. 
The hosted service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. - -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/de/developing/graph-ts/api.mdx b/website/pages/de/developing/graph-ts/api.mdx index c2f994f31006..020d37ade7f7 100644 --- a/website/pages/de/developing/graph-ts/api.mdx +++ b/website/pages/de/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). 
Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## API Reference @@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. -| Version | Release notes | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Release notes | +| :-----: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Loading entities from the store @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Looking up entities created withing a block As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a Transaction from some on-chain event, and a later handler wants to access this transaction if it exists. In the case where the transaction does not exist, the subgraph will have to go to the database just to find out that the entity does not exist; if the subgraph author already knows that the entity must have been created in the same block, using loadInBlock avoids this database roundtrip. For some subgraphs, these missed lookups can contribute significantly to the indexing time. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. 
+- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Any other contract that is part of the subgraph can be imported from the generat #### Handling Reverted Calls -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Encoding/Decoding ABI diff --git a/website/pages/de/developing/supported-networks.mdx b/website/pages/de/developing/supported-networks.mdx index 7c2d8d858261..797202065e99 100644 --- a/website/pages/de/developing/supported-networks.mdx +++ b/website/pages/de/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/de/developing/unit-testing-framework.mdx b/website/pages/de/developing/unit-testing-framework.mdx index f826a5ccb209..308135181ccb 100644 --- a/website/pages/de/developing/unit-testing-framework.mdx +++ b/website/pages/de/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ The log output includes the test run duration. Here's an example: > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -This means you have used `console.log` in your code, which is not supported by AssemblyScript. 
Please consider using the [Logging API](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version. diff --git a/website/pages/de/docsearch.json b/website/pages/de/docsearch.json index 9f300c69acb0..366e6903069d 100644 --- a/website/pages/de/docsearch.json +++ b/website/pages/de/docsearch.json @@ -7,36 +7,36 @@ "searchBox": { "resetButtonTitle": "Die Abfrage löschen", "resetButtonAriaLabel": "Die Abfrage löschen", - "cancelButtonText": "Cancel", + "cancelButtonText": "Abbrechen", "cancelButtonAriaLabel": "Anuluj" }, "startScreen": { - "recentSearchesTitle": "Recent", - "noRecentSearchesText": "No recent searches", - "saveRecentSearchButtonTitle": "Save this search", - "removeRecentSearchButtonTitle": "Remove this search from history", - "favoriteSearchesTitle": "Favorite", - "removeFavoriteSearchButtonTitle": "Remove this search from favorites" + "recentSearchesTitle": "Aktuelle", + "noRecentSearchesText": "Keine aktuellen Suchanfragen", + "saveRecentSearchButtonTitle": "Diese Suche speichern", + "removeRecentSearchButtonTitle": "Diese Suche aus dem Verlauf entfernen", + "favoriteSearchesTitle": "Favorit", + "removeFavoriteSearchButtonTitle": "Die Suche aus Favoriten entfernen" }, "errorScreen": { - "titleText": "Unable to fetch results", - "helpText": "You might want to check your network connection." + "titleText": "Ergebnis kann nicht abgerufen werden", + "helpText": "Sie sollten Ihre Netzwerkverbindung überprüfen." }, "footer": { - "selectText": "to select", - "selectKeyAriaLabel": "Enter key", - "navigateText": "to navigate", - "navigateUpKeyAriaLabel": "Arrow up", - "navigateDownKeyAriaLabel": "Arrow down", - "closeText": "to close", - "closeKeyAriaLabel": "Escape key", - "searchByText": "Search by" + "selectText": "zur Auswahl", + "selectKeyAriaLabel": "Enter-Taste", + "navigateText": "zum Navigieren", + "navigateUpKeyAriaLabel": "Pfeil nach oben", + "navigateDownKeyAriaLabel": "Pfeil nach unten", + "closeText": "schließen", + "closeKeyAriaLabel": "Escape-Taste", + "searchByText": "Suche nach" }, "noResultsScreen": { - "noResultsText": "No results for", - "suggestedQueryText": "Try searching for", - "reportMissingResultsText": "Believe this query should return results?", - "reportMissingResultsLinkText": "Let us know." + "noResultsText": "Kein Ergebnis für", + "suggestedQueryText": "Suchen Sie nach", + "reportMissingResultsText": "Glauben Sie, dass diese Abfrage Ergebnisse liefern sollte?", + "reportMissingResultsLinkText": "Informieren Sie uns." 
} } } diff --git a/website/pages/de/glossary.mdx b/website/pages/de/glossary.mdx index cd24a22fd4d5..2978ecce3561 100644 --- a/website/pages/de/glossary.mdx +++ b/website/pages/de/glossary.mdx @@ -10,11 +10,9 @@ title: Glossary - **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **Indexers**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. @@ -24,17 +22,17 @@ title: Glossary - **Indexer's Self Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegators**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curators**: Network participants that identify high-quality subgraphs, and “curate” them (i.e., signal GRT on them) in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. Indexers earn indexing rewards proportional to the signal on a subgraph. 
We see a correlation between the amount of GRT signalled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. -- **Subgraph Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a subgraph. - **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. @@ -46,11 +44,11 @@ title: Glossary 1. **Active**: An allocation is considered active when it is created on-chain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. 
Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: Glossary - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. @@ -78,10 +76,6 @@ title: Glossary - **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. -- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network. - -- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. 
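+
+The "Updating a subgraph" entry above describes releasing a new subgraph version with changes to the manifest, schema, or mappings. As a rough sketch of that flow with `graph-cli` — the Studio slug and version label below are placeholders, and exact flags may vary by CLI version:
+
+```sh
+# Regenerate types and build after editing the manifest, schema, or mappings
+graph codegen && graph build
+
+# Deploy the new version to Subgraph Studio (placeholder slug and version label)
+graph deploy --studio my-subgraph --version-label v0.0.2
+```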
diff --git a/website/pages/de/index.json b/website/pages/de/index.json index b808df67cb08..c0bc1e266688 100644 --- a/website/pages/de/index.json +++ b/website/pages/de/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "Create a Subgraph", "description": "Use Studio to create subgraphs" - }, - "migrateFromHostedService": { - "title": "Upgrade from the hosted service", - "description": "Upgrading subgraphs to The Graph Network" } }, "networkRoles": { @@ -60,16 +56,12 @@ "graphExplorer": { "title": "Graph Explorer", "description": "Explore subgraphs and interact with the protocol" - }, - "hostedService": { - "title": "Hosted Service", - "description": "Create and explore subgraphs on the hosted service" } } }, "supportedNetworks": { "title": "Supported Networks", - "description": "The Graph supports the following networks.", - "footer": "For more details, see the {0} page." + "description": "The Graph unterstützt folgende Netzwerke.", + "footer": "Weitere Einzelheiten finden Sie auf der Seite {0}." } } diff --git a/website/pages/de/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/de/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..1041b21b2541 --- /dev/null +++ b/website/pages/de/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## Übertragen des Eigentums an einem Subgraphen + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. + +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-addres +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2. Choose the address that you would like to transfer the subgraph to: + + ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) + +Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: + +![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) + +## Deprecating a subgraph + +Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. + +### Step-by-Step + +To deprecate your subgraph, do the following: + +1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). +2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. +3. Your subgraph will no longer appear in searches on Graph Explorer. + +**Please note the following:** + +- The owner's wallet should call the `deprecateSubgraph` function. +- Curators will not be able to signal on the subgraph anymore. +- Curators that already signaled on the subgraph can withdraw their signal at an average share price. 
+- Deprecated subgraphs will show an error message. + +> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/de/mips-faqs.mdx b/website/pages/de/mips-faqs.mdx index ae460989f96e..1f7553923765 100644 --- a/website/pages/de/mips-faqs.mdx +++ b/website/pages/de/mips-faqs.mdx @@ -6,10 +6,6 @@ title: MIPs FAQs > Note: the MIPs program is closed as of May 2023. Thank you to all the Indexers who participated! -It's an exciting time to be participating in The Graph ecosystem! During [Graph Day 2022](https://thegraph.com/graph-day/2022/) Yaniv Tal announced the [sunsetting of the hosted service](https://thegraph.com/blog/sunsetting-hosted-service/), a moment The Graph ecosystem has been working towards for many years. - -To support the sunsetting of the hosted service and the migration of all of it's activity to the decentralized network, The Graph Foundation has announced the [Migration Infrastructure Providers (MIPs) program](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program). - The MIPs program is an incentivization program for Indexers to support them with resources to index chains beyond Ethereum mainnet and help The Graph protocol expand the decentralized network into a multi-chain infrastructure layer. The MIPs program has allocated 0.75% of the GRT supply (75M GRT), with 0.5% to reward Indexers who contribute to bootstrapping the network and 0.25% allocated to Network Grants for subgraph developers using multi-chain subgraphs. diff --git a/website/pages/de/network/benefits.mdx b/website/pages/de/network/benefits.mdx index e80dd34993af..e500ae9987a5 100644 --- a/website/pages/de/network/benefits.mdx +++ b/website/pages/de/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Cost Comparison | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $0+ | $0 per month | -| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | -| Cost per query | $0 | $0 | -| Infrastruktur | Centralized | Decentralized | -| Geographic redundancy | $750+ per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $750+ | $0 | +| Cost Comparison | Self Hosted | The Graph Network | +|:----------------------------:|:---------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $0+ | $0 per month | +| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | +| Cost per query | $0 | $0 | +| Infrastruktur | Centralized | Decentralized | +| Geographic redundancy | $750+ per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Cost Comparison | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $500 per month | $120 per month | -| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | -| Queries 
per month | Limited to infra capabilities | ~3,000,000 | -| Cost per query | $0 | $0.00004 | -| Infrastruktur | Centralized | Decentralized | -| Engineering expense | $200 per hour | Included | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $1,650+ | $120 | +| Cost Comparison | Self Hosted | The Graph Network | +|:----------------------------:|:------------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $500 per month | $120 per month | +| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~3,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastruktur | Centralized | Decentralized | +| Engineering expense | $200 per hour | Included | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Cost Comparison | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Monthly server cost\* | $1100 per month, per node | $0 | -| Query costs | $4000 | $1,200 per month | -| Number of nodes needed | 10 | Not applicable | -| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~30,000,000 | -| Cost per query | $0 | $0.00004 | -| Infrastruktur | Centralized | Decentralized | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $11,000+ | $1,200 | +| Cost Comparison | Self Hosted | The Graph Network | +|:----------------------------:|:-------------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $1100 per month, per node | $0 | +| Query costs | $4000 | $1,200 per month | +| Number of nodes needed | 10 | Not applicable | +| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~30,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastruktur | Centralized | Decentralized | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $11,000+ | $1,200 | \*including costs for backup: $50-$100 per month diff --git a/website/pages/de/network/curating.mdx b/website/pages/de/network/curating.mdx index fb2107c53884..b2864660fe8c 100644 --- a/website/pages/de/network/curating.mdx +++ b/website/pages/de/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. 
Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Signaling on a specific version is especially useful when one subgraph is used b Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Risks 1. 
The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. @@ -78,50 +78,14 @@ It’s suggested that you don’t update your subgraphs too frequently. See the ### 5. Can I sell my curation shares? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. 
- -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Bonding Curve 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Price per shares](/img/price-per-share.png) - -As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. Here’s an example of what we mean, see the bonding curve below: - -![Bonding curve](/img/bonding-curve.png) - -Consider we have two curators that mint shares for a subgraph: - -- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. -- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signaling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. - Still confused? Check out our Curation video guide below: diff --git a/website/pages/de/network/delegating.mdx b/website/pages/de/network/delegating.mdx index 81824234e072..f7430c5746ae 100644 --- a/website/pages/de/network/delegating.mdx +++ b/website/pages/de/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegating --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. 
The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Delegator Guide -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,15 +34,19 @@ Listed below are the main risks of being a Delegator in the protocol. Delegators cannot be slashed for bad behavior, but there is a tax on Delegators to disincentivize poor decision-making that could harm the integrity of the network. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### The delegation unbonding period Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
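+
+To make the break-even estimate above concrete, here is a small sketch of the calculation. The 0.5% delegation tax comes from the text; the yearly return used here is purely an assumed placeholder, since actual returns depend on the Indexer's parameters and network activity:
+
+```typescript
+// Hypothetical numbers for illustration only
+const delegatedGrt = 1000 // GRT you delegate
+const delegationTax = 0.005 // 0.5% burned when delegating
+const assumedYearlyReturn = 0.1 // assumed effective return, varies per Indexer
+
+const taxPaid = delegatedGrt * delegationTax // 5 GRT
+const rewardPerDay = (delegatedGrt * assumedYearlyReturn) / 365 // ~0.27 GRT per day
+const breakEvenDays = taxPaid / rewardPerDay // ~18 days
+
+console.log(`~${breakEvenDays.toFixed(1)} days to earn back the 0.5% delegation tax`)
+```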
![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day @@ -41,47 +55,65 @@ Another thing to consider is how to choose an Indexer wisely. If you choose an I ### Choosing a trustworthy Indexer with a fair reward payout for Delegators -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top Indexer is giving Delegators 90% of the rewards. The middle one is giving Delegators 20%. The bottom one is giving Delegators ~83%.*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant. +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. + +As you can see, in order to choose the right Indexer, you must consider multiple things. -As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. +- It is highly recommend that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently. +- Many Indexers are very active in Discord and will be happy to answer your questions. +- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. -### Calculating Delegators expected return +## Calculating Delegators Expected Return -A Delegator must consider a lot of factors when determining the return. These include: +A Delegator must consider the following factors to determine a return: -- A technical Delegator can also look at the Indexer's ability to use the Delegated tokens available to them. If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. -- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Consider an Indexer's ability to use the Delegated tokens available to them. + - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Pay attention to the first few days of delegating. + - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low. ### Considering the query fee cut and indexing fee cut -As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the Delegators are getting. 
The formula is: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting.
+
+The formula is:

![Delegation Image 3](/img/Delegation-Reward-Formula.png)

### Considering the Indexer's delegation pool

-Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the Delegator a share of the pool: +Delegators should consider the proportion of the Delegation Pool they own.

-![Share formula](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool.

-Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool:
+
+![Share formula](/img/Share-Forumla.png)

-Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward.

### Considering the delegation capacity

-Another thing to consider is the delegation capacity. Currently, the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16.

-Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter?
+
+This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards.
+
+Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means that, effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning fewer rewards than they could be.

Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making.

@@ -89,16 +121,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde

### MetaMask "Pending Transaction" Bug

-**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?
+
+At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts.
+
+#### Example

-At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices.

-For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will not be processed until the initial transaction is mined, because transactions for an address must be processed in order.
+- In such cases, these transactions can be cancelled in MetaMask, but the transaction attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.

-A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.

-## Video guide for the network UI +## Video Guide

-This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI.
diff --git a/website/pages/de/network/developing.mdx b/website/pages/de/network/developing.mdx
index 1b76eb94ccca..81231c36ad59 100644
--- a/website/pages/de/network/developing.mdx
+++ b/website/pages/de/network/developing.mdx
@@ -2,52 +2,88 @@ title: Developing
 ---

-Developers are the demand side of The Graph ecosystem. Developers build subgraphs and publish them to The Graph Network. Then, they query live subgraphs with GraphQL in order to power their applications. +To start coding right away, go to [Developer Quick Start](/quick-start/).
+
+## Overview
+
+As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue.
+
+On The Graph, you can:
+
+1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing subgraphs.
+
+### What is GraphQL?
+
+- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+
+### Developer Actions
+
+- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
+- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
+- Deploy, publish and signal your subgraphs within The Graph Network.
+
+## Subgraph Specifics
+
+### What are subgraphs?
+
+A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
+
+A subgraph primarily consists of the following files:
+
+- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest).
+- `schema.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema).
+- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates event data into the entities defined in your schema.
+
+Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). A minimal sketch of a schema and an example query is shown further below.

## Subgraph Lifecycle

-Subgraphs deployed to the network have a defined lifecycle. +Here is a general overview of a subgraph’s lifecycle:

-### Build locally +![Subgraph Lifecycle](/img/subgraph-lifecycle.png)

-As with all subgraph development, it starts with local development and testing. Developers can use the same local setup whether they are building for The Graph Network, the hosted service or a local Graph Node, leveraging `graph-cli` and `graph-ts` to build their subgraph. Developers are encouraged to use tools such as [Matchstick](https://github.com/LimeChain/matchstick) for unit testing to improve the robustness of their subgraphs. +### Build locally

-> There are certain constraints on The Graph Network, in terms of feature and network support. Only subgraphs on [supported networks](/developing/supported-networks) will earn indexing rewards, and subgraphs which fetch data from IPFS are also not eligible. +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs.

### Deploy to Subgraph Studio

-Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. - -### Publish to the Network +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:

-When the developer is happy with their subgraph, they can publish it to The Graph Network. This is an on-chain action, which registers the subgraph so that it is discoverable by Indexers. Published subgraphs have a corresponding NFT, which is then easily transferable. The published subgraph has associated metadata, which provides other network participants with useful context and information. +- Use its staging environment to index the deployed subgraph and make it available for review.
+- Verify that your subgraph doesn't have any indexing errors and works as expected.
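To make the pieces above concrete, here is a minimal, hypothetical sketch of what a schema might define and how a dapp could query it. The `Transfer` entity and its fields are illustrative assumptions only and are not taken from any real subgraph.

```graphql
# Entities marked with @entity are stored by Graph Node and exposed
# through the subgraph's auto-generated GraphQL API.
type Transfer @entity {
  id: ID!         # unique identifier, e.g. transaction hash plus log index
  from: Bytes!    # sender address
  to: Bytes!      # recipient address
  amount: BigInt! # amount transferred
}
```

Once the subgraph is indexed, a query against its endpoint could then look like this, using the auto-generated plural field and the standard pagination and ordering arguments:

```graphql
{
  transfers(first: 5, orderBy: amount, orderDirection: desc) {
    id
    from
    to
    amount
  }
}
```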
-### Signal to Encourage Indexing +### Publish to the Network -Published subgraphs are unlikely to be picked up by Indexers without the addition of signal. Signal is locked GRT associated with a given subgraph, which indicates to Indexers that a given subgraph will receive query volume, and also contributes to the indexing rewards available for processing it. Subgraph developers will generally add signal to their subgraph, in order to encourage indexing. Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Querying & Application Development +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -Once a subgraph has been processed by Indexers and is available for querying, developers can start to use the subgraph in their applications. Developers query subgraphs via a gateway, which forwards their queries to an Indexer who has processed the subgraph, paying query fees in GRT. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Updating Subgraphs +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### Querying & Application Development -Once the Subgraph Developer is ready to update, they can initiate a transaction to point their subgraph at the new version. Updating the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. 
+Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Deprecating Subgraphs +Learn more about [querying subgraphs](/querying/querying-the-graph/). -At some point a developer may decide that they no longer need a published subgraph. At that point they may deprecate the subgraph, which returns any signalled GRT to the Curators. +### Updating Subgraphs -### Diverse Developer Roles +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Some developers will engage with the full subgraph lifecycle on the network, publishing, querying and iterating on their own subgraphs. Some may be focused on subgraph development, building open APIs which others can build on. Some may be application focused, querying subgraphs deployed by others. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Developers and Network Economics +### Deprecating & Transferring Subgraphs -Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is updated. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/de/network/explorer.mdx b/website/pages/de/network/explorer.mdx index bca2993eb0b3..02dca6ed2f9f 100644 --- a/website/pages/de/network/explorer.mdx +++ b/website/pages/de/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgraphs -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. 
+After you finish deploying and publishing your subgraph in Subgraph Studio, click on the "Subgraphs" tab at the top of the navigation bar to access the following:
+
+- Your own finished subgraphs
+- Subgraphs published by others
+- The exact subgraph you want (based on the date created, signal amount, or name).

![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png)

-When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. +When you click into a subgraph, you will be able to do the following:
+
+- Test queries in the playground and be able to leverage network details to make informed decisions.
+- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality.
+- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.

![Explorer Image 2](/img/Subgraph-Details.png)

-On each subgraph’s dedicated page, several details are surfaced. These include: +On each subgraph’s dedicated page, you can do the following:

- Signal/Un-signal on subgraphs
- View more details such as charts, current deployment ID, and other metadata
@@ -31,26 +45,32 @@ On each subgraph’s dedicated page, several details are surfaced. These include

## Participants

-Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in-depth review of what each tab means for you. +This section provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.

### 1. Indexers

![Explorer Image 4](/img/Indexer-Pane.png)

-Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.

-- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators
-- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards
-- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters.
Cooldown periods are set up by Indexers when they update their delegation parameters -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking on the right-hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. To learn more about how to become an Indexer, you can take a look at the [official documentation](/network/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ To learn more about how to become an Indexer, you can take a look at the [offici ### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. 
Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+
+- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
+  - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+  - The bonding curve incentivizes Curators to curate the highest quality data sources.

-Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: +In the Curator table listed below, you can see:

- The date the Curator started curating
- The number of GRT that was deposited
@@ -68,34 +92,36 @@ Curators can be community members, data consumers, or even subgraph developers w

![Explorer Image 6](/img/Curation-Overview.png)

-If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/network/curating) +If you want to learn more about the Curator role, you can do so by visiting the [official documentation](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/).

### 3. Delegators

-Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers.

-Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees.
+- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
+- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)!
![Explorer Image 7](/img/Delegation-Overview.png)

-The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: +In the Delegators table you can see the active Delegators in the community and important metrics:

- The number of Indexers a Delegator is delegating towards
- A Delegator’s original delegation
- The rewards they have accumulated but have not withdrawn from the protocol
- The realized rewards they withdrew from the protocol
- Total amount of GRT they have currently in the protocol
-- The date they last delegated at +- The date they last delegated

-If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).

## Network

-In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +In this section, you can see global KPIs, switch to a per-epoch basis, and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.

### Overview

-The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section contains all the current network metrics as well as some cumulative metrics over time:

- The current total network stake
- The stake split between the Indexers and their Delegators
@@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat
- Protocol parameters such as curation reward, inflation rate, and more
- Current epoch rewards and fees

-A few key details that are worth mentioning: +A few key details to note:

-- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -**Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e.
during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).

![Explorer Image 8](/img/Network-Stats.png)

@@ -121,29 +147,34 @@ In the Epochs section, you can analyze on a per-epoch basis, metrics such as:

- The active epoch is the one in which Indexers are currently allocating stake and collecting query fees
- The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them.
- The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates.
-  - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. +  - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers.

![Explorer Image 9](/img/Epoch-Stats.png)

## Your User Profile

-Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs:

### Profile Overview

-This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). +In this section, you can view the following:
+
+- Any current actions you've taken.
+- Your profile information, description, and website (if you added one).

![Explorer Image 10](/img/Profile-Overview.png)

### Subgraphs Tab

-If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. +In the Subgraphs tab, you’ll see your published subgraphs.
+
+> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.

![Explorer Image 11](/img/Subgraphs-Overview.png)

### Indexing Tab

-If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.

This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics:

@@ -158,7 +189,9 @@ This section will also include details about your net Indexer rewards and net qu

### Delegating Tab

-Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. +Delegators are important to the Graph Network.
They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. diff --git a/website/pages/de/network/indexing.mdx b/website/pages/de/network/indexing.mdx index 68a96556ac68..83e2168e9811 100644 --- a/website/pages/de/network/indexing.mdx +++ b/website/pages/de/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: +1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -113,11 +113,11 @@ Indexers may differentiate themselves by applying advanced techniques for making - **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. | Konfiguration | Postgres
(CPUs) | Postgres
(Speicher in GB) | Postgres
(Festplatte in TB) | VMs
(CPUs) | VMs
(Speicher in GB) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Klein | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Mittel | 16 | 64 | 2 | 32 | 64 | -| Groß | 72 | 468 | 3.5 | 48 | 184 | +| ------------- |:--------------------------:|:------------------------------------:|:--------------------------------------:|:---------------------:|:-------------------------------:| +| Klein | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Mittel | 16 | 64 | 2 | 32 | 64 | +| Groß | 72 | 468 | 3.5 | 48 | 184 | ### What are some basic security precautions an Indexer should take? @@ -149,20 +149,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer #### Der Graph-Knoten -| Port | Zweck | Routen | CLI-Argument | Umgebungsvariable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP-Server
(für Subgraf-Abfragen) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(für Subgraf-Abonnements) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(zum Verwalten von Deployments) | / | --admin-port | - | -| 8030 | Subgraf-Indizierungsstatus-API | /graphql | --index-node-port | - | -| 8040 | Prometheus-Metriken | /metrics | --metrics-port | - | +| Port | Zweck | Routen | CLI-Argument | Umgebungsvariable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | ----------------- | +| 8000 | GraphQL HTTP-Server
(für Subgraf-Abfragen) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(für Subgraf-Abonnements) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(zum Verwalten von Deployments) | / | --admin-port | - | +| 8030 | Subgraf-Indizierungsstatus-API | /graphql | --index-node-port | - | +| 8040 | Prometheus-Metriken | /metrics | --metrics-port | - | #### Indexer Service -| Port | Zweck | Routen | CLI-Argument | Umgebungsvariable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL-HTTP-Server
(für kostenpflichtige Subgraf-Abfragen) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus-Metriken | /metrics | --metrics-port | - | +| Port | Zweck | Routen | CLI-Argument | Umgebungsvariable | +| ---- | ---------------------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL-HTTP-Server
(für kostenpflichtige Subgraf-Abfragen) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus-Metriken | /metrics | --metrics-port | - | #### Indexer Agent @@ -545,7 +545,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additonal argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Queue allocation action diff --git a/website/pages/de/network/overview.mdx b/website/pages/de/network/overview.mdx index 16214028dbc9..0779d9a6cb00 100644 --- a/website/pages/de/network/overview.mdx +++ b/website/pages/de/network/overview.mdx @@ -2,14 +2,20 @@ title: Network Overview --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Overview +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Token Economics](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20 used to allocate resources in the network. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/de/new-chain-integration.mdx b/website/pages/de/new-chain-integration.mdx index 35b2bc7c8b4a..534d6701efdb 100644 --- a/website/pages/de/new-chain-integration.mdx +++ b/website/pages/de/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integrating New Networks +title: New Chain Integration --- -Graph Node can currently index data from the following chain types: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via a [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via a [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via an [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -If you are interested in any of those chains, integration is a matter of Graph Node configuration and testing. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. For more information, refer to [Testing an EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Testing an EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Difference between EVM JSON-RPC & Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -While the two are suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](substreams/), like building [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). 
In addition, Firehose allows for improved indexing speeds when compared to JSON-RPC. +### 2. Firehose Integration -New EVM chain integrators may also consider the Firehose-based approach, given the benefits of substreams and its massive parallelized indexing capabilities. Supporting both allows developers to choose between building substreams or subgraphs for the new chain. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **NOTE**: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that eth_calls are [not a good practice for developers](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Testing an EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON RPC methods: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. 
-- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(optionally required for Graph Node to support call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graph Node Configuration +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Start by preparing your local environment** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graph Node Configuration + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON RPC compliant URL - > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ -**Test the integration by locally deploying a subgraph** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Create a simple example subgraph. Some options are below: - 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point - 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. 
It must remain `ethereum` even if the network name is different. -Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integrating a new Firehose-enabled chain +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Create a simple example subgraph. Some options are below: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. -Integrating a new chain is also possible using the Firehose approach. This is currently the best option for non-EVM chains and a requirement for substreams support. Additional documentation focuses on how Firehose works, adding Firehose support for a new chain and integrating it with Graph Node. Recommended docs for integrators: +## Substreams-powered Subgraphs -1. [General docs on Firehose](firehose/) -2. [Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrating Graph Node with a new chain via Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/de/operating-graph-node.mdx b/website/pages/de/operating-graph-node.mdx index 1db929271a01..571f332cd774 100644 --- a/website/pages/de/operating-graph-node.mdx +++ b/website/pages/de/operating-graph-node.mdx @@ -77,13 +77,13 @@ Eine vollständige Kubernetes-Beispielkonfiguration finden Sie im [Indexer-Repos Wenn es ausgeführt wird, stellt Graph Node die folgenden Ports zur Verfügung: -| Port | Zweck | Routen | CLI-Argument | Umgebungsvariable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP-Server
(für Subgraf-Abfragen) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(für Subgraf-Abonnements) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(zum Verwalten von Deployments) | / | --admin-port | - | -| 8030 | Subgraf-Indizierungsstatus-API | /graphql | --index-node-port | - | -| 8040 | Prometheus-Metriken | /metrics | --metrics-port | - | +| Port | Zweck | Routen | CLI-Argument | Umgebungsvariable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | ----------------- | +| 8000 | GraphQL HTTP-Server
(für Subgraf-Abfragen) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(für Subgraf-Abonnements) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(zum Verwalten von Deployments) | / | --admin-port | - | +| 8030 | Subgraf-Indizierungsstatus-API | /graphql | --index-node-port | - | +| 8040 | Prometheus-Metriken | /metrics | --metrics-port | - | > **Wichtig**: Seien Sie vorsichtig, wenn Sie Ports öffentlich zugänglich machen - **Administrationsports** sollten gesperrt bleiben. Dies schließt den JSON-RPC-Endpunkt des Graph-Knotens ein. @@ -97,9 +97,9 @@ Dieses Setup kann horizontal skaliert werden, indem mehrere Graph-Knoten und meh Eine [TOML](https://toml.io/en/)-Konfigurationsdatei kann verwendet werden, um komplexere Konfigurationen als die in der CLI bereitgestellten festzulegen. Der Speicherort der Datei wird mit dem Befehlszeilenschalter --config übergeben. -> When using a configuration file, it is not possible to use the options --postgres-url, --postgres-secondary-hosts, and --postgres-host-weights. +> Bei Verwendung einer Konfigurationsdatei ist es nicht möglich, die Optionen --postgres-url, --postgres-secondary-hosts und --postgres-host-weights zu verwenden. -A minimal `config.toml` file can be provided; the following file is equivalent to using the --postgres-url command line option: +Eine minimale `config.toml`-Datei kann bereitgestellt werden; Die folgende Datei entspricht der Verwendung der Befehlszeilenoption --postgres-url: ```toml [store] @@ -110,19 +110,19 @@ connection="<.. postgres-url argument ..>" indexers = [ "<.. list of all indexing nodes ..>" ] ``` -Full documentation of `config.toml` can be found in the [Graph Node docs](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md). +Die vollständige Dokumentation von `config.toml` finden Sie in den [Graph Node Dokumenten](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md). -#### Multiple Graph Nodes +#### Mehrere Graph-Knoten Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules). -> Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding. +> Beachten Sie darauf, dass mehrere Graph-Knoten so konfiguriert werden können, dass sie dieselbe Datenbank verwenden, die ihrerseits durch Sharding horizontal skaliert werden kann. -#### Deployment rules +#### Bereitstellungsregeln Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision. -Example deployment rule configuration: +Beispielkonfiguration für Bereitstellungsregeln: ```toml [deployment] @@ -150,49 +150,49 @@ indexers = [ ] ``` -Read more about deployment rules [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment). 
+Weitere Informationen zu Bereitstellungsregeln finden Sie [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment). -#### Dedicated query nodes +#### Dedizierte Abfrageknoten -Nodes can be configured to explicitly be query nodes by including the following in the configuration file: +Knoten können explizit als Abfrageknoten konfiguriert werden, indem Sie Folgendes in die Konfigurationsdatei aufnehmen: ```toml [general] query = "" ``` -Any node whose --node-id matches the regular expression will be set up to only respond to queries. +Jeder Knoten, dessen --node-id mit dem regulären Ausdruck übereinstimmt, wird so eingerichtet, dass er nur auf Abfragen antwortet. -#### Database scaling via sharding +#### Datenbankskalierung durch Sharding -For most use cases, a single Postgres database is sufficient to support a graph-node instance. When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard. +Für die meisten Anwendungsfälle reicht eine einzelne Postgres-Datenbank aus, um eine Graph-Node-Instanz zu unterstützen. Wenn eine Graph-Node-Instanz aus einer einzelnen Postgres-Datenbank herauswächst, ist es möglich, die Speicherung der Daten des Graph-Nodes auf mehrere Postgres-Datenbanken aufzuteilen. Alle Datenbanken zusammen bilden den Speicher der Graph-Node-Instanz. Jede einzelne Datenbank wird als Shard bezeichnet. Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed. -Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore. +Sharding wird nützlich, wenn Ihre vorhandene Datenbank nicht mit der Last Schritt halten kann, die Graph Node ihr auferlegt, und wenn es nicht mehr möglich ist, die Datenbankgröße zu erhöhen. > It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs. -In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) is an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them. 
+Was das Konfigurieren von Verbindungen betrifft, beginnen Sie mit max_connections in postgresql.conf, das auf 400 (oder vielleicht sogar 200) eingestellt ist, und sehen Sie sich die Prometheus-Metriken store_connection_wait_time_ms und store_connection_checkout_count an. Spürbare Wartezeiten (alles über 5 ms) sind ein Hinweis darauf, dass zu wenige Verbindungen verfügbar sind; hohe Wartezeiten werden auch dadurch verursacht, dass die Datenbank sehr ausgelastet ist (z. B. hohe CPU-Last). Wenn die Datenbank jedoch ansonsten stabil erscheint, weisen hohe Wartezeiten darauf hin, dass die Anzahl der Verbindungen erhöht werden muss. In der Konfiguration ist die Anzahl der Verbindungen, die jede Graph-Knoten-Instanz verwenden kann, eine Obergrenze, und der Graph-Knoten hält Verbindungen nicht offen, wenn er sie nicht benötigt. -Read more about store configuration [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases). +Weitere Informationen zur Speicherkonfiguration finden Sie [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-multiple-databases). -#### Dedicated block ingestion +#### Dedizierte Blockaufnahme -If there are multiple nodes configured, it will be necessary to specify one node which is responsible for ingestion of new blocks, so that all configured index nodes aren't polling the chain head. This is done as part of the `chains` namespace, specifying the `node_id` to be used for block ingestion: +Wenn mehrere Knoten konfiguriert sind, muss ein Knoten angegeben werden, der für die Aufnahme neuer Blöcke verantwortlich ist, damit nicht alle konfigurierten Indexknoten den Kettenkopf abfragen. Dies geschieht als Teil des `chains`-Namespace, der die `node_id` angibt, die für die Blockaufnahme verwendet werden soll: ```toml [chains] ingestor = "block_ingestor_node" ``` -#### Supporting multiple networks +#### Unterstützung mehrerer Netzwerke -The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of: +Das Graph-Protokoll erhöht die Anzahl der Netzwerke, die für die Indizierung von Belohnungen unterstützt werden, und es gibt viele Subgraphen, die nicht unterstützte Netzwerke indizieren, die ein Indexer verarbeiten möchte. Die Datei `config.toml` ermöglicht eine ausdrucksstarke und flexible Konfiguration von: -- Multiple networks -- Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows). -- Additional provider details, such as features, authentication and the type of provider (for experimental Firehose support) +- Mehrere Netzwerke +- Mehrere Anbieter pro Netzwerk (dies kann eine Aufteilung der Last auf Anbieter ermöglichen und kann auch die Konfiguration von vollständigen Knoten sowie Archivknoten ermöglichen, wobei Graph Node günstigere Anbieter bevorzugt, wenn eine bestimmte Arbeitslast dies zulässt). +- Zusätzliche Anbieterdetails, wie Funktionen, Authentifizierung und Anbietertyp (für experimentelle Firehose-Unterstützung) The `[chains]` section controls the ethereum providers that graph-node connects to, and where blocks and other metadata for each chain are stored. 
The following example configures two chains, mainnet and kovan, where blocks for mainnet are stored in the vip shard and blocks for kovan are stored in the primary shard. The mainnet chain can use two different providers, whereas kovan only has one provider. @@ -210,17 +210,17 @@ shard = "primary" provider = [ { label = "kovan", url = "http://..", features = [] } ] ``` -Read more about provider configuration [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-ethereum-providers). +Weitere Informationen zur Anbieterkonfiguration finden Sie [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#configuring-ethereum-providers). -### Environment variables +### Umgebungsvariablen -Graph Node supports a range of environment variables which can enable features, or change Graph Node behaviour. These are documented [here](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md). +Graph Node unterstützt eine Reihe von Umgebungsvariablen, die Funktionen aktivieren oder das Verhalten von Graph Node ändern können. Diese sind [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md) dokumentiert. -### Continuous deployment +### Kontinuierlicher Einsatz -Users who are operating a scaled indexing setup with advanced configuration may benefit from managing their Graph Nodes with Kubernetes. +Benutzer, die ein skaliertes Indizierungs-Setup mit erweiterter Konfiguration betreiben, können von der Verwaltung ihrer Graph-Knoten mit Kubernetes profitieren. -- The indexer repository has an [example Kubernetes reference](https://github.com/graphprotocol/indexer/tree/main/k8s) +- Das Indexer-Repository enthält eine [Beispielreferenz für Kubernetes](https://github.com/graphprotocol/indexer/tree/main/k8s) - [Launchpad](https://docs.graphops.xyz/launchpad/intro) is a toolkit for running a Graph Protocol Indexer on Kubernetes maintained by GraphOps. It provides a set of Helm charts and a CLI to manage a Graph Node deployment. ### Managing Graph Node @@ -231,25 +231,25 @@ Given a running Graph Node (or Graph Nodes!), the challenge is then to manage de Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace. -In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs). +Außerdem bietet das Festlegen von `GRAPH_LOG_QUERY_TIMING` auf `gql` weitere Details darüber, wie GraphQL-Abfragen ausgeführt werden (obwohl dies eine große Menge an Protokollen generieren wird). -#### Monitoring & alerting +#### Überwachung & Warnungen -Graph Node provides the metrics via Prometheus endpoint on 8040 port by default. Grafana can then be used to visualise these metrics. +Graph Node stellt die Metriken standardmäßig durch den Prometheus-Endpunkt am Port 8040 bereit. Grafana kann dann zur Visualisierung dieser Metriken verwendet werden. -The indexer repository provides an [example Grafana configuration](https://github.com/graphprotocol/indexer/blob/main/k8s/base/grafana.yaml). +Das Indexer-Repository bietet eine [Beispielkonfiguration für Grafana](https://github.com/graphprotocol/indexer/blob/main/k8s/base/grafana.yaml). 
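To illustrate, a minimal Prometheus scrape job for this endpoint might look like the following sketch. It assumes a single Graph Node instance exposing metrics at `localhost:8040`; adjust the target list (or use service discovery) for multi-node setups.

```yaml
# Minimal sketch of a Prometheus scrape job for Graph Node metrics.
# Assumes one Graph Node instance serving metrics at localhost:8040.
scrape_configs:
  - job_name: 'graph-node'
    scrape_interval: 15s
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:8040']
```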
#### Graphman -`graphman` is a maintenance tool for Graph Node, helping with diagnosis and resolution of different day-to-day and exceptional tasks. +`graphman` ist ein Wartungstool für Graph Node, das bei der Diagnose und Lösung verschiedener alltäglicher und außergewöhnlicher Aufgaben hilft. The graphman command is included in the official containers, and you can docker exec into your graph-node container to run it. It requires a `config.toml` file. -Full documentation of `graphman` commands is available in the Graph Node repository. See \[/docs/graphman.md\] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs` +Eine vollständige Dokumentation der `graphman`-Befehle ist im Graph Node-Repository verfügbar. Siehe \[/docs/graphman.md\] (https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) im Graph Node `/docs` ### Working with subgraphs -#### Indexing status API +#### Indizierungsstatus-API Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more. diff --git a/website/pages/de/querying/graphql-api.mdx b/website/pages/de/querying/graphql-api.mdx index c1831e3117e6..abe9db8cc7f4 100644 --- a/website/pages/de/querying/graphql-api.mdx +++ b/website/pages/de/querying/graphql-api.mdx @@ -1,16 +1,24 @@ --- -title: GraphQL API +title: GraphQL-API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Queries +## What is GraphQL? -In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Beispiele -Query for a single `Token` entity defined in your schema: +Die Abfrage für eine einzelne `Token`-Entität, die in Ihrem Schema definiert ist: ```graphql { @@ -21,9 +29,9 @@ Query for a single `Token` entity defined in your schema: } ``` -> **Note:** When querying for a single entity, the `id` field is required, and it must be a string. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. -Query all `Token` entities: +Die Abfrage für alle `Token`-Entitäten: ```graphql { @@ -34,11 +42,14 @@ Query all `Token` entities: } ``` -### Sorting +### Sortierung -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +When querying a collection, you may: -#### Example +- Use the `orderBy` parameter to sort by a specific attribute. 
+- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. + +#### Beispiel ```graphql { @@ -49,11 +60,11 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe } ``` -#### Example for nested entity sorting +#### Beispiel für die Sortierung verschachtelter Entitäten -As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. +Ab Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) können Entitäten auf der Basis von verschachtelten Entitäten sortiert werden. -In the following example, we sort the tokens by the name of their owner: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -66,19 +77,20 @@ In the following example, we sort the tokens by the name of their owner: } ``` -> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported. +> Derzeit können Sie in den Feldern `@entity` und `@derivedFrom` nach einstufig tiefen `String`- oder `ID`-Typen sortieren. Leider ist das [Sortieren nach Schnittstellen auf Entitäten mit einer Tiefe von einer Ebene](https://github.com/graphprotocol/graph-node/pull/4058), das Sortieren nach Feldern, die Arrays und verschachtelte Entitäten sind, noch nicht unterstützt. ### Pagination -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. - -Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +When querying a collection, it's best to: -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. -#### Example using `first` +#### Ein Beispiel für die Verwendung von `first` -Query the first 10 tokens: +Die Abfrage für die ersten 10 Token: ```graphql { @@ -89,7 +101,7 @@ Query the first 10 tokens: } ``` -To query for groups of entities in the middle of a collection, the `skip` parameter may be used in conjunction with the `first` parameter to skip a specified number of entities starting at the beginning of the collection. +Um Gruppen von Entitäten in der Mitte einer Sammlung abzufragen, kann der Parameter `skip` in Verbindung mit dem Parameter `first` verwendet werden, um eine bestimmte Anzahl von Entitäten beginnend am Anfang der Sammlung zu überspringen. 
#### Example using `first` and `skip` @@ -106,7 +118,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect #### Example using `first` and `id_ge` -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtering -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Example using `where` @@ -155,7 +168,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### Example for block filtering -You can also filter entities by the `_change_block(number_gte: Int)` - this filters entities which were updated in or after the specified block. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). @@ -193,7 +206,7 @@ As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/release ##### `AND`-Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. ```graphql { @@ -208,7 +221,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ``` > **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ##### `OR` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. 
```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. #### Example @@ -322,12 +335,12 @@ Fulltext search queries have one required field, `text`, for supplying search te Fulltext search operators: -| Symbol | Operator | Beschreibung | -| --- | --- | --- | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) 
| +| Symbol | Operator | Beschreibung | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `Follow by` | Specify the distance between two words. | +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | #### Beispiele @@ -376,11 +389,11 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 ## Schema -The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### Entities diff --git a/website/pages/de/querying/querying-best-practices.mdx b/website/pages/de/querying/querying-best-practices.mdx index 32d1415b20fa..5654cf9e23a5 100644 --- a/website/pages/de/querying/querying-best-practices.mdx +++ b/website/pages/de/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: Querying Best Practices --- -The Graph provides a decentralized way to query data from blockchains. +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. -The Graph network's data is exposed through a GraphQL API, making it easier to query data with the GraphQL language. - -This page will guide you through the essential GraphQL language rules and GraphQL queries best practices. +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQL is a language and set of conventions that transport over HTTP. It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). 
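For illustration, a bare-bones query with native `fetch` might look like the following sketch; the endpoint URL is a placeholder for your own subgraph's query URL, and the `tokens` query is just an example.

```tsx
// Minimal sketch: querying a subgraph endpoint over HTTP with native fetch.
// Replace the placeholder URL with your subgraph's query URL.
const QUERY_URL = 'https://<YOUR_SUBGRAPH_QUERY_URL>'

async function fetchTokens() {
  const response = await fetch(QUERY_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: '{ tokens(first: 5) { id owner } }' }),
  })
  const { data, errors } = await response.json()
  if (errors) {
    throw new Error(`GraphQL errors: ${JSON.stringify(errors)}`)
  }
  return data.tokens
}
```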
-However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), we recommend you to use our `graph-client` that supports unique features such as: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() More GraphQL client alternatives are covered in ["Querying from an Application"](/querying/querying-from-an-application). -Now that we covered the basic rules of GraphQL queries syntax, let's now look at the best practices of GraphQL query writing. - --- ## Best Practices @@ -164,11 +160,11 @@ Doing so brings **many advantages**: - **Variables can be cached** at server-level - **Queries can be statically analyzed by tools** (more on this in the following sections) -**Note: How to include fields conditionally in static queries** +### How to include fields conditionally in static queries -We might want to include the `owner` field only on a particular condition. +You might want to include the `owner` field only on a particular condition. -For this, we can leverage the `@include(if:...)` directive as follows: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Note: The opposite directive is `@skip(if: ...)`. +> Note: The opposite directive is `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL became famous for its "Ask for what you want" tagline. For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. -When querying GraphQL APIs, always think of querying only the fields that will be actually used. - -A common cause of over-fetching is collections of entities. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. Queries should therefore almost always set first explicitly, and make sure they only fetch as many entities as they actually need. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- When querying GraphQL APIs, always think of querying only the fields that will be actually used. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. For example, in the following query: @@ -337,8 +332,8 @@ query { Such repeated fields (`id`, `active`, `status`) bring many issues: -- harder to read for more extensive queries -- when using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. 
A refactored version of the query would be the following: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) but also will result in better TypeScript types generation. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). ### GraphQL Fragment do's and don'ts -**Fragment base must be a type** +### Fragment base must be a type A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. -**How to spread a Fragment** +#### How to spread a Fragment Fragments are defined on specific types and should be used accordingly in queries. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { It is not possible to spread a fragment of type `Vote` here. -**Define Fragment as an atomic business unit of data** +#### Define Fragment as an atomic business unit of data -GraphQL Fragment must be defined based on their usage. +GraphQL `Fragment`s must be defined based on their usage. For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. -Here is a rule of thumb for using Fragment: +Here is a rule of thumb for using fragments: -- when fields of the same type are repeated in a query, group them in a Fragment -- when similar but not the same fields are repeated, create multiple fragments, ex: +- When fields of the same type are repeated in a query, group them in a `Fragment`. +- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## The essential tools +## The Essential Tools ### GraphQL web-based explorers @@ -473,11 +468,11 @@ This will allow you to **catch errors without even testing queries** on the play The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets -- go to definition for fragments and input types +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. @@ -485,9 +480,9 @@ If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketp The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -More information on this [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) that showcases all the plugin's main features. 
+For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/de/quick-start.mdx b/website/pages/de/quick-start.mdx index 1a3c915185de..a3d8871965ad 100644 --- a/website/pages/de/quick-start.mdx +++ b/website/pages/de/quick-start.mdx @@ -2,24 +2,18 @@ title: Schnellstart --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Stellen Sie sicher, dass Ihr Subgraph Daten aus einem [unterstützten Netzwerk] (/developing/supported-networks) indiziert. - -Bei der Erstellung dieses Leitfadens wird davon ausgegangen, dass Sie über die entsprechenden Kenntnisse verfügen: +## Prerequisites for this guide - Eine Krypto-Wallet -- Eine Smart-Contract-Adresse im Netzwerk Ihrer Wahl nach - -## 1. Erstellen Sie einen Untergraphen in Subgraph Studio - -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Installieren der Graph-CLI +### 1. Install the Graph CLI -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. Führen Sie einen der folgenden Befehle auf Ihrem lokalen Computer aus: @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. 
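For reference, a non-interactive invocation might look roughly like the sketch below. The contract address, ABI path, and network are placeholders, and exact flag names can vary between Graph CLI versions, so run `graph init --help` to confirm the options available in your version.

```sh
# Rough sketch of a non-interactive init (placeholders; verify flags with `graph init --help`)
graph init --studio <SUBGRAPH_SLUG> \
  --from-contract <CONTRACT_ADDRESS> \
  --network mainnet \
  --abi ./abis/MyContract.json \
  --index-events
```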
+ +You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -Wenn Sie Ihren Untergraphen initialisieren, fragt das CLI-Tool Sie nach den folgenden Informationen: +When you initialize your subgraph, the CLI will ask you for the following information: -- Protokoll: Wählen Sie das Protokoll aus, von dem Ihr Untergraph ( Subgraph ) Daten indizieren soll. -- Subgraph slug: Erstellen Sie einen Namen für Ihren Subgraphen. Ihr Subgraph-Slug ist ein Identifikationsmerkmal für Ihren Subgraphen. -- Verzeichnis zur Erstellung des Subgraphen: Wählen Sie Ihr lokales Verzeichnis -- Ethereum-Netzwerk (optional): Sie müssen ggf. angeben, von welchem EVM-kompatiblen Netzwerk Ihr Subgraph Daten indizieren soll. -- Vertragsadresse: Suchen Sie die Smart-Contract-Adresse, von der Sie Daten abfragen möchten -- ABI: Wenn die ABI nicht automatisch ausgefüllt wird, müssen Sie sie manuell in Form einer JSON-Datei eingeben. -- Startblock: Es wird empfohlen, den Startblock einzugeben, um Zeit zu sparen, während Ihr Subgraph die Blockchain-Daten indiziert. Sie können den Startblock finden, indem Sie den Block suchen, in dem Ihr Vertrag bereitgestellt wurde. -- Vertragsname: Geben Sie den Namen Ihres Vertrags ein -- Index contract events as entities (Vertragsereignisse als Entitäten): Es wird empfohlen, dies auf true (wahr) zu setzen, da es automatisch Zuordnungen zu Ihrem Subgraph für jedes emittierte Ereignis hinzufügt -- Einen weiteren Vertrag hinzufügen (optional): Sie können einen weiteren Vertrag hinzufügen +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. Der folgende Screenshot zeigt ein Beispiel dafür, was Sie bei der Initialisierung Ihres Untergraphen ( Subgraph ) erwarten können: ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -Die vorangegangenen Befehle erstellen einen gerüstartigen Subgraphen, den Sie als Ausgangspunkt für den Aufbau Ihres Subgraphen verwenden können. Wenn Sie Änderungen an dem Subgraphen vornehmen, werden Sie hauptsächlich mit +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. 
+When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -Sobald Ihr Subgraph geschrieben ist, führen Sie die folgenden Befehle aus: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. Sobald Ihr Subgraph geschrieben ist, führen Sie die folgenden Befehle aus: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- Authentifizieren Sie Ihren Subgraphen und stellen Sie ihn bereit. Den Bereitstellungsschlüssel finden Sie auf der Seite "Subgraph" in Subgraph Studio. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. Testen Sie Ihren Untergraphen ( Subgraphen ) - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -In den Protokollen können Sie sehen, ob es Fehler in Ihrem Subgraphen gibt. Die Protokolle eines funktionierenden Subgraphen sehen wie folgt aus: - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. 
Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -Um Gaskosten zu sparen, können Sie Ihren Subgraphen in der gleichen Transaktion kuratieren, in der Sie ihn veröffentlicht haben, indem Sie diese Schaltfläche auswählen, wenn Sie Ihren Subgraphen im dezentralen Netzwerk von The Graph veröffentlichen: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -Jetzt können Sie Ihren Subgraphen abfragen, indem Sie GraphQL-Abfragen an die Abfrage-URL Ihres Subgraphen senden, die Sie durch Klicken auf die Abfrage-Schaltfläche finden können. +### 8. 
Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/de/release-notes/assemblyscript-migration-guide.mdx b/website/pages/de/release-notes/assemblyscript-migration-guide.mdx index fb1ad8beb382..058c48b32e6f 100644 --- a/website/pages/de/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/de/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - You'll need to rename your duplicate variables if you had variable shadowing. - ### Null Comparisons - By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - To solve you can simply change the `if` statement to something like this: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript diff --git a/website/pages/de/sps/introduction.mdx b/website/pages/de/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/de/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. 
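As a rough illustration of the Entity Changes approach, a `substreams.yaml` module stanza along these lines exposes output that graph-node can consume directly. The module and input names here are placeholders; the essential part is the `EntityChanges` output type.

```yaml
# Hypothetical excerpt from substreams.yaml: a graph_out-style module whose output
# is the EntityChanges protobuf consumed directly by graph-node.
modules:
  - name: graph_out
    kind: map
    inputs:
      - map: map_events # placeholder upstream module
    output:
      type: proto:sf.substreams.sink.entity.v1.EntityChanges
```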
+ +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/de/sps/triggers-example.mdx b/website/pages/de/sps/triggers-example.mdx new file mode 100644 index 000000000000..8e4f96eba14a --- /dev/null +++ b/website/pages/de/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## Prerequisites + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+  const input: protoEvents = Protobuf.decode<protoEvents>(bytes, protoEvents.decode)
+
+  for (let i = 0; i < input.data.length; i++) {
+    const event = input.data[i]
+
+    if (event.transfer != null) {
+      let entity_id: string = `${event.txnId}-${i}`
+      const entity = new MyTransfer(entity_id)
+      entity.amount = event.transfer!.instruction!.amount.toString()
+      entity.source = event.transfer!.accounts!.source
+      entity.designation = event.transfer!.accounts!.destination
+
+      if (event.transfer!.accounts!.signer!.single != null) {
+        entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+      } else if (event.transfer!.accounts!.signer!.multisig != null) {
+        entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+      }
+      entity.save()
+    }
+  }
+}
+```
+
+## Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+
+## Conclusion
+
+You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/pages/de/sps/triggers.mdx b/website/pages/de/sps/triggers.mdx
new file mode 100644
index 000000000000..ed19635d4768
--- /dev/null
+++ b/website/pages/de/sps/triggers.mdx
@@ -0,0 +1,37 @@
+---
+title: Substreams Triggers
+---
+
+Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework.
+
+> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container.
+
+The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+
+```tsx
+import { log } from '@graphprotocol/graph-ts'
+import { Transaction } from '../generated/schema'
+// The decoded `Transactions` Protobuf type comes from your project's generated bindings;
+// its import path depends on your protogen output.
+
+export function handleTransactions(bytes: Uint8Array): void {
+  let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+  if (transactions.length == 0) {
+    log.info('No transactions found', [])
+    return
+  }
+
+  for (let i = 0; i < transactions.length; i++) {
+    // 2.
+    let transaction = transactions[i]
+
+    let entity = new Transaction(transaction.hash) // 3.
+    entity.from = transaction.from
+    entity.to = transaction.to
+    entity.save()
+  }
+}
+```
+
+Here's what you’re seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is then used like any other AssemblyScript object
+2.
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/de/substreams.mdx b/website/pages/de/substreams.mdx index 710e110012cc..a838a6924e2f 100644 --- a/website/pages/de/substreams.mdx +++ b/website/pages/de/substreams.mdx @@ -4,9 +4,11 @@ title: Substreams ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## How Substreams Works in 4 Steps @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/de/sunrise.mdx b/website/pages/de/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/de/sunrise.mdx +++ b/website/pages/de/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. 
Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? 
-Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? 
- -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. 
The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/de/supported-network-requirements.mdx b/website/pages/de/supported-network-requirements.mdx index df15ef48d762..afbf755c0a5a 100644 --- a/website/pages/de/supported-network-requirements.mdx +++ b/website/pages/de/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Network | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Network | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVMe preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/de/tap.mdx b/website/pages/de/tap.mdx new file mode 100644 index 000000000000..872ad6231e9c --- /dev/null +++ b/website/pages/de/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## Overview + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
+ +## Blockchain Addresses + +### Contracts + +| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| ------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | +| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | +| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | + +### Gateway + +| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| ---------- | --------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | + +### Requirements + +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. + +- [Graph TAP Aribtrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) + +> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployement. As a result, you have to index it manually. + +## Migration Guide + +### Software versions + +| Component | Version | Image Link | +| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- | +| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) | +| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) | +| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) | + +### Steps + +1. **Indexer Agent** + + - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). + - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + +2. **Indexer Service** + + - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless. + +3. **TAP Agent** + + - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + +4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Notes: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/es/about.mdx b/website/pages/es/about.mdx index c745dcedd131..8b1a092a77b5 100644 --- a/website/pages/es/about.mdx +++ b/website/pages/es/about.mdx @@ -2,46 +2,66 @@ title: Acerca de The Graph --- -En esta página se explica qué es The Graph y cómo puedes empezar a utilizarlo. - ## Que es The Graph? -The Graph es un protocolo descentralizado que permite indexar y consultar datos de la blockchain. The Graph permite consultar datos los cuales son difíciles de consultar directamente. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -Los proyectos con contratos inteligentes complejos como [Uniswap](https://uniswap.org/) y las iniciativas de NFTs como [Bored Ape Yacht Club](https://boredapeyachtclub.com/) almacenan los datos en la blockchain de Ethereum, lo que hace realmente difícil leer algo más que los datos básicos directamente desde la blockchain. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -También podrías crear tu propio servidor, procesar las transacciones allí, guardarlas en una base de datos y construir un punto de conexión de API encima de todo eso para consultar los datos. Sin embargo, esta opción [requiere muchos recursos](/network/benefits/), necesita mantenimiento, presenta un único punto de fallo y compromete las propiedades de seguridad importantes necesarias para la descentralización. +### How The Graph Functions -**Indexar los datos de la blockchain es muy, muy difícil.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. 
Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## ¿Cómo funciona The Graph? +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graph aprende, qué y cómo indexar los datos de Ethereum, basándose en las descripciones de los subgrafos, conocidas como el manifiesto de los subgrafos. La descripción del subgrafo define los contratos inteligentes de interés para este subgrafo, los eventos en esos contratos a los que prestar atención, y cómo mapear los datos de los eventos a los datos que The Graph almacenará en su base de datos. +- When creating a subgraph, you need to write a subgraph manifest. -Una vez que has escrito el `subgraph manifest`, utilizas el CLI de The Graph para almacenar la definición en IPFS y decirle al indexador que empiece a indexar los datos de ese subgrafo. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -Este diagrama ofrece más detalles sobre el flujo de datos una vez que se ha deployado en el manifiesto para un subgrafo, que trata de las transacciones en Ethereum: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![Un gráfico explicando como The Graph usa Graph Node para servir consultas a los consumidores de datos](/img/graph-dataflow.png) El flujo sigue estos pasos: -1. Una aplicación descentralizada (dapp) añade datos a Ethereum a través de una transacción en un contrato inteligente. -2. El contrato inteligente emite uno o más eventos mientras procesa la transacción. -3. Graph Node escanea continuamente la red de Ethereum en busca de nuevos bloques y los datos de tu subgrafo que puedan contener. -4. Graph Node encuentra los eventos de la red Ethereum, a fin de proveerlos en tu subgrafo mediante estos bloques y ejecuta los mapping handlers que proporcionaste. El mapeo (mapping) es un módulo WASM que crea o actualiza las entidades de datos que Graph Node almacena en respuesta a los eventos de Ethereum. -5. La dapp consulta a través de Graph Node los datos indexados de la blockchain, utilizando el [GraphQL endpoint](https://graphql.org/learn/) del nodo. El Nodo de The Graph, a su vez, traduce las consultas GraphQL en consultas para su almacenamiento de datos subyacentes con el fin de obtener estos datos, haciendo uso de las capacidades de indexación que ofrece el almacenamiento. La dapp muestra estos datos en una interfaz muy completa para el usuario, a fin de que los end users que usan este subgrafo puedan emitir nuevas transacciones en Ethereum. El ciclo se repite. +1. Una aplicación descentralizada (dapp) añade datos a Ethereum a través de una transacción en un contrato inteligente. +2. El contrato inteligente emite uno o más eventos mientras procesa la transacción. +3. Graph Node escanea continuamente la red de Ethereum en busca de nuevos bloques y los datos de tu subgrafo que puedan contener. +4. Graph Node encuentra los eventos de la red Ethereum, a fin de proveerlos en tu subgrafo mediante estos bloques y ejecuta los mapping handlers que proporcionaste. 
El mapeo (mapping) es un módulo WASM que crea o actualiza las entidades de datos que Graph Node almacena en respuesta a los eventos de Ethereum. +5. La dapp consulta a través de Graph Node los datos indexados de la blockchain, utilizando el [GraphQL endpoint](https://graphql.org/learn/) del nodo. El Nodo de The Graph, a su vez, traduce las consultas GraphQL en consultas para su almacenamiento de datos subyacentes con el fin de obtener estos datos, haciendo uso de las capacidades de indexación que ofrece el almacenamiento. La dapp muestra estos datos en una interfaz muy completa para el usuario, a fin de que los end users que usan este subgrafo puedan emitir nuevas transacciones en Ethereum. El ciclo se repite. ## Próximos puntos -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/es/arbitrum/arbitrum-faq.mdx b/website/pages/es/arbitrum/arbitrum-faq.mdx index 2b8812590990..b418ae4af15c 100644 --- a/website/pages/es/arbitrum/arbitrum-faq.mdx +++ b/website/pages/es/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Preguntas frecuentes sobre Arbitrum Click [here](#billing-on-arbitrum-faqs) if you would like to skip to the Arbitrum Billing FAQs. -## ¿Por qué The Graph está implementando una solución L2? +## Why did The Graph implement an L2 Solution? -Al escalar The Graph en L2, los participantes de la red pueden esperar: +By scaling The Graph on L2, network participants can now benefit from: - Upwards of 26x savings on gas fees @@ -14,7 +14,7 @@ Al escalar The Graph en L2, los participantes de la red pueden esperar: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers could open and close allocations to index a greater number of subgraphs with greater frequency, developers could deploy and update subgraphs with greater ease, Delegators could delegate GRT with increased frequency, and Curators could add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. 
The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -41,27 +41,21 @@ Para aprovechar el uso de The Graph en L2, usa este conmutador desplegable para ## Como developer de subgrafos, consumidor de datos, Indexador, Curador o Delegador, ¿qué debo hacer ahora? -There is no immediate action required, however, network participants are encouraged to begin moving to Arbitrum to take advantage of the benefits of L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -Core developer teams are working to create L2 transfer tools that will make it significantly easier to move delegation, curation, and subgraphs to Arbitrum. Network participants can expect L2 transfer tools to be available by summer of 2023. +All indexing rewards are now entirely on Arbitrum. -A partir del 10 de abril de 2023, el 5% de todas las recompensas de indexación se están generando en Arbitrum. A medida que aumenta la participación en la red, y según lo apruebe el Council, las recompensas de indexación se desplazarán gradualmente de Ethereum a Arbitrum, moviéndose eventualmente por completo a Arbitrum. - -## Si me gustaría participar en la red en L2, ¿qué debo hacer? - -Please help [test the network](https://testnet.thegraph.com/explorer) on L2 and report feedback about your experience in [Discord](https://discord.gg/graphprotocol). - -## ¿Existe algún riesgo asociado con escalar la red a L2? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## ¿Seguirán funcionando los subgrafos existentes en Ethereum? +## Are existing subgraphs on Ethereum working? -Sí, los contratos de The Graph Network operarán en paralelo tanto en Ethereum como en Arbitrum hasta que pasen completamente a Arbitrum en una fecha posterior. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## ¿GRT tendrá un nuevo contrato inteligente implementado en Arbitrum? +## Does GRT have a new smart contract deployed on Arbitrum? Yes, GRT has an additional [smart contract on Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). However, the Ethereum mainnet [GRT contract](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) will remain operational. diff --git a/website/pages/es/billing.mdx b/website/pages/es/billing.mdx index e73e074a8b44..604244a22148 100644 --- a/website/pages/es/billing.mdx +++ b/website/pages/es/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. 
## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Haz clic en el botón "Conectar wallet" en la esquina superior derecha de la página. Serás redirigido a la página de selección de wallet. Selecciona tu wallet y haz clic en "Conectar". 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. 
- Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/es/chain-integration-overview.mdx b/website/pages/es/chain-integration-overview.mdx index 2fe6c2580909..a142f3f817f9 100644 --- a/website/pages/es/chain-integration-overview.mdx +++ b/website/pages/es/chain-integration-overview.mdx @@ -6,12 +6,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 1. Technical Integration -- Teams work on a Graph Node integration and Firehose for non-EVM based chains. [Here's how](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. ## Stage 2. Integration Validation -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON RPC or Firehose endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. 
Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph Indexers test the integration on The Graph's testnet. - Core developers and Indexers monitor stability, performance, and data determinism. @@ -38,7 +38,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. -### 3. How much time will this process take? +### 3. How much time will the process of reaching full protocol support take? The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. @@ -46,4 +46,4 @@ Protocol support for indexing rewards depends on the stakeholders' bandwidth to ### 4. How will priorities be handled? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/es/cookbook/arweave.mdx b/website/pages/es/cookbook/arweave.mdx index 3d2dac89dfd0..5692b89b3a78 100644 --- a/website/pages/es/cookbook/arweave.mdx +++ b/website/pages/es/cookbook/arweave.mdx @@ -105,7 +105,7 @@ La definición de esquema describe la estructura de la base de datos de subgrafo Los handlers para procesar eventos están escritos en [AssemblyScript](https://www.assemblyscript.org/). -La indexación de Arweave introduce tipos de datos específicos de Arweave en la [API de AssemblyScript](/developing/assemblyscript-api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/es/cookbook/base-testnet.mdx b/website/pages/es/cookbook/base-testnet.mdx index d22f9310155e..c680dec0cf1e 100644 --- a/website/pages/es/cookbook/base-testnet.mdx +++ b/website/pages/es/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ Tu slug de subgrafo es un identificador para tu subgrafo. La herramienta CLI te The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. 
When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- Schema (schema.graphql) - El esquema GraphQL define los datos que deseas recuperar del subgrafo. - AssemblyScript Mappings (mapping.ts) - Este es el código que traduce los datos de tus fuentes de datos a las entidades definidas en el esquema. -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). diff --git a/website/pages/es/cookbook/cosmos.mdx b/website/pages/es/cookbook/cosmos.mdx index 708208a3290f..349511fedbeb 100644 --- a/website/pages/es/cookbook/cosmos.mdx +++ b/website/pages/es/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and Los controladores para procesar eventos están escritos en [AssemblyScript](https://www.assemblyscript.org/). -La indexación de Cosmos introduce tipos de datos específicos de Cosmos en la [AssemblyScript API](/developing/assemblyscript-api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { @@ -170,7 +170,7 @@ Cada tipo de handler viene con su propia estructura de datos que se pasa como ar - Los handlers de transacciones reciben el tipo `TransactionData`. - Los handlers de mensajes reciben el tipo `MessageData`. -Como parte de `MessageData`, el message handler recibe un contexto de transacción, que contiene la información más importante sobre una transacción que abarca un mensaje. El contexto de transacción también está disponible en el tipo `EventData`, pero solo cuando el evento correspondiente está asociado con una transacción. Además, todos los controladores reciben una referencia a un bloque (`HeaderOnlyBlock`). +Como parte de `MessageData`, el message handler recibe un contexto de transacción, que contiene la información más importante sobre una transacción que abarca un mensaje. El contexto de transacción también está disponible en el tipo `EventData`, pero solo cuando el evento correspondiente está asociado con una transacción. Además, todos los controladores reciben una referencia a un bloque (`HeaderOnlyBlock`). Puedes encontrar una lista completa de los tipos para la integración Cosmos aquí [here](https://github.com/graphprotocol/graph-ts/blob/4c064a8118dff43b110de22c7756e5d47fcbc8df/chain/cosmos.ts). diff --git a/website/pages/es/cookbook/grafting.mdx b/website/pages/es/cookbook/grafting.mdx index 30df42fca0a1..212250c06428 100644 --- a/website/pages/es/cookbook/grafting.mdx +++ b/website/pages/es/cookbook/grafting.mdx @@ -22,7 +22,7 @@ Para más información, puedes consultar: - [Grafting](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -En este tutorial vamos a cubrir un caso de uso básico. Reemplazaremos un contrato existente con un contrato idéntico (con una nueva dirección, pero el mismo código). Luego, haremos grafting del subgrafo existente en el subgrafo "base" que rastrea el nuevo contrato.
+In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ En este tutorial vamos a cubrir un caso de uso básico. Reemplazaremos un contra ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - La fuente de datos de `Lock` es el ABI y la dirección del contrato que obtendremos cuando compilemos y realicemos el deploy del contrato -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - La sección de `mapeo` define los disparadores de interés y las funciones que deben ejecutarse en respuesta a esos disparadores. En este caso, estamos escuchando el evento `Withdrawal` y llamando a la función `handleWithdrawal` cuando se emite. ## Definición del manifiesto de grafting @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. ## Recursos Adicionales -Si quieres tener más experiencia con el grafting, aquí tienes algunos ejemplos de contratos populares: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/es/cookbook/near.mdx b/website/pages/es/cookbook/near.mdx index 70f2110f6757..253f3bda8cee 100644 --- a/website/pages/es/cookbook/near.mdx +++ b/website/pages/es/cookbook/near.mdx @@ -37,7 +37,7 @@ Hay tres aspectos de la definición de subgrafo: **schema.graphql:** un archivo de esquema que define qué datos se almacenan para su subgrafo y cómo consultarlos a través de GraphQL. Los requisitos para los subgrafos NEAR están cubiertos por [la documentación existente](/developing/creating-a-subgraph#the-graphql-schema). -**Asignaciones de AssemblyScript:** [Código de AssemblyScript](/developing/assemblyscript-api) que traduce los datos del evento a las entidades definidas en su esquema. La compatibilidad con NEAR introduce tipos de datos específicos de NEAR y una nueva funcionalidad de análisis de JSON. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. 
NEAR support introduces NEAR-specific data types and new JSON parsing functionality. Durante el desarrollo del subgrafo hay dos comandos clave: @@ -98,7 +98,7 @@ La definición de esquema describe la estructura de la base de datos de subgrafo Los handlers para procesar eventos están escritos en [AssemblyScript](https://www.assemblyscript.org/). -La indexación NEAR introduce tipos de datos específicos de NEAR en la [API de AssemblyScript](/developing/assemblyscript-api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ Estos tipos se pasan a block & handlers de recibos: - Los handlers de bloques recibirán un `Block` - Los handlers de recibos recibirán un `ReceiptWithOutcome` -De lo contrario, el resto de la [API de AssemblyScript](/developing/assemblyscript-api) está disponible para los desarrolladores de subgrafos NEAR durante la ejecución del mapeo. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -Esto incluye una nueva función de análisis de JSON: los registros en NEAR se emiten con frecuencia como JSON en cadena. Una nueva función `json.fromString(...)` está disponible como parte de la [API JSON](/developing/assemblyscript-api#json-api) para permitir a los desarrolladores para procesar fácilmente estos registros. +This includes a new JSON parsing function: logs on NEAR are frequently emitted as stringified JSON. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## Deployando un subgrafo NEAR diff --git a/website/pages/es/cookbook/subgraph-uncrashable.mdx b/website/pages/es/cookbook/subgraph-uncrashable.mdx index d6ab2b8a0878..d7a39a67df81 100644 --- a/website/pages/es/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/es/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Generador de código de subgrafo seguro - El marco también incluye una forma (a través del archivo de configuración) para crear funciones de establecimiento personalizadas, pero seguras, para grupos de variables de entidad. De esta forma, es imposible que el usuario cargue/utilice una entidad gráfica obsoleta y también es imposible olvidarse de guardar o configurar una variable requerida por la función. -Los registros de advertencia se registran como registros que indican donde hay una infracción de la lógica del subgrafo para ayudar a solucionar el problema y garantizar la precisión de los datos. Estos registros se pueden ver en el servicio alojado de The Graph en la sección 'Registros'. +Warning logs are recorded to indicate where there is a breach of subgraph logic, helping you patch the issue and ensure data accuracy. Subgraph Uncrashable se puede ejecutar como un indicador opcional mediante el comando codegen Graph CLI. diff --git a/website/pages/es/cookbook/upgrading-a-subgraph.mdx b/website/pages/es/cookbook/upgrading-a-subgraph.mdx index f58d9879d96a..7fc8d973f686 100644 --- a/website/pages/es/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/es/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ Asegúrate de que la opción **Actualizar Detalles del Subgrafo en el Explorador ## Deprecar un Subgrafo en The Graph Network -Sigue los pasos [aquí](/managing/deprecating-a-subgraph) para retirar tu subgrafo y eliminarlo de la red de The Graph.
+Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## Consulta de un Subgrafo + Facturación en The Graph Network diff --git a/website/pages/es/deploying/multiple-networks.mdx b/website/pages/es/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..276e10f5d0d4 --- /dev/null +++ b/website/pages/es/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## Desplegando el subgráfo en múltiples redes + +En algunos casos, querrás desplegar el mismo subgrafo en múltiples redes sin duplicar todo su código. El principal reto que conlleva esto es que las direcciones de los contratos en estas redes son diferentes. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Este es el aspecto que debe tener el archivo de configuración de tu red: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Ahora podemos ejecutar uno de los siguientes comandos: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... 
+dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Usando la plantilla subgraph.yaml + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +y + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... + "mustache": "^3.1.0" + } +} +``` + +To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Sepolia: +yarn prepare:sepolia && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. + +## Política de archivo de subgrafos en Subgraph Studio + +A subgraph version in Studio is archived if and only if it meets the following criteria: + +- The version is not published to the network (or pending publish) +- The version was created 45 or more days ago +- The subgraph hasn't been queried in 30 days + +In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. 
+ +Cada subgrafo afectado por esta política tiene una opción para recuperar la versión en cuestión. + +## Comprobando la salud del subgrafo + +Si un subgrafo se sincroniza con éxito, es una buena señal de que seguirá funcionando bien para siempre. Sin embargo, los nuevos activadores en la red pueden hacer que tu subgrafo alcance una condición de error no probada o puede comenzar a retrasarse debido a problemas de rendimiento o problemas con los operadores de nodos. + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/es/developing/creating-a-subgraph.mdx b/website/pages/es/developing/creating-a-subgraph.mdx index cb3566206931..f471fa932ce2 100644 --- a/website/pages/es/developing/creating-a-subgraph.mdx +++ b/website/pages/es/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: Creación de un subgrafo --- -Un subgrafo extrae datos de una blockchain, los procesa y los almacena para que puedan consultarse fácilmente mediante GraphQL. +This detailed guide provides instructions to successfully create a subgraph. -![Definir un Subgrafo](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -La definición del subgrafo consta de unos cuantos archivos: +![Definir un Subgrafo](/img/defining-a-subgraph.png) -- `subgraph.yaml`: un archivo YAML que contiene el manifiesto del subgrafo +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: un esquema GraphQL que define qué datos se almacenan para su subgrafo, y cómo consultarlos a través de GraphQL +## Empezando -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) codigo que traduce de los datos del evento a las entidades definidas en su esquema (por ejemplo `mapping.ts` en este tutorial) +### Instalar The Graph CLI -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). 
It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## Instalar The Graph CLI +En tu dispositivo, ejecuta alguno de los siguientes comandos: -The Graph CLI está escrito en JavaScript, y tendrás que instalar `yarn` o `npm` para utilizarlo; se asume que tienes yarn en lo que sigue. +#### Using [npm](https://www.npmjs.com/) -Una vez que tengas `yarn`, instala The Graph CLI ejecutando +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Instalar con yarn:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**Instalar con npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## Desde un Contrato Existente +### From an existing contract -El siguiente comando crea un subgrafo que indexa todos los eventos de un contrato existente. Intenta obtener la ABI del contrato desde Etherscan y vuelve a solicitar una ruta de archivo local. Si falta alguno de los argumentos opcionales, te lleva a través de un formulario interactivo. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -El `` es el ID de tu subgrafo en Subgraph Studio, y se puede encontrar en la página de detalles de tu subgrafo. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## Desde un Subgrafo de Ejemplo +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -El segundo modo que admite `graph init` es la creación de un nuevo proyecto a partir de un subgrafo de ejemplo. 
El siguiente comando lo hace: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Añadir nuevas fuentes de datos a un subgrafo existente +## Add new `dataSources` to an existing subgraph -Desde `v0.31.0`, `graph-cli` permite añadir nuevos dataSources a un subgrafo existente mediante el comando `graph add`. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
[] @@ -78,22 +88,45 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -El comando `add` obtendrá el ABI de Etherscan (a menos que se especifique una ruta ABI con la opción `--abi`), y creará un nuevo `dataSource` de la misma manera que el comando `graph init` crea un `dataSource` `--from-contract`, actualizando el esquema y los mappings de manera acorde. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- La opción `--merge-entities` identifica cómo el desarrollador desea manejar los conflictos de nombres de `entity` y `event`: + + - Si es `true`: el nuevo `dataSource` debe utilizar los `eventHandlers`& `entities` existentes. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- La `address` del contrato se escribirá en el archivo `networks.json` para la red correspondiente. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -La opción `--merge-entities` identifica cómo el desarrollador desea manejar los conflictos de nombres de `entity` y `event`: +## Components of a subgraph -- Si es `true`: el nuevo `dataSource` debe utilizar los `eventHandlers`& `entities` existentes. -- Si es `false`: se creará una nueva entidad & event handler con `${dataSourceName}{EventName}`. +### El Manifiesto de Subgrafo -La `address` del contrato se escribirá en el archivo `networks.json` para la red correspondiente. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **Nota**: Cuando se utiliza el cli interactivo, después de ejecutar correctamente `graph init`, se te pedirá que añadas un nuevo `dataSource`. +The **subgraph definition** consists of the following files: -## El Manifiesto de Subgrafo +- `subgraph.yaml`: Contains the subgraph manifest -El manifiesto del subgrafo `subgraph.yaml` define los contratos inteligentes que indexa tu subgrafo, a qué eventos de estos contratos prestar atención, y cómo mapear los datos de los eventos a las entidades que Graph Node almacena y permite consultar. La especificación completa de los manifiestos de subgrafos puede encontrarse en [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -Para este subgrafo de ejemplo, `subgraph.yaml` es: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
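For orientation alongside the manifest example that follows, here is a minimal, hedged sketch of what the `mapping.ts` component might contain for the Gravatar example mentioned earlier. The generated import paths and the event/field names (`id`, `owner`, `displayName`, `imageUrl`) are assumptions based on the standard example subgraph, not definitions taken from this guide:

```typescript
// Hedged sketch of a mapping handler for the Gravatar example subgraph.
// The import paths below follow the usual `graph codegen` layout and are assumed.
import { NewGravatar } from '../generated/Gravity/Gravity'
import { Gravatar } from '../generated/schema'

export function handleNewGravatar(event: NewGravatar): void {
  // Create an entity keyed by the event id and copy the event fields onto it
  let gravatar = new Gravatar(event.params.id.toHex())
  gravatar.owner = event.params.owner
  gravatar.displayName = event.params.displayName
  gravatar.imageUrl = event.params.imageUrl
  // Persist the entity to the Graph Node store so it can be queried via GraphQL
  gravatar.save()
}
```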
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ Un único subgrafo puede indexar datos de múltiples contratos inteligentes. Añ Las triggers de una fuente de datos dentro de un bloque se ordenan mediante el siguiente proceso: -1. Las triggers de eventos y calls se ordenan primero por el índice de la transacción dentro del bloque. -2. Los triggers de eventos y calls dentro de la misma transacción se ordenan siguiendo una convención: primero los triggers de eventos y luego los de calls, respetando cada tipo el orden en que se definen en el manifiesto. -3. Las triggers de bloques se ejecutan después de las triggers de eventos y calls, en el orden en que están definidos en el manifiesto. +1. Las triggers de eventos y calls se ordenan primero por el índice de la transacción dentro del bloque. +2. Los triggers de eventos y calls dentro de la misma transacción se ordenan siguiendo una convención: primero los triggers de eventos y luego los de calls, respetando cada tipo el orden en que se definen en el manifiesto. +3. Las triggers de bloques se ejecutan después de las triggers de eventos y calls, en el orden en que están definidos en el manifiesto. Estas normas de orden están sujetas a cambios. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. 
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| Version | Notas del lanzamiento | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| Version | Notas del lanzamiento | +|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### Obtención de ABIs @@ -442,16 +475,16 @@ Para algunos tipos de entidad, el `id` se construye a partir de los id de otras Admitimos los siguientes escalares en nuestra API GraphQL: -| Tipo | Descripción | -| --- | --- | -| `Bytes` | Byte array, representado como un string hexadecimal. Comúnmente utilizado para los hashes y direcciones de Ethereum. | -| `String` | Escalar para valores `string`. Los caracteres nulos no son compatibles y se eliminan automáticamente. | -| `Boolean` | Escalar para valores `boolean`. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Números enteros grandes. Se utiliza para los tipos `uint32`, `int64`, `uint64`, ..., `uint256` de Ethereum. Nota: Todo por debajo de `uint32`, como `int32`, `uint24` o `int8` se representa como `i32`. 
| -| `BigDecimal` | `BigDecimal` Decimales de alta precisión representados como un signo y un exponente. El rango de exponentes va de -6143 a +6144. Redondeado a 34 dígitos significativos. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| Tipo | Descripción | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, representado como un string hexadecimal. Comúnmente utilizado para los hashes y direcciones de Ethereum. | +| `String` | Escalar para valores `string`. Los caracteres nulos no son compatibles y se eliminan automáticamente. | +| `Boolean` | Escalar para valores `boolean`. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Números enteros grandes. Se utiliza para los tipos `uint32`, `int64`, `uint64`, ..., `uint256` de Ethereum. Nota: Todo por debajo de `uint32`, como `int32`, `uint24` o `int8` se representa como `i32`. | +| `BigDecimal` | `BigDecimal` Decimales de alta precisión representados como un signo y un exponente. El rango de exponentes va de -6143 a +6144. Redondeado a 34 dígitos significativos. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enums @@ -593,7 +626,7 @@ Esta forma más elaborada de almacenar las relaciones many-to-many se traducirá #### Agregar comentarios al esquema -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -676,10 +709,10 @@ Diccionarios de idiomas admitidos: Algoritmos admitidos para ordenar los resultados: -| Algoritmos | Descripción | -| --- | --- | -| rango | Usa la calidad de coincidencia (0-1) de la consulta de texto completo para ordenar los resultados. | -| rango de proximidad | Similar al rango, pero también incluye la proximidad de los matches. | +| Algoritmos | Descripción | +| ------------------- | -------------------------------------------------------------------------------------------------- | +| rango | Usa la calidad de coincidencia (0-1) de la consulta de texto completo para ordenar los resultados. | +| rango de proximidad | Similar al rango, pero también incluye la proximidad de los matches. | ## Escribir Mappings @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Nota:** Un nuevo origen de datos sólo procesará las llamadas y los eventos del bloque en el que fue creado y todos los bloques siguientes, pero no procesará los datos históricos, es decir, los datos que están contenidos en bloques anteriores. -> +> > Si los bloques anteriores contienen datos relevantes para la nueva fuente de datos, lo mejor es indexar esos datos leyendo el estado actual del contrato y creando entidades que representen ese estado en el momento de crear la nueva fuente de datos. 
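As a concrete illustration of the note above, here is a hedged AssemblyScript sketch in the spirit of the `handleNewExchange` example: it starts indexing the newly spawned contract through a data source template and seeds an entity from the contract's current state. The template and entity names, the import paths, the `event.params.exchange` parameter and the `owner()` view call are assumptions for illustration only:

```typescript
// Hedged sketch: when a factory announces a new contract, start indexing it and
// capture its *current* on-chain state, since earlier blocks will not be reprocessed.
// Names, import paths and the owner() view function are assumed, not from this guide.
import { Exchange as ExchangeTemplate } from '../generated/templates'
import { Exchange as ExchangeContract } from '../generated/templates/Exchange/Exchange'
import { Exchange } from '../generated/schema'
import { NewExchange } from '../generated/Factory/Factory'

export function handleNewExchange(event: NewExchange): void {
  // Begin indexing the newly created contract from this block onward
  ExchangeTemplate.create(event.params.exchange)

  // Read the contract's current state so data from before this block is represented
  let contract = ExchangeContract.bind(event.params.exchange)
  let exchange = new Exchange(event.params.exchange.toHex())
  let owner = contract.try_owner() // try_ avoids aborting the mapping if the call reverts
  if (!owner.reverted) {
    exchange.owner = owner.value
  }
  exchange.save()
}
```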
### Contexto de la fuente de datos @@ -930,7 +963,7 @@ dataSources: ``` > **Nota:** El bloque de creación del contrato se puede buscar rápidamente en Etherscan: -> +> > 1. Busca el contrato introduciendo su dirección en la barra de búsqueda. > 2. Haz clic en el hash de la transacción de creación en la sección `Contract Creator`. > 3. Carga la página de detalles de la transacción, donde encontrarás el bloque inicial de ese contrato. @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. ``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### Crear un nuevo handler para procesar archivos -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). Se puede acceder al CID del archivo como un string legible a través del `dataSource` de la siguiente manera: diff --git a/website/pages/es/developing/developer-faqs.mdx b/website/pages/es/developing/developer-faqs.mdx index 55357e42a4ef..9441ff03e8da 100644 --- a/website/pages/es/developing/developer-faqs.mdx +++ b/website/pages/es/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: Preguntas Frecuentes de los Desarrolladores --- -## 1. ¿Qué es un subgrafo? +This page summarizes some of the most common questions for developers building on The Graph. -Un subgrafo es una API personalizada construida sobre datos de blockchain. Los subgrafos se consultan mediante el lenguaje de consulta GraphQL y son deployados en un Graph Node usando Graph CLI. 
Una vez deployados y publicados en la red descentralizada de The Graph, los indexadores procesan los subgrafos y los ponen a disposición de los consumidores de subgrafos para que los consulten. +## Subgraph Related -## 2. ¿Puedo eliminar mi subgrafo? +### 1. ¿Qué es un subgrafo? -No es posible eliminar los subgrafos una vez creados. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. ¿Puedo cambiar el nombre de mi subgrafo? +### 2. What is the first step to create a subgraph? -No. Una vez que se crea un subgrafo, no se puede cambiar el nombre. Asegúrate de pensar en esto cuidadosamente antes de crear tu subgrafo para que sea fácil de buscar e identificar por otras dApps. +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. ¿Puedo cambiar la cuenta de GitHub asociada con mi subgrafo? +### 3. Can I still create a subgraph if my smart contracts don't have events? -No. Una vez que se crea un subgrafo, la cuenta de GitHub asociada no puede ser modificada. Asegúrate de pensarlo bien antes de crear tu subgrafo. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. ¿Todavía puedo crear un subgrafo si mis contratos inteligentes no tienen eventos? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -Es muy recomendable que estructures tus contratos inteligentes para tener eventos asociados a los datos que te interesa consultar. Los handlers de eventos en el subgrafo son activados por los eventos del contrato y son, con mucho, la forma más rápida de recuperar datos útiles. +### 4. ¿Puedo cambiar la cuenta de GitHub asociada con mi subgrafo? -Si los contratos con los que estás trabajando no contienen eventos, tu subgrafo puede utilizar handlers de llamadas y bloques para activar la indexación. Aunque esto no se recomienda, ya que el rendimiento será significativamente más lento. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. ¿Es posible deployar un subgrafo con el mismo nombre para varias redes? +### 5. How do I update a subgraph on mainnet? -Necesitarás nombres separados para varias redes. Si bien no puedes tener diferentes subgrafos con el mismo nombre, existen formas convenientes de tener una base de código única para varias redes. Encuentra más sobre esto en nuestra documentación: [Redeploying a Subgraph](/implementación/implementación-de-un-subgráfico-a-alojamiento#reimplementación-de-un-subgráfico) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. 
¿En qué se diferencian las plantillas de las fuentes de datos? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Las plantillas te permiten crear fuentes de datos sobre la marcha, mientras tu subgrafo está indexando. Puede darse el caso de que tu contrato genere nuevos contratos a medida que la gente interactúe con él, y dado que conoces la forma de esos contratos (ABI, eventos, etc.) por adelantado puedes definir cómo quieres indexarlos en una plantilla y cuando se generen tu subgrafo creará una fuente de datos dinámica proporcionando la dirección del contrato. +Tienes que volver a realizar el deploy del subgrafo, pero si el ID del subgrafo (hash IPFS) no cambia, no tendrá que sincronizarse desde el principio. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +Dentro de un subgrafo, los eventos se procesan siempre en el orden en que aparecen en los bloques, independientemente de que sea a través de múltiples contratos o no. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Consulta la sección "Instantiating a data source template" en: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates). -## 8. ¿Cómo puedo asegurarme de que estoy utilizando la última versión de graph-node para mis deploys locales? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -Puedes ejecutar el siguiente comando: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**NOTA:** docker/docker-compose siempre utilizará la versión de graph-node que se sacó la primera vez que se ejecutó, por lo que es importante hacer esto para asegurarse de que estás al día con la última versión de graph-node. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. ¿Cómo llamo a una función de contrato o accedo a una variable de estado pública desde mis mapeos de subgrafos? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. 
Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. ¿Es posible configurar un subgrafo usando `graph init` de `graph-cli` con dos contratos? ¿O debo agregar manualmente otra fuente de datos en `subgraph.yaml` después de ejecutar `graph init`? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +Puedes ejecutar el siguiente comando: -## 11. Quiero contribuir o agregar un problema de GitHub. ¿Dónde puedo encontrar los repositorios de código abierto? +```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. ¿Cuál es la forma recomendada para crear ids "autogeneradas" para una entidad al manejar eventos? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? Si sólo se crea una entidad durante el evento y si no hay nada mejor disponible, entonces el hash de la transacción + el índice del registro serían únicos. Puedes ofuscar esto convirtiendo eso en Bytes y luego pasándolo por `crypto.keccak256` pero esto no lo hará más único. -## Cuando se escuchan varios contratos, ¿es posible seleccionar el orden de los contratos para escuchar los eventos? +### 15. Can I delete my subgraph? -Dentro de un subgrafo, los eventos se procesan siempre en el orden en que aparecen en los bloques, independientemente de que sea a través de múltiples contratos o no. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +Puedes encontrar la lista de redes admitidas [aquí](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Sí. Puedes hacerlo importando `graph-ts` como en el ejemplo siguiente: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. 
¿Puedo importar ethers.js u otras bibliotecas JS en mis mappings de subgrafos? - -Actualmente no, ya que los mapeos se escriben en AssemblyScript. Una posible solución alternativa a esto es almacenar los datos en bruto en entidades y realizar la lógica que requiere las bibliotecas JS en el cliente. +## Indexing & Querying Related -## 17. ¿Es posible especificar en qué bloque comenzar a indexar? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. ¿Hay algunos consejos para aumentar el rendimiento de la indexación? Mi subgrafo está tardando mucho en sincronizarse +### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync -Sí, debes echar un vistazo a la función de bloque de inicio opcional para comenzar a indexar desde el bloque en el que se implementó el contrato: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. ¿Hay alguna forma de consultar el subgrafo directamente para determinar el último número de bloque que ha indexado? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? ¡Sí es posible! Prueba el siguiente comando, sustituyendo "organization/subgraphName" por la organización bajo la que se publica y el nombre de tu subgrafo: @@ -102,44 +121,27 @@ Sí, debes echar un vistazo a la función de bloque de inicio opcional para come curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. ¿Qué redes son compatibles con The Graph? - -Puedes encontrar la lista de redes admitidas [aquí](/developing/supported-networks). - -## 21. ¿Es posible duplicar un subgrafo en otra cuenta o endpoint sin volver a realizar el deploy? - -Tienes que volver a realizar el deploy del subgrafo, pero si el ID del subgrafo (hash IPFS) no cambia, no tendrá que sincronizarse desde el principio. - -## 22. ¿Es posible usar Apollo Federation encima de graph-node? +### 22. Is there a limit to how many objects The Graph can return per query? -Federation aún no es compatible, aunque queremos apoyarla en el futuro. Por el momento, algo que se puede hacer es utilizar el stitching de esquemas, ya sea en el cliente o a través de un servicio proxy. - -## 23. ¿Existe un límite en el número de objetos que The Graph puede devolver por consulta? - -Por defecto, las respuestas a las consultas están limitadas a 100 elementos por colección. Si quieres recibir más, puedes llegar hasta 1000 elementos por colección y más allá, puedes paginar con: +By default, query responses are limited to 100 items per collection. 
If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -## 24. Si mi interfaz de dapp usa The Graph para realizar consultas, ¿debo escribir mi clave de consulta directamente en la interfaz? ¿Qué pasa si pagamos tarifas de consulta para los usuarios? ¿Los usuarios malintencionados harán que nuestras tarifas de consulta sean muy altas? - -Actualmente, el enfoque recomendado para una dapp es añadir la clave al frontend y exponerla a los usuarios finales. Dicho esto, puedes limitar esa clave a un nombre de host, como _yourdapp.io_ y subgrafo. La gateway se ejecuta actualmente por Edge & Node. Parte de la responsabilidad de un gateway es monitorear el comportamiento abusivo y bloquear el tráfico de clientes maliciosos. - -## 25. Where do I go to find my current subgraph on the hosted service? - -Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -The Graph will never charge for the hosted service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The hosted service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. 
+- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/es/developing/graph-ts/api.mdx b/website/pages/es/developing/graph-ts/api.mdx index b2309f29cc83..6660507ae768 100644 --- a/website/pages/es/developing/graph-ts/api.mdx +++ b/website/pages/es/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -Esta página documenta qué API integradas se pueden usar al escribir mappings de subgrafos. Hay dos tipos de API disponibles listas para usar: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## Referencias de API @@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. -| Version | Notas del lanzamiento | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Notas del lanzamiento | +| :-----: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Tipos Incorporados @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Cada entidad debe tener un identificador único para evitar colisiones con otras entidades. Es bastante común que los parámetros de los eventos incluyan un identificador único que pueda ser utilizado. Nota: El uso del hash de la transacción como ID asume que ningún otro evento en la misma transacción crea entidades con este hash como ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Carga de entidades desde el store @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Buscando entidades creadas dentro de un bloque As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -La API de almacenamiento facilita la recuperación de entidades que se crearon o actualizaron en el bloque actual. Una situación típica para esto es cuando un handler crea una Transacción a partir de algún evento en la cadena, y un handler posterior quiere acceder a esta transacción si existe. En el caso de que la transacción no exista, el subgrafo tendrá que ir a la base de datos solo para averiguar que la entidad no existe; si el autor del subgrafo ya sabe que la entidad debe haber sido creada en el mismo bloque, el uso de loadInBlock evita este viaje de ida y vuelta a la base de datos. Para algunos subgrafos, estas búsquedas perdidas pueden contribuir significativamente al tiempo de indexación. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. 
If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Cualquier otro contrato que forme parte del subgrafo puede ser importado desde e #### Tratamiento de las Llamadas Revertidas -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Ten en cuenta que un nodo Graph conectado a un cliente Geth o Infura puede no detectar todas las reversiones, si confías en esto te recomendamos que utilices un nodo Graph conectado a un cliente Parity. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Codificación/Descodificación ABI diff --git a/website/pages/es/developing/supported-networks.mdx b/website/pages/es/developing/supported-networks.mdx index dc663b17b3f1..86c379d637f5 100644 --- a/website/pages/es/developing/supported-networks.mdx +++ b/website/pages/es/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/es/developing/unit-testing-framework.mdx b/website/pages/es/developing/unit-testing-framework.mdx index fbd2ab58df4b..7cf2df0d754a 100644 --- a/website/pages/es/developing/unit-testing-framework.mdx +++ b/website/pages/es/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ La salida del log incluye la duración de la ejecución de la prueba. 
Aquí hay > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -Esto significa que has utilizado `console.log` en tu código, que no es compatible con AssemblyScript. Considera usar la [API de registro](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) La falta de coincidencia en los argumentos se debe a la falta de coincidencia en `graph-ts` y `matchstick-as`. La mejor manera de solucionar problemas como este es actualizar todo a la última versión publicada. diff --git a/website/pages/es/glossary.mdx b/website/pages/es/glossary.mdx index adabc2b4b467..2390d01b7fa8 100644 --- a/website/pages/es/glossary.mdx +++ b/website/pages/es/glossary.mdx @@ -10,11 +10,9 @@ title: Glosario - **Endpoint**: Una URL que se puede utilizar para consultar un subgrafo. El endpoint de prueba para Subgraph Studio es `https://api.studio.thegraph.com/query///` y el endpoint de Graph Explorer es `https://gateway.thegraph.com/api//subgraphs/id/`. El endpoint de Graph Explorer se utiliza para consultar subgrafos en la red descentralizada de The Graph. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **Indexadores (Indexers)**: Participantes de la red que ejecutan nodos de indexación para indexar datos de la blockchain y servir consultas GraphQL. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Flujos de ingresos de los indexadores (Indexer Revenue Streams)**: Los Indexadores son recompensados en GRT con dos componentes: reembolsos de tarifas de consulta y recompensas de indexación. @@ -24,17 +22,17 @@ title: Glosario - **Stake propio del Indexador (Indexer's Self Stake)**: La cantidad de GRT que los Indexadores depositan en stake para participar en la red descentralizada. El mínimo es de 100.000 GRT, y no hay límite superior. 
-- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegadores (Delegators)**: Participantes de la red que poseen GRT y delegan su GRT en Indexadores. Esto permite a los Indexadores aumentar su stake en los subgrafos de la red. A cambio, los Delegadores reciben una parte de las recompensas de indexación que reciben los Indexadores por procesar los subgrafos. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Impuesto a la Delegación (Delegation Tax)**: Una tasa del 0,5% que pagan los Delegadores cuando delegan GRT en los Indexadores. El GRT utilizado para pagar la tasa se quema. -- **Curadores (Curators)**: Participantes de la red que identifican subgrafos de alta calidad y los "curan" (es decir, señalan GRT sobre ellos) a cambio de cuotas de curación. Cuando los Indexadores reclaman tarifas de consulta sobre un subgrafo, el 10% se distribuye entre los Curadores de ese subgrafo. Los Indexadores obtienen recompensas de indexación proporcionales a la señal en un subgrafo. Vemos una correlación entre la cantidad de GRT señalada y el número de Indexadores que indexan un subgrafo. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Impuesto a la Curación (Curation Tax)**: Una tasa del 1% pagada por los Curadores cuando señalan GRT en los subgrafos. El GRT utilizado para pagar la tasa se quema. -- **Consumidor de Subgrafos (Subgraph Consumer)**: Cualquier aplicación o usuario que consulte un subgrafo. +- **Data Consumer**: Any application or user that queries a subgraph. - **Developer de subgrafos (Subgraph developer)**: Developer que construye y realiza el deploy de un subgrafo en la red descentralizada de The Graph. @@ -46,11 +44,11 @@ title: Glosario 1. **Activa (Active)**: Una allocation se considera activa cuando se crea on-chain. Esto se llama abrir una allocation, e indica a la red que el Indexador está indexando activamente y sirviendo consultas para un subgrafo en particular. Las allocations activas acumulan recompensas de indexación proporcionales a la señal del subgrafo y a la cantidad de GRT asignada. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. 
When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **Subgraph Studio**: Una potente aplicación para crear, deployar y publicar subgrafos. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: Glosario - **GRT**: El token de utilidad de trabajo de The Graph. GRT ofrece incentivos económicos a los participantes en la red por contribuir a ella. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node es el componente que indexa los subgrafos, y hace que los datos resultantes estén disponibles para su consulta a través de una API GraphQL. 
Como tal, es fundamental para el stack del Indexador, y el correcto funcionamiento de Graph Node es crucial para ejecutar un Indexador con éxito. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Agente indexador (Indexer Agent)**: El agente del Indexador forma parte del stack del Indexador. Facilita las interacciones on-chain del Indexador, incluido el registro en la red, la gestión de deploys de subgrafos en su(s) Graph Node y la gestión de allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **Cliente The Graph (The Graph Client)**: Una biblioteca para construir dapps basadas en GraphQL de forma descentralizada. @@ -78,10 +76,6 @@ title: Glosario - **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. -- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network. - -- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. diff --git a/website/pages/es/index.json b/website/pages/es/index.json index d3513ef6672c..7abab377ea71 100644 --- a/website/pages/es/index.json +++ b/website/pages/es/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "Crear un Subgrafo", "description": "Utiliza Studio para crear subgrafos" - }, - "migrateFromHostedService": { - "title": "Upgrade from the hosted service", - "description": "Upgrading subgraphs to The Graph Network" } }, "networkRoles": { diff --git a/website/pages/es/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/es/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..89160ad474ed --- /dev/null +++ b/website/pages/es/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## Transferencia de propiedad de un subgrafo + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. 
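Because control of a subgraph is simply ownership of a standard ERC721 token, handing it to another account (for example the multi-sig mentioned above) is an ordinary NFT transfer. The Subgraph Studio flow described below is the usual way to do this; purely as an illustration, a minimal ethers.js sketch, run from a script or frontend rather than from a subgraph mapping, and with the RPC URL, private key, NFT contract address, and token ID all placeholders you would supply yourself, could look like this:

```typescript
import { ethers } from 'ethers'

// Minimal ERC721 fragment; the subgraph NFT follows the standard interface
const ERC721_ABI = ['function transferFrom(address from, address to, uint256 tokenId)']

async function transferSubgraphNft(
  nftContract: string, // placeholder: the subgraph NFT (ERC721) contract address
  tokenId: bigint, // placeholder: the token ID minted for your subgraph
  newOwner: string // e.g. a multi-sig address
): Promise<void> {
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL)
  const signer = new ethers.Wallet(process.env.PRIVATE_KEY!, provider)
  const nft = new ethers.Contract(nftContract, ERC721_ABI, signer)

  // The current owner transfers the NFT, and with it control of the subgraph
  const tx = await nft.transferFrom(await signer.getAddress(), newOwner, tokenId)
  await tx.wait()
}
```

Transferring through the Subgraph Studio UI or an NFT marketplace such as OpenSea, as shown below, achieves the same result without writing any code.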
+ +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-addres +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2. Choose the address that you would like to transfer the subgraph to: + + ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) + +Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: + +![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) + +## Deprecating a subgraph + +Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. + +### Step-by-Step + +To deprecate your subgraph, do the following: + +1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). +2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. +3. Your subgraph will no longer appear in searches on Graph Explorer. + +**Please note the following:** + +- The owner's wallet should call the `deprecateSubgraph` function. +- Los Curadores ya no podrán señalar en el subgrafo. +- Curators that already signaled on the subgraph can withdraw their signal at an average share price. +- Deprecated subgraphs will show an error message. + +> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/es/mips-faqs.mdx b/website/pages/es/mips-faqs.mdx index e0a60ea776d5..71dfb6ba0aaf 100644 --- a/website/pages/es/mips-faqs.mdx +++ b/website/pages/es/mips-faqs.mdx @@ -6,10 +6,6 @@ title: Preguntas Frecuentes sobre MIPs > Note: the MIPs program is closed as of May 2023. Thank you to all the Indexers who participated! -It's an exciting time to be participating in The Graph ecosystem! During [Graph Day 2022](https://thegraph.com/graph-day/2022/) Yaniv Tal announced the [sunsetting of the hosted service](https://thegraph.com/blog/sunsetting-hosted-service/), a moment The Graph ecosystem has been working towards for many years. - -To support the sunsetting of the hosted service and the migration of all of it's activity to the decentralized network, The Graph Foundation has announced the [Migration Infrastructure Providers (MIPs) program](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program). - The MIPs program is an incentivization program for Indexers to support them with resources to index chains beyond Ethereum mainnet and help The Graph protocol expand the decentralized network into a multi-chain infrastructure layer. The MIPs program has allocated 0.75% of the GRT supply (75M GRT), with 0.5% to reward Indexers who contribute to bootstrapping the network and 0.25% allocated to Network Grants for subgraph developers using multi-chain subgraphs. 
diff --git a/website/pages/es/network/benefits.mdx b/website/pages/es/network/benefits.mdx index f3740f62ffea..13c097cfc319 100644 --- a/website/pages/es/network/benefits.mdx +++ b/website/pages/es/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Comparación de costos | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Costo mensual del servidor\* | $350 por mes | $0 | -| Costos de consulta | $0+ | $0 per month | -| Tiempo de ingeniería | $400 por mes | Ninguno, integrado en la red con Indexadores distribuidos globalmente | -| Consultas por mes | Limitado a capacidades de infraestructura | 100,000 (Free Plan) | -| Costo por consulta | $0 | $0 | -| Infraestructura | Centralizado | Descentralizado | -| Redundancia geográfica | $750+ por nodo adicional | Incluido | -| Tiempo de actividad | Varía | 99.9%+ | -| Costos mensuales totales | $750+ | $0 | +| Comparación de costos | Self Hosted | The Graph Network | +|:--------------------------------:|:-----------------------------------------:|:---------------------------------------------------------------------:| +| Costo mensual del servidor\* | $350 por mes | $0 | +| Costos de consulta | $0+ | $0 per month | +| Tiempo de ingeniería | $400 por mes | Ninguno, integrado en la red con Indexadores distribuidos globalmente | +| Consultas por mes | Limitado a capacidades de infraestructura | 100,000 (Free Plan) | +| Costo por consulta | $0 | $0 | +| Infraestructura | Centralizado | Descentralizado | +| Redundancia geográfica | $750+ por nodo adicional | Incluido | +| Tiempo de actividad | Varía | 99.9%+ | +| Costos mensuales totales | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Comparación de costos | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Costo mensual del servidor\* | $350 por mes | $0 | -| Costos de consulta | $500 por mes | $120 per month | -| Tiempo de ingeniería | $800 por mes | Ninguno, integrado en la red con Indexadores distribuidos globalmente | -| Consultas por mes | Limitado a capacidades de infraestructura | ~3,000,000 | -| Costo por consulta | $0 | $0.00004 | -| Infraestructura | Centralizado | Descentralizado | -| Gastos de ingeniería | $200 por hora | Incluido | -| Redundancia geográfica | $1,200 en costos totales por nodo adicional | Incluido | -| Tiempo de actividad | Varía | 99.9%+ | -| Costos mensuales totales | $1,650+ | $120 | +| Comparación de costos | Self Hosted | The Graph Network | +|:--------------------------------:|:-------------------------------------------:|:---------------------------------------------------------------------:| +| Costo mensual del servidor\* | $350 por mes | $0 | +| Costos de consulta | $500 por mes | $120 per month | +| Tiempo de ingeniería | $800 por mes | Ninguno, integrado en la red con Indexadores distribuidos globalmente | +| Consultas por mes | Limitado a capacidades de infraestructura | ~3,000,000 | +| Costo por consulta | $0 | $0.00004 | +| Infraestructura | Centralizado | Descentralizado | +| Gastos de ingeniería | $200 por hora | Incluido | +| Redundancia geográfica | $1,200 en costos totales por nodo adicional | Incluido | +| Tiempo de actividad | Varía | 99.9%+ | +| Costos mensuales totales | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Comparación de costos | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Costo mensual del servidor\* | $1100 por mes, por nodo | $0 | -| Costos de 
consulta | $4000 | $1,200 per month | -| Número de nodos necesarios | 10 | No aplica | -| Tiempo de ingeniería | $6,000 o más por mes | Ninguno, integrado en la red con Indexadores distribuidos globalmente | -| Consultas por mes | Limitado a capacidades de infraestructura | ~30,000,000 | -| Costo por consulta | $0 | $0.00004 | -| Infraestructura | Centralizado | Descentralizado | -| Redundancia geográfica | $1,200 en costos totales por nodo adicional | Incluido | -| Tiempo de actividad | Varía | 99.9%+ | -| Costos mensuales totales | $11,000+ | $1,200 | +| Comparación de costos | Self Hosted | The Graph Network | +|:--------------------------------:|:-------------------------------------------:|:---------------------------------------------------------------------:| +| Costo mensual del servidor\* | $1100 por mes, por nodo | $0 | +| Costos de consulta | $4000 | $1,200 per month | +| Número de nodos necesarios | 10 | No aplica | +| Tiempo de ingeniería | $6,000 o más por mes | Ninguno, integrado en la red con Indexadores distribuidos globalmente | +| Consultas por mes | Limitado a capacidades de infraestructura | ~30,000,000 | +| Costo por consulta | $0 | $0.00004 | +| Infraestructura | Centralizado | Descentralizado | +| Redundancia geográfica | $1,200 en costos totales por nodo adicional | Incluido | +| Tiempo de actividad | Varía | 99.9%+ | +| Costos mensuales totales | $11,000+ | $1,200 | \*incluidos los costos de copia de seguridad: $50-$100 por mes diff --git a/website/pages/es/network/curating.mdx b/website/pages/es/network/curating.mdx index e8cdc12ea206..6ef961eefc88 100644 --- a/website/pages/es/network/curating.mdx +++ b/website/pages/es/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. 
If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Señalar una versión específica es especialmente útil cuando un subgrafo es u Hacer que tu señal migre automáticamente a la más reciente compilación de producción puede ser valioso para asegurarse de seguir acumulando tarifas de consulta. Cada vez que curas, se incurre en un impuesto de curación del 1%. También pagarás un impuesto de curación del 0,5% en cada migración. Se desaconseja a los desarrolladores de Subgrafos que publiquen con frecuencia nuevas versiones - tienen que pagar un impuesto de curación del 0,5% en todas las acciones de curación auto-migradas. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Riesgos 1. El mercado de consultas es inherentemente joven en The Graph y existe el riesgo de que su APY (Rentabilidad anualizada) sea más bajo de lo esperado debido a la dinámica del mercado que recién está empezando. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. Un subgrafo puede fallar debido a un error. Un subgrafo fallido no acumula tarifas de consulta. Como resultado, tendrás que esperar hasta que el desarrollador corrija el error e implemente una nueva versión. 
- Si estás suscrito a la versión más reciente de un subgrafo, tus acciones se migrarán automáticamente a esa nueva versión. Esto incurrirá un impuesto de curación del 0.5%. @@ -65,7 +65,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th Encontrar subgrafos de alta calidad es una tarea compleja, pero se puede abordar de muchas formas diferentes. Como Curador, quieres buscar subgrafos confiables que impulsen el volumen de consultas. Un subgrafo confiable puede ser valioso si es completo, preciso y respalda las necesidades de dicha dApp. Es posible que un subgrafo con una arquitectura deficiente deba revisarse o volver a publicarse, y también puede terminar fallando. Es fundamental que los Curadores revisen la arquitectura o el código de un subgrafo para evaluar si un subgrafo es valioso. Como resultado: -- Los curadores pueden usar su conocimiento de una red para intentar predecir cómo un subgrafo puede generar un volumen de consultas mayor o menor a largo plazo +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. What’s the cost of updating a subgraph? @@ -78,50 +78,14 @@ It’s suggested that you don’t update your subgraphs too frequently. See the ### 5. ¿Puedo vender mis acciones de curación? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. 
This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Bonding Curve 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Precio por acciones](/img/price-per-share.png) - -Como resultado, el precio aumenta linealmente, lo que significa que con el tiempo resultará más caro comprar una participación. A continuación, se muestra un ejemplo de lo que queremos decir; consulta la bonding curve a continuación: - -![Bonding curve](/img/bonding-curve.png) - -Imagina que tenemos dos curadores que acuñan acciones para un subgrafo: - -- El Curador A es el primero en señalar en el subgrafo. Al agregar 120.000 GRT en la curva, puede acuñar 2000 participaciones. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Dado que ambos curadores poseen la mitad participativa de dicha curación, recibirían una cantidad igual en las recompensas por ser curador. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- El curador restante recibiría todas las recompensas en ese subgrafo. Si quemaran sus participaciones a fin de retirar sus GRT, recibirían 120.000 GRT. -- **TLDR (en resumen):** La valoración de GRT de las acciones de curación viene determinada por la bonding curva y puede ser volátil. Existe la posibilidad de incurrir grandes pérdidas. Señalar temprano significa que pones menos GRT por cada acción. Por extensión, esto significa que se ganan más derechos de curador por GRT que los curadores posteriores por el mismo subgrafo. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -En el caso de The Graph, se aprovecha [la implementación de una fórmula por parte de Bancor para la bonding curve](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA). - ¿Sigues confundido? Te invitamos a echarle un vistazo a nuestra guía en un vídeo que aborda todo sobre la curación: diff --git a/website/pages/es/network/delegating.mdx b/website/pages/es/network/delegating.mdx index 2d1b5ab66ee3..90d377dbfebb 100644 --- a/website/pages/es/network/delegating.mdx +++ b/website/pages/es/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegar --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. 
Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Guía del Delegador -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,15 +34,19 @@ A continuación se enumeran los principales riesgos de ser un Delegador en el pr Los Delegadores no pueden ser recortados por mal comportamiento, pero existe un impuesto sobre los Delegadores para desincentivar la toma de malas decisiones que puedan perjudicar la integridad de la red. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### El período de unbonding (desvinculación) de la delegación Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? 
+ +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
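To make the 0.5% delegation tax mentioned above concrete, here is a small sketch of the payback calculation. The delegated amount and the reward rate below are hypothetical example values, not protocol constants; only the 0.5% tax figure comes from this page.

```typescript
// Hypothetical figures for illustration only — the 0.5% delegation tax is the
// documented value, but the reward rate is an assumed example, not a protocol constant.
const delegatedGrt = 1_000;           // GRT you intend to delegate
const delegationTaxRate = 0.005;      // 0.5% burned on every delegation
const assumedAnnualRewardRate = 0.08; // example effective return after the Indexer's cuts

const taxBurned = delegatedGrt * delegationTaxRate;   // 5 GRT
const stakeAfterTax = delegatedGrt - taxBurned;       // 995 GRT actually delegated
const dailyRewards = (stakeAfterTax * assumedAnnualRewardRate) / 365;
const daysToRecoverTax = taxBurned / dailyRewards;

console.log(`Tax burned: ${taxBurned} GRT`);
console.log(`Days to earn the tax back: ${daysToRecoverTax.toFixed(1)}`);
```

When weighing a move between Indexers, remember to add the 28-day unbonding period on top of whatever payback time you calculate.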
![Delegation unbonding](/img/Delegation-Unbonding.png) _Ten en cuenta la tasa del 0,5% en la UI de la Delegación, así @@ -41,9 +55,13 @@ Another thing to consider is how to choose an Indexer wisely. If you choose an I ### Elige un Indexador fiable, que pague recompensas justas a sus Delegadores -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *El Indexador de arriba está dando a los Delegadores el 90% de @@ -51,38 +69,52 @@ Indexing Reward Cut - The indexing reward cut is the portion of the rewards that Delegadores*
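As a rough, simplified sketch of how the Indexing Reward Cut translates into what a Delegator actually receives (all amounts below are hypothetical, and the pool split is a simplification of the reward formula shown further down this page):

```typescript
// Hypothetical example of how an indexing reward cut splits rewards.
// All amounts are made up for illustration; the precise calculation is given
// by the reward formula later on this page.
const indexingRewardsForAllocation = 10_000; // GRT earned by the Indexer's allocations
const indexingRewardCut = 0.8;               // Indexer keeps 80%, Delegators share 20%

const delegatorPoolRewards = indexingRewardsForAllocation * (1 - indexingRewardCut);

// Your slice of the Delegator pool is proportional to your share of the delegated stake.
const totalDelegatedStake = 2_000_000;
const yourDelegatedStake = 10_000;
const yourRewards = delegatorPoolRewards * (yourDelegatedStake / totalDelegatedStake);

console.log(`Delegator pool receives: ${delegatorPoolRewards} GRT`); // 2000 GRT
console.log(`Your share: ${yourRewards} GRT`);                       // 10 GRT
```

In practice the split is computed from delegation pool shares, so treat this only as an illustration of why both the reward cut and the size of the delegation pool matter.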
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant.
+- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects.
+
+As you can see, in order to choose the right Indexer, you must consider multiple things.

-As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
+- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently.
+- Many Indexers are very active in Discord and will be happy to answer your questions.
+- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.

-### Calculando el retorno esperado para los Delegadores
+## Calculating Delegators' Expected Return

-A Delegator must consider a lot of factors when determining the return. These include:
+A Delegator must consider the following factors to determine a return:

-- Un Delegador técnico también puede ver la capacidad de los Indexadores para usar los tokens que han sido delegados y la capacidad de disponibilidad a su favor. Si un Indexador no está asignando todos los tokens disponibles, no está obteniendo el beneficio máximo que podría obtener para sí mismo o para sus Delegadores.
-- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days.
+- Consider an Indexer's ability to use the Delegated tokens available to them.
+  - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators.
+- Pay attention to the first few days of delegating.
+  - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low.

### Siempre ten en cuenta la tarifa por consulta y el recorte de recompensas para el Indexador

-Como se ha descrito en las secciones anteriores, debes elegir un indexador que sea transparente y honesto a la hora de establecer su corte de tarifa de consulta y cortes de tarifa de indexación. Un Delegador también debe fijarse en el tiempo de enfriamiento de los parámetros para ver de cuánto tiempo disponen.
Una vez hecho esto, es bastante sencillo calcular la cantidad de recompensas que reciben los Delegadores. La fórmula es: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Delegación Imagen 3](/img/Delegation-Reward-Formula.png) ### Tener en cuenta el pool de delegación de cada Indexador -Otra cosa que tiene que tener en cuenta un Delegador es qué proporción del Pool de Delegación posee. Todas las recompensas de la delegación se reparten de forma equitativa, con un simple reequilibrio del pool determinado por la cantidad que el Delegador haya depositado en el pool. De este modo, el Delegador recibe una parte del pool: +Delegators should consider the proportion of the Delegation Pool they own. -![Fórmula para compartir](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Fórmula para compartir](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Considerar la capacidad de delegación -Otra cosa a tener en cuenta es la capacidad de delegación. Actualmente, el Ratio de Delegación está fijado en 16. Esto significa que si un Indexador ha stakeado 1.000.000 GRT, su Capacidad de Delegación es de 16.000.000 GRT de tokens delegados que puede utilizar en el protocolo. Cualquier token delegado que supere esta cantidad diluirá todas las recompensas del Delegador. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -90,16 +122,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### Error de "Transacción Pendiente" en MetaMask -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. 
When I try to delegate, my transaction in MetaMask appears as "Pending" or "Queued" for longer than expected. What should I do?
+
+At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts.
+
+#### Ejemplo

-At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts.
+Let's say you attempt to delegate with an insufficient gas fee relative to the current prices.

-For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.
+- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will not be processed until the initial transaction is mined because transactions for an address must be processed in order.
+- In such cases, these transactions can be cancelled in MetaMask, but the transaction attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.

-A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.
+A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.

-## Video guía de la interfaz de usuario de la red
+## Video Guide

-This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI.
+This video guide reviews this page while interacting with the UI.

diff --git a/website/pages/es/network/developing.mdx b/website/pages/es/network/developing.mdx
index 223472818228..f15294d9845b 100644
--- a/website/pages/es/network/developing.mdx
+++ b/website/pages/es/network/developing.mdx
@@ -2,52 +2,88 @@ title: Desarrollando
---

-Los desarrolladores representan el lado de la demanda del ecosistema The Graph. Los developers construyen subgrafos y los publican en The Graph Network. A continuación, consultan los subgrafos activos con GraphQL para potenciar sus aplicaciones.
+To start coding right away, go to [Developer Quick Start](/quick-start/).
+
+## Descripción
+
+As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue.
+
+On The Graph, you can:
+
+1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing subgraphs.
+
+### What is GraphQL?
+
+- [GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+
+### Developer Actions
+
+- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
+- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
+- Deploy, publish and signal your subgraphs within The Graph Network.
+
+## Subgraph Specifics
+
+### What are subgraphs?
+
+A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
+
+A subgraph primarily consists of the following files:
+
+- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest).
+- `schema.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema).
+- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates event data into the entities defined in your schema.
+
+Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/).

## Ciclo de vida de un Subgrafo

-Los subgrafos deployados en la red tienen un ciclo de vida definido.
+Here is a general overview of a subgraph’s lifecycle:

-### Construir a nivel local
+![Ciclo de vida de un Subgrafo](/img/subgraph-lifecycle.png)

-Al igual que con todo el desarrollo de subgrafos, se comienza con el desarrollo y prueba local. Los desarrolladores pueden utilizar la misma configuración local tanto si construyen para The Graph Network, el Servicio Alojado o un Graph Node local, aprovechando `graph-cli` y `graph-ts` para construir su subgrafo. Se anima a los desarrolladores a utilizar herramientas como [Matchstick](https://github.com/LimeChain/matchstick) para realizar pruebas unitarias y mejorar la solidez de sus subgrafos.
+### Construir a nivel local

-> Existen ciertas limitaciones en The Graph Network, en términos de características y soporte de red. Solo los subgrafos en [redes suportadas](/developing/supported-networks) obtienen recompensas de indexación, y los subgrafos que obtienen datos de IPFS tampoco son elegibles.
+Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs.

### Deploy to Subgraph Studio

-Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected.
-
-### Publicar a la red
+Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:

-Cuando el desarrollador está satisfecho con su subgrafo, puede publicarlo en The Graph Network.
Esta es una acción on-chain, que registra el subgrafo para que pueda ser descubierto por los Indexadores. Los subgrafos publicados tienen su correspondiente NFT, que es fácilmente transferible. El subgrafo publicado tiene metadatos asociados, que proporcionan a otros participantes de la red un contexto e información útiles. +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. -### Señalar para fomentar la indexación +### Publicar a la red -Es poco probable que los subgrafos publicados sean recogidos por los Indexadores sin la adición de la señal. La señal es GRT bloqueado asociado a un subgrafo determinado, que indica a los Indexadores que un subgrafo determinado recibirá un volumen de consultas, y también contribuye a las recompensas de indexación disponibles por procesarlo. Los desarrolladores de subgrafos generalmente añadirán una señal a su subgrafo para fomentar la indexación. Los Curadores de terceros también pueden señalar un subgrafo determinado, si consideran que el subgrafo puede generar un volumen de consultas. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Consultas & desarrollo de aplicaciones +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -Una vez que un subgrafo ha sido procesado por los Indexadores y está disponible para su consulta, los desarrolladores pueden empezar a utilizar el subgrafo en sus aplicaciones. Los desarrolladores consultan los subgrafos a través de una Gateway, que reenvía sus consultas a un Indexador que haya procesado el subgrafo, pagando las tarifas de consulta en GRT. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Updating Subgraphs +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. 
The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### Consultas & desarrollo de aplicaciones -Once the Subgraph Developer is ready to update, they can initiate a transaction to point their subgraph at the new version. Updating the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Deprecar un Subgrafo +Learn more about [querying subgraphs](/querying/querying-the-graph/). -En algún momento un developer puede decidir que ya no necesita un subgrafo publicado. En ese momento pueden deprecar el subgrafo, lo que devuelve cualquier GRT señalada a los Curadores. +### Updating Subgraphs -### Diversos roles de desarrollador +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Algunos desarrolladores participarán en el ciclo de vida completo de los subgrafos en la red, publicando, consultando e iterando sobre sus propios subgrafos. Algunos se centrarán en el desarrollo de subgrafos, creando APIs abiertas en las que otros puedan basarse. Otros pueden centrarse en la aplicación, consultando subgrafos deployados por otros. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Desarrolladores y economía de la red +### Deprecating & Transferring Subgraphs -Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is updated. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/es/network/explorer.mdx b/website/pages/es/network/explorer.mdx index b2f43cebf2a2..7f8dee22a2e7 100644 --- a/website/pages/es/network/explorer.mdx +++ b/website/pages/es/network/explorer.mdx @@ -2,21 +2,35 @@ title: Explorador de Graph --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. 
+
+## Video Guide
+
+For a general overview of Graph Explorer, check out the video below:
+

## Subgrafos

-First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name.
+Once you finish deploying and publishing your subgraph in Subgraph Studio, click on the "Subgraphs" tab at the top of the navigation bar to access the following:
+
+- Your own finished subgraphs
+- Subgraphs published by others
+- The exact subgraph you want (based on the date created, signal amount, or name).

![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png)

-Cuando hagas clic en un subgrafo, podrás probar consultas en el playground y podrás aprovechar los detalles de la red para tomar decisiones informadas. También podrás señalar GRT en tu propio subgrafo o en los subgrafos de otros para que los indexadores sean conscientes de su importancia y calidad. Esto es fundamental porque señalar en un subgrafo incentiva su indexación, lo que significa que saldrá a la luz en la red para eventualmente entregar consultas.
+When you click into a subgraph, you will be able to do the following:
+
+- Test queries in the playground and be able to leverage network details to make informed decisions.
+- Signal GRT on your own subgraph or the subgraphs of others to make Indexers aware of its importance and quality.
+- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.

![Imagen de Explorer 2](/img/Subgraph-Details.png)

-En la página de cada subgrafo, aparecen varios detalles. Entre ellos se incluyen:
+On each subgraph’s dedicated page, you can do the following:

- Señalar/dejar de señalar un subgrafo
- Ver más detalles como gráficos, ID de implementación actual y otros metadatos
@@ -31,26 +45,32 @@ En la página de cada subgrafo, aparecen varios detalles. Entre ellos se incluye

## Participantes

-Dentro de esta pestaña, tendras una mirada general de todas las personas que están participando en las actividades de la red, como los Indexadores, los Delegadores y los Curadores. A continuación, revisaremos en profundidad lo que significa cada pestaña para ti.
+This section provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.

### 1. Indexadores

![Imagen de Explorer 4](/img/Indexer-Pane.png)

-Comencemos con los Indexadores. Los Indexadores son la columna vertebral del protocolo, ya que son los que stakean en los subgrafos, los indexan y proveen consultas a cualquiera que consuma subgrafos. En la tabla de Indexadores, podrás ver los parámetros de delegación de un Indexador, su participación, cuánto han stakeado en cada subgrafo y cuántos ingresos han obtenido por las tarifas de consulta y las recompensas de indexación. Profundizaremos un poco más a continuación:
+Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
-- Query Fee Cut: es el porcentaje de los reembolsos obtenidos por la tarifa de consulta que el Indexador conserva cuando se divide con los Delegadores -- Effective Reward Cut: es el recorte de recompensas por indexación que se aplica al pool de delegación. Si es negativo, significa que el Indexador está regalando parte de sus beneficios. Si es positivo, significa que el Indexador se queda con alguno de tus beneficios -- Cooldown Remaining: el tiempo restante que le permitirá al Indexador cambiar los parámetros de delegación. Los plazos de configuración son ajustados por los Indexadores cuando ellos actualizan sus parámetros de delegación -- Owned: esta es la participación (o el stake) depositado por el Indexador, la cual puede reducirse por su mal comportamiento -- Delegated: participación de los Delegadores que puede ser asignada por el Indexador, pero que no se puede recortar -- Allocated: es el stake que los indexadores están asignando activamente a los subgrafos que están indexando -- Available Delegation Capacity: la cantidad de participación delegada que los Indexadores aún pueden recibir antes de que se sobredeleguen +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity: la cantidad máxima de participación delegada que el Indexador puede aceptar de forma productiva. Un exceso de participación delegada no puede utilizarse para asignaciones o cálculos de recompensas. -- Query Fees: estas son las tarifas totales que los usuarios (clientes) han pagado por todas las consultas de un Indexador +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Indexer Rewards: este es el total de recompensas del Indexador obtenidas por el Indexador y sus Delegadores durante todo el tiempo que trabajaron en conjunto. Las recompensas de los Indexadores se pagan mediante la emisión de GRT. -Los Indexadores pueden ganar tanto comisiones de consulta como recompensas de indexación. Funcionalmente, esto ocurre cuando los participantes de la red delegan GRT a un Indexador. Esto permite a los Indexadores recibir tarifas de consulta y recompensas en función de sus parámetros de indexación. Los parámetros de indexación se establecen haciendo clic en la parte derecha de la tabla, o entrando en el perfil de un Indexador y haciendo clic en el botón "Delegar". +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. 
+
+- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button.

Para obtener más información sobre cómo convertirte en un Indexador, puedes consultar la [documentación oficial](/network/indexing) o [The Graph Academy Indexer Guides.](https://thegraph.academy/delegators/ eligiendo-indexadores/)
@@ -58,9 +78,13 @@ Para obtener más información sobre cómo convertirte en un Indexador, puedes c

### 2. Curadores

-Los Curadores analizan los subgrafos para identificar cuáles son los de mayor calidad. Una vez que un Curador ha encontrado un subgrafo potencialmente atractivo, puede curarlo señalando su bonding curve. De este modo, los Curadores hacen saber a los Indexadores qué subgrafos son de alta calidad y deben ser indexados.
+Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+
+- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
+  - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+  - The bonding curve incentivizes Curators to curate the highest quality data sources.

-Los Curadores pueden ser miembros de la comunidad, consumidores de datos o incluso developers de subgrafos que señalan en sus propios subgrafos depositando tokens GRT en una bonding curve. Al depositar GRT, los Curadores anclan sus participaciones como curadores de un subgrafo. Como resultado, los Curadores son elegibles para ganar una parte de las tarifas de consulta que genera el subgrafo que han señalado. La bonding curve incentiva a los Curadores a curar fuentes de datos de la más alta calidad. La tabla de Curador en esta sección te permitirá ver:
+In the Curator table listed below, you can see:

- La fecha en que el Curador comenzó a curar
- El número de GRT que se depositaron
@@ -68,34 +92,36 @@ Los Curadores pueden ser miembros de la comunidad, consumidores de datos o inclu

![Imagen de Explorer 6](/img/Curation-Overview.png)

-Si deseas obtener más información sobre el rol de Curador, puedes hacerlo visitando los siguientes enlaces de [The Graph Academy](https://thegraph.academy/curators/) o [documentación oficial.](/network/curating)
+If you want to learn more about the Curator role, you can do so by visiting the [official documentation](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/).

### 3. Delegadores

-Los Delegadores juegan un rol esencial en la seguridad y descentralización que conforman la red de The Graph. Participan en la red delegando (es decir, "stakeado") tokens GRT a uno o varios Indexadores. Sin Delegadores, es menos probable que los Indexadores obtengan recompensas y tarifas significativas. Por lo tanto, los Indexadores buscan atraer Delegadores ofreciéndoles una parte de las recompensas de indexación y las tarifas de consulta que ganan.
+Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple Indexers.
-
-Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)!
+- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees.
+- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
+- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)!

![Imagen de Explorer 7](/img/Delegation-Overview.png)

-La tabla de Delegadores te permitirá ver los Delegadores activos en la comunidad, así como las siguientes métricas:
+In the Delegators table, you can see the active Delegators in the community and important metrics:

- El número de Indexadores a los que delega este Delegador
- La delegación principal de un Delegador
- Las recompensas que han ido acumulando, pero que aún no han retirado del protocolo
- Las recompensas realizadas, es decir, las que ya retiraron del protocolo
- Cantidad total de GRT que tienen actualmente dentro del protocolo
-- La fecha en la que delegaron por última vez
+- The date they last delegated

-Si deseas obtener más información sobre cómo convertirte en Delegador, ¡no busques más! Todo lo que tienes que hacer es dirigirte a la [documentación oficial](/network/delegating) o [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
+If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).

## Red

-En la sección red, verás los KPI globales, así como la capacidad de cambiar a una base por ciclo y analizar las métricas de la red con más detalle. Estos detalles te darán una idea de cómo se está desempeñando la red a lo largo del tiempo.
+In this section, you can see global KPIs, switch to a per-epoch basis, and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.

### Descripción

-The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like:
+The overview section has all the current network metrics as well as some cumulative metrics over time:

- La cantidad total de stake que circula en estos momentos
- La participación que se divide entre los Indexadores y sus Delegadores
@@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat
- Parámetros del protocolo como las recompensas de curación, tasa de inflación y más
- Recompensas y tarifas del ciclo actual

-Algunos detalles clave que vale la pena mencionar:
+A few key details to note:

-- **Las tarifas de consulta representan las tarifas generadas por los consumidores**, y que pueden ser reclamadas (o no) por los Indexadores después de un período de al menos 7 ciclos (ver más abajo) después de que se han cerrado las asignaciones hacia los subgrafos y los datos que servían han sido validados por los consumidores.
-- **Las recompensas de indexación representan la cantidad de recompensas que los Indexadores reclamaron por la emisión de la red durante el ciclo.** Aunque la emisión del protocolo es fija, las recompensas solo se anclan una vez que los Indexadores cierran sus asignaciones hacia los subgrafos que han indexado. Por lo tanto, el número de recompensas por ciclo suele variar (es decir, durante algunos ciclos, es posible que los Indexadores hayan cerrado colectivamente asignaciones que han estado abiertas durante muchos días).
+- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
+- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e., during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).

![Imagen de Explorer 8](/img/Network-Stats.png)

@@ -121,29 +147,34 @@ En la sección de Epochs, puedes analizar, por cada epoch, métricas como:

- El ciclo activo es aquel en la que los indexadores actualmente asignan su participación (staking) y cobran tarifas por consultas
- Los ciclos liquidados son aquellos en los que ya se han liquidado las recompensas y demás métricas. Esto significa que los Indexadores están sujetos a recortes si los consumidores abren disputas en su contra.
- Los ciclos de distribución son los ciclos en los que los canales correspondientes a los ciclos son establecidos y los Indexadores pueden reclamar sus reembolsos correspondientes a las tarifas de consulta.
-  - Los ciclos finalizados son los ciclos que no tienen reembolsos en cuanto a las tarifas de consulta, estos son reclamados por parte de los Indexadores, por lo que estos se consideran como finalizados.
+  - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers.

![Imagen de Explorer 9](/img/Epoch-Stats.png)

## Tu perfil de usuario

-Ahora que hemos hablado de las estadísticas de la red, pasemos a tu perfil personal. Tu perfil personal es el lugar donde puedes ver tu actividad personal dentro de la red, sin importar cómo estés participando en la red.
Tu crypto wallet actuará como tu perfil de usuario, y desde tu dashboard podrás ver lo siguiente: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Información general del perfil -Aquí es donde puedes ver las acciones actuales que realizaste. Aquí también podrás encontrar la información de tu perfil, la descripción y el sitio web (si agregaste uno). +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![Imagen de Explorer 10](/img/Profile-Overview.png) ### Pestaña de subgrafos -Si haces clic en la pestaña subgrafos, verás tus subgrafos publicados. Esto no incluirá ningún subgrafo implementado con la modalidad de CLI o con fines de prueba; los subgrafos solo aparecerán cuando se publiquen en la red descentralizada. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Imagen de Explorer 11](/img/Subgraphs-Overview.png) ### Pestaña de indexación -Si haces clic en la pestaña Indexación, encontrarás una tabla con todas las asignaciones activas e históricas hacia los subgrafos, así como gráficos que puedes analizar y ver tu desempeño anterior como Indexador. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Esta sección también incluirá detalles sobre las recompensas netas que obtienes como Indexador y las tarifas netas que recibes por cada consulta. Verás las siguientes métricas: @@ -158,7 +189,9 @@ Esta sección también incluirá detalles sobre las recompensas netas que obtien ### Pestaña de delegación -Los Delegadores son importantes para la red de The Graph. Un Delegador debe usar su conocimiento para elegir un Indexador que le proporcionará un retorno saludable y sostenible. Aquí puedes encontrar detalles de tus delegaciones activas e históricas, junto con las métricas de los Indexadores a los que delegaste en el pasado. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. En la primera mitad de la página, puedes ver tu gráfico de delegación, así como el gráfico de recompensas históricas. A la izquierda, puedes ver los KPI que reflejan tus métricas de delegación actuales. diff --git a/website/pages/es/network/indexing.mdx b/website/pages/es/network/indexing.mdx index a57c640869f9..d3e49ca3226e 100644 --- a/website/pages/es/network/indexing.mdx +++ b/website/pages/es/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Muchos de los paneles creados por la comunidad incluyen valores de recompensas pendientes y se pueden verificar fácilmente de forma manual siguiendo estos pasos: -1. Consulta el [ mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) para obtener los ID de todas las allocations activas: +1. 
Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -113,11 +113,11 @@ Los indexadores pueden diferenciarse aplicando técnicas avanzadas para tomar de - **Grande**: Preparado para indexar todos los subgrafos utilizados actualmente y atender solicitudes para el tráfico relacionado. | Configuración | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Pequeño | 4 | 8 | 1 | 4 | 16 | -| Estándar | 8 | 30 | 1 | 12 | 48 | -| Medio | 16 | 64 | 2 | 32 | 64 | -| Grande | 72 | 468 | 3,5 | 48 | 184 | +| ------------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| +| Pequeño | 4 | 8 | 1 | 4 | 16 | +| Estándar | 8 | 30 | 1 | 12 | 48 | +| Medio | 16 | 64 | 2 | 32 | 64 | +| Grande | 72 | 468 | 3,5 | 48 | 184 | ### ¿Qué precauciones básicas de seguridad debe tomar un Indexador? @@ -149,20 +149,20 @@ Nota: Para admitir el escalado ágil, se recomienda que las inquietudes de consu #### Graph Node -| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | -| --- | --- | --- | --- | --- | -| 8000 | Servidor HTTP GraphQL
(para consultas de subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(para suscripciones a subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(para administrar implementaciones) | / | --admin-port | - | -| 8030 | API de estado de indexación de subgrafos | /graphql | --index-node-port | - | -| 8040 | Métricas de Prometheus | /metrics | --metrics-port | - | +| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | +| ------ | -------------------------------------------------------------- | ---------------------------------------------------- | ----------------- | ------------------- | +| 8000 | Servidor HTTP GraphQL
(para consultas de subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(para suscripciones a subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(para administrar implementaciones) | / | --admin-port | - | +| 8030 | API de estado de indexación de subgrafos | /graphql | --index-node-port | - | +| 8040 | Métricas de Prometheus | /metrics | --metrics-port | - | #### Servicio de Indexador -| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | -| --- | --- | --- | --- | --- | -| 7600 | Servidor HTTP GraphQL
(para consultas de subgrafo pagadas) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Métricas de Prometheus | /metrics | --metrics-port | - | +| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | +| ------ | --------------------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | Servidor HTTP GraphQL
(para consultas de subgrafo pagadas) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Métricas de Prometheus | /metrics | --metrics-port | - | #### Agente Indexador @@ -545,7 +545,7 @@ La **CLI del Indexador** se conecta al agente Indexador, normalmente a través d - `graph indexer rules maybe [options] ` - Configura `thedecisionBasis` para un deploy en `rules`, de modo que el agente Indexador use las reglas de indexación para decidir si debe indexar este deploy. -- `graph indexer actions get [options] ` - Obtiene una o más acciones usando `all` o deja `action-id` vacío para obtener todas las acciones. Un argumento adicional `--status` se puede utilizar para imprimir todas las acciones de un determinado estado. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Acción de allocation en fila diff --git a/website/pages/es/network/overview.mdx b/website/pages/es/network/overview.mdx index bd83a749410e..79bf2e4c1921 100644 --- a/website/pages/es/network/overview.mdx +++ b/website/pages/es/network/overview.mdx @@ -2,14 +2,20 @@ title: Visión general de la red --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Descripción +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Economía de los tokens](/img/Network-roles@2x.png) -Para garantizar la seguridad económica de la red de The Graph y la integridad de los datos que se consultan, los participantes hacen stake y utilizan Graph Tokens ([GRT](/tokenomics)). GRT es un token de utilidad que se utiliza para asignar recursos en la red y es un estándar ERC-20. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/es/new-chain-integration.mdx b/website/pages/es/new-chain-integration.mdx index 652d1a26d51a..ea501d78d47a 100644 --- a/website/pages/es/new-chain-integration.mdx +++ b/website/pages/es/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integración de nuevas redes +title: New Chain Integration --- -El Graph Node actualmente puede indexar datos de los siguientes tipos de cadena: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, a través de EVM JSON-RPC y [Ethereum Firehose] (https://github.com/streamingfast/firehose-ethereum) -- NEAR, a través de [NEAR Firehose] (https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, a través de [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, a través de [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -Si estás interesado en alguna de esas cadenas, la integración es una cuestión de configuración y prueba de Graph Node. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -Si la cadena de bloques es equivalente a EVM y el cliente/nodo expone la EVM JSON-RPC API estándar, Graph Node debería poder indexar la nueva cadena. Para obtener más información, consulte [Testing an EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Probando un EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development. 
+For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Diferencia entre EVM JSON-RPC y Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, en una solicitud por lotes JSON-RPC +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -Si bien los dos son adecuados para subgrafos, siempre se requiere un Firehose para los desarrolladores que quieran compilar con [Substreams](substreams/), como crear [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). Además, Firehose permite velocidades de indexación mejoradas en comparación con JSON-RPC. +### 2. Firehose Integration -Los nuevos integradores de cadenas EVM también pueden considerar el enfoque basado en Firehose, dados los beneficios de los substreams y sus enormes capacidades de indexación en paralelo. El soporte de ambos permite a los desarrolladores elegir entre crear substreams o subgrafos para la nueva cadena. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **NOTA**: Una integración basada en Firehose para cadenas EVM aún requerirá que los indexadores ejecuten el nodo RPC de archivo de la cadena para indexar correctamente los subgrafos. Esto se debe a la incapacidad de Firehose para proporcionar un estado de contrato inteligente al que normalmente se puede acceder mediante el método RPC `eth_call`. (Vale la pena recordar que eth_calls [no es una buena práctica para desarrolladores] (https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. 
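Before wiring a chain into Graph Node, it can help to sanity-check that the RPC endpoint actually answers the JSON-RPC methods listed earlier in this file. A minimal sketch using `curl` (the `$RPC_URL` variable is a placeholder, not something defined in this diff):

```sh
# Replace $RPC_URL with the chain's JSON-RPC endpoint
curl -s "$RPC_URL" -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"net_version","params":[]}'

# eth_getBlockByNumber with the "latest" tag and the full-transactions flag set to false
curl -s "$RPC_URL" -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":2,"method":"eth_getBlockByNumber","params":["latest", false]}'
```

An endpoint that answers these, plus `eth_getLogs`, `eth_getBlockByHash`, `eth_getTransactionReceipt` and (for call handler support) `trace_filter`, satisfies the list above.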
-## Probando un EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -Para que Graph Node pueda ingerir datos de una cadena EVM, el nodo RPC debe exponer los siguientes métodos EVM JSON RPC: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. -- `eth_getLogs` -- `eth_call` \_(para bloques históricos, con EIP-1898 - requiere nodo de archivo): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, en una solicitud por lotes JSON-RPC -- _`trace_filter`_ _(opcionalmente necesario para que Graph Node admita call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Configuración del Graph Node +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Empiece por preparar su entorno local** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Configuración del Graph Node + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modifique [esta línea](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) para incluir el nuevo nombre de la red y la URL compatible con EVM JSON RPC - > No cambie el nombre de la var env. Debe seguir siendo "ethereum" incluso si el nombre de la red es diferente. -3. Ejecute un nodo IPFS o use el utilizado por The Graph: https://api.thegraph.com/ipfs/ -**Prueba la integración implementando localmente un subgrafo** +2. 
Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Crea un subgrafo simple de prueba. Algunas opciones están a continuación: - 1. El contrato inteligente y el subgrafo [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) preempaquetados son un buen comienzo - 2. Arranca un subgrafo local desde cualquier contrato inteligente existente o entorno de desarrollo de solidity [usando Hardhat con un plugin Graph] (https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Crea tu subgrafo en Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publica tu subgrafo en Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -Graph Node debería sincronizar el subgrafo implementado si no hay errores. Dale tiempo para que se sincronice y luego envíe algunas queries GraphQL al punto final de la API impreso en los registros. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integración de una nueva cadena habilitada para Firehose +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Crea un subgrafo simple de prueba. Algunas opciones están a continuación: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node debería sincronizar el subgrafo implementado si no hay errores. Dale tiempo para que se sincronice y luego envíe algunas queries GraphQL al punto final de la API impreso en los registros. -También es posible integrar una nueva cadena utilizando el enfoque Firehose. Actualmente, esta es la mejor opción para cadenas que no son EVM y un requisito para el soporte de substreams. La documentación adicional se centra en cómo funciona Firehose, agregando soporte de Firehose para una nueva cadena e integrándola con Graph Node. Documentos recomendados para integradores: +## Substreams-powered Subgraphs -1. [Documentos generales sobre Firehose](firehose/) -2. [Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integración de Graph Node con una nueva cadena a través de Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. 
decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools make it possible to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/es/operating-graph-node.mdx index 2dc80685b400..99e966da590a 100644 --- a/website/pages/es/operating-graph-node.mdx +++ b/website/pages/es/operating-graph-node.mdx @@ -77,13 +77,13 @@ Puedes encontrar un ejemplo completo de configuración de Kubernetes en el [Inde Cuando está funcionando, Graph Node muestra los siguientes puertos: -| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | -| --- | --- | --- | --- | --- | -| 8000 | Servidor HTTP GraphQL
(para consultas de subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(para suscripciones a subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(para administrar implementaciones) | / | --admin-port | - | -| 8030 | API de estado de indexación de subgrafos | /graphql | --index-node-port | - | -| 8040 | Métricas de Prometheus | /metrics | --metrics-port | - | +| Puerto | Objeto | Rutas | Argumento CLI | Variable de Entorno | +| ------ | -------------------------------------------------------------- | ---------------------------------------------------- | ----------------- | ------------------- | +| 8000 | Servidor HTTP GraphQL
(para consultas de subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(para suscripciones a subgrafos) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(para administrar implementaciones) | / | --admin-port | - | +| 8030 | API de estado de indexación de subgrafos | /graphql | --index-node-port | - | +| 8040 | Métricas de Prometheus | /metrics | --metrics-port | - | > **Importante**: Ten cuidado con exponer puertos públicamente - los **puertos de administración** deben mantenerse bloqueados. Esto incluye el punto final JSON-RPC de Graph Node. diff --git a/website/pages/es/querying/graphql-api.mdx b/website/pages/es/querying/graphql-api.mdx index 2086e994cd0a..e0281d0b1bdc 100644 --- a/website/pages/es/querying/graphql-api.mdx +++ b/website/pages/es/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: API GraphQL --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Consultas +## What is GraphQL? -En tu esquema de subgrafos defines tipos llamados `Entities`. Por cada tipo de `Entity`, se generará un campo `entity` y `entities` en el nivel superior del tipo `Query`. Ten en cuenta que no es necesario incluir `query` en la parte superior de la consulta `graphql` cuando se utiliza The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Ejemplos @@ -21,7 +29,7 @@ Consulta por un solo `Token` definido en tu esquema: } ``` -> **Nota:** Cuando se consulta una sola entidad, el campo `id` es obligatorio y debe ser un string. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Consulta todas las entidades `Token`: @@ -36,7 +44,10 @@ Consulta todas las entidades `Token`: ### Clasificación -Al consultar una colección, el parámetro `orderBy` puede utilizarse para ordenar por un atributo específico. Además, el `orderDirection` se puede utilizar para especificar la dirección de orden, `asc` para ascendente o `desc` para descendente. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### Ejemplo @@ -53,7 +64,7 @@ Al consultar una colección, el parámetro `orderBy` puede utilizarse para orden A partir de Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0), las entidades se pueden ordenar con base en entidades anidadas. -En el siguiente ejemplo, ordenamos los tokens por el nombre de su propietario: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ En el siguiente ejemplo, ordenamos los tokens por el nombre de su propietario: ### Paginación -Al consultar una colección, el parámetro `first` puede utilizarse para paginar desde el principio de la colección. Cabe destacar que el orden por defecto es por ID en orden alfanumérico ascendente, no por tiempo de creación. - -Además, el parámetro `skip` puede utilizarse para saltar entidades y paginar. 
por ejemplo, `first:100` muestra las primeras 100 entidades y `first:100, skip:100` muestra las siguientes 100 entidades. +When querying a collection, it's best to: -Las consultas deben evitar el uso de valores de `skip` muy grandes, ya que suelen tener un rendimiento deficiente. Para recuperar un gran número de elementos, es mucho mejor para paginar recorrer las entidades basándose en un atributo, como se muestra en el último ejemplo. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### Ejemplo usando `first` @@ -106,7 +118,7 @@ Consulta 10 entidades `Token`, desplazadas 10 lugares desde el principio de la c #### Ejemplo usando `first` y `id_ge` -Si un cliente necesita recuperar un gran número de entidades, es mucho más eficaz basar las consultas en un atributo y filtrar por ese atributo. Por ejemplo, un cliente podría recuperar un gran número de tokens utilizando esta consulta: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -La primera vez, enviaría la consulta con `lastID = ""`, y para las siguientes peticiones establecería `lastID` al atributo `id` de la última entidad de la petición anterior. Este enfoque tendrá un rendimiento significativamente mejor que el uso de valores crecientes de `skip`. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtrado -Puedes utilizar el parámetro `where` en tus consultas para filtrar por diferentes propiedades. Puedes filtrar por múltiples valores dentro del parámetro `where`. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Ejemplo usando `where` @@ -155,7 +168,7 @@ Puedes utilizar sufijos como `_gt`, `_lte` para la comparación de valores: #### Ejemplo de filtrado de bloques -También puedes filtrar entidades por el `_change_block(number_gte: Int)`: esto filtra las entidades que se actualizaron en o después del bloque especificado. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. Esto puede ser útil si buscas obtener solo las entidades que han cambiado, por ejemplo, desde la última vez que realizaste una encuesta. O, alternativamente, puede ser útil para investigar o depurar cómo cambian las entidades en tu subgrafo (si se combina con un filtro de bloque, puedes aislar solo las entidades que cambiaron en un bloque específico). 
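A hedged sketch of the combination mentioned above, pairing `_change_block` with `first` and `orderBy` to page through only the entities touched since a given block (the entity name, fields, and block number are illustrative assumptions):

```graphql
{
  tokens(first: 100, orderBy: id, where: { _change_block: { number_gte: 14000000 } }) {
    id
    owner
  }
}
```

Polling again later with the latest processed block as the new `number_gte` value returns only the entities that changed in the meantime.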
@@ -193,7 +206,7 @@ A partir de Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/r ##### Operador `AND` -En el siguiente ejemplo, estamos filtrando desafíos con `coutcome` `succeeded` y `number` mayor o igual a `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. ```graphql { @@ -208,7 +221,7 @@ En el siguiente ejemplo, estamos filtrando desafíos con `coutcome` `succeeded` ``` > **Azúcar sintáctico**: Puedes simplificar la consulta anterior eliminando el operador `and` pasando una subexpresión separada por comas. -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ En el siguiente ejemplo, estamos filtrando desafíos con `coutcome` `succeeded` ##### Operador `OR` -En el siguiente ejemplo, estamos filtrando desafíos con `coutcome` `succeeded` y `number` mayor o igual a `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) Puedes consultar el estado de tus entidades no solo para el último bloque, que es el predeterminado, sino también para un bloque arbitrario en el pasado. El bloque en el que debe ocurrir una consulta se puede especificar por su número de bloque o su hash de bloque al incluir un argumento `block` en los campos de nivel superior de las consultas. -El resultado de dicha consulta no cambiará con el tiempo, por ejemplo, consultar en un determinado bloque anterior devolverá el mismo resultado sin importar cuándo se ejecute, con la excepción de que si consultas en un bloque muy cerca de la cabecera de la cadena Ethereum, el resultado podría cambiar si ese bloque resulta no estar en la cadena principal y la cadena se reorganiza. Una vez que un bloque puede considerarse final, el resultado de la consulta no cambiará. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Ten en cuenta que la implementación actual todavía está sujeta a ciertas limitaciones que podrían violar estas garantías. La implementación no siempre puede demostrar que un hash de bloque dado no está en la cadena principal, o que el resultado de una consulta por hash de bloque para un bloque que no puede considerarse final aún podría estar influenciado por una reorganización de bloque que se ejecuta simultáneamente con la consulta. Esto no afecta los resultados de consultas por hash de bloque cuando el bloque es final y se sabe que está en la cadena principal. [Este problema](https://github.com/graphprotocol/graph-node/issues/1405) explica en detalle cuáles son estas limitaciones. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. 
[This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. #### Ejemplo @@ -322,12 +335,12 @@ Las consultas de búsqueda de texto completo tienen un campo obligatorio, `text` Operadores de búsqueda de texto completo: -| Símbolo | Operador | Descripción | -| --- | --- | --- | -| `&` | `And` | Para combinar varios términos de búsqueda en un filtro para entidades que incluyen todos los términos proporcionados | -| | | `O` | Las consultas con varios términos de búsqueda separados por o el operador devolverá todas las entidades que coincidan con cualquiera de los términos proporcionados | -| `<->` | `Follow by` | Especifica la distancia entre dos palabras. | -| `:*` | `Prefijo` | Utilice el término de búsqueda del prefijo para encontrar palabras cuyo prefijo coincida (se requieren 2 caracteres.) | +| Símbolo | Operador | Descripción | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `&` | `And` | Para combinar varios términos de búsqueda en un filtro para entidades que incluyen todos los términos proporcionados | +| | | `O` | Las consultas con varios términos de búsqueda separados por o el operador devolverá todas las entidades que coincidan con cualquiera de los términos proporcionados | +| `<->` | `Follow by` | Especifica la distancia entre dos palabras. | +| `:*` | `Prefijo` | Utilice el término de búsqueda del prefijo para encontrar palabras cuyo prefijo coincida (se requieren 2 caracteres.) | #### Ejemplos @@ -376,11 +389,11 @@ Graph Node implementa una validación [basada en especificaciones](https://spec. ## Esquema -El esquema de tu fuente de datos, es decir, los tipos de entidad, los valores y las relaciones que están disponibles para consultar, se definen a través de [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/# sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -Los esquemas de GraphQL generalmente definen tipos raíz para `queries`, `subscriptions` y `mutations`. The Graph solo admite `queries`. El tipo raíz `Query` para tu subgrafo se genera automáticamente a partir del esquema de GraphQL que se incluye en tu manifiesto de subgrafo. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Nota:** nuestra API no expone mutaciones porque se espera que los desarrolladores emitan transacciones directamente contra la cadena de bloques subyacente desde sus aplicaciones. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. 
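As a minimal, assumed illustration of that schema-to-query relationship (the `Token` type and its fields are invented for this sketch, not taken from the diff):

```graphql
type Token @entity {
  id: ID!
  owner: Bytes!
}
```

A declaration like this is what makes the `token(id: ...)` and `tokens(...)` fields used in the query examples earlier in this file appear on the generated root `Query` type.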
### Entidades diff --git a/website/pages/es/querying/querying-best-practices.mdx b/website/pages/es/querying/querying-best-practices.mdx index 82e8d0cb9da2..50b0c402d86d 100644 --- a/website/pages/es/querying/querying-best-practices.mdx +++ b/website/pages/es/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: Mejores Prácticas para Consultas --- -The Graph proporciona una forma descentralizada de consultar datos de la blockchain. +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. -Los datos de The Graph Network se exponen a través de una API GraphQL, lo que facilita la consulta de datos con el lenguaje GraphQL. - -Esta página te guiará a través de las reglas esenciales del lenguaje GraphQL y las mejores prácticas de consulta GraphQL. +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQL is a language and set of conventions that transport over HTTP. It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). -However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), we recommend you to use our `graph-client` that supports unique features such as: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - Manejo de subgrafos cross-chain: Consulta de varios subgrafos en una sola consulta - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() More GraphQL client alternatives are covered in ["Querying from an Application"](/querying/querying-from-an-application). -Now that we covered the basic rules of GraphQL queries syntax, let's now look at the best practices of GraphQL query writing. - --- ## Best Practices @@ -164,11 +160,11 @@ Doing so brings **many advantages**: - **Las variables pueden almacenarse en caché** a nivel de servidor - **Las consultas pueden ser analizadas estáticamente por herramientas** (más información al respecto en las secciones siguientes) -**Note: How to include fields conditionally in static queries** +### How to include fields conditionally in static queries -We might want to include the `owner` field only on a particular condition. +You might want to include the `owner` field only on a particular condition. -For this, we can leverage the `@include(if:...)` directive as follows: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Note: The opposite directive is `@skip(if: ...)`. +> Note: The opposite directive is `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL became famous for its "Ask for what you want" tagline. For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. -When querying GraphQL APIs, always think of querying only the fields that will be actually used. - -A common cause of over-fetching is collections of entities. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. 
Queries should therefore almost always set first explicitly, and make sure they only fetch as many entities as they actually need. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- When querying GraphQL APIs, always think of querying only the fields that will be actually used. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. For example, in the following query: @@ -337,8 +332,8 @@ query { Such repeated fields (`id`, `active`, `status`) bring many issues: -- más difícil de leer para consultas más extensas -- cuando se utilizan herramientas que generan tipos TypeScript basados en consultas (_más sobre esto en la última sección_), `newDelegate` y `oldDelegate` darán como resultado dos interfaces en línea distintas. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. A refactored version of the query would be the following: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) but also will result in better TypeScript types generation. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). ### Qué hacer y qué no hacer con los GraphQL Fragments -**Fragment base must be a type** +### Fragment base must be a type A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. -**How to spread a Fragment** +#### How to spread a Fragment Fragments are defined on specific types and should be used accordingly in queries. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { It is not possible to spread a fragment of type `Vote` here. -**Define Fragment as an atomic business unit of data** +#### Define Fragment as an atomic business unit of data -GraphQL Fragment must be defined based on their usage. +GraphQL `Fragment`s must be defined based on their usage. For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. -Here is a rule of thumb for using Fragment: +Here is a rule of thumb for using fragments: -- cuando se repiten campos del mismo tipo en una consulta, agruparlos en un Fragment -- cuando se repiten campos similares pero no iguales, crear varios Fragments, ej: +- When fields of the same type are repeated in a query, group them in a `Fragment`. 
+- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## Las herramientas esenciales +## The Essential Tools ### Exploradores web GraphQL @@ -473,11 +468,11 @@ This will allow you to **catch errors without even testing queries** on the play The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: -- resaltado de sintaxis -- sugerencias de autocompletar -- validación según el esquema -- fragmentos -- ir a la definición de fragments y tipos de entrada +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. @@ -485,9 +480,9 @@ If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketp The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: -- resaltado de sintaxis -- sugerencias de autocompletar -- validación según el esquema -- fragmentos +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -More information on this [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) that showcases all the plugin's main features. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/es/quick-start.mdx b/website/pages/es/quick-start.mdx index b0765ca7fd36..e990fe25fb34 100644 --- a/website/pages/es/quick-start.mdx +++ b/website/pages/es/quick-start.mdx @@ -2,24 +2,18 @@ title: Comienzo Rapido --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Ensure that your subgraph will be indexing data from a [supported network](/developing/supported-networks). - -Esta guía está escrita asumiendo que tú tienes: +## Prerequisites for this guide - Una wallet crypto -- Una dirección de un smart contract en la red de tu preferencia - -## 1. Crea un subgrafo en el Subgraph Studio - -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Instala the graph CLI +### 1. Instala The Graph CLI -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. 
Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. En tu dispositivo, ejecuta alguno de los siguientes comandos: @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -Cuando inicies tu subgrafo, la herramienta CLI te preguntará por la siguiente información: +When you initialize your subgraph, the CLI will ask you for the following information: -- Protocol: elige el protocolo desde el cual tu subgrafo indexará datos -- Subgraph slug: crea un nombre para tu subgrafo. El slug de tu subgrafo es un identificador para el mismo. -- Directorio para crear el subgrafo: elige el directorio local de tu elección -- Red Ethereum (opcional): Es posible que debas especificar desde qué red compatible con EVM tu subgrafo indexará datos -- Dirección del contrato: Localiza la dirección del contrato inteligente del que deseas consultar los datos -- ABI: Si el ABI no se completa automáticamente, deberás ingresar los datos manualmente en formato JSON -- Start Block: se sugiere que ingreses el bloque de inicio para ahorrar tiempo mientras tu subgrafo indexa los datos de la blockchain. Puedes ubicar el bloque de inicio encontrando el bloque en el que se deployó tu contrato. -- Nombre del contrato: introduce el nombre de tu contrato -- Indexar eventos del contrato como entidades: se sugiere que lo establezcas en "verdadero" ya que automáticamente agregará mapeos a tu subgrafo para cada evento emitido -- Añade otro contrato(opcional): puedes añadir otro contrato +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. 
+- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. Ve la siguiente captura para un ejemplo de que debes de esperar cuando inicializes tu subgrafo: ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -Los comandos anteriores crean un subgrafo de andamio que puedes utilizar como punto de partida para construir tu subgrafo. Al realizar cambios en el subgrafo, trabajarás principalmente con tres archivos: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -Una vez escrito tu subgrafo, ejecuta los siguientes comandos: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. Una vez escrito tu subgrafo, ejecuta los siguientes comandos: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- Autentica y deploya tu subgrafo. La clave para deployar se puede encontrar en la página de Subgraph en Subgraph Studio. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. Prueba tu subgrafo - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -Los registros te indicarán si hay algún error con tu subgrafo. 
Los registros de un subgrafo operativo se verán así: - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -Para ahorrar en costos de gas, puedes curar tu subgrafo en la misma transacción en la que lo publicas seleccionando este botón al publicar tu subgrafo en la red descentralizada de The Graph: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. 
Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -Ahora puedes hacer consultas a tu subgrafo enviando consultas GraphQL a la URL de consulta de tu subgrafo, que puedes encontrar haciendo clic en el botón de consulta. +### 8. Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/es/release-notes/assemblyscript-migration-guide.mdx b/website/pages/es/release-notes/assemblyscript-migration-guide.mdx index bfc973f982dd..c3770a5c9ef7 100644 --- a/website/pages/es/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/es/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - Tendrás que cambiar el nombre de las variables duplicadas si tienes una variable shadowing. - ### Comparaciones Nulas - Al hacer la actualización en un subgrafo, a veces pueden aparecer errores como estos: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - Para solucionarlo puedes simplemente cambiar la declaración `if` por algo así: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? 
container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - Para solucionar este problema, puedes crear una variable para ese acceso a la propiedad de manera que el compilador pueda hacer la magia de la comprobación de nulidad: ```typescript diff --git a/website/pages/es/release-notes/graphql-validations-migration-guide.mdx b/website/pages/es/release-notes/graphql-validations-migration-guide.mdx index 55801738ddca..292c60a70cf9 100644 --- a/website/pages/es/release-notes/graphql-validations-migration-guide.mdx +++ b/website/pages/es/release-notes/graphql-validations-migration-guide.mdx @@ -406,6 +406,7 @@ query { user { id image # 'image' requiere un conjunto de selección para subcampos! + } } ``` diff --git a/website/pages/es/sps/introduction.mdx b/website/pages/es/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/es/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. + +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/es/sps/triggers-example.mdx b/website/pages/es/sps/triggers-example.mdx new file mode 100644 index 000000000000..4ef4d9d24ceb --- /dev/null +++ b/website/pages/es/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## Prerrequisitos + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. 
Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+  const input: protoEvents = Protobuf.decode<protoEvents>(bytes, protoEvents.decode)
+
+  for (let i = 0; i < input.data.length; i++) {
+    const event = input.data[i]
+
+    if (event.transfer != null) {
+      let entity_id: string = `${event.txnId}-${i}`
+      const entity = new MyTransfer(entity_id)
+      entity.amount = event.transfer!.instruction!.amount.toString()
+      entity.source = event.transfer!.accounts!.source
+      entity.designation = event.transfer!.accounts!.destination
+
+      if (event.transfer!.accounts!.signer!.single != null) {
+        entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+      } else if (event.transfer!.accounts!.signer!.multisig != null) {
+        entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+      }
+      entity.save()
+    }
+  }
+}
+```
+
+## Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+
+## Conclusion
+
+You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/pages/es/sps/triggers.mdx b/website/pages/es/sps/triggers.mdx
new file mode 100644
index 000000000000..ed19635d4768
--- /dev/null
+++ b/website/pages/es/sps/triggers.mdx
@@ -0,0 +1,37 @@
+---
+title: Substreams Triggers
+---
+
+Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework.
+
+> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container.
+
+The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+  let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+  if (transactions.length == 0) {
+    log.info('No transactions found', [])
+    return
+  }
+
+  for (let i = 0; i < transactions.length; i++) {
+    // 2.
+    let transaction = transactions[i]
+
+    let entity = new Transaction(transaction.hash) // 3.
+    entity.from = transaction.from
+    entity.to = transaction.to
+    entity.save()
+  }
+}
+```
+
+Here's what you’re seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; it can then be used like any other AssemblyScript object
+2. 
Looping over the transactions
+3. Creating a new subgraph entity for every transaction
+
+To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example).
diff --git a/website/pages/es/substreams.mdx b/website/pages/es/substreams.mdx
index 8fcd8349f986..dbff85ae4a76 100644
--- a/website/pages/es/substreams.mdx
+++ b/website/pages/es/substreams.mdx
@@ -4,9 +4,11 @@ title: Substreams
 
 ![Substreams Logo](/img/substreams-logo.png)
 
-Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach.
+Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features:
 
-With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain.
+- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing.
+- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara.
+- **Multi-Sink Support**: Substreams can deliver data to multiple sinks, including subgraphs, Postgres databases, ClickHouse, and MongoDB.
 
 ## How Substreams Works in 4 Steps
 
@@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to
 ### Expand Your Knowledge
 
 - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams.
+
+### Substreams Registry
+
+A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks.
diff --git a/website/pages/es/sunrise.mdx b/website/pages/es/sunrise.mdx
index 32bf6c6d26d4..14d1444cf8cd 100644
--- a/website/pages/es/sunrise.mdx
+++ b/website/pages/es/sunrise.mdx
@@ -1,233 +1,79 @@
 ---
-title: Sunrise + Upgrading to The Graph Network FAQ
+title: Post-Sunrise + Upgrading to The Graph Network FAQ
 ---
 
-> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/).
+> Note: The Sunrise of Decentralized Data ended June 12th, 2024.
 
-## What is the Sunrise of Decentralized Data?
+## What was the Sunrise of Decentralized Data?
 
-The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network.
+The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. 
Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? 
-Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? 
- -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. 
The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/es/supported-network-requirements.mdx b/website/pages/es/supported-network-requirements.mdx index dfebec344880..347fcc74555d 100644 --- a/website/pages/es/supported-network-requirements.mdx +++ b/website/pages/es/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Red | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Red | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVMe preferred)
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/es/tap.mdx b/website/pages/es/tap.mdx new file mode 100644 index 000000000000..eb3a28471111 --- /dev/null +++ b/website/pages/es/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## Descripción + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
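+
+To make the receipt-to-RAV flow described above more concrete, the following is a minimal, illustrative TypeScript sketch of the aggregation logic. It is not the `tap-agent` or `tap_core` implementation; the `Receipt` and `Rav` types, the helper functions, and the `MAX_AMOUNT_WILLING_TO_LOSE_GRT` constant are hypothetical stand-ins for the behavior described above: receipts accumulate until they are aggregated into a RAV whose value only increases, and aggregation is requested before the value of non-aggregated receipts exceeds the amount you are willing to lose.
+
+```ts
+// Illustrative types only - hypothetical stand-ins, not the tap_core / tap-agent API.
+interface Receipt {
+  allocationId: string
+  valueGrt: number // value of one signed query receipt
+}
+
+interface Rav {
+  allocationId: string
+  valueGrt: number // aggregated value; it only ever increases
+  last: boolean // marked `last` once the allocation is closed
+}
+
+// Mirrors the `max_amount_willing_to_lose_grt` idea: a cap on unaggregated receipt value.
+const MAX_AMOUNT_WILLING_TO_LOSE_GRT = 20
+
+function totalValue(receipts: Receipt[]): number {
+  return receipts.reduce((sum, r) => sum + r.valueGrt, 0)
+}
+
+// Aggregating pending receipts with the previous RAV yields a new RAV of greater value.
+function aggregate(pending: Receipt[], previous: Rav | null, allocationId: string, last: boolean): Rav {
+  return {
+    allocationId,
+    valueGrt: (previous ? previous.valueGrt : 0) + totalValue(pending),
+    last,
+  }
+}
+
+// Request aggregation before unaggregated receipts exceed the amount you are willing to lose.
+function shouldAggregate(pending: Receipt[]): boolean {
+  return totalValue(pending) >= MAX_AMOUNT_WILLING_TO_LOSE_GRT
+}
+
+// Example: receipts arrive for one allocation, get rolled into a RAV, and the
+// final RAV is marked `last` when the allocation closes, ready to be redeemed once.
+const allocation = '0x1234...allocation'
+let rav: Rav | null = null
+let pending: Receipt[] = [
+  { allocationId: allocation, valueGrt: 12 },
+  { allocationId: allocation, valueGrt: 9 },
+]
+
+if (shouldAggregate(pending)) {
+  rav = aggregate(pending, rav, allocation, false)
+  pending = []
+}
+
+// Allocation closes: aggregate whatever is left and mark the RAV as `last`.
+rav = aggregate(pending, rav, allocation, true)
+console.log(rav) // { allocationId: '0x1234...allocation', valueGrt: 21, last: true }
+```
+
+In the real system, receipts are signed, aggregation is requested by `tap-agent` from the sender's aggregator endpoint, and the final RAV is redeemed on-chain once by `indexer-agent` after the allocation closes.
+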
+ +## Blockchain Addresses + +### Contracts + +| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| ------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | +| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | +| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | + +### Gateway + +| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| ---------- | --------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | + +### Requirements + +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. + +- [Graph TAP Aribtrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) + +> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployement. As a result, you have to index it manually. + +## Migration Guide + +### Software versions + +| Component | Version | Image Link | +| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- | +| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) | +| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) | +| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) | + +### Steps + +1. **Indexer Agent** + + - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). + - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + +2. **Indexer Service** + + - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless. + +3. **TAP Agent** + + - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + +4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Notas: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/fr/about.mdx b/website/pages/fr/about.mdx index ded4167cf102..b383a528ec83 100644 --- a/website/pages/fr/about.mdx +++ b/website/pages/fr/about.mdx @@ -2,46 +2,66 @@ title: À propos de The Graph --- -Cette page expliquera ce qu'est The Graph et comment vous pouvez commencer. - ## Qu’est-ce que The Graph ? -The Graph est un protocole décentralisé pour l'indexation et l'interrogation de données blockchain. The Graph permet d'interroger des données qui sont difficiles à interroger directement. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -Les projets avec des contrats intelligents complexes comme [Uniswap](https://uniswap.org/) et des projets NFT comme [Bored Ape](https://boredapeyachtclub.com/) Yacht Club stockent des données sur la blockchain Ethereum. La façon dont ces données sont stockées rend leur lecture difficile au-delà de quelques informations simples. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -Vous pouvez également créer votre propre serveur, y traiter les transactions, les enregistrer dans une base de données et créer un point de terminaison d'API par-dessus tout cela afin d'interroger les données. Cependant, cette option est [consommatrice de ressources](/network/benefits/), nécessite une maintenance, présente un point de défaillance unique et brise d'importantes propriétés de sécurité requises pour la décentralisation. +### How The Graph Functions -**L’indexation des données blockchain est vraiment très difficile.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. 
Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## Fonctionnement du Graph +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graph apprend quoi et comment indexer les données Ethereum en fonction des descriptions de subgraphs, connues sous le nom de manifeste de subgraph. La description du subgraph définit les contrats intelligents d'intérêt pour un subgraph, les événements de ces contrats auxquels il faut prêter attention et comment mapper les données d'événement aux données que The Graph stockera dans sa base de données. +- When creating a subgraph, you need to write a subgraph manifest. -Une fois que vous avez écrit un `manifeste de subgraph`, vous utilisez le Graph CLI pour stocker la définition dans IPFS et vous indiquez par la même occasion à l'indexeur de commencer à indexer les données pour ce subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -Ce diagramme donne plus de détails sur le flux de données une fois qu'un manifeste de subgraph a été déployé, traitant des transactions Ethereum : +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![Un graphique expliquant comment The Graph utilise Graph Node pour répondre aux requêtes des consommateurs de données](/img/graph-dataflow.png) La description des étapes du flux : -1. Une dapp ajoute des données à Ethereum via une transaction sur un contrat intelligent. -2. Le contrat intelligent va alors produire un ou plusieurs événements lors du traitement de la transaction. -3. Parallèlement, Le nœud de The Graph scanne continuellement Ethereum à la recherche de nouveaux blocs et de nouvelles données intéressantes pour votre subgraph. -4. The Graph Node trouve alors les événements Ethereum d'intérêt pour votre subgraph dans ces blocs et vient exécuter les corrélations correspondantes que vous avez fournies. Le gestionnaire de corrélation se définit comme un module WASM qui crée ou met à jour les entités de données que le nœud de The Graph stocke en réponse aux événements Ethereum. -5. Le dapp interroge le Graph Node pour des données indexées à partir de la blockchain, à l'aide du [point de terminaison GraphQL](https://graphql.org/learn/) du noeud. À son tour, le Graph Node traduit les requêtes GraphQL en requêtes pour sa base de données sous-jacente afin de récupérer ces données, en exploitant les capacités d'indexation du magasin. Le dapp affiche ces données dans une interface utilisateur riche pour les utilisateurs finaux, qui s'en servent pour émettre de nouvelles transactions sur Ethereum. Le cycle se répète. +1. Une dapp ajoute des données à Ethereum via une transaction sur un contrat intelligent. +2. Le contrat intelligent va alors produire un ou plusieurs événements lors du traitement de la transaction. +3. Parallèlement, Le nœud de The Graph scanne continuellement Ethereum à la recherche de nouveaux blocs et de nouvelles données intéressantes pour votre subgraph. +4. 
The Graph Node trouve alors les événements Ethereum d'intérêt pour votre subgraph dans ces blocs et vient exécuter les corrélations correspondantes que vous avez fournies. Le gestionnaire de corrélation se définit comme un module WASM qui crée ou met à jour les entités de données que le nœud de The Graph stocke en réponse aux événements Ethereum. +5. Le dapp interroge le Graph Node pour des données indexées à partir de la blockchain, à l'aide du [point de terminaison GraphQL](https://graphql.org/learn/) du noeud. À son tour, le Graph Node traduit les requêtes GraphQL en requêtes pour sa base de données sous-jacente afin de récupérer ces données, en exploitant les capacités d'indexation du magasin. Le dapp affiche ces données dans une interface utilisateur riche pour les utilisateurs finaux, qui s'en servent pour émettre de nouvelles transactions sur Ethereum. Le cycle se répète. ## Les Étapes suivantes -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/fr/arbitrum/arbitrum-faq.mdx b/website/pages/fr/arbitrum/arbitrum-faq.mdx index 85632d92168b..00ad147484e9 100644 --- a/website/pages/fr/arbitrum/arbitrum-faq.mdx +++ b/website/pages/fr/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: FAQ d'Arbitrum Cliquez [ici] (#billing-on-arbitrum-faqs) si vous souhaitez passer à la FAQ sur la facturation Arbitrum. -## Pourquoi The Graph met-il en place une solution L2 ? +## Why did The Graph implement an L2 Solution? -En faisant passer The Graph à l'échelle L2, les participants au réseau peuvent espérer : +By scaling The Graph on L2, network participants can now benefit from: - Jusqu'à 26 fois plus d'économies sur les frais de gaz @@ -14,7 +14,7 @@ En faisant passer The Graph à l'échelle L2, les participants au réseau peuven - La sécurité héritée d'Ethereum -La mise à l'échelle des contrats intelligents du protocole sur L2 permet aux participants au réseau d'interagir plus fréquemment pour un coût réduit en termes de frais de gaz. Par exemple, les indexeurs peuvent ouvrir et fermer des allocations pour indexer un plus grand nombre de subgraphs avec une plus grande fréquence, les développeurs peuvent déployer et mettre à jour des subgraphs plus facilement, les délégués peuvent déléguer des GRT avec une fréquence accrue, et les curateurs peuvent ajouter ou supprimer des signaux à un plus grand nombre de subgraphs - des actions auparavant considérées comme trop coûteuses pour être effectuées fréquemment en raison des frais de gaz. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. 
Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. La communauté Graph a décidé d'avancer avec Arbitrum l'année dernière après le résultat de la discussion [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -41,27 +41,21 @@ Pour tirer parti de l'utilisation de The Graph sur L2, utilisez ce sélecteur d ## En tant que développeur de subgraphs, consommateur de données, indexeur, curateur ou délégateur, que dois-je faire maintenant ? -Aucune action immédiate n'est requise, cependant, les participants au réseau sont encouragés à commencer à migrer vers Arbitrum pour profiter des avantages de L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -Les équipes de développeurs de base travaillent à la création d'outils de transfert L2 qui faciliteront considérablement le transfert de la délégation, de la curation et des subgraphes vers Arbitrum. Les participants au réseau peuvent s'attendre à ce que les outils de transfert L2 soient disponibles d'ici l'été 2023. +All indexing rewards are now entirely on Arbitrum. -À partir du 10 avril 2023, 5 % de toutes les récompenses d'indexation sont frappées sur Arbitrum. Au fur et à mesure que la participation au réseau augmentera et que le Conseil l'approuvera, les récompenses d'indexation passeront progressivement de l'Ethereum à l'Arbitrum, pour finalement passer entièrement à l'Arbitrum. - -## Que dois-je faire si je souhaite participer au réseau L2 ? - -Veuillez aider à [tester le réseau](https://testnet.thegraph.com/explorer) sur L2 et signaler vos commentaires sur votre expérience dans [Discord](https://discord.gg/graphprotocol). - -## Existe-t-il des risques associés à la mise à l’échelle du réseau vers L2 ? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Tout a été testé minutieusement et un plan d’urgence est en place pour assurer une transition sûre et fluide. Les détails peuvent être trouvés [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and- considérations de sécurité-20). -## Les subgraphs existants sur Ethereum continueront-ils à fonctionner ? +## Are existing subgraphs on Ethereum working? -Oui, les contrats The Graph Network fonctionneront en parallèle sur Ethereum et Arbitrum jusqu'à leur passage complet à Arbitrum à une date ultérieure. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## GRT disposera-t-il d'un nouveau contrat intelligent déployé sur Arbitrum ? +## Does GRT have a new smart contract deployed on Arbitrum? Oui, GRT dispose d'un [contrat intelligent sur Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) supplémentaire. Cependant, le réseau principal Ethereum [contrat GRT](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) restera opérationnel. 
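The answer above references two deployments of the GRT token, one on each network. As a minimal sketch of what that means in practice — assuming ethers v6, placeholder RPC endpoints, and a placeholder wallet address, none of which come from this FAQ — the snippet below reads the same wallet's GRT balance on Arbitrum One and on Ethereum mainnet using the two contract addresses cited above:

```typescript
import { ethers } from "ethers";

// GRT token addresses cited in the FAQ above (lowercase form, as in the linked explorers).
const GRT_ON_ARBITRUM = "0x9623063377ad1b27544c965ccd7342f7ea7e88c7"; // Arbitrum One
const GRT_ON_MAINNET = "0xc944e90c64b2c07662a292be6244bdf05cda44a7"; // Ethereum mainnet

// Minimal ERC-20 fragment: balanceOf is all this sketch needs.
const ERC20_ABI = ["function balanceOf(address owner) view returns (uint256)"];

async function printGrtBalances(wallet: string): Promise<void> {
  // Placeholder RPC URLs — substitute your own provider endpoints.
  const arbitrumProvider = new ethers.JsonRpcProvider("https://arbitrum-rpc.example.com");
  const mainnetProvider = new ethers.JsonRpcProvider("https://mainnet-rpc.example.com");

  const grtL2 = new ethers.Contract(GRT_ON_ARBITRUM, ERC20_ABI, arbitrumProvider);
  const grtL1 = new ethers.Contract(GRT_ON_MAINNET, ERC20_ABI, mainnetProvider);

  const [l2Balance, l1Balance] = await Promise.all([
    grtL2.balanceOf(wallet),
    grtL1.balanceOf(wallet),
  ]);

  // GRT uses 18 decimals on both networks.
  console.log("GRT on Arbitrum One:", ethers.formatUnits(l2Balance, 18));
  console.log("GRT on Ethereum mainnet:", ethers.formatUnits(l1Balance, 18));
}

// Usage (placeholder address): printGrtBalances("0x0000000000000000000000000000000000000000");
```

Since the billing contracts and indexing rewards now live on Arbitrum One, it is the Arbitrum balance — together with some ETH on Arbitrum for gas — that matters for interacting with the protocol.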
diff --git a/website/pages/fr/arbitrum/l2-transfer-tools-faq.mdx b/website/pages/fr/arbitrum/l2-transfer-tools-faq.mdx index 45ec79e9d4f9..d43463682da5 100644 --- a/website/pages/fr/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/pages/fr/arbitrum/l2-transfer-tools-faq.mdx @@ -336,11 +336,13 @@ Si vous n’avez transféré aucun solde de contrat de vesting à L2 et que votr ### J’utilise mon contrat de vesting pour investir dans mainnet. Puis-je transférer ma participation à Arbitrum? -Oui, mais si votre contrat est toujours acquis, vous ne pouvez transférer la participation que pour qu’elle soit détenue par votre contrat d’acquisition L2. Vous devez d’abord initialiser ce contrat L2 en transférant un solde de GRT à l’aide de l’outil de transfert de contrat d’acquisition dans Explorer. Si votre contrat est entièrement acquis, vous pouvez transférer votre participation à n’importe quelle adresse en L2, mais vous devez le définir au préalable et déposer des GRT pour l’outil de transfert L2 pour payer le gaz L2. +Oui, mais si votre contrat est toujours acquis, vous ne pouvez transférer la participation que pour qu’elle soit détenue par votre contrat d’acquisition L2. Vous devez d’abord initialiser ce contrat L2 en transférant un solde de GRT à l’aide de l’outil de transfert de contrat d’acquisition dans Explorer. Si +votre contrat est entièrement acquis, vous pouvez transférer votre participation à n’importe quelle adresse en L2, mais vous devez le définir au préalable et déposer des GRT pour l’outil de transfert L2 pour payer le gaz L2. ### J’utilise mon contrat de vesting pour déléguer sur mainnet. Puis-je transférer mes délégations à Arbitrum? -Oui, mais si votre contrat est toujours acquis, vous ne pouvez transférer la participation que pour qu’elle soit détenue par votre contrat de vesting L2. Vous devez d’abord initialiser ce contrat L2 en transférant un solde de GRT à l’aide de l’outil de transfert de contrat de vesting dans Explorer. Si votre contrat est entièrement acquis, vous pouvez transférer votre participation à n’importe quelle adresse en L2, mais vous devez le définir au préalable et déposer des GRT pour l’outil de transfert L2 pour payer le gaz L2. +Oui, mais si votre contrat est toujours acquis, vous ne pouvez transférer la participation que pour qu’elle soit détenue par votre contrat de vesting L2. Vous devez d’abord initialiser ce contrat L2 en transférant un solde de GRT à l’aide de l’outil de transfert de contrat de vesting dans Explorer. Si +votre contrat est entièrement acquis, vous pouvez transférer votre participation à n’importe quelle adresse en L2, mais vous devez le définir au préalable et déposer des GRT pour l’outil de transfert L2 pour payer le gaz L2. ### Puis-je spécifier un bénéficiaire différent pour mon contrat de vesting sur L2? diff --git a/website/pages/fr/billing.mdx b/website/pages/fr/billing.mdx index cb3b4c99bb2b..d6a66292f3f6 100644 --- a/website/pages/fr/billing.mdx +++ b/website/pages/fr/billing.mdx @@ -2,28 +2,28 @@ title: Facturation --- -## Subgraph Billing Plans +## Les Plans de Facturation des Subgraphs -There are two plans to use when querying subgraphs on The Graph Network. +Il y a deux plans à utiliser lorsqu'on interroge les subgraphs sur le réseau de The Graph. -- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp. 
+- **Le Plan Gratuit**: Le Plan Gratuit comprend 100,000 requêtes mensuelles gratuites avec accès complet à l'environnement de l'analyse du Studio Subgraph. Ce plan est désigné pour les amateurs, les hackatonistes, et ceux avec des projets à côté à essayer The Graph avant de mettre leur dapp à l'échelle. -- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases. +- **Plan de croissance**: Le plan de croissance comprend tout ce qui est inclus dans le plan gratuit avec toutes les requêtes après 100 000 requêtes mensuelles nécessitant des paiements avec GRT ou carte de crédit. Le plan de croissance est suffisamment flexible pour couvrir les équipes qui ont établi des dapps à travers une variété de cas d'utilisation. -## Query Payments with credit card +## Requête sur les paiements par carte de crédit -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) - 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). - 2. Cliquez sur le bouton « Connecter le portefeuille » dans le coin supérieur droit de la page. Vous serez redirigé vers la page de sélection du portefeuille. Sélectionnez votre portefeuille et cliquez sur "Connecter". - 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. - 4. To choose a credit card payment, choose “Credit card” as the payment method and fill out your credit card information. Those who have used Stripe before can use the Link feature to autofill their details. -- Invoices will be processed at the end of each month and require an active credit card on file for all queries beyond the free plan quota. +- Pour mettre en place la facturation par carte de crédit/débit, les utilisateurs doivent accéder à Subgraph Studio (https://thegraph.com/studio/) + 1. Accédez à la [page de facturation de Subgraph Studio](https://thegraph.com/studio/billing/). + 2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". + 3. Choisissez « Mettre à niveau votre abonnement » si vous effectuez une mise à niveau depuis le plan gratuit, ou choisissez « Gérer l'abonnement » si vous avez déjà ajouté des GRT à votre solde de facturation par le passé. Ensuite, vous pouvez estimer le nombre de requêtes pour obtenir une estimation du prix, mais ce n'est pas une étape obligatoire. + 4. Pour choisir un paiement par carte de crédit, choisissez “Credit card” comme mode de paiement et remplissez les informations de votre carte de crédit. Ceux qui ont déjà utilisé Stripe peuvent utiliser la fonctionnalité Link pour remplir automatiquement leurs informations. +- Les factures seront traitées à la fin de chaque mois et nécessitent une carte de crédit valide enregistrée sur votre compte pour toute requête au-delà du quota du plan gratuit. ## Query Payments with GRT -Subgraph users can use The Graph Token (or GRT) to pay for queries on The Graph Network. 
With GRT, invoices will be processed at the end of each month and require a sufficient balance of GRT to make queries beyond the Free Plan quota of 100,000 monthly queries. You'll be required to pay fees generated from your API keys. Using the billing contract, you'll be able to: +Les utilisateurs de subgraphs peuvent utiliser le jeton natif de The Graph (GRT) pour payer les requêtes sur le réseau The Graph. Avec le GRT, les factures seront traitées à la fin de chaque mois et nécessiteront un solde suffisant de GRT pour effectuer des requêtes au-delà du quota du plan gratuit de 100 000 requêtes mensuelles. Vous devrez payer les frais générés par vos clés API. En utilisant le contrat de facturation, vous pourrez : - Ajoutez et retirez du GRT du solde de votre compte. - Gardez une trace de vos soldes en fonction du montant de GRT que vous avez ajouté au solde de votre compte, du montant que vous avez supprimé et de vos factures. @@ -31,7 +31,7 @@ Subgraph users can use The Graph Token (or GRT) to pay for queries on The Graph ### GRT on Arbitrum or Ethereum -The Graph’s billing system accepts GRT on Arbitrum, and users will need ETH on Arbitrum to pay their gas. While The Graph protocol started on Ethereum Mainnet, all activity, including the billing contracts, is now on Arbitrum One. +Le système de facturation de The Graph accepte le GRT sur Arbitrum, et les utilisateurs devront disposer d'ETH sur Arbitrum pour payer le gaz. Bien que le protocole The Graph ait commencé sur le réseau principal d'Ethereum, toutes les activités, y compris les contrats de facturation, sont désormais réalisées sur Arbitrum One. To pay for queries, you need GRT on Arbitrum. Here are a few different ways to achieve this: @@ -50,14 +50,14 @@ Once you bridge GRT, you can add it to your billing balance. ### Adding GRT using a wallet -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). -2. Cliquez sur le bouton « Connecter le portefeuille » dans le coin supérieur droit de la page. Vous serez redirigé vers la page de sélection du portefeuille. Sélectionnez votre portefeuille et cliquez sur "Connecter". +1. Accédez à la [page de facturation de Subgraph Studio](https://thegraph.com/studio/billing/). +2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". 3. Select the "Manage" button near the top right corner. First time users will see an option to "Upgrade to Growth plan" while returning users will click "Deposit from wallet". -4. Use the slider to estimate the number of queries you expect to make on a monthly basis. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. +4. Utilisez le curseur pour estimer le nombre de requêtes que vous prévoyez d’effectuer sur une base mensuelle. + - Pour des suggestions sur le nombre de requêtes que vous pouvez utiliser, consultez notre page **Foire aux questions**. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. -6. Select the number of months you would like to prepay. - - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. +6. Sélectionnez le nombre de mois pour lesquels vous souhaitez effectuer un paiement anticipé. + - Le paiement anticipé ne vous engage pas sur une utilisation future. 
Vous ne serez facturé que pour ce que vous utiliserez et vous pourrez retirer votre solde à tout moment. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. @@ -67,23 +67,23 @@ Once you bridge GRT, you can add it to your billing balance. ### Withdrawing GRT using a wallet -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). +1. Accédez à la [page de facturation de Subgraph Studio](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. -4. Enter the amount of GRT you would like to withdraw. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +4. Entrez le montant de GRT que vous voudriez retirer. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. -6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. +6. Une fois que la transaction est confirmée, vous verrez le GRT qu'on a retiré de votre solde du compte dans votre portefeuille Arbitrum. ### Ajout de GRT à l'aide d'un portefeuille multisig -1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). +1. Allez à la page [Facturation de Studio Subgraph](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". If you're using [Gnosis-Safe](https://gnosis-safe.io/), you'll be able to connect your multisig as well as your signing wallet. Then, sign the associated message. This will not cost any gas. 3. Select the "Manage" button near the top right corner. First time users will see an option to "Upgrade to Growth plan" while returning users will click "Deposit from wallet". -4. Use the slider to estimate the number of queries you expect to make on a monthly basis. - - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. +4. Utilisez le curseur pour estimer le nombre de requêtes que vous prévoyez d’effectuer sur une base mensuelle. + - Pour des suggestions sur le nombre de requêtes que vous pouvez utiliser, consultez notre page **Foire aux questions**. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. -6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. +6. Sélectionnez le nombre de mois pour lesquels vous souhaitez effectuer un paiement anticipé. + - Le paiement anticipé ne vous engage pas sur une utilisation future. Vous ne serez facturé que pour ce que vous utiliserez et vous pourrez retirer votre solde à tout moment. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. 
Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -92,18 +92,18 @@ Once you bridge GRT, you can add it to your billing balance. ## Getting GRT -This section will show you how to get GRT to pay for query fees. +Cette section vous montrera comment obtenir du GRT pour payer les frais de requête. ### Coinbase -This will be a step by step guide for purchasing GRT on Coinbase. +Voici un guide étape par étape pour acheter de GRT sur Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. -2. Once you have created an account, you will need to verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges. -3. Once you have verified your identity, you can purchase GRT. You can do this by clicking on the "Buy/Sell" button on the top right of the page. -4. Select the currency you want to purchase. Select GRT. -5. Select the payment method. Select your preferred payment method. -6. Select the amount of GRT you want to purchase. +1. Accédez à [Coinbase](https://www.coinbase.com/) et créez un compte. +2. Dès que vous aurez créé un compte, vous devrez vérifier votre identité par le biais d'un processus connu sous le nom de KYC (Know Your Customer ou Connaître Votre Client). Il s'agit d'une procédure standard pour toutes les plateformes d'échange de crypto-monnaies centralisées ou dépositaires. +3. Une fois votre identité vérifiée, vous pouvez acheter des GRT. Pour ce faire, cliquez sur le bouton « Acheter/Vendre » en haut à droite de la page. +4. Sélectionnez la devise que vous souhaitez acheter. Sélectionnez GRT. +5. Sélectionnez le mode de paiement. Sélectionnez votre mode de paiement préféré. +6. Sélectionnez la quantité de GRT que vous souhaitez acheter. 7. Review your purchase. Review your purchase and click "Buy GRT". 8. Confirm your purchase. Confirm your purchase and you will have successfully purchased GRT. 9. You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). @@ -119,15 +119,15 @@ You can learn more about getting GRT on Coinbase [here](https://help.coinbase.co This will be a step by step guide for purchasing GRT on Binance. 1. Go to [Binance](https://www.binance.com/en) and create an account. -2. Once you have created an account, you will need to verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges. +2. Dès que vous aurez créé un compte, vous devrez vérifier votre identité par le biais d'un processus connu sous le nom de KYC (Know Your Customer ou Connaître Votre Client). Il s'agit d'une procédure standard pour toutes les plateformes d'échange de crypto-monnaies centralisées ou dépositaires. 3. Once you have verified your identity, you can purchase GRT. You can do this by clicking on the "Buy Now" button on the homepage banner. 4. You will be taken to a page where you can select the currency you want to purchase. Select GRT. 5. Select your preferred payment method. You'll be able to pay with different fiat currencies such as Euros, US Dollars, and more. -6. Select the amount of GRT you want to purchase. +6. 
Sélectionnez la quantité de GRT que vous souhaitez acheter. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -138,26 +138,26 @@ You can learn more about getting GRT on Binance [here](https://www.binance.com/e This is how you can purchase GRT on Uniswap. -1. Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet. -2. Select the token you want to swap from. Select ETH. -3. Select the token you want to swap to. Select GRT. - - Make sure you're swapping for the correct token. The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) -4. Enter the amount of ETH you want to swap. -5. Click "Swap". -6. Confirm the transaction in your wallet and you wait for the transaction to process. +1. Accédez à [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) et connectez votre portefeuille. +2. Sélectionnez le jeton dont vous souhaitez échanger. Sélectionnez ETH. +3. Sélectionnez le jeton vers lequel vous souhaitez échanger. Sélectionnez GRT. + - Assurez-vous que vous échangez contre le bon jeton. L'adresse du contrat intelligent GRT sur Arbitrum One est la suivante : [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) +4. Entrez le montant d'ETH que vous souhaitez échanger. +5. Cliquez sur « Échanger ». +6. Confirmez la transaction dans votre portefeuille et attendez qu'elle soit traitée. You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-). ## Getting Ether -This section will show you how to get Ether (ETH) to pay for transaction fees or gas costs. ETH is necessary to execute operations on the Ethereum network such as transferring tokens or interacting with contracts. +Cette section vous montrera comment obtenir de l'Ether (ETH) pour payer les frais de transaction ou les coûts de gaz. L'ETH est nécessaire pour exécuter des opérations sur le réseau Ethereum telles que le transfert de jetons ou l'interaction avec des contrats. ### Coinbase -This will be a step by step guide for purchasing ETH on Coinbase. +Ce sera un guide étape par étape pour acheter de l'ETH sur Coinbase. -1. Go to [Coinbase](https://www.coinbase.com/) and create an account. -2. Once you have created an account, verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges. +1. Accédez à [Coinbase](https://www.coinbase.com/) et créez un compte. +2. 
Une fois que vous avez créé un compte, vérifiez votre identité via un processus appelé KYC (ou Know Your Customer). l s'agit d'une procédure standard pour toutes les plateformes d'échange de crypto-monnaies centralisées ou dépositaires. 3. Once you have verified your identity, purchase ETH by clicking on the "Buy/Sell" button on the top right of the page. 4. Select the currency you want to purchase. Select ETH. 5. Select your preferred payment method. @@ -198,7 +198,7 @@ Vous pouvez en savoir plus sur l'obtention d'ETH sur Binance [ici](https://www.b ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/fr/chain-integration-overview.mdx b/website/pages/fr/chain-integration-overview.mdx index 9310317d84ca..8e3cc18a00bd 100644 --- a/website/pages/fr/chain-integration-overview.mdx +++ b/website/pages/fr/chain-integration-overview.mdx @@ -6,12 +6,12 @@ Un processus d'intégration transparent et basé sur la gouvernance a été con ## Étape 1. Intégration technique -- Les équipes travaillent sur une intégration de Graph Node et Firehose pour les chaînes non basées sur EVM. [Voici comment](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Les équipes lancent le processus d'intégration du protocole en créant un fil de discussion sur le forum [ici](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (sous-catégorie Nouvelles sources de données sous Gouvernance et GIPs ). L'utilisation du modèle de forum par défaut est obligatoire. ## Étape 2. Validation de l'intégration -- Les équipes collaborent avec les développeurs principaux, Graph Foundation et les opérateurs d'interfaces graphiques et de passerelles réseau, tels que [Subgraph Studio](https://thegraph.com/studio/), pour garantir un processus d'intégration fluide. Cela implique de fournir l'infrastructure backend nécessaire, telle que les points de terminaison JSON RPC ou Firehose de la chaîne d'intégration. 
Les équipes souhaitant éviter d'auto-héberger une telle infrastructure peuvent s'appuyer sur la communauté d'opérateurs de nœuds (indexeurs) de The Graph, ce que la Fondation peut aider à faire. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Les Graph Indexeurs testent l'intégration sur le réseau de testnet du graph. - Les développeurs principaux et les indexeurs surveillent la stabilité, les performances et le déterminisme des données. @@ -38,7 +38,7 @@ Ce processus est lié au service de données Subgraph, applicable uniquement aux Cela n’aurait un impact que sur la prise en charge du protocole pour l’indexation des récompenses sur les subgraphs alimentés par Substreams. La nouvelle implémentation de Firehose nécessiterait des tests sur testnet, en suivant la méthodologie décrite pour l'étape 2 de ce GIP. De même, en supposant que l'implémentation soit performante et fiable, un PR sur la [Matrice de support des fonctionnalités](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) serait requis ( Fonctionnalité de sous-graphe « Sous-flux de sources de données »), ainsi qu'un nouveau GIP pour la prise en charge du protocole pour l'indexation des récompenses. N'importe qui peut créer le PR et le GIP ; la Fondation aiderait à obtenir l'approbation du Conseil. -### 3. Combien de temps ce processus prendra-t-il ? +### 3. How much time will the process of reaching full protocol support take? Le temps nécessaire à la mise en réseau principal devrait être de plusieurs semaines, variant en fonction du temps de développement de l'intégration, de la nécessité ou non de recherches supplémentaires, de tests et de corrections de bugs et, comme toujours, du calendrier du processus de gouvernance qui nécessite les commentaires de la communauté. @@ -46,4 +46,4 @@ La prise en charge du protocole pour l'indexation des récompenses dépend de la ### 4. Comment les priorités seront-elles gérées ? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/fr/cookbook/arweave.mdx b/website/pages/fr/cookbook/arweave.mdx index d2b2bb03fcb5..a6f83e9e75c3 100644 --- a/website/pages/fr/cookbook/arweave.mdx +++ b/website/pages/fr/cookbook/arweave.mdx @@ -105,7 +105,7 @@ La définition du schéma décrit la structure de la base de données de subgrap Les gestionnaires pour le traitement des événements sont écrits en [AssemblyScript](https://www.assemblyscript.org/). 
-L'indexation Arweave introduit des types de données spécifiques à Arweave dans l'[API AssemblyScript](/developing/assemblyscript-api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/fr/cookbook/base-testnet.mdx b/website/pages/fr/cookbook/base-testnet.mdx index b06bcbd77f05..b44854de9c8e 100644 --- a/website/pages/fr/cookbook/base-testnet.mdx +++ b/website/pages/fr/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ Votre nom de subgraph est un identifiant pour votre subgraph. L'outil CLI vous g The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- Schéma (schema.graphql) - Le schéma GraphQL définit les données que vous souhaitez récupérer du subgraph. - Mappages AssemblyScript (mapping.ts) - Il s'agit du code qui traduit les données de vos sources de données vers les entités définies dans le schéma. -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). diff --git a/website/pages/fr/cookbook/cosmos.mdx b/website/pages/fr/cookbook/cosmos.mdx index 3881f082c08c..deb43bd5557e 100644 --- a/website/pages/fr/cookbook/cosmos.mdx +++ b/website/pages/fr/cookbook/cosmos.mdx @@ -48,30 +48,30 @@ La définition d'un subgraph comporte trois éléments clés : Le manifeste du subgraph (`subgraph.yaml`) identifie les sources de données du subgraph, les déclencheurs d'intérêt et les fonctions (`handlers`) qui doivent être exécutées en réponse à ces déclencheurs. Vous trouverez ci-dessous un exemple de manifeste de subgraph pour un subgraph Cosmos : ```yaml -version spec: 0.0.5 -description: Exemple de subgraph Cosmos +version spec : 0.0.5 +description : Exemple de subgraph Cosmos schéma: - fichier: ./schema.graphql # lien vers le fichier de schéma + fichier : ./schema.graphql # lien vers le fichier de schéma les sources de données: - - genre: cosmos - nom: CosmosHub - réseau: cosmoshub-4 # Cela changera pour chaque blockchain basée sur le cosmos. Dans ce cas, l’exemple utilise le mainnet Cosmos Hub. 
- source: - startBlock: 0 # Requis pour Cosmos, définissez-le sur 0 pour démarrer l'indexation à partir de la genèse de la chaîne - cartographie: - Version api: 0.0.7 - langage: wasm/assemblyscript - gestionnaires de blocs: - - handler: handleNewBlock # le nom de la fonction dans le fichier de mappage - Gestionnaires d'événements: - - event: récompenses # le type d'événement qui sera géré - handler: handleReward # le nom de la fonction dans le fichier de mappage - Gestionnaires de transactions: - - handler: handleTransaction # le nom de la fonction dans le fichier de mappage - Gestionnaires de messages: - - message: /cosmos.staking.v1beta1.MsgDelegate # le type d'un message - handler: handleMsgDelegate # le nom de la fonction dans le fichier de mappage - fichier: ./src/mapping.ts # lien vers le fichier avec les mappages Assemblyscript + - genre : cosmos + nom : CosmosHub + réseau : cosmoshub-4 # Cela changera pour chaque blockchain basée sur le cosmos. Dans ce cas, l’exemple utilise le mainnet Cosmos Hub. + source: + startBlock : 0 # Requis pour Cosmos, définissez-le sur 0 pour démarrer l'indexation à partir de la genèse de la chaîne + cartographie : + Version api : 0.0.7 + langage : wasm/assemblyscript + gestionnaires de blocs : + - handler: handleNewBlock # le nom de la fonction dans le fichier de mappage + Gestionnaires d'événements : + - event : récompenses # le type d'événement qui sera géré + handler: handleReward # le nom de la fonction dans le fichier de mappage + Gestionnaires de transactions : + - handler: handleTransaction # le nom de la fonction dans le fichier de mappage + Gestionnaires de messages : + - message : /cosmos.staking.v1beta1.MsgDelegate # le type d'un message + handler : handleMsgDelegate # le nom de la fonction dans le fichier de mappage + fichier : ./src/mapping.ts # lien vers le fichier avec les mappages Assemblyscript ``` - Les subgraphs cosmos introduisent un nouveau `type` de source de données (`cosmos`). @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and Les gestionnaires pour le traitement des événements sont écrits en [AssemblyScript](https://www.assemblyscript.org/). -L'indexation Cosmos introduit des types de données spécifiques à Cosmos dans l'[API AssemblyScript](/developing/assemblyscript-api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/fr/cookbook/grafting.mdx b/website/pages/fr/cookbook/grafting.mdx index 7a7c618dc550..b255c571ec8b 100644 --- a/website/pages/fr/cookbook/grafting.mdx +++ b/website/pages/fr/cookbook/grafting.mdx @@ -22,7 +22,7 @@ Pour plus d’informations, vous pouvez vérifier : - [Greffage](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -Dans ce tutoriel, nous allons aborder un cas d'utilisation de base. Nous allons remplacer un contrat existant par un contrat identique (avec une nouvelle adresse, mais le même code). Ensuite, nous grefferons le subgraph existant sur le subgraph "de base" qui suit le nouveau contrat. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Remarque importante sur le greffage lors de la mise à niveau vers le réseau @@ -30,7 +30,7 @@ Dans ce tutoriel, nous allons aborder un cas d'utilisation de base. 
Nous allons ### Pourquoi est-ce important? -La greffe est une fonctionnalité puissante qui permet de "greffer" un subgraph sur un autre, transférant ainsi les données historiques du subgraph existant vers une nouvelle version. Bien qu'il s'agisse d'un moyen efficace de préserver les données et de gagner du temps sur l'indexation, la greffe peut introduire des complexités et des problèmes potentiels lors de la migration d'un environnement hébergé vers le réseau décentralisé. Il n'est pas possible de greffer un subgraph du Graph Network vers le service hébergé ou le Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Les meilleures pratiques @@ -80,7 +80,7 @@ dataSources: ``` - La source de données `Lock` est l'adresse abi et le contrat que nous obtiendrons lorsque nous compilerons et déploierons le contrat -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - La section `mapping` définit les déclencheurs d'intérêt et les fonctions qui doivent être exécutées en réponse à ces déclencheurs. Dans ce cas, nous écoutons l'événement `Withdrawal` et appelons la fonction `handleWithdrawal` lorsqu'elle est émise. ## Définition de manifeste de greffage @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. ## Ressources complémentaires -Si vous souhaitez acquérir plus d'expérience en matière de greffes, voici quelques exemples de contrats populaires : +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/fr/cookbook/near.mdx b/website/pages/fr/cookbook/near.mdx index b04a82624f03..28b1995e5a6a 100644 --- a/website/pages/fr/cookbook/near.mdx +++ b/website/pages/fr/cookbook/near.mdx @@ -37,7 +37,7 @@ La définition d'un subgraph comporte trois aspects : **schema.graphql** : un fichier de schéma qui définit quelles données sont stockées pour votre subgraph, et comment les interroger via GraphQL. Les exigences pour les subgraphs NEAR sont couvertes par la [documentation existante](/developing/creating-a-subgraph#the-graphql-schema). -**Mappages AssemblyScript :** [Code AssemblyScript](/developing/assemblyscript-api) qui traduit les données d'événement en entités définies dans votre schéma. La prise en charge de NEAR introduit des types de données spécifiques à NEAR et une nouvelle fonctionnalité d'analyse JSON. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. 
Lors du développement du subgraph, il y a deux commandes clés : @@ -51,23 +51,23 @@ $ graph build # génère le Web Assembly à partir des fichiers AssemblyScript, Le manifeste de subgraph (`subgraph.yaml`) identifie les sources de données pour le subgraph, les déclencheurs d'intérêt et les fonctions qui doivent être exécutées en réponse à ces déclencheurs. Voici un exemple de manifeste de subgraph pour un subgraph NEAR: ```yaml -specVersion: 0.0.2 -schema: - file: ./src/schema.graphql # lien vers le fichier de schéma -dataSources: - - kind: near - network: near-mainnet - source: - account: app.good-morning.near # Cette source de données surveillera ce compte - startBlock: 10662188 # Requis pour NEAR - mapping: - apiVersion: 0.0.5 - language: wasm/assemblyscript - blockHandlers: - - handler: handleNewBlock # le nom de la fonction dans le fichier de mapping - receiptHandlers: - - handler: handleReceipt # le nom de la fonction dans le fichier de mappage - file: ./src/mapping.ts # lien vers le fichier contenant les mappings Assemblyscript +specVersion : 0.0.2 +schema : + file : ./src/schema.graphql # lien vers le fichier de schéma +dataSources : + - kind : near + network : near-mainnet + source : + account : app.good-morning.near # Cette source de données surveillera ce compte + startBlock : 10662188 # Requis pour NEAR + mapping : + apiVersion : 0.0.5 + language : wasm/assemblyscript + blockHandlers : + - handler : handleNewBlock # le nom de la fonction dans le fichier de mapping + receiptHandlers : + - handler : handleReceipt # le nom de la fonction dans le fichier de mappage + file : ./src/mapping.ts # lien vers le fichier contenant les mappings Assemblyscript ``` - Les subgraphs NEAR introduisent un nouveau `type` de source de données (`near`) @@ -77,12 +77,12 @@ dataSources: ```yaml comptes: - préfixes: - - application - - bien - suffixes: - - matin.près - - matin.testnet + préfixes : + - application + - bien + suffixes : + - matin.près + - matin.testnet ``` Les fichiers de données NEAR prennent en charge deux types de gestionnaires : @@ -98,7 +98,7 @@ La définition du schema décrit la structure de la base de données de subgraph Les gestionnaires de traitement des événements sont écrits dans l'[AssemblyScript](https://www.assemblyscript.org/). -L'indexation NEAR introduit des types de données spécifiques à NEAR dans l'[API AssemblyScript](/developing/assemblyscript-api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ Ces types sont passés au bloc & gestionnaires de reçus : - Les gestionnaires de blocs reçoivent un `Block` - Les gestionnaires de reçus reçoivent un `ReceiptWithOutcome` -Sinon, le reste de l'[API AssemblyScript](/developing/assemblyscript-api) est disponible pour les développeurs de subgraphs NEAR pendant l'exécution du mapping. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -Cela inclut une nouvelle fonction d'analyse JSON - les journaux sur NEAR sont fréquemment émis sous forme de JSON stringifiés. Une nouvelle fonction `json.fromString(...)` est disponible dans le cadre de l'[API JSON](/developing/assemblyscript-api#json-api) pour permettre aux développeurs pour traiter facilement ces journaux. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. 
A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## Déploiement d'un subgraph NEAR @@ -258,8 +258,8 @@ Si un `compte` est spécifié, il correspondra uniquement au nom exact du compte ```yaml comptes: - suffixes: - - mintbase1.near + suffixes : + - mintbase1.near ``` ### Les subgraphs NEAR peuvent-ils faire des appels de view aux comptes NEAR pendant les mappings? diff --git a/website/pages/fr/cookbook/subgraph-uncrashable.mdx b/website/pages/fr/cookbook/subgraph-uncrashable.mdx index 56b166b1056f..319851bc8579 100644 --- a/website/pages/fr/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/fr/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Générateur de code de subgraph sécurisé - Le cadre comprend également un moyen (via le fichier de configuration) de créer des fonctions de définition personnalisées, mais sûres, pour des groupes de variables d'entité. De cette façon, il est impossible pour l'utilisateur de charger/utiliser une entité de graph obsolète et il est également impossible d'oublier de sauvegarder ou définissez une variable requise par la fonction. -- Les journaux d'avertissement sont enregistrés sous forme de journaux indiquant où il y a une violation de la logique de subgraph pour aider à corriger le problème afin de garantir l'exactitude des données. Ces journaux peuvent être consultés dans le service hébergé de The Graph dans la section "Journaux". +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable peut être exécuté en tant qu'indicateur facultatif à l'aide de la commande Graph CLI codegen. diff --git a/website/pages/fr/cookbook/upgrading-a-subgraph.mdx b/website/pages/fr/cookbook/upgrading-a-subgraph.mdx index 107982f69408..73e243a32eaf 100644 --- a/website/pages/fr/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/fr/cookbook/upgrading-a-subgraph.mdx @@ -75,7 +75,7 @@ Toutes nos félicitations! Vous êtes désormais un pionnier de la décentralisa ```graphql { - stakers(block: { number_gte: 14486109 }) { + stakers(block : { number_gte : 14486109 }) { id } } @@ -136,7 +136,7 @@ Assurez-vous que **Mettre à jour les détails du subgraph dans l'Explorateur** ## Dépréciation d'un subgraph sur le réseau de graph -Suivez les étapes [ici](/managing/deprecating-a-subgraph) pour déprécier votre subgraph et le retirer du réseau The Graph Network. +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## Interrogation d'un subgraph + facturation sur le reseau The Graph diff --git a/website/pages/fr/deploying/multiple-networks.mdx b/website/pages/fr/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..205f49c07bd6 --- /dev/null +++ b/website/pages/fr/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## Déploiement du subgraph sur plusieurs réseaux + +Dans certains cas, vous souhaiterez déployer le même subgraph sur plusieurs réseaux sans dupliquer tout son code. 
Le principal défi qui en découle est que les adresses contractuelles sur ces réseaux sont différentes. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // le nom du réseau + "dataSource1": { // le nom de la source de données + "address": "0xabc...", // l'adresse du contrat (facultatif) + "startBlock": 123456 // le bloc de départ (facultatif) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Voici à quoi devrait ressembler votre fichier de configuration réseau : + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Nous pouvons maintenant exécuter l'une des commandes suivantes : + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Utilisation du modèle subgraph.yaml + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. 
You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +et + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... + "mustache": "^3.1.0" + } +} +``` + +To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Sepolia: +yarn prepare:sepolia && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. + +## Politique d'archivage des subgraphs de Subgraph Studio + +A subgraph version in Studio is archived if and only if it meets the following criteria: + +- The version is not published to the network (or pending publish) +- The version was created 45 or more days ago +- The subgraph hasn't been queried in 30 days + +In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. + +Chaque subgraph concerné par cette politique dispose d'une option de restauration de la version en question. + +## Vérification de l'état des subgraphs + +Si un subgraph se synchronise avec succès, c'est un bon signe qu'il continuera à bien fonctionner pour toujours. Cependant, de nouveaux déclencheurs sur le réseau peuvent amener votre subgraph à rencontrer une condition d'erreur non testée ou il peut commencer à prendre du retard en raison de problèmes de performances ou de problèmes avec les opérateurs de nœuds. + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). 
Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/fr/developing/creating-a-subgraph.mdx b/website/pages/fr/developing/creating-a-subgraph.mdx index d8acb3c6ebeb..e6681663539c 100644 --- a/website/pages/fr/developing/creating-a-subgraph.mdx +++ b/website/pages/fr/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: Comment créer un subgraph --- -Un subgraph récupère des données depuis une blockchain, les manipule puis les enregistre afin que ces données soient aisément accessibles via GraphQL. +This detailed guide provides instructions to successfully create a subgraph. -![Définition d'un subgraph](/img/defining-a-subgraph.png) - -Un subgraph se constitue des fichiers suivants : - -- `subgraph.yaml` : un fichier YAML qui contient le manifeste du subgraph +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -- `schema.graphql`: un schéma GraphQL qui définit les données stockées pour votre subgraph et comment les interroger via GraphQL +![Définition d'un subgraph](/img/defining-a-subgraph.png) -- `Mappages AssemblyScript` : [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) qui traduit les données d'événement en entités définies dans votre schéma (par exemple `mapping.ts` dans ce tutoriel) +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +## Démarrage -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +### Installation du Graph CLI -## Installation du Graph CLI +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -La CLI Graph est écrite en JavaScript et vous devrez installer soit `yarn` ou `npm` pour l'utiliser ; on suppose que vous avez du fil dans ce qui suit. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. 
Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -Une fois que vous avez `yarn`, installez la CLI Graph en exécutant +Sur votre machine locale, exécutez l'une des commandes suivantes : -**Installation avec yarn :** +#### Using [npm](https://www.npmjs.com/) ```bash -npm install -g @graphprotocol/graph-cli +npm install -g @graphprotocol/graph-cli@latest ``` -**Installation avec npm :** +#### Using [yarn](https://yarnpkg.com/) ```bash npm install -g @graphprotocol/graph-cli ``` -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -## D'un contrat existant +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. -La commande suivante crée un subgraph qui indexe tous les événements d'un contrat existant. Il essaie de récupérer l'ABI du contrat via Etherscan et utilise un chemin de fichier local en cas d'échec. Si l'un des arguments facultatifs manque, il vous guide à travers un formulaire interactif. +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. + +## Create a subgraph + +### From an existing contract + +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -The `` est l'ID de votre subgraph dans Subgraph Studio, il peut être trouvé sur la page d'information de votre subgraph. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. -## A partir d'un exemple de subgraph +- If any of the optional arguments are missing, it guides you through an interactive form. -Le second mode `graph init` prend en charge est la création d'un nouveau projet à partir d'un exemple de subgraph. La commande suivante le fait : +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. + +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. 
+- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Ajouter de nouvelles sources de données à un subgraph existant +## Add new `dataSources` to an existing subgraph -Depuis `v0.31.0`, le `graph-cli` prend en charge l'ajout de nouvelles sources de données à un subgraph existant via la commande `graph add`. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
[] @@ -78,22 +88,45 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -La commande `add` récupérera l'ABI depuis Etherscan (sauf si un chemin ABI est spécifié avec l'option `--abi`) et créera une nouvelle `dataSource` de la même manière que la commande `graph init` crée un `dataSource` `--from-contract`, mettant à jour le schéma et les mappages en conséquence. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- L'option `--merge-entities` identifie la façon dont le développeur souhaite gérer les conflits de noms d'`entité` et d'`événement` : + + - Si `true` : le nouveau `dataSource` doit utiliser les `eventHandlers` & `entités`. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- L'`adresse` du contrat sera écrite dans le `networks.json` du réseau concerné. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -L'option `--merge-entities` identifie la façon dont le développeur souhaite gérer les conflits de noms d'`entité` et d'`événement` : +## Components of a subgraph -- Si `true` : le nouveau `dataSource` doit utiliser les `eventHandlers` & `entités`. -- Si `false` : une nouvelle entité & le gestionnaire d'événements doit être créé avec `${dataSourceName}{EventName}`. +### Le manifeste du subgraph -L'`adresse` du contrat sera écrite dans le `networks.json` du réseau concerné. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **Remarque :** Lorsque vous utilisez la Cli interactive, après avoir exécuté avec succès `graph init`, vous serez invité à ajouter une nouvelle `dataSource`. +The **subgraph definition** consists of the following files: -## Le manifeste du subgraph +- `subgraph.yaml`: Contains the subgraph manifest -Le manifeste du subgraph `subgraph.yaml` définit les contrats intelligents que votre subgraph indexe, les événements de ces contrats auxquels prêter attention et comment mapper les données d'événements aux entités que Graph Node stocke et permet d'interroger. La spécification complète des manifestes de subgraphs peut être trouvée [ici](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -Pour l'exemple de subgraph, `subgraph.yaml` est : +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml version spec : 0.0.4 @@ -180,9 +213,9 @@ Un seul subgraph peut indexer les données de plusieurs contrats intelligents. A Les déclencheurs d'une source de données au sein d'un bloc sont classés à l'aide du processus suivant : -1. Les déclencheurs d'événements et d'appels sont d'abord classés par index de transaction au sein du bloc. -2. Les déclencheurs d'événements et d'appels au sein d'une même transaction sont classés selon une convention : les déclencheurs d'événements d'abord, puis les déclencheurs d'appel, chaque type respectant l'ordre dans lequel ils sont définis dans le manifeste. -3. Les déclencheurs de bloc sont exécutés après les déclencheurs d'événement et d'appel, dans l'ordre dans lequel ils sont définis dans le manifeste. +1. Les déclencheurs d'événements et d'appels sont d'abord classés par index de transaction au sein du bloc. +2. Les déclencheurs d'événements et d'appels au sein d'une même transaction sont classés selon une convention : les déclencheurs d'événements d'abord, puis les déclencheurs d'appel, chaque type respectant l'ordre dans lequel ils sont définis dans le manifeste. +3. Les déclencheurs de bloc sont exécutés après les déclencheurs d'événement et d'appel, dans l'ordre dans lequel ils sont définis dans le manifeste. Ces règles de commande sont susceptibles de changer. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. 
+- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| Version | Notes de version | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| Version | Notes de version | +|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### Obtenir les ABI @@ -442,16 +475,16 @@ Pour certains types d'entités, l'`id` est construit à partir des identifiants Nous prenons en charge les scalaires suivants dans notre API GraphQL : -| Type | Description | -| --- | --- | -| `Octets` | Tableau d'octets, représenté sous forme de chaîne hexadécimale. Couramment utilisé pour les hachages et adresses Ethereum. | -| `String` | Scalaire pour les valeurs `chaîne`. Les caractères nuls ne sont pas pris en charge et sont automatiquement supprimés. | -| `Boolean` | Scalar pour `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Grands entiers. Utilisé pour les types `uint32`, `int64`, `uint64`, ..., `uint256` d'Ethereum. 
Remarque : Tout ce qui se trouve en dessous de `uint32`, tel que `int32`, `uint24` ou `int8`, est représenté par `i32 **Remarque :** Une nouvelle source de données traitera uniquement les appels et les événements du bloc dans lequel elle a été créée et de tous les blocs suivants, mais ne traitera pas les données historiques, c'est-à-dire les données. qui est contenu dans les blocs précédents. -> +> > Si les blocs précédents contiennent des données pertinentes pour la nouvelle source de données, il est préférable d'indexer ces données en lisant l'état actuel du contrat et en créant des entités représentant cet état au moment de la création de la nouvelle source de données. ### Data Source Context @@ -930,7 +963,7 @@ dataSources: ``` > **Remarque :** Le bloc de création de contrat peut être rapidement consulté sur Etherscan : -> +> > 1. Recherchez le contrat en saisissant son adresse dans la barre de recherche. > 2. Cliquez sur le hachage de la transaction de création dans la section `Contract Creator`. > 3. Chargez la page des détails de la transaction où vous trouverez le bloc de départ de ce contrat. @@ -945,9 +978,9 @@ Le paramètre `indexerHints` dans le manifeste d'un subgraph fournit des directi `indexerHints.prune` : définit la conservation des données de bloc historiques pour un subgraph. Les options incluent : -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. Un nombre spécifique : Fixe une limite personnalisée au nombre de blocs historiques à conserver. +1. `"never"` : pas d'élagage des données historiques ; conserve l'ensemble de l'historique. +2. `indexerHints. prune` : définit la conservation des données de bloc historiques pour un subgraph. Les options incluent . +3. Un nombre spécifique : Fixe une limite personnalisée au nombre de blocs historiques à conserver. ``` indexerHints: @@ -971,8 +1004,7 @@ Pour les subgraphs exploitant les [requêtes de voyage dans le temps](/querying/ Pour conserver une quantité spécifique de données historiques : ``` - indexerHints: - prune: 1000 # Replace 1000 with the desired number of blocks to retain + indexerHints : prune : 1000 # Remplacer 1000 par le nombre de blocs à conserver ``` Préserver l'histoire complète des États de l'entité : @@ -982,29 +1014,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1477,7 +1486,7 @@ The file data source must specifically mention all the entity types which it wil #### Créer un nouveau gestionnaire pour traiter les fichiers -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. 
This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). Le CID du fichier sous forme de chaîne lisible est accessible via `dataSource` comme suit : @@ -1531,7 +1540,7 @@ import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' //Cet exemple de code concerne un sous-graphe de Crypto coven. Le hachage ipfs ci-dessus est un répertoire contenant les métadonnées des jetons pour toutes les NFT de l'alliance cryptographique. -export function handleTransfer(event: TransferEvent): void { +export function handleTransfer(event : TransferEvent) : void { let token = Token.load(event.params.tokenId.toString()) if (!token) { token = new Token(event.params.tokenId.toString()) diff --git a/website/pages/fr/developing/developer-faqs.mdx b/website/pages/fr/developing/developer-faqs.mdx index e46bbbcfeb19..20ea3603346a 100644 --- a/website/pages/fr/developing/developer-faqs.mdx +++ b/website/pages/fr/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: FAQs pour les développeurs --- -## 1. Qu'est-ce qu'un subgraph ? +This page summarizes some of the most common questions for developers building on The Graph. -Un subgraph est une API personnalisée construite sur des données de blockchain. Les subgraphs sont interrogés à l'aide du langage de requête GraphQL et sont déployés sur un nœud de graph à l'aide de Graphe CLI . Dès qu'ils sont déployés et publiés sur le réseau décentralisé de The Graph, Les indexeurs traitent les subgraphs et les rendent disponibles pour être interrogés par les consommateurs de subgraphs. +## Subgraph Related -## 2. Puis-je supprimer mon subgraph ? +### 1. Qu'est-ce qu'un subgraph ? -Il n'est pas possible de supprimer des subgraphs une fois qu'ils sont créés. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Puis-je changer le nom de mon subgraph ? +### 2. What is the first step to create a subgraph? -Non. Une fois qu'un subgraph est créé, son nom ne peut plus être modifié. Assurez-vous d'y réfléchir attentivement avant de créer votre subgraph afin qu'il soit facilement consultable et identifiable par d'autres dapps. +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Puis-je modifier le compte GitHub associé à mon subgraph ? +### 3. Can I still create a subgraph if my smart contracts don't have events? -Non. Dès qu'un subgraph est créé, le compte GitHub associé ne peut pas être modifié. Assurez-vous d'y réfléchir attentivement avant de créer votre subgraph. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. 
Suis-je toujours en mesure de créer un subgraph si mes smart contracts n'ont pas d'événements ? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -Il est fortement recommandé de structurer vos smart contracts pour avoir des événements associés aux données que vous souhaitez interroger. Les gestionnaires d'événements du subgraph sont déclenchés par des événements de contrat et constituent le moyen le plus rapide de récupérer des données utiles. +### 4. Puis-je modifier le compte GitHub associé à mon subgraph ? -Si les contrats avec lesquels vous travaillez ne contiennent pas d'événements, votre subgraph peut utiliser des gestionnaires d'appels et de blocs pour déclencher l'indexation. Bien que cela ne soit pas recommandé, les performances seront considérablement plus lentes. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. Est-il possible de déployer un subgraph portant le même nom pour plusieurs réseaux ? +### 5. How do I update a subgraph on mainnet? -Vous aurez besoin de noms distincts pour plusieurs réseaux. Bien que vous ne puissiez pas avoir différents subgraphs sous le même nom, il existe des moyens pratiques d'avoir une seule base de code pour plusieurs réseaux. Retrouvez plus d'informations à ce sujet dans notre documentation : [Déploiement d'un subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-an-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. En quoi les modèles sont-ils différents des sources de données ? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Les modèles vous permettent de créer des sources de données à la volée, pendant l'indexation de votre subgraph. Il se peut que votre contrat engendre de nouveaux contrats au fur et à mesure que les gens interagissent avec lui, et puisque vous connaissez la forme de ces contrats (ABI, événements, etc.) à l'avance, vous pouvez définir comment vous souhaitez les indexer dans un modèle et lorsqu'ils sont générés, votre subgraph créera une source de données dynamique en fournissant l'adresse du contrat. +Vous devez redéployer le subgraph, mais si l'ID de subgraph (hachage IPFS) ne change pas, il n'aura pas à se synchroniser depuis le début. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +Dans un subgraph, les événements sont toujours traités dans l'ordre dans lequel ils apparaissent dans les blocs, que ce soit sur plusieurs contrats ou non. + +### 10. 
How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Consultez la section "Instanciation d'un modèle de source de données" sur : [Modèles de source de données](/developing/creating-a-subgraph#data-source-templates). -## 8. Comment m'assurer que j'utilise la dernière version de graph-node pour mes déploiements locaux ? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -Vous pouvez exécuter la commande suivante : +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:dernier -``` +You can also use `graph add` command to add a new dataSource. -**REMARQUE :** docker / docker-compose utilisera toujours la version de graph-node extraite la première fois que vous l'avez exécuté, il est donc important de le faire pour vous assurer que vous êtes à jour avec la dernière version de graph-node. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. Comment appeler une fonction de contrat ou accéder à une variable d'état publique à partir de mes mappages de subgraphs ? +Les gestionnaires d'événements et d'appels sont d'abord classés par index de transaction à l'intérieur du bloc. Les gestionnaires d'événements et d'appels au sein d'une même transaction sont ordonnés selon une convention : d'abord les gestionnaires d'événements, puis les gestionnaires d'appels, chaque type respectant l'ordre défini dans le manifeste. Les gestionnaires de blocs sont exécutés après les gestionnaires d'événements et d'appels, dans l'ordre où ils sont définis dans le manifeste. Ces règles d'ordre sont également susceptibles d'être modifiées. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +Lorsque de nouvelles sources de données dynamiques sont créées, les gestionnaires définis pour les sources de données dynamiques ne commenceront à être traités qu'une fois que tous les gestionnaires de sources de données existantes auront été traités, et ils se répéteront dans la même séquence chaque fois qu'ils seront déclenchés. -## 10. Est-il possible de configurer un subgraph en utilisant `graph init` à partir de `graph-cli` avec deux contrats ? Ou dois-je ajouter manuellement une autre source de données dans `subgraph.yaml` après avoir exécuté `graph init` ? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Oui. Dans la commande `graph init` elle-même, vous pouvez ajouter plusieurs sources de données en saisissant les contrats l'un après l'autre. Vous pouvez également utiliser la commande `graph add` pour ajouter une nouvelle source de données. +Vous pouvez exécuter la commande suivante : -## 11. Je souhaite contribuer ou ajouter un problème GitHub. Où puis-je trouver les référentiels open source ? 
+```sh +docker pull graphprotocol/graph-node:dernier +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [l'outil de graph](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. Quelle est la méthode recommandée pour créer des identifiants « générés automatiquement » pour une entité lors du traitement des événements ? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? Si une seule entité est créée lors de l'événement et s'il n'y a rien de mieux disponible,alors le hachage de transaction + index de journal serait unique. Vous pouvez les masquer en les convertissant en octets, puis en les redirigeant vers `crypto.keccak256`, mais cela ne le rendra pas plus unique. -## 13. Lorsqu'on écoute plusieurs contrats, est-il possible de sélectionner l'ordre des contrats pour écouter les événements ? +### 15. Can I delete my subgraph? -Dans un subgraph, les événements sont toujours traités dans l'ordre dans lequel ils apparaissent dans les blocs, que ce soit sur plusieurs contrats ou non. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). + +## Network Related + +### 16. What networks are supported by The Graph? + +Vous pouvez trouver la liste des réseaux supportés [ici](/developing/supported-networks). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Oui. Vous pouvez le faire en important `graph-ts` comme dans l'exemple ci-dessous : @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Puis-je importer ethers.js ou d'autres bibliothèques JS dans mes mappages de subgraphs ? +## Indexing & Querying Related -Pas pour le moment, car les mappages sont écrits en AssemblyScript. Une autre solution possible consiste à stocker les données brutes dans des entités et à exécuter une logique qui nécessite des bibliothèques JS du client. +### 19. Is it possible to specify what block to start indexing on? -## 17. Est-il possible de spécifier sur quel bloc démarrer l'indexation ? +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +### 20. What are some tips to increase the performance of indexing? 
My subgraph is taking a very long time to sync -## 18. Existe-t-il des astuces pour améliorer les performances de l'indexation ? La synchronisation de mon subgraph prend beaucoup de temps +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -Oui, vous devriez jeter un coup d'œil à la fonctionnalité optionnelle de bloc de départ pour commencer l'indexation à partir du bloc où le contrat a été déployé : [Blocs de départ](/developing/creating-a-subgraph#start-blocks) - -## 19. Existe-t-il un moyen d'interroger directement le subgraph pour déterminer le dernier numéro de bloc qu'il a indexé ? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Oui ! Essayez la commande suivante, en remplaçant "organization/subgraphName" par l'organisation sous laquelle elle est publiée et le nom de votre subgraphe : @@ -102,44 +121,27 @@ Oui ! Essayez la commande suivante, en remplaçant "organization/subgraphName" curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/ index-node/graphql ``` -## 20. Quels réseaux sont pris en charge par The Graph ? - -Vous pouvez trouver la liste des réseaux supportés [ici](/developing/supported-networks). - -## 21. Est-il possible de dupliquer un subgraph sur un autre compte ou point de terminaison sans redéployer ? - -Vous devez redéployer le subgraph, mais si l'ID de subgraph (hachage IPFS) ne change pas, il n'aura pas à se synchroniser depuis le début. - -## 22. Est-il possible d'utiliser Apollo Federation au-dessus du graph-node ? - -La fédération n'est pas encore supportée, bien que nous souhaitions la prendre en charge à l'avenir. Pour le moment, vous pouvez utiliser l'assemblage de schémas, soit sur le client, soit via un service proxy. - -## 23. Y a-t-il une limite au nombre d'objets que The Graph peut renvoyer par requête ? +### 22. Is there a limit to how many objects The Graph can return per query? -Par défaut, les réponses aux requêtes sont limitées à 100 éléments par collection. Si vous souhaitez en recevoir plus, vous pouvez aller jusqu'à 1000 articles par collection et au-delà, vous pouvez paginer avec : +By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: ```graphql quelquesCollection(first: 1000, skip: ) { ... } ``` -## 24. Si mon interface dapp utilise The Graph pour les requêtes, dois-je écrire ma clé de requête directement dans l'interface ? Et si nous payons des frais de requête pour les utilisateurs : les utilisateurs malveillants rendront-ils nos frais de requête très élevés ? - -Actuellement, l'approche recommandée pour une dapp consiste à ajouter la clé à l'interface et à l'exposer aux utilisateurs finaux. Cela dit, vous pouvez limiter cette clé à un nom d'hôte, comme _yourdapp.io_ et subgraph. La passerelle est actuellement gérée par Edge & Node. Une partie de la responsabilité d'une passerelle est de surveiller les comportements abusifs et de bloquer le trafic des clients malveillants. - -## 25. Where do I go to find my current subgraph on the hosted service? - -Rendez-vous sur le service hébergé afin de trouver les subgraphs que vous ou d'autres personnes avez déployés sur le service hébergé. 
Vous pouvez le trouver [ici](https://thegraph.com/hosted-service). +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -## 26. Will the hosted service start charging query fees? +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -The Graph ne facturera jamais le service hébergé. The Graph est un protocole décentralisé, et faire payer un service centralisé n'est pas conforme aux valeurs du Graphe. Le service hébergé a toujours été une étape temporaire pour aider à passer au réseau décentralisé. Les développeurs disposeront d'un délai suffisant pour passer au réseau décentralisé lorsqu'ils le souhaiteront. +## Miscellaneous -## 27. How do I update a subgraph on mainnet? +### 24. Is it possible to use Apollo Federation on top of graph-node? -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -## 28. Dans quel ordre les gestionnaires d'événements, de blocages et d'appels sont-ils déclenchés pour une source de données ? +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -Les gestionnaires d'événements et d'appels sont d'abord classés par index de transaction à l'intérieur du bloc. Les gestionnaires d'événements et d'appels au sein d'une même transaction sont ordonnés selon une convention : d'abord les gestionnaires d'événements, puis les gestionnaires d'appels, chaque type respectant l'ordre défini dans le manifeste. Les gestionnaires de blocs sont exécutés après les gestionnaires d'événements et d'appels, dans l'ordre où ils sont définis dans le manifeste. Ces règles d'ordre sont également susceptibles d'être modifiées. - -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. 
+- [graph-node](https://github.com/graphprotocol/graph-node) +- [l'outil de graph](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/fr/developing/graph-ts/api.mdx b/website/pages/fr/developing/graph-ts/api.mdx index 842054226e4d..6f788ac5a496 100644 --- a/website/pages/fr/developing/graph-ts/api.mdx +++ b/website/pages/fr/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: API AssemblyScript --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -Cette page documente les API intégrées qui peuvent être utilisées lors de l'écriture de mappages de subgraphs. Deux types d'API sont disponibles prêtes à l'emploi : +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## Référence API @@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. -| Version | Notes de version | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Notes de version | +| :-----: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Types intégrés @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Chaque entité doit avoir un identifiant unique pour éviter les collisions avec d'autres entités. Il est assez courant que les paramètres d'événement incluent un identifiant unique pouvant être utilisé. Remarque : L'utilisation du hachage de transaction comme ID suppose qu'aucun autre événement dans la même transaction ne crée d'entités avec ce hachage comme ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Chargement d'entités depuis le magasin @@ -268,15 +272,18 @@ if (transfer == null) { // Utiliser l'entité Transfer comme précédemment ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Recherche d'entités créées dans un bloc As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -L'API du magasin facilite la récupération des entités créées ou mises à jour dans le bloc actuel. Une situation typique est qu'un gestionnaire crée une transaction à partir d'un événement en chaîne et qu'un gestionnaire ultérieur souhaite accéder à cette transaction si elle existe. Dans le cas où la transaction n'existe pas, le ubgraph devra se rendre dans la base de données juste pour découvrir que l'entité n'existe pas ; si l'auteur du subgraph sait déjà que l'entité doit avoir été créée dans le même bloc, l'utilisation de loadInBlock évite cet aller-retour dans la base de données. Pour certains subgraphs, ces recherches manquées peuvent contribuer de manière significative au temps d'indexation. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. 
If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // ou de toute autre manière dont l'ID est construit @@ -503,7 +510,9 @@ Tout autre contrat faisant partie du subgraph peut être importé à partir du c #### Gestion des appels retournés -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Notez qu'un nœud Graph connecté à un client Geth ou Infura peut ne pas détecter tous les retours, si vous comptez sur cela, nous vous recommandons d'utiliser un nœud Graph connecté à un client Parity. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Encodage/décodage ABI diff --git a/website/pages/fr/developing/supported-networks.mdx b/website/pages/fr/developing/supported-networks.mdx index b45431b63a2f..c13e9e32aa69 100644 --- a/website/pages/fr/developing/supported-networks.mdx +++ b/website/pages/fr/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - Pour une liste complète des fonctionnalités prises en charge par le réseau décentralisé, voir [cette page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). 
diff --git a/website/pages/fr/developing/unit-testing-framework.mdx b/website/pages/fr/developing/unit-testing-framework.mdx index c31a5bb8dd1b..61919d0d2001 100644 --- a/website/pages/fr/developing/unit-testing-framework.mdx +++ b/website/pages/fr/developing/unit-testing-framework.mdx @@ -941,7 +941,7 @@ Les utilisateurs peuvent également simuler une panne critique, comme ceci : ```typescript test('Tout faire exploser', () => { - log.critical('Boom!') + log.critical('Boom!') }) ``` @@ -1368,18 +1368,18 @@ La sortie du journal inclut la durée de l’exécution du test. Voici un exempl > Critique : impossible de créer WasmInstance à partir d'un module valide avec un contexte : importation inconnue : wasi_snapshot_preview1::fd_write n'a pas été défini -Cela signifie que vous avez utilisé `console.log` dans votre code, ce qui n'est pas pris en charge par AssemblyScript. Veuillez envisager d'utiliser l'[API Logging](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERREUR TS2554 : attendu ? arguments, mais j'ai eu ?. -> +> > renvoyer le nouveau ethereum.Block (defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt) ; -> +> > dans ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > renvoyer un nouveau ethereum.Transaction (defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt) ; -> +> > dans ~lib/matchstick-as/assembly/defaults.ts(24,12) L'inadéquation des arguments est causée par une inadéquation entre `graph-ts` et `matchstick-as`. La meilleure façon de résoudre des problèmes comme celui-ci est de tout mettre à jour vers la dernière version publiée. diff --git a/website/pages/fr/glossary.mdx b/website/pages/fr/glossary.mdx index e709ff578441..40165e544aac 100644 --- a/website/pages/fr/glossary.mdx +++ b/website/pages/fr/glossary.mdx @@ -10,11 +10,9 @@ title: Glossaire - **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. 
- -- **Indexeurs** : participants au réseau qui exécutent des nœuds d'indexation pour indexer les données des blockchains et servir des requêtes GraphQL. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Flux de revenus des indexeurs** : Les indexeurs sont récompensés en GRT avec deux composantes : les remises sur les frais de requête et les récompenses d'indexation. @@ -24,17 +22,17 @@ title: Glossaire - **Participation personnelle de l'indexeur** : le montant de GRT que les indexeurs mettent en jeu pour participer au réseau décentralisé. Le minimum est de 100 000 GRT et il n’y a pas de limite supérieure. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Mise à niveau de l'indexeur** : un indexeur temporaire conçu pour servir de solution de secours pour les requêtes de subgraphs non prises en charge par d'autres indexeurs du réseau. Il garantit une transition transparente pour la mise à niveau des subgraphs à partir du service hébergé en répondant facilement à leurs requêtes dès leur publication. L'indexeur de mise à niveau n'est pas compétitif par rapport aux autres indexeurs et prend en charge les chaînes qui étaient auparavant exclusives au service hébergé. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Taxe de délégation** : Une taxe de 0,5 % payée par les délégués lorsqu'ils délèguent des GRT aux indexeurs. Les GRT utilisés pour payer la taxe sont brûlés. -- **Curateurs** : participants au réseau qui identifient des subgraphs de haute qualité et les « organisent » (c'est-à-dire signalent GRT sur eux) en échange de partages de curation. Lorsque les indexeurs réclament des frais de requête sur un subgraph, 10 % sont distribués aux conservateurs de ce subgraph. Les indexeurs gagnent des récompenses d'indexation proportionnelles au signal sur un subgraph. Nous voyons une corrélation entre la quantité de GRT signalée et le nombre d'indexeurs indexant un subgraph. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Taxe de curation** : Une taxe de 1% payée par les curateurs lorsqu'ils signalent des GRT sur des subgraphs. Le GRT utilisé pour payer la taxe est brûlé. -- **Consommateur de subgraphs** : Toute application ou utilisateur qui interroge un subgraph. +- **Data Consumer**: Any application or user that queries a subgraph. - **Développeur de subgraphs** : un développeur qui crée et déploie un subgraph sur le réseau décentralisé de The Graph. 
@@ -46,11 +44,11 @@ title: Glossaire 1. **Actif** : Une allocation est considérée comme active lorsqu'elle est créée sur la chaîne. Cela s'appelle ouvrir une allocation, et indique au réseau que l'indexeur indexe et sert activement les requêtes pour un subgraph particulier. Les allocations actives accumulent des récompenses d'indexation proportionnelles au signal sur le subgraph et à la quantité de GRT allouée. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **Subgraph Studio** : une application puissante pour créer, déployer et publier des subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: Glossaire - .**GRT** : le jeton d'utilité du travail de The Graph, le GRT offre des incitations économiques aux participants du réseau pour leur contribution au réseau. 
-- **POI ou preuve d'indexation** : lorsqu'un indexeur clôture son allocation et souhaite réclamer ses récompenses d'indexation accumulées sur un subgraph donné, il doit fournir une preuve d'indexation valide et récente ( POI). Les pêcheurs peuvent contester le POI fourni par un indexeur. Un différend résolu en faveur du pêcheur entraînera la suppression de l'indexeur. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node** : Graph Node est le composant qui indexe les subgraphs et rend les données résultantes disponibles pour interrogation via une API GraphQL. En tant que tel, il est au cœur de la pile de l’indexeur, et le bon fonctionnement de Graph Node est crucial pour exécuter un indexeur réussi. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Agent de l'indexeur** : l'agent de l'indexeur fait partie de la pile de l'indexeur. Il facilite les interactions de l'indexeur sur la chaîne, notamment l'enregistrement sur le réseau, la gestion des déploiements de subgraphs vers son ou son(ses) noed(s) de graph et la gestion des allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client** : une bibliothèque pour créer des dapps basées sur GraphQL de manière décentralisée. @@ -78,10 +76,6 @@ title: Glossaire - **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. -- **_Mise à niveau_ d'un subgraph vers The Graph Network** : processus de déplacement d'un subgraph du service hébergé vers The Graph Network . - -- **_Mise à jour_ d'un subgraph** : processus de publication d'une nouvelle version de subgraph avec des mises à jour du manifeste, du schéma ou du subgraph. cartographies. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. 
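To make the glossary's "Endpoint", "Subgraph", and "Data Consumer" entries concrete, here is a minimal sketch of querying a subgraph's GraphQL endpoint over HTTP. The endpoint placeholders follow the URL format quoted in the glossary; the `tokens` entity and its fields are purely hypothetical — substitute whatever entities the subgraph's schema actually defines.

```typescript
// Minimal sketch: a Data Consumer querying a subgraph endpoint.
// <API_KEY> and <SUBGRAPH_ID> are placeholders; `tokens` is a hypothetical entity.
const endpoint =
  'https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID>'

const query = /* GraphQL */ `
  {
    tokens(first: 5) {
      id
      symbol
    }
  }
`

async function main(): Promise<void> {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })

  const { data, errors } = await response.json()
  if (errors) throw new Error(JSON.stringify(errors))
  console.log(data)
}

main().catch(console.error)
```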
diff --git a/website/pages/fr/index.json b/website/pages/fr/index.json index 1758912454fa..dd7f5b249c72 100644 --- a/website/pages/fr/index.json +++ b/website/pages/fr/index.json @@ -1,13 +1,13 @@ { "title": "Commencer", - "intro": "Découvrez The Graph, un protocole décentralisé pour indexer et interroger les données des blockchains.", + "intro": "Découvrez The Graph, un protocole décentralisé d'indexation et d'interrogation des données provenant des blockchains.", "shortcuts": { "aboutTheGraph": { - "title": "À propos du Graph", + "title": "À propos de The Graph", "description": "En savoir plus sur The Graph" }, "quickStart": { - "title": "Début rapide", + "title": "Démarrage rapide", "description": "Lancez-vous et commencez avec The Graph" }, "developerFaqs": { @@ -15,21 +15,17 @@ "description": "Questions fréquemment posées" }, "queryFromAnApplication": { - "title": "Requête d'une application", - "description": "Apprenez à exécuter vos requêtes d'une application" + "title": "Requête depuis une application", + "description": "Apprenez à exécuter vos requêtes à partir d'une application" }, "createASubgraph": { "title": "Créer un subgraph", "description": "Utiliser le « Studio » pour créer des subgraphs" - }, - "migrateFromHostedService": { - "title": "Upgrade from the hosted service", - "description": "Upgrading subgraphs to The Graph Network" } }, "networkRoles": { "title": "Les divers rôles du réseau", - "description": "Découvrez les rôles réseau de The Graph.", + "description": "Découvrez les divers rôles du réseau The Graph.", "roles": { "developer": { "title": "Développeur", @@ -60,16 +56,12 @@ "graphExplorer": { "title": "Graph Explorer", "description": "Explorer les subgraphs et interagir avec le protocole" - }, - "hostedService": { - "title": "Service hébergé", - "description": "Create and explore subgraphs on the hosted service" } } }, "supportedNetworks": { "title": "Réseaux pris en charge", - "description": "The Graph supports the following networks.", - "footer": "For more details, see the {0} page." + "description": "The Graph prend en charge les réseaux suivants.", + "footer": "Pour plus de détails, consultez la page {0}." } } diff --git a/website/pages/fr/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/fr/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..3358288c3518 --- /dev/null +++ b/website/pages/fr/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## Transférer la propriété d'un subgraph + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. + +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/adresse-de-votre-portefeuille +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. 
Use the UI built into Subgraph Studio:
+
+   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png)
+
+2. Choose the address that you would like to transfer the subgraph to:
+
+   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png)
+
+Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea:
+
+![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png)
+
+## Deprecating a subgraph
+
+Although you cannot delete a subgraph, you can deprecate it on Graph Explorer.
+
+### Step-by-Step
+
+To deprecate your subgraph, do the following:
+
+1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract).
+2. Call `deprecateSubgraph` with your `SubgraphID` as your argument.
+3. Your subgraph will no longer appear in searches on Graph Explorer.
+
+**Please note the following:**
+
+- The owner's wallet should call the `deprecateSubgraph` function.
+- Les curateurs ne seront plus en mesure de signaler le subgraph.
+- Curators that already signaled on the subgraph can withdraw their signal at an average share price.
+- Deprecated subgraphs will show an error message.
+
+> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively.
diff --git a/website/pages/fr/mips-faqs.mdx b/website/pages/fr/mips-faqs.mdx
index 7276003edb79..5701d25d9f84 100644
--- a/website/pages/fr/mips-faqs.mdx
+++ b/website/pages/fr/mips-faqs.mdx
@@ -6,10 +6,6 @@ title: MIPs FAQs

> Remarque : le programme MIPs est fermé depuis mai 2023. Merci à tous les indexeurs qui ont participé !

-C'est une période passionnante pour participer à l'écosystème The Graph ! Lors du [Graph Day 2022](https://thegraph.com/graph-day/2022/), Yaniv Tal a annoncé la [cessation du service hébergé](https://thegraph.com/blog/sunsetting-hosted-service/), un moment vers lequel l’écosystème Graph travaille depuis de nombreuses années.
-
-Pour prendre en charge la cessation du service hébergé et la migration de toutes ses activités vers le réseau décentralisé, la Graph Foundation a annoncé le \[programme de fournisseurs d'infrastructures de migration (MIP)(https://thegraph.com/blog/mips-multi -programme-d'incitation-à-indexation-en-chaîne).
-
Le programme MIPs est un programme d'incitation destiné aux indexeurs pour les soutenir avec des ressources pour indexer les chaînes au-delà du mainnet Ethereum et aider le protocole The Graph à étendre le réseau décentralisé en une couche d'infrastructure multi-chaînes.

Le programme MIPs a alloué 0,75 % de l'offre de GRT (75 millions de GRT), dont 0,5 % pour récompenser les indexeurs qui contribuent au démarrage du réseau et 0,25 % alloués aux subventions de réseau pour les développeurs de sous-graphes utilisant des subgraphs multi-chaînes.
@@ -24,7 +20,7 @@ Le programme MIPs a alloué 0,75 % de l'offre de GRT (75 millions de GRT), dont

### 1. Est-il possible de générer une preuve d'indexation (POI) valide même si un subgraph a échoué ?

-Oui, c'est effectivement le cas. .
+Oui, c'est effectivement le cas.

Pour le contexte, la charte d'arbitrage, [en savoir plus sur la charte ici](https://hackmd.io/@4Ln8SAS4RX-505bIHZTeRw/BJcHzpHDu#Abstract), précise la méthodologie de génération d'un POI pour un subgraph défaillant.
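As a companion to the deprecation steps documented in the new transfer-and-deprecate page above, here is a hedged sketch of calling `deprecateSubgraph` programmatically (with ethers.js v6) instead of using the Arbiscan "write contract" UI. The one-line ABI fragment and the `uint256` argument type are assumptions — verify them against the verified contract on Arbiscan before sending a real transaction, and call the function from the subgraph owner's wallet.

```typescript
// Sketch only: programmatic equivalent of the Arbiscan "write contract" step.
// The ABI fragment below is an assumption; check the verified contract first.
import { ethers } from 'ethers'

// Arbitrum One GNS proxy address quoted on the page above.
const GNS_ADDRESS = '0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec'

async function deprecateSubgraph(subgraphId: bigint): Promise<void> {
  const provider = new ethers.JsonRpcProvider('https://arb1.arbitrum.io/rpc')
  // Must be the subgraph owner's wallet.
  const owner = new ethers.Wallet(process.env.PRIVATE_KEY!, provider)

  const gns = new ethers.Contract(
    GNS_ADDRESS,
    ['function deprecateSubgraph(uint256 _subgraphID)'],
    owner,
  )

  const tx = await gns.deprecateSubgraph(subgraphId)
  await tx.wait()
  console.log(`Subgraph ${subgraphId} deprecated in tx ${tx.hash}`)
}
```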
diff --git a/website/pages/fr/network/benefits.mdx b/website/pages/fr/network/benefits.mdx index 30eb7202be81..247fe2ac7295 100644 --- a/website/pages/fr/network/benefits.mdx +++ b/website/pages/fr/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Cost Comparison | Auto-hébergé | The Graph Network | -| :-: | :-: | :-: | -| Coût mensuel du serveur\* | 350 $ au mois | 0 $ | -| Frais de requête | + 0 $ | $0 per month | -| Temps d'ingénierie | 400 $ au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | -| Requêtes au mois | Limité aux capacités infra | 100,000 (Free Plan) | -| Tarif par requête | 0 $ | $0 | -| Les infrastructures | Centralisée | Décentralisée | -| La redondance géographique | 750$+ par nœud complémentaire | Compris | -| Temps de disponibilité | Variable | + 99.9% | -| Total des coûts mensuels | + 750 $ | 0 $ | +| Cost Comparison | Auto-hébergé | The Graph Network | +|:------------------------------:|:-----------------------------------------:|:---------------------------------------------------------------------------:| +| Coût mensuel du serveur\* | 350 $ au mois | 0 $ | +| Frais de requête | + 0 $ | $0 per month | +| Temps d'ingénierie | 400 $ au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | +| Requêtes au mois | Limité aux capacités infra | 100,000 (Free Plan) | +| Tarif par requête | 0 $ | $0 | +| Les infrastructures | Centralisée | Décentralisée | +| La redondance géographique | 750$+ par nœud complémentaire | Compris | +| Temps de disponibilité | Variable | + 99.9% | +| Total des coûts mensuels | + 750 $ | 0 $ | ## Medium Volume User (~3M queries per month) -| Comparaison de coût | Auto-hébergé | The Graph Network | -| :-: | :-: | :-: | -| Coût mensuel du serveur\* | 350 $ au mois | 0 $ | -| Frais de requête | 500 $ au mois | $120 per month | -| Temps d'ingénierie | 800 $ au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | -| Requêtes au mois | Limité aux capacités infra | ~3,000,000 | -| Tarif par requête | 0 $ | $0.00004 | -| L'infrastructure | Centralisée | Décentralisée | -| Frais d'ingénierie | 200 $ au mois | Compris | -| La redondance géographique | 1 200 $ coût total par nœud supplémentaire | Compris | -| Temps de disponibilité | Variable | + 99.9% | -| Total des coûts mensuels | + 1650 $ | $120 | +| Comparaison de coût | Auto-hébergé | The Graph Network | +|:------------------------------:|:-------------------------------------------:|:---------------------------------------------------------------------------:| +| Coût mensuel du serveur\* | 350 $ au mois | 0 $ | +| Frais de requête | 500 $ au mois | $120 per month | +| Temps d'ingénierie | 800 $ au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | +| Requêtes au mois | Limité aux capacités infra | ~3,000,000 | +| Tarif par requête | 0 $ | $0.00004 | +| L'infrastructure | Centralisée | Décentralisée | +| Frais d'ingénierie | 200 $ au mois | Compris | +| La redondance géographique | 1 200 $ coût total par nœud supplémentaire | Compris | +| Temps de disponibilité | Variable | + 99.9% | +| Total des coûts mensuels | + 1650 $ | $120 | ## High Volume User (~30M queries per month) -| Comparaison des coûts | Auto-hébergé | The Graph Network | -| :-: | :-: | :-: | -| Coût mensuel du serveur\* | 1100 $ au mois, par nœud | 0 $ | -| Frais de requête | 
4000 $ | $1,200 per month | -| Nombre de nœuds obligatoires | 10 | Sans objet | -| Temps d'ingénierie | 6000 $ ou plus au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | -| Requêtes au mois | Limité aux capacités infra | ~30,000,000 | -| Tarif par requête | 0 $ | $0.00004 | -| L'infrastructure | Centralisée | Décentralisée | -| La redondance géographique | 1 200 $ de coûts totaux par nœud supplémentaire | Compris | -| Temps de disponibilité | Variable | + 99.9% | -| Total des coûts mensuels | + 11 000 $ | $1,200 | +| Comparaison des coûts | Auto-hébergé | The Graph Network | +|:------------------------------:|:-----------------------------------------------:|:---------------------------------------------------------------------------:| +| Coût mensuel du serveur\* | 1100 $ au mois, par nœud | 0 $ | +| Frais de requête | 4000 $ | $1,200 per month | +| Nombre de nœuds obligatoires | 10 | Sans objet | +| Temps d'ingénierie | 6000 $ ou plus au mois | Aucun, intégré au réseau avec des indexeurs distribués à l'échelle mondiale | +| Requêtes au mois | Limité aux capacités infra | ~30,000,000 | +| Tarif par requête | 0 $ | $0.00004 | +| L'infrastructure | Centralisée | Décentralisée | +| La redondance géographique | 1 200 $ de coûts totaux par nœud supplémentaire | Compris | +| Temps de disponibilité | Variable | + 99.9% | +| Total des coûts mensuels | + 11 000 $ | $1,200 | \*y compris les coûts de sauvegarde : $50-$ à 100 dollars au mois diff --git a/website/pages/fr/network/curating.mdx b/website/pages/fr/network/curating.mdx index f7b06fd56ac2..bba1505a7955 100644 --- a/website/pages/fr/network/curating.mdx +++ b/website/pages/fr/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. 
If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ La signalisation sur une version spécifique est particulièrement utile lorsqu' La migration automatique de votre signal vers la version de production la plus récente peut s'avérer utile pour vous assurer que vous continuez à accumuler des frais de requête. Chaque fois que vous effectuez une curation, une taxe de curation de 1 % est appliquée. Vous paierez également une taxe de curation de 0,5 % à chaque migration. Les développeurs de subgraphs sont découragés de publier fréquemment de nouvelles versions - ils doivent payer une taxe de curation de 0,5 % sur toutes les parts de curation migrées automatiquement. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Risques 1. Le marché des requêtes est intrinsèquement jeune chez The Graph et il y a un risque que votre %APY soit inférieur à vos attentes en raison de la dynamique naissante du marché. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. Un subgraph peut échouer à cause d'un bug. Un subgraph qui échoue n'accumule pas de frais de requête. Par conséquent, vous devrez attendre que le développeur corrige le bogue et déploie une nouvelle version. 
- Si vous êtes abonné à la version la plus récente d'un subgraph, vos parts migreront automatiquement vers cette nouvelle version. Cela entraînera une taxe de curation de 0,5 %. @@ -65,7 +65,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th Trouver des subgraphs de haute qualité est une tâche complexe, mais elle peut être abordée de plusieurs manières différentes. En tant que Curateur, vous voulez rechercher des subgraphs fiables qui génèrent un volume de requêtes. Un subgraph fiable peut être précieux s'il est complet, précis et répond aux besoins en données d'une dApp. Un subgraph mal architecturé pourrait nécessiter d'être révisé ou republié, et peut également échouer. Il est crucial pour les Curateurs d'examiner l'architecture ou le code d'un subgraph afin d'évaluer si un subgraph est précieux. En conséquence : -- Les curateurs peuvent utiliser leur compréhension d'un réseau pour essayer de prédire comment un subgraph individuel peut générer un volume de requêtes plus ou moins élevé à l'avenir +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. Quel est le coût de la mise à jour d'un subgraph ? @@ -78,50 +78,14 @@ Il est conseillé de ne pas mettre à jour vos subgraphs trop fréquemment. Voir ### 5. Puis-je vendre mes parts de curateurs ? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. 
This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Courbe de liaison 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Prix des actions](/img/price-per-share.png) - -Par conséquent, le prix augmente de façon linéaire, ce qui signifie qu'il est de plus en plus cher d'acheter une action au fil du temps. Voici un exemple de ce que nous entendons par là, voir la courbe de liaison ci-dessous : - -![Courbe de liaison](/img/bonding-curve.png) - -Considérons que nous avons deux curateurs qui monnayent des actions pour un subgraph : - -- Le curateur A est le premier à signaler sur le subgraph. En ajoutant 120 000 GRT dans la courbe, il est capable de frapper 2000 parts. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Comme les deux curateurs détiennent la moitié du total des parts de curation, ils recevraient un montant égal de redevances de curateur. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- Le curateur restant recevrait alors toutes les redevances de curateur pour ce subgraph. S'il brûlait ses pièces pour retirer la GRT, il recevrait 120 000 GRT. -- **TLDR** : La valeur en GRT des parts de curation est déterminée par la courbe de liaison et peut-être volatile. Il est possible de subir de grosses pertes. Signer tôt signifie que vous investissez moins de GRT pour chaque action. Par extension, cela signifie que vous gagnez plus de redevances de curation par GRT que les curateurs ultérieurs pour le même subgraph. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -Dans le cas de The Graph, la [mise en œuvre par Bancor d'une formule de courbe de liaison](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) est exploitée. - Vous ne savez toujours pas où vous en êtes ? Regardez notre guide vidéo sur la curation ci-dessous : diff --git a/website/pages/fr/network/delegating.mdx b/website/pages/fr/network/delegating.mdx index ba31be5aa31f..aad023c34655 100644 --- a/website/pages/fr/network/delegating.mdx +++ b/website/pages/fr/network/delegating.mdx @@ -2,13 +2,23 @@ title: Délégation --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. 
Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Guide du délégué -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,64 +34,86 @@ Les principaux risques liés à la fonction de délégué dans le protocole sont Les délégués ne peuvent pas être licenciés en cas de mauvais comportement, mais ils sont soumis à une taxe visant à décourager les mauvaises décisions susceptibles de nuire à l'intégrité du réseau. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### La période de découplage de la délégation Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. 
This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
-  !Délégation débondage](/img/Delegation-Unbonding.png) _Notez la commission de 0,5% dans l'interface utilisateur de la
-  délégation, ainsi que la période de débondage de 28 jours. de 28 jours
+  ![Délégation débondage](/img/Delegation-Unbonding.png) _Notez la commission de 0,5% dans l'interface utilisateur de la délégation, ainsi que la période de débondage de 28 jours._
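To make the earlier suggestion concrete — estimating how long it takes to earn back the 0.5% delegation tax before committing to the 28-day unbonding period — here is a minimal sketch. The reward rate used is a purely hypothetical input chosen by the Delegator, not a protocol value.

```typescript
// Back-of-the-envelope check for the 0.5% delegation tax.
// `assumedApy` is a hypothetical figure, not a protocol constant.
function breakEvenDays(delegatedGrt: number, assumedApy: number): number {
  const DELEGATION_TAX = 0.005 // 0.5% of the delegated GRT is burned

  const taxPaid = delegatedGrt * DELEGATION_TAX
  const stakeAfterTax = delegatedGrt - taxPaid
  const rewardsPerDay = (stakeAfterTax * assumedApy) / 365

  return taxPaid / rewardsPerDay
}

// Delegating 1,000 GRT burns 5 GRT; at an assumed 10% APY the tax is
// earned back in roughly 18 days.
console.log(breakEvenDays(1_000, 0.1))
```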
### Choisir un indexeur digne de confiance avec une rémunération équitable pour les délégués -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *Le meilleur indexeur donne aux délégués 90 % des récompenses. Le - celui du milieu donne 20 % aux délégués. Celui du bas donne aux délégués environ 83 %.* + celui du milieu donne 20 % aux délégués. Celui du bas donne aux délégués environ 83 %.*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant.
+- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects.
+
+As you can see, in order to choose the right Indexer, you must consider multiple things.

-As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
+- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently.
+- Many Indexers are very active in Discord and will be happy to answer your questions.
+- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.

-### Calcul du rendement attendu des délégués
+## Calculating Delegators' Expected Return

-A Delegator must consider a lot of factors when determining the return. These include:
+A Delegator must consider the following factors to determine a return:

-- Un délégué technique peut également examiner la capacité de l'indexeur à utiliser les jetons délégués dont il dispose. Si un indexeur n'alloue pas tous les jetons disponibles, il ne réalise pas le profit maximum qu'il pourrait réaliser pour lui-même ou pour ses délégués.
-- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days.
+- Consider an Indexer's ability to use the Delegated tokens available to them.
+  - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators.
+- Pay attention to the first few days of delegating.
+  - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low.

### Considérant la réduction des frais d'interrogation et la réduction des frais d'indexation

-Comme décrit dans les sections précédentes, vous devez choisir un indexeur qui est transparent et honnête dans la fixation de sa réduction des frais de requête et d'indexation. Un délégué doit également examiner le temps de refroidissement des paramètres pour voir de combien de temps il dispose. Après cela, il est assez simple de calculer le montant des récompenses que les délégués reçoivent.
La formule est la suivante : +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Délégation Image 3](/img/Delegation-Reward-Formula.png) ### Compte tenu du pool de délégation de l'indexeur -Une autre chose qu'un délégant doit prendre en compte est la proportion du pool de délégation qu'il possède. Toutes les récompenses de délégation sont partagées équitablement, avec un simple rééquilibrage du pool déterminé par le montant que le délégant a déposé dans le pool. Cela donne au délégant une part du pool : +Delegators should consider the proportion of the Delegation Pool they own. -![Formule de partage](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Formule de partage](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Compte tenu de la capacité de délégation -Une autre chose à considérer est la capacité de délégation. Actuellement, le ratio de délégation est fixé à 16. Cela signifie que si un indexeur a mis en jeu 1 000 000 GRT, sa capacité de délégation est de 16 000 000 GRT de jetons délégués qu'il peut utiliser dans le protocole. Tout jeton délégué dépassant ce montant diluera toutes les récompenses du délégué. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -89,16 +121,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### Bug MetaMask « Transaction en attente » -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? 
+
+At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts.
+
+#### Exemple

-At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts.
+Let's say you attempt to delegate with an insufficient gas fee relative to the current prices.

-For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.
+- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will only be processed until the initial transaction is mined because transactions for an address must be processed in order.
+- In such cases, these transactions can be cancelled in MetaMask, but the transaction attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.

-A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.
+A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.

-## Guide vidéo pour l'interface utilisateur du réseau
+## Video Guide

-This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI.
+This video guide reviews this page while interacting with the UI.
diff --git a/website/pages/fr/network/developing.mdx b/website/pages/fr/network/developing.mdx
index 9379c97ea334..f1f54430800f 100644
--- a/website/pages/fr/network/developing.mdx
+++ b/website/pages/fr/network/developing.mdx
@@ -2,52 +2,88 @@ title: Le Développement
---

-Les développeurs constituent le côté demande de l’écosystème The Graph. Les développeurs créent des subgraphs et les publient sur The Graph Network. Ensuite, ils interrogent les subgraphs en direct avec GraphQL afin d'alimenter leurs applications.
+To start coding right away, go to [Developer Quick Start](/quick-start/).
+
+## Aperçu
+
+As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue.
+
+On The Graph, you can:
+
+1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing subgraphs.
+
+### What is GraphQL?
+
+- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+
+### Developer Actions
+
+- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
+- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
+- Deploy, publish and signal your subgraphs within The Graph Network.
+
+## Subgraph Specifics
+
+### What are subgraphs?
+
+A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
+
+A subgraph primarily consists of the following files:
+
+- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest).
+- `schema.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema).
+- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates event data into the entities defined in your schema.
+
+Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/).

## Flux du cycle de vie des subgraphs

-Les subgraphs déployés sur le réseau ont un cycle de vie défini.
+Here is a general overview of a subgraph’s lifecycle:

-### Développer localement
+![Flux du cycle de vie des subgraphes](/img/subgraph-lifecycle.png)

-Comme pour tout développement de subgraphs, cela commence par le développement et les tests locaux. Les développeurs peuvent utiliser la même configuration locale, qu'ils construisent pour The Graph Network, le service hébergé ou un nœud Graph local, en tirant parti de `graph-cli` et `graph-ts` pour créer leur subgraph. Les développeurs sont encouragés à utiliser des outils tels que [Matchstick](https://github.com/LimeChain/matchstick) pour les tests unitaires afin d'améliorer la robustesse de leurs subgraphs.
+### Développer localement

-> Le réseau de graphes est soumis à certaines contraintes, en termes de fonctionnalités et de réseaux pris en charge. Seuls les subgraphs des [réseaux pris en charge](/developing/supported-networks) obtiendront des récompenses en matière d'indexation, et les subgraphs qui récupèrent des données à partir d'IPFS ne sont pas non plus éligibles.
+Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs.

### Deploy to Subgraph Studio

-Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected.
-
-### Publier sur le réseau
+Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/).
In Subgraph Studio, you can do the following: -Lorsque le développeur est satisfait de son subgraph, il peut le publier sur le réseau The Graph. Il s'agit d'une action 'on-chain', qui enregistre le subgraph afin qu'il puisse être découvert par les indexeurs. Les subgraphs publiés ont un NFT correspondant, qui est alors facilement transférable. Le subgraph publié est associé à des métadonnées qui fournissent aux autres participants du réseau un contexte et des informations utiles. +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. -### Signal pour encourager l'indexation +### Publier sur le réseau -Les subgraphs publiés ont peu de chances d'être repérés par les indexeurs sans l'ajout d'un signal. Le signal est constitué de GRT verrouillés associés à un subgraph donné, ce qui indique aux indexeurs qu'un subgraph donné recevra du volume de requêtes et contribue également aux récompenses d'indexation disponibles pour le traiter. Les développeurs de subgraphs ajoutent généralement un signal à leur subgraph afin d'encourager l'indexation. Les curateurs tiers peuvent également ajouter un signal à un subgraph donné s'ils estiment que ce dernier est susceptible de générer un volume de requêtes. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Interrogation & Développement d'applications +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -Une fois qu'un subgraph a été traité par les indexeurs et est disponible pour l'interrogation, les développeurs peuvent commencer à utiliser le subgraph dans leurs applications. Les développeurs interrogent les subgraphs via une passerelle, qui transmet leurs requêtes à un indexeur qui a traité le subgraph, en payant les frais de requête en GRT. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Mise à jour des subgraphs +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. 
-After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### Interrogation & Développement d'applications -Une fois que le développeur de subgraph est prêt à mettre à jour, il peut lancer une transaction pour pointer son subgraph vers la nouvelle version. La mise à jour du subgraph migre tout signal vers la nouvelle version (en supposant que l'utilisateur qui a appliqué le signal a sélectionné "migrer automatiquement"), ce qui entraîne également une taxe de migration. Cette migration de signal devrait inciter les indexeurs à commencer à indexer la nouvelle version du subgraph, elle devrait donc bientôt être disponible pour les interrogations. +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Dépréciation des subgraphs +Learn more about [querying subgraphs](/querying/querying-the-graph/). -À un moment donné, un développeur peut décider qu'il n'a plus besoin d'un subgraph publié. À ce stade, ils peuvent déprécier le subgraph, qui renvoie tout GRT signalé aux curateurs. +### Mise à jour des subgraphs -### Diversité des rôles des développeurs +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Certains développeurs s'engageront dans le cycle de vie complet des subgraphs sur le réseau, en publiant, en interrogeant et en itérant sur leurs propres subgraphs. D'autres se concentreront sur le développement de subgraphs, en créant des API ouvertes sur lesquelles d'autres pourront s'appuyer. D'autres peuvent se concentrer sur les applications, en interrogeant les subgraphs déployés par d'autres. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Economie du réseau et des développeurs +### Deprecating & Transferring Subgraphs -Les développeurs sont des acteurs économiques clés dans le réseau, bloquant des GRT pour encourager l'indexation et, surtout, interroger des subgraphs, ce qui constitue l'échange de valeur principal du réseau. Les développeurs de subgraphs brûlent également des GRT à chaque mise à jour d'un subgraph. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/fr/network/explorer.mdx b/website/pages/fr/network/explorer.mdx index 8542686d9f27..ba305e19a5d4 100644 --- a/website/pages/fr/network/explorer.mdx +++ b/website/pages/fr/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. 
For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgraphs -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. +After you just finish deploying and publishing your subgraph in Subgraph Studio, click on the "subgraphs tab” at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -Lorsque vous cliquerez sur un subgraph, vous pourrez tester des requêtes dans l'aire de jeu et exploiter les détails du réseau pour prendre des décisions éclairées. Vous pourrez également signaler le GRT sur votre propre subgraph ou sur les subgraphs d'autres personnes afin de sensibiliser les indexeurs à son importance et à sa qualité. Ceci est essentiel car le fait de signaler un subgraph incite à l'indexer, ce qui signifie qu'il fera surface sur le réseau pour éventuellement répondre à des requêtes. +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![Explorer Image 2](/img/Subgraph-Details.png) -Sur la page dédiée à chaque subgraph, plusieurs détails font surface. Il s'agit notamment de: +On each subgraph’s dedicated page, you can do the following: - Signal/Un-signal sur les subgraphs - Afficher plus de détails tels que des graphs, l'ID de déploiement actuel et d'autres métadonnées @@ -31,26 +45,32 @@ Sur la page dédiée à chaque subgraph, plusieurs détails font surface. Il s'a ## Participants -Dans cet onglet, vous aurez une vue d'ensemble de toutes les personnes qui participent aux activités du réseau, telles que les indexeurs, les délégateurs et les curateurs. Ci-dessous, nous examinerons en profondeur ce que chaque onglet signifie pour vous. +This section provides a bird' s-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. Indexeurs ![Explorer Image 4](/img/Indexer-Pane.png) -Commençons par les indexeurs. Les indexeurs sont l'épine dorsale du protocole, étant ceux qui misent sur les subgraphs, les indexent et envoient des requêtes à toute personne consommant des subgraphs. Dans le tableau Indexeurs, vous pourrez voir les paramètres de délégation d'un indexeur, sa participation, le montant qu'ils ont misé sur chaque subgraph et le montant des revenus qu'ils ont tirés des frais de requête et des récompenses d'indexation. 
Analyses approfondies ci-dessous : +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. -- Query Fee Cut - le pourcentage des remises sur les frais de requête que l'indexeur conserve lorsqu'il les partage avec les délégués -- Réduction de récompense effective - la réduction de récompense d'indexation appliquée au pool de délégation. S’il est négatif, cela signifie que l’indexeur distribue une partie de ses récompenses. S'il est positif, cela signifie que l'indexeur conserve une partie de ses récompenses -- Cooldown Remaining : temps restant jusqu'à ce que l'indexeur puisse modifier les paramètres de délégation ci-dessus. Des périodes de refroidissement sont définies par les indexeurs lorsqu'ils mettent à jour leurs paramètres de délégation -- Propriété : il s'agit de la participation déposée par l'indexeur, qui peut être réduite en cas de comportement malveillant ou incorrect -- Délégué - Participation des délégués qui peut être allouée par l'indexeur, mais ne peut pas être réduite -- Alloué - Participation que les indexeurs allouent activement aux subgraphs qu'ils indexent -- Capacité de délégation disponible - le montant de la participation déléguée que les indexeurs peuvent encore recevoir avant qu'ils ne soient surdélégués +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Capacité de délégation maximale : montant maximum de participation déléguée que l'indexeur peut accepter de manière productive. Une mise déléguée excédentaire ne peut pas être utilisée pour le calcul des allocations ou des récompenses. -- Frais de requête - il s'agit du total des frais que les utilisateurs finaux ont payés pour les requêtes d'un indexeur pendant toute la durée de l'indexation +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Récompenses de l'indexeur - il s'agit du total des récompenses de l'indexeur gagnées par l'indexeur et ses délégués sur toute la durée. Les récompenses des indexeurs sont payées par l'émission de GRT. -Les indexeurs peuvent gagner à la fois des frais de requête et des récompenses d'indexation. Fonctionnellement, cela se produit lorsque les participants au réseau délèguent GRT à un indexeur. Cela permet aux indexeurs de recevoir des frais de requête et des récompenses en fonction de leurs paramètres d'indexeur. 
Les paramètres d'indexation sont définis en cliquant sur le côté droit du tableau, ou en accédant au profil d'un indexeur et en cliquant sur le bouton « Délégué ». +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. Pour en savoir plus sur la façon de devenir un indexeur, vous pouvez consulter la [documentation officielle](/network/indexing) ou les [guides de l'indexeur de la Graph Academy.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ Pour en savoir plus sur la façon de devenir un indexeur, vous pouvez consulter ### 2. Curateurs -Les curateurs analysent les subgraphs afin d'identifier ceux qui sont de la plus haute qualité. Une fois qu'un curateur a trouvé un subgraph potentiellement intéressant, il peut le curer en signalant sa courbe de liaison. Ce faisant, les curateurs indiquent aux indexeurs quels sont les subgraphs de haute qualité qui devraient être indexés. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -Les conservateurs peuvent être des membres de la communauté, des consommateurs de données ou même des développeurs de subgraphs qui signalent sur leurs propres subgraphs en déposant des jetons GRT dans une courbe de liaison. En déposant GRT, les curateurs créent des actions de curation d'un subgraph. En conséquence, les curateurs sont éligibles pour gagner une partie des frais de requête générés par le subgraph sur lequel ils ont signalé. La courbe de liaison incite les curateurs à conserver des sources de données de la plus haute qualité. Le Tableau Curateurs de cette section vous permettra de voir : +In the The Curator table listed below you can see: - La date à laquelle le curateur a commencé à organiser - Le nombre de GRT déposés @@ -68,34 +92,36 @@ Les conservateurs peuvent être des membres de la communauté, des consommateurs ![Explorer Image 6](/img/Curation-Overview.png) -Si vous souhaitez en savoir plus sur le rôle de curateur, vous pouvez le faire en visitant les liens suivants de [The Graph Academy](https://thegraph.academy/curators/) ou de la [documentation officielle.](/network/curating) +If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. Délégués -Les délégués jouent un rôle clé dans le maintien de la sécurité et de la décentralisation de The Graph Network. Ils participent au réseau en déléguant (c'est-à-dire en « jalonnant ») des jetons GRT à un ou plusieurs indexeurs. 
Sans délégués, les indexeurs sont moins susceptibles de gagner des récompenses et des frais importants. Par conséquent, les indexeurs cherchent à attirer les délégants en leur offrant une partie des récompenses d'indexation et des frais de requête qu'ils gagnent. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. -Les délégués, quant à eux, sélectionnent les indexeurs sur la base d'un certain nombre de variables différentes, telles que les performances passées, les taux de récompense de l'indexation et les réductions des frais d'interrogation. La réputation au sein de la communauté peut également jouer un rôle à cet égard ! Il est recommandé d'entrer en contact avec les indexeurs sélectionnés via le [Discord du Graph](https://discord.gg/graphprotocol) ou le [Forum du Graph](https://forum.thegraph.com/) ! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! ![Explorer Image 7](/img/Delegation-Overview.png) -Le tableau des délégués vous permet de voir les délégués actifs dans la communauté, ainsi que des indicateurs tels que : +In the Delegators table you can see the active Delegators in the community and important metrics: - Le nombre d’indexeurs auxquels un délégant délègue - Délégation originale d’un délégant - Les récompenses qu'ils ont accumulées mais qu'ils n'ont pas retirées du protocole - Les récompenses obtenues qu'ils ont retirées du protocole - Quantité totale de GRT qu'ils ont actuellement dans le protocole -- La date de leur dernière délégation à +- The date they last delegated -Si vous voulez en savoir plus sur la façon de devenir un délégué, ne cherchez plus ! Il vous suffit de consulter la [documentation officielle](/network/delegating) ou [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## Réseau -Dans la section Réseau, vous verrez des indicateurs globaux ainsi que la possibilité de passer à une base par écho et d'analyser les paramètres du réseau de manière plus détaillée. Ces détails vous donneront une idée des performances du réseau au fil du temps. +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. ### Aperçu -The overview section has all the current network metrics as well as some cumulative metrics over time. 
Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - L’enjeu total actuel du réseau - La répartition des enjeux entre les indexeurs et leurs délégués @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - Paramètres du protocole tels que la récompense de la curation, le taux d'inflation, etc - Récompenses et frais de l'époque actuelle -Quelques détails clés qui méritent d'être mentionnés : +A few key details to note: -- Les **Frais de requête représentent les frais générés par les consommateurs**, et ils peuvent être réclamés (ou non) par les indexeurs après une période d'au moins 7 époques (voir ci-dessous) après la clôture de leurs allocations vers les subgraphs. et les données qu'ils ont servies ont été validées par les consommateurs. -- **Les récompenses d'indexation représentent le montant des récompenses que les indexeurs ont réclamé à l'émission du réseau au cours de l'époque.** Bien que l'émission du protocole soit fixe, les récompenses ne sont frappées qu'une fois que les indexeurs ont clôturé leurs allocations vers les subgraphs qu'ils ont indexés. Ainsi, le nombre de récompenses par époque varie (par exemple, au cours de certaines époques, les indexeurs peuvent avoir fermé collectivement des allocations qui étaient ouvertes depuis plusieurs jours). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ Dans la section Époques, vous pouvez analyser, époque par époque, des métriq - L'époque active est celle où les indexeurs sont en train d'allouer des enjeux et de collecter des frais de requête - Les époques de règlement sont celles au cours desquelles les canaux d'État sont réglées. Cela signifie que les indexeurs sont soumis à des réductions si les consommateurs ouvrent des litiges à leur encontre. - Les époques de distribution sont les époques au cours desquelles les canaux d'État pour les époques sont réglés et les indexeurs peuvent réclamer leurs remises sur les frais de requête. - - Les époques finalisées sont les époques pour lesquelles il ne reste plus aucune remise sur les frais de requête à réclamer par les indexeurs, et sont donc finalisées. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![Explorer Image 9](/img/Epoch-Stats.png) ## Votre profil d'utilisateur -Maintenant que nous avons parlé des statistiques du réseau, passons à votre profil personnel. Votre profil personnel vous permet de voir votre activité sur le réseau, quelle que soit la manière dont vous participez au réseau. 
Votre portefeuille crypto fera office de profil utilisateur, et avec le tableau de bord utilisateur, vous pourrez voir : +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Aperçu du profil -C'est ici que vous pouvez voir toutes les actions en cours que vous avez entreprises. Vous y trouverez également les informations relatives à votre profil, votre description et votre site web (si vous en avez ajouté un). +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![Explorer Image 10](/img/Profile-Overview.png) ### Onglet Subgraphs -Si vous cliquez sur l'onglet Subgraphs, vous verrez vos subgraphs publiés. Cela n'inclut pas les subgraphs déployés avec l'interface de programmation à des fins de test - les subgraphs ne s'affichent que lorsqu'ils sont publiés sur le réseau décentralisé. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Onglet Indexation -Si vous cliquez sur l'onglet Indexation, vous trouverez un tableau avec toutes les allocations actives et historiques vers les subgraphs, ainsi que des graphs que vous pouvez analyser et voir vos performances passées en tant qu'indexeur. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Cette section comprendra également des détails sur vos récompenses nettes d'indexeur et vos frais de requête nets. Vous verrez les métriques suivantes : @@ -158,7 +189,9 @@ Cette section comprendra également des détails sur vos récompenses nettes d'i ### Onglet Délégation -Les délégués sont importants pour le Graph Network. Un délégant doit utiliser ses connaissances pour choisir un indexeur qui fournira un bon retour sur récompenses. Vous trouverez ici les détails de vos délégations actives et historiques, ainsi que les mesures des indexeurs vers lesquels vous avez délégué. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. Dans la première moitié de la page, vous pouvez voir votre diagramme de délégation, ainsi que le diagramme des récompenses uniquement. À gauche, vous pouvez voir les indicateurs clés de performance qui reflètent vos paramètres de délégation actuels. diff --git a/website/pages/fr/network/indexing.mdx b/website/pages/fr/network/indexing.mdx index 6fd8178366cd..299e1d6c8d38 100644 --- a/website/pages/fr/network/indexing.mdx +++ b/website/pages/fr/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap De nombreux tableaux de bord créés par la communauté incluent des valeurs de récompenses en attente et ils peuvent être facilement vérifiés manuellement en suivant ces étapes : -1. 
Interrogez le [subgraph du mainnet](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) pour obtenir les ID de toutes les allocations actives : +1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -113,11 +113,11 @@ Les indexeurs peuvent se différencier en appliquant des techniques avancées po - **Large** : Prêt à indexer tous les subgraphs actuellement utilisés et à répondre aux demandes pour le trafic associé. | Installation | Postgres
(CPUs) | Postgres
(mémoire en Gbs) | Postgres
(disque en TB) | VMs
(CPUs) | VMs
(mémoire en Gbs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Petit | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 11 | 12 | 48 | -| Moyen | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | | +| ------------ |:--------------------------:|:------------------------------------:|:----------------------------------:|:---------------------:|:-------------------------------:| +| Petit | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 11 | 12 | 48 | +| Moyen | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | | ### Quelles sont les précautions de sécurité de base qu’un indexeur doit prendre ? @@ -149,20 +149,20 @@ Remarque : Pour prendre en charge la mise à l'échelle agile, il est recommand #### Nœud de The Graph -| Port | Objectif | Routes | Argument CLI | Variable d'environnement | -| --- | --- | --- | --- | --- | -| 8000 | Serveur HTTP GraphQL
(pour les requêtes de subgraphs) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(pour les abonnements aux subgraphs) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(pour gérer les déploiements) | / | --admin-port | - | -| 8030 | API de statut d'indexation des subgraphs | /graphq | --index-node-port | - | -| 8040 | Métriques Prometheus | /metrics | --metrics-port | - | +| Port | Objectif | Routes | Argument CLI | Variable d'environnement | +| ---- | ---------------------------------------------------------------- | ---------------------------------------------------- | ----------------- | ------------------------ | +| 8000 | Serveur HTTP GraphQL
(pour les requêtes de subgraphs) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(pour les abonnements aux subgraphs) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(pour gérer les déploiements) | / | --admin-port | - | +| 8030 | API de statut d'indexation des subgraphs | /graphq | --index-node-port | - | +| 8040 | Métriques Prometheus | /metrics | --metrics-port | - | #### Service d'indexation -| Port | Objectif | Routes | Argument CLI | Variable d'environnement | -| --- | --- | --- | --- | --- | -| 7600 | Serveur HTTP GraphQL
(pour les requêtes payantes de subgraphs) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Métriques Prometheus | /metrics | --metrics-port | - | +| Port | Objectif | Routes | Argument CLI | Variable d'environnement | +| ---- | ------------------------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ------------------------ | +| 7600 | Serveur HTTP GraphQL
(pour les requêtes payantes de subgraphs) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Métriques Prometheus | /metrics | --metrics-port | - | #### Agent indexeur @@ -545,7 +545,7 @@ La **Indexer CLI** se connecte à l'agent Indexer, généralement via la redirec - `règles de l'indexeur graphique peut-être [options] ` — Définissez le `decisionBasis` pour un déploiement sur `rules`, afin que l'agent indexeur utilisez des règles d'indexation pour décider d'indexer ou non ce déploiement. -- `graph indexer actions get [options] ` - Récupère une ou plusieurs actions en utilisant `all` ou laissez `action-id` vide pour obtenir toutes les actions. Un argument supplémentaire `--status` peut être utilisé pour imprimer toutes les actions d'un certain statut. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `file d'attente d'action de l'indexeur de graphs alloue ` - Action d'allocation de file d'attente diff --git a/website/pages/fr/network/overview.mdx b/website/pages/fr/network/overview.mdx index 09210f52ce37..b41081a34824 100644 --- a/website/pages/fr/network/overview.mdx +++ b/website/pages/fr/network/overview.mdx @@ -2,14 +2,20 @@ title: Présentation du réseau --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Aperçu +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Économie des jetons](/img/Network-roles@2x.png) -Pour garantir la sécurité économique du Graph Network et l'intégrité des données interrogées, les participants misent et utilisent des jetons Graph ([GRT](/tokenomics)). GRT est un jeton utilitaire de travail qui est un ERC-20 utilisé pour allouer des ressources dans le réseau. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/fr/new-chain-integration.mdx b/website/pages/fr/new-chain-integration.mdx index ec6a7423d079..398e4770837e 100644 --- a/website/pages/fr/new-chain-integration.mdx +++ b/website/pages/fr/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Intégration de nouveaux réseaux +title: New Chain Integration --- -Graph Node peut actuellement indexer les données des types de chaînes suivants : +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via a [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via a [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via an [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -Si l'une de ces chaînes vous intéresse, l'intégration est une question de configuration et de test de Graph Node. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -Si la blockchain est équivalente à EVM et que le client/nœud expose l'API EVM JSON-RPC standard, Graph Node devrait pouvoir indexer la nouvelle chaîne. Pour plus d'informations, reportez-vous à [Test d'un EVM JSON-RPC] (new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Tester un EVM JSON-RPC -Pour les chaînes non basées sur EVM, Graph Node doit ingérer des données de blockchain via gRPC et des définitions de type connues. Cela peut être fait via [Firehose](firehose/), une nouvelle technologie développée par [StreamingFast](https://www.streamingfast.io/) qui fournit une solution de blockchain d'indexation hautement évolutive utilisant un système de streaming et de fichiers basé sur des fichiers. première approche. Contactez l'[équipe StreamingFast](mailto:integrations@streamingfast.io/) si vous avez besoin d'aide pour le développement de Firehose. 
+For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Différence entre EVM JSON-RPC et Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -Bien que les deux conviennent aux subgraphs, un Firehose est toujours requis pour les développeurs souhaitant construire avec [Substreams](substreams/), comme la construction de [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). De plus, Firehose permet des vitesses d'indexation améliorées par rapport à JSON-RPC. +### 2. Firehose Integration -Les nouveaux intégrateurs de chaîne EVM peuvent également envisager l'approche basée sur Firehose, compte tenu des avantages des sous-flux et de ses capacités d'indexation parallélisées massives. La prise en charge des deux permet aux développeurs de choisir entre la création de sous-flux ou de subgraphs pour la nouvelle chaîne. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **REMARQUE** : Une intégration basée sur Firehose pour les chaînes EVM nécessitera toujours que les indexeurs exécutent le nœud RPC d'archive de la chaîne pour indexer correctement les subgraph. Cela est dû à l'incapacité de Firehose à fournir un état de contrat intelligent généralement accessible par la méthode RPC `eth_call`. (Il convient de rappeler que les eth_calls ne sont [pas une bonne pratique pour les développeurs](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. 
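Whichever integration path a chain takes, it helps to sanity-check that the chain's RPC endpoint actually answers the JSON-RPC methods listed in the EVM JSON-RPC section above before pointing Graph Node at it. The script below is a minimal sketch, not an official tool; the endpoint URL is a placeholder and only a subset of the required methods is checked.

```javascript
// Minimal sketch: check that an RPC endpoint answers a few of the JSON-RPC methods Graph Node relies on.
// RPC_URL is a placeholder; point it at the node you want to test (requires Node.js 18+ for global fetch).
const RPC_URL = 'http://localhost:8545'

const checks = [
  { method: 'net_version', params: [] },
  { method: 'eth_getBlockByNumber', params: ['latest', false] },
  // An empty eth_getLogs result still proves the method is exposed.
  { method: 'eth_getLogs', params: [{ fromBlock: '0x1', toBlock: '0x1' }] },
]

async function main() {
  for (const [id, { method, params }] of checks.entries()) {
    const res = await fetch(RPC_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ jsonrpc: '2.0', id, method, params }),
    })
    const body = await res.json()
    // A JSON-RPC error such as "method not found" means the node does not expose this method.
    console.log(method, body.error ? `unsupported: ${body.error.message}` : 'ok')
  }
}

main()
```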
-## Tester un EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -Pour que Graph Node puisse ingérer des données à partir d'une chaîne EVM, le nœud RPC doit exposer les méthodes EVM JSON RPC suivantes : +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. -- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(optionally required for Graph Node to support call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Configuration Graph Node +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Commencez par préparer votre environnement local** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Configuration Graph Node + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modifiez [cette ligne](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) pour inclure le nouveau nom de réseau et l'URL compatible avec le RPC JSON EVM - > Ne modifiez pas le nom de la variable d'environnement lui-même. Il doit rester « Ethereum » même si le nom du réseau est différent. -3. Exécutez un nœud IPFS ou utilisez celui utilisé par The Graph : https://api.thegraph.com/ipfs/ -**Testez l'intégration en déployant localement un subgraph** +2. 
Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Créez un exemple de subgraph simple. Certaines options sont ci-dessous : - 1. Le contrat intelligent et le subgraph pré-emballés [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) sont un bon point de départ - 2. Amorcez un subgraph local à partir de n'importe quel contrat intelligent ou environnement de développement Solidity existant [en utilisant Hardhat avec un plugin Graph](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Créez votre subgraph dans Graph Node : `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publiez votre subgraph sur Graph Node : `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -Graph Node devrait synchroniser le subgraph déployé s'il n'y a pas d'erreurs. Laissez-lui le temps de se synchroniser, puis envoyez des requêtes GraphQL au point de terminaison de l'API indiqué dans les journaux. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Intégration d'une nouvelle chaîne Firehose +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Créez un exemple de subgraph simple. Certaines options sont ci-dessous : + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node devrait synchroniser le subgraph déployé s'il n'y a pas d'erreurs. Laissez-lui le temps de se synchroniser, puis envoyez des requêtes GraphQL au point de terminaison de l'API indiqué dans les journaux. -L'intégration d'une nouvelle chaîne est également possible en utilisant l'approche Firehose. Il s'agit actuellement de la meilleure option pour les chaînes non-EVM et d'une exigence pour la prise en charge des substreams. La documentation supplémentaire se concentre sur le fonctionnement de Firehose, l'ajout de la prise en charge de Firehose pour une nouvelle chaîne et son intégration avec Graph Node. Documentation recommandée aux intégrateurs : +## Substreams-powered Subgraphs -1. [Documentation générale sur Firehose](firehose/) -2. [Ajout du support Firehose pour une nouvelle chaîne](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. 
[Intégration de Graph Node avec une nouvelle chaîne via Firehose] (https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/fr/operating-graph-node.mdx b/website/pages/fr/operating-graph-node.mdx index 252894e113a0..329f1ac72adf 100644 --- a/website/pages/fr/operating-graph-node.mdx +++ b/website/pages/fr/operating-graph-node.mdx @@ -77,13 +77,13 @@ Un exemple complet de configuration de Kubernetes est disponible dans le [dépô Lorsqu'il est en cours d'exécution, Graph Node expose les ports suivants : -| Port | Objectif | Routes | Argument CLI | Variable d'environnement | -| --- | --- | --- | --- | --- | -| 8000 | Serveur HTTP GraphQL
(pour les requêtes de subgraphs) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(pour les abonnements aux subgraphs) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(pour gérer les déploiements) | / | --admin-port | - | -| 8030 | API de statut d'indexation des subgraphs | /graphq | --index-node-port | - | -| 8040 | Métriques Prometheus | /metrics | --metrics-port | - | +| Port | Objectif | Routes | Argument CLI | Variable d'environnement | +| ---- | ---------------------------------------------------------------- | ---------------------------------------------------- | ----------------- | ------------------------ | +| 8000 | Serveur HTTP GraphQL
(pour les requêtes de subgraphs) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(pour les abonnements aux subgraphs) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(pour gérer les déploiements) | / | --admin-port | - | +| 8030 | API de statut d'indexation des subgraphs | /graphq | --index-node-port | - | +| 8040 | Métriques Prometheus | /metrics | --metrics-port | - | > **Important** : Soyez prudent lorsque vous exposez les ports publiquement : les **ports d'administration** doivent rester verrouillés. Cela inclut le point de terminaison Graph Node JSON-RPC. diff --git a/website/pages/fr/querying/distributed-systems.mdx b/website/pages/fr/querying/distributed-systems.mdx index cc52558ba647..270b55bfd86d 100644 --- a/website/pages/fr/querying/distributed-systems.mdx +++ b/website/pages/fr/querying/distributed-systems.mdx @@ -34,21 +34,21 @@ Le graph fournit l'API `block : { number_gte: $minBlock }`, qui garantit que la Nous pouvons utiliser `number_gte` pour nous assurer que le temps ne recule jamais lorsque nous interrogeons des données dans une boucle. Voici un exemple : ```javascript -/// Met à jour la variable protocol.paused avec la dernière valeur +/// Met à jour la variable protocol.paused avec la dernière valeur /// connue dans une boucle en la récupérant en utilisant The Graph. async function updateProtocolPaused() { // Il est correct de commencer avec minBlock à 0. La requête sera servie // en utilisant le dernier bloc disponible. Définir minBlock à 0 revient // à omettre cet argument. let minBlock = 0 - + for (;;) { - // Planifie une promesse qui sera prête une fois que le prochain bloc + // Planifie une promesse qui sera prête une fois que le prochain bloc // Ethereum sera probablement disponible. const nextBlock = new Promise((f) => { setTimeout(f, 14000) }) - + const query = ` query GetProtocol($minBlock: Int!) { protocol(block: { number_gte: $minBlock } id: "0") { @@ -60,14 +60,14 @@ async function updateProtocolPaused() { } } }` - + const variables = { minBlock } const response = await graphql(query, variables) minBlock = response._meta.block.number - + // TODO: Faites quelque chose avec les données de réponse ici au lieu de les journaliser. console.log(response.protocol.paused) - + // Dort pour attendre le prochain bloc await nextBlock } @@ -87,9 +87,9 @@ async function getDomainNames() { let pages = 5 const perPage = 1000 - // La première requête obtiendra la première page de résultats et obtiendra également le + // La première requête obtiendra la première page de résultats et obtiendra également le // hachage du bloc afin que le reste des requêtes soit cohérent avec la première. - const listDomainsQuery = ` + const listDomainsQuery = ` query ListDomains($perPage: Int!) { domains(first: $perPage) { name diff --git a/website/pages/fr/querying/graphql-api.mdx b/website/pages/fr/querying/graphql-api.mdx index 9bbc39b6b69b..dee1be9de925 100644 --- a/website/pages/fr/querying/graphql-api.mdx +++ b/website/pages/fr/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: API GraphQL --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Requêtes +## What is GraphQL? -Dans votre schéma de subgraph, vous définissez des types appelés `Entités`. Pour chaque type `Entity`, un champ `entity` et `entities` sera généré sur le type `Query` de niveau supérieur. Notez que `query` n'a pas besoin d'être inclus en haut de la requête `graphql` lors de l'utilisation de The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. 
+ +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Exemples @@ -21,7 +29,7 @@ Requête pour une seule entité `Token` définie dans votre schéma : } ``` -> **Remarque :** Lors d'une requête pour une seule entité, le champ `id` est obligatoire et il doit s'agir d'une chaîne. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Interrogez toutes les entités `Token` : @@ -36,7 +44,10 @@ Interrogez toutes les entités `Token` : ### Tri -Lors de l'interrogation d'une collection, le paramètre `orderBy` peut être utilisé pour trier selon un attribut spécifique. De plus, `orderDirection` peut être utilisé pour spécifier le sens du tri, `asc` pour ascendant ou `desc` pour décroissant. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### Exemple @@ -53,7 +64,7 @@ Lors de l'interrogation d'une collection, le paramètre `orderBy` peut être uti Depuis Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0), les entités peuvent être triées sur la base d'entités imbriquées. -Dans l'exemple suivant, nous trions les jetons en fonction de leur nom propre: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -72,11 +83,12 @@ Dans l'exemple suivant, nous trions les jetons en fonction de leur nom propre: ### Pagination -Lors de l'interrogation d'une collection, le paramètre `first` peut être utilisé pour paginer depuis le début de la collection. Il convient de noter que l'ordre de tri par défaut est par ID dans l'ordre alphanumérique croissant, et non par heure de création. - -De plus, le paramètre `skip` peut être utilisé pour ignorer des entités et paginer. par exemple. `first:100` affiche les 100 premières entités et `first:100, skip:100` affiche les 100 entités suivantes. +When querying a collection, it's best to: -Les requêtes doivent éviter d'utiliser de très grandes valeurs de `skip` car elles fonctionnent généralement mal. Pour récupérer un grand nombre d'éléments, il est préférable de parcourir les entités sur la base d'un attribut, comme le montre le dernier exemple. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. 
#### Exemple utilisant `first` @@ -108,7 +120,7 @@ Interrogez 10 entités `Token`, décalées de 10 places depuis le début de la #### Exemple utilisant `first` et `id_ge` -Si un client a besoin de récupérer un grand nombre d'entités, il est beaucoup plus performant de baser les requêtes sur un attribut et de filtrer par cet attribut. Par exemple, un client récupérerait un grand nombre de jetons en utilisant cette requête : +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -119,11 +131,12 @@ query manyTokens($lastID: String) { } ``` -La première fois, il enverrait la requête avec `lastID = ""`, et pour les requêtes suivantes, il définirait `lastID` sur l'attribut `id` du dernier entité dans la demande précédente. Cette approche fonctionnera nettement mieux que l'utilisation de valeurs `skip` croissantes. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtration -Vous pouvez utiliser le paramètre `where` dans vos requêtes pour filtrer différentes propriétés. Vous pouvez filtrer sur plusieurs valeurs dans le paramètre `where`. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Exemple utilisant `where` @@ -157,7 +170,7 @@ Vous pouvez utiliser des suffixes comme `_gt`, `_lte` pour comparer les valeurs #### Exemple de filtrage par bloc -Vous pouvez également filtrer les entités par `_change_block(number_gte: Int)` - cela filtre les entités qui ont été mises à jour dans ou après le bloc spécifié. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. Cela peut être utile si vous cherchez à récupérer uniquement les entités qui ont changé, par exemple depuis la dernière fois que vous avez interrogé. Ou bien, il peut être utile d'étudier ou de déboguer la façon dont les entités changent dans votre subgraph (si combiné avec un filtre de bloc, vous pouvez isoler uniquement les entités qui ont changé dans un bloc spécifique). @@ -195,7 +208,7 @@ Depuis Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releas ##### Opérateur `AND` -Dans l'exemple suivant, nous filtrons les défis (Challenges) dont le résultat et le numéro `number` supérieur ou égal à `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. ```graphql { @@ -210,7 +223,7 @@ Dans l'exemple suivant, nous filtrons les défis (Challenges) dont le résultat ``` > **Sucre syntaxique :** Vous pouvez simplifier la requête ci-dessus en supprimant l'opérateur `et` en passant une sous-expression séparée par des virgules. -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -225,7 +238,7 @@ Dans l'exemple suivant, nous filtrons les défis (Challenges) dont le résultat ##### Opérateur `OR` -Dans l'exemple suivant, nous filtrons les défis dont le résultat réussi (`outcome` `succeeded`) ou le numéro (`number`) est supérieur ou égal à `100`. 
+The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -280,9 +293,9 @@ _change_block(numéro_gte : Int) Vous pouvez interroger l'état de vos entités non seulement pour le dernier bloc, qui est la valeur par défaut, mais également pour un bloc arbitraire du passé. Le bloc dans lequel une requête doit se produire peut être spécifié soit par son numéro de bloc, soit par son hachage de bloc en incluant un argument `block` dans les champs de niveau supérieur des requêtes. -Le résultat d'une telle requête ne changera pas au fil du temps, c'est-à-dire qu'une requête portant sur un certain bloc passé renverra le même résultat quel que soit le moment où elle est exécutée, à l'exception d'une requête portant sur un bloc très proche de la tête de la chaîne, dont le résultat pourrait changer s'il s'avérait que ce bloc ne se trouvait pas sur la chaîne principale et que la chaîne était réorganisée. Une fois qu'un bloc peut être considéré comme définitif, le résultat de la requête ne changera pas. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Il convient de noter que l'implémentation actuelle est encore sujette à certaines limitations qui pourraient violer ces garanties. L'implémentation ne peut pas toujours dire qu'un bloc donné n'est pas du tout sur la chaîne principale, ou que le résultat d'une requête par bloc de hachage pour un bloc qui ne peut pas encore être considéré comme final peut être influencé par une réorganisation du bloc qui se déroule en même temps que la requête. Ils n'affectent pas les résultats des requêtes par hachage de bloc lorsque le bloc est final et que l'on sait qu'il se trouve sur la chaîne principale. [Ce numéro](https://github.com/graphprotocol/graph-node/issues/1405) explique ces limitations en détail. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. #### Exemple @@ -324,12 +337,12 @@ Les champs de requête de recherche en texte `intégral`, fournissent une Api de Opérateurs de recherche en texte intégral : -| Symbole | Opérateur | Description | -| --- | --- | --- | -| `&` | `And` | Pour combiner plusieurs termes de recherche dans un filtre pour les entités incluant tous les termes fournis | -| | | `Or` | Les requêtes comportant plusieurs termes de recherche séparés par l'opérateur ou renverront toutes les entités correspondant à l'un des termes fournis | -| `<>` | `Follow by` | Spécifiez la distance entre deux mots. | -| `:*` | `Prefix` | Utilisez le terme de recherche de préfixe pour trouver les mots dont le préfixe correspond (2 caractères requis.) 
| +| Symbole | Opérateur | Description | +| ---------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | Pour combiner plusieurs termes de recherche dans un filtre pour les entités incluant tous les termes fournis | +| | | `Or` | Les requêtes comportant plusieurs termes de recherche séparés par l'opérateur ou renverront toutes les entités correspondant à l'un des termes fournis | +| `<>` | `Follow by` | Spécifiez la distance entre deux mots. | +| `:*` | `Prefix` | Utilisez le terme de recherche de préfixe pour trouver les mots dont le préfixe correspond (2 caractères requis.) | #### Exemples @@ -378,11 +391,11 @@ Graph Node met en œuvre une validation [basée sur les spécifications](https:/ ## Schema -Le schéma de votre source de données, c'est-à-dire les types d'entités, les valeurs et les relations disponibles pour l'interrogation, est défini par le [Langage de définition d'interface GraphQL (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -Les schémas GraphQL définissent généralement des types racine pour les `requêtes`, les `abonnements` et les `mutations`. Le graphe ne prend en charge que les `requêtes`. Le type racine `Query` pour votre subgraph est automatiquement généré à partir du schéma GraphQL inclus dans le manifeste de votre subgraph. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Note:** Notre API n'expose pas les mutations car les développeurs sont censés émettre des transactions directement contre la blockchain sous-jacente à partir de leurs applications. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### Entities diff --git a/website/pages/fr/querying/querying-best-practices.mdx b/website/pages/fr/querying/querying-best-practices.mdx index 007b5752c493..12db3e241f51 100644 --- a/website/pages/fr/querying/querying-best-practices.mdx +++ b/website/pages/fr/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: Bonnes pratiques d'interrogation --- -Le Graph fournit un moyen décentralisé d’interroger les données des blockchains. +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. -Les données du réseau Graph sont exposées via une API GraphQL, ce qui facilite l'interrogation des données avec le langage GraphQL. - -Cette page vous guidera à travers les règles essentielles du langage GraphQL et les meilleures pratiques en matière de requêtes GraphQL. +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQL est un langage et un ensemble de conventions qui transportent via HTTP. Cela signifie que vous pouvez interroger une API GraphQL en utilisant le standard `fetch` (nativement ou via `@whatwg-node/fetch` ou `isomorphic-fetch`). 
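For example, here is a minimal sketch using the standard `fetch` API. The gateway URL, API key, and subgraph ID below are placeholders, and the queried fields follow the `Token` example schema used elsewhere in these docs, so adapt them to the subgraph you are querying.

```javascript
// Minimal sketch of a GraphQL request over HTTP with the standard fetch API.
// <API_KEY> and <SUBGRAPH_ID> are placeholders; replace them with your own values.
const endpoint = 'https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID>'

const query = /* GraphQL */ `
  {
    tokens(first: 5) {
      id
      owner
    }
  }
`

async function main() {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // A GraphQL HTTP request is a JSON body with a "query" field (and optional "variables").
    body: JSON.stringify({ query }),
  })
  const { data, errors } = await response.json()
  if (errors) {
    console.error(errors)
    return
  }
  console.log(data.tokens)
}

main()
```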
-Cependant, comme indiqué dans ["Requête à partir d'une application"](/querying/querying-from-an-application), nous vous recommandons d'utiliser notre `graph-client` qui prend en charge des fonctionnalités uniques telles que :
+However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommended to use `graph-client`, which supports unique features such as:

- Gestion des subgraphs inter-chaînes : interrogation à partir de plusieurs subgraphs en une seule requête
- [Suivi automatique des blocs](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)

@@ -104,8 +102,6 @@ main()

D'autres alternatives client GraphQL sont couvertes dans ["Requête à partir d'une application"](/querying/querying-from-an-application).

-Maintenant que nous avons couvert les règles de base de la syntaxe des requêtes GraphQL, examinons maintenant les meilleures pratiques d'écriture de requêtes GraphQL.
-
---

## Les meilleures pratiques

@@ -164,11 +160,11 @@ Cela apporte de **de nombreux avantages** :

- Les **variables peuvent être mises en cache** au niveau du serveur
- **Les requêtes peuvent être analysées statiquement par des outils** (plus d'informations à ce sujet dans les sections suivantes)

-**Remarque : Comment inclure des champs de manière conditionnelle dans les requêtes statiques**
+### How to include fields conditionally in static queries

-Nous pourrions vouloir inclure le champ `owner` uniquement dans une condition particulière.
+You might want to include the `owner` field only on a particular condition.

-Pour cela, nous pouvons exploiter la directive `@include(if:...)` comme suit :
+For this, you can leverage the `@include(if:...)` directive as follows:

```tsx
import { execute } from 'your-favorite-graphql-client'

@@ -191,7 +187,7 @@ const result = await execute(query, {
})
```

-Remarque : La directive opposée est `@skip(if: ...)`.
+> Remarque : La directive opposée est `@skip(if: ...)`.

### Ask for what you want

@@ -199,9 +195,8 @@ GraphQL est devenu célèbre pour son slogan « Demandez ce que vous voulez ».

Pour cette raison, il n'existe aucun moyen, dans GraphQL, d'obtenir tous les champs disponibles sans avoir à les lister individuellement.

-Lorsque vous interrogez les API GraphQL, pensez toujours à interroger uniquement les champs qui seront réellement utilisés.
-
-Les collections d'entités sont une cause fréquente de surextraction. Par défaut, les requêtes récupèrent 100 entités dans une collection, ce qui est généralement bien plus que ce qui sera réellement utilisé, par exemple pour l'affichage à l'utilisateur. Les requêtes doivent donc presque toujours être définies explicitement en premier et s'assurer qu'elles ne récupèrent que le nombre d'entités dont elles ont réellement besoin. Cela s'applique non seulement aux collections de niveau supérieur dans une requête, mais encore plus aux collections d'entités imbriquées.
+- Lorsque vous interrogez les API GraphQL, pensez toujours à interroger uniquement les champs qui seront réellement utilisés.
+- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities (see the sketch below).
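To make the second point concrete, here is a small sketch (an editorial example; the `tokens` and `transactions` fields and the ordering arguments are placeholders) that sets `first` explicitly on both the top-level and the nested collection:

```ts
import { execute } from 'your-favorite-graphql-client'

// Bound every collection explicitly instead of relying on the default of 100 entities.
const query = /* GraphQL */ `
  query recentTokens {
    tokens(first: 10, orderBy: createdAt, orderDirection: desc) {
      id
      owner
      # nested collections also default to 100 items, so bound them too
      transactions(first: 5) {
        id
      }
    }
  }
`

const result = await execute(query)
```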
Par exemple, dans la requête suivante : @@ -337,8 +332,8 @@ query { De tels champs répétés (`id`, `active`, `status`) posent de nombreux problèmes : -- plus difficile à lire pour des requêtes plus approfondies -- lors de l'utilisation d'outils qui génèrent des types TypeScript basés sur des requêtes (_plus d'informations à ce sujet dans la dernière section_), `newDelegate` et `oldDelegate` entraînera deux interfaces en ligne distinctes. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. Une version refactorisée de la requête serait la suivante : @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -L'utilisation de GraphQL `fragment` améliorera la lisibilité (en particulier à grande échelle) mais entraînera également une meilleure génération de types TypeScript. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. Lors de l'utilisation de l'outil de génération de types, la requête ci-dessus générera un type `DelegateItemFragment` approprié (_voir la dernière section "Outils"_). ### Bonnes pratiques et erreurs à éviter avec les fragments GraphQL -**La base du fragment doit être un type** +### La base du fragment doit être un type Un Fragment ne peut pas être basé sur un type non applicable, en bref, **sur un type n'ayant pas de champs** : @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` est un **scalaire** (type natif "plain") qui ne peut pas être utilisé comme base d'un fragment. -**Comment diffuser un fragment** +#### Comment diffuser un fragment Les fragments sont définis sur des types spécifiques et doivent être utilisés en conséquence dans les requêtes. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { Il n'est pas possible de diffuser ici un fragment de type `Vote`. -**Définir Fragment comme une unité commerciale atomique de données** +#### Définir Fragment comme une unité commerciale atomique de données -Le fragment GraphQL doit être défini en fonction de son utilisation. +GraphQL `Fragment`s must be defined based on their usage. Pour la plupart des cas d'utilisation, définir un fragment par type (en cas d'utilisation répétée de champs ou de génération de type) est suffisant. -Voici une règle générale pour utiliser Fragment : +Here is a rule of thumb for using fragments: -- lorsque des champs du même type sont répétés dans une requête, regroupez-les dans un Fragment -- lorsque des champs similaires mais différents sont répétés, créez plusieurs fragments, ex  : +- When fields of the same type are repeated in a query, group them in a `Fragment`. 
+- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # fragment de base (utilisé principalement pour les listes) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## Les outils indispensables +## The Essential Tools ### Explorateurs Web GraphQL @@ -473,11 +468,11 @@ Cela vous permettra de **détecter les erreurs sans même tester les requêtes** L'[extension GraphQL VSCode](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) est un excellent ajout à votre flux de travail de développement pour obtenir : -- coloration syntaxique -- suggestions d'autocomplétion -- validation par rapport au schéma -- snippets -- aller à la définition des fragments et des types d'entrée +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types Si vous utilisez `graphql-eslint`, l'[extension ESLint VSCode](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) est un incontournable pour visualiser correctement les erreurs et les avertissements intégrés dans votre code. @@ -485,9 +480,9 @@ Si vous utilisez `graphql-eslint`, l'[extension ESLint VSCode](https://marketpla Le [plug-in JS GraphQL](https://plugins.jetbrains.com/plugin/8097-graphql/) améliorera considérablement votre expérience lorsque vous travaillez avec GraphQL en fournissant : -- coloration syntaxique -- suggestions d'autocomplétion -- validation par rapport au schéma -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -Plus d'informations sur cet [article WebStorm](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) qui présente toutes les principales fonctionnalités du plugin. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/fr/quick-start.mdx b/website/pages/fr/quick-start.mdx index 25d4ae1f22c2..b2c47dd5400a 100644 --- a/website/pages/fr/quick-start.mdx +++ b/website/pages/fr/quick-start.mdx @@ -2,24 +2,18 @@ title: Démarrage Rapide --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Assurez-vous que votre subgraph indexera les données d'un [réseau pris en charge] (/developing/supported-networks). - -Ce guide est rédigé en supposant que vous possédez : +## Prerequisites for this guide - Un portefeuille crypto -- Une adresse de smart contract sur le réseau de votre choix - -## 1. Créez un subgraph sur Subgraph Studio - -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Installez la CLI Graph +### 1. Installez la CLI de The Graph -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. 
+You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. Sur votre machine locale, exécutez l'une des commandes suivantes : @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): npm install -g @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -Lorsque vous initialisez votre subgraph, l'outil CLI vous demande les informations suivantes : +When you initialize your subgraph, the CLI will ask you for the following information: -- Protocole : choisissez le protocole à partir duquel votre subgraph indexera les données -- Slug de subgraph : créez un nom pour votre subgraph. Votre slug de subgraph est un identifiant pour votre subgraph. -- Répertoire dans lequel créer le subgraph : choisissez votre répertoire local -- Réseau Ethereum (facultatif) : vous devrez peut-être spécifier à partir de quel réseau compatible EVM votre subgraph indexera les données -- Adresse du contrat : localisez l'adresse du contrat intelligent à partir de laquelle vous souhaitez interroger les données -- ABI : si l'ABI n'est pas renseigné automatiquement, vous devrez le saisir manuellement sous forme de fichier JSON -- Bloc de démarrage : il est suggéré de saisir le bloc de démarrage pour gagner du temps pendant que votre subgraph indexe les données de la blockchain. Vous pouvez localiser le bloc de démarrage en recherchant le bloc dans lequel votre contrat a été déployé. -- Nom du contrat : saisissez le nom de votre contrat -- Indexer les événements de contrat en tant qu'entités : il est suggéré de définir cette valeur sur true car cela ajoutera automatiquement des mappages à votre subgraph pour chaque événement émis -- Ajouter un autre contrat (facultatif) : vous pouvez ajouter un autre contrat +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. 
+- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. La capture d'écran suivante donne un exemple de ce qui vous attend lors de l'initialisation de votre subgraph : ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -Les commandes précédentes créent un subgraph d'échafaudage que vous pouvez utiliser comme point de départ pour construire votre propre subgraph. Lorsque vous apporterez des modifications au subgraph, vous travaillerez principalement avec trois fichiers : +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -Une fois votre subgraph écrit, exécutez les commandes suivantes : +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. Une fois votre subgraph écrit, exécutez les commandes suivantes : + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- Authentifiez et déployez votre subgraph. La clé de déploiement se trouve sur la page du subgraph dans Subgraph Studio. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. 
Testez votre subgraph - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -Les registres ou logs vous indiqueront s'il y a des erreurs avec votre subgraph. Les logs d'un subgraph opérationnel ressembleront à ceci : - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -Pour économiser sur les coûts de gaz, vous pouvez organiser votre subgraph dans la même transaction que celle où vous l'avez publié en sélectionnant ce bouton lorsque vous publiez votre subgraph sur le réseau décentralisé de The Graph : +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. 
Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -Désormais, vous pouvez interroger votre subgraph en envoyant des requêtes GraphQL à l'URL de requête de votre subgraph, que vous pouvez trouver en cliquant sur le bouton de requête. +### 8. Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/fr/release-notes/assemblyscript-migration-guide.mdx b/website/pages/fr/release-notes/assemblyscript-migration-guide.mdx index 49e76d908653..619ba98b326a 100644 --- a/website/pages/fr/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/fr/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - Vous devrez renommer vos variables en double si vous conservez une observation de variables. - ### Comparaisons nulles - En effectuant la mise à niveau sur votre subgraph, vous pouvez parfois obtenir des erreurs comme celles-ci : ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - Pour résoudre, vous pouvez simplement remplacer l'instruction `if` par quelque chose comme ceci : ```typescript @@ -213,7 +209,7 @@ Dans ces cas-là, vous pouvez utiliser la fonction `changetype` : class Bytes extends Uint8Array {} let uint8Array = new Uint8Array(2) -changetype(uint8Array) // fonctionne :) +changetype(uint8Array ) // fonctionne :) ``` ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? 
container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - Pour résoudre ce problème, vous pouvez créer une variable pour l'accès à cette propriété afin que le compilateur puisse effectuer la vérification magique de la nullité : ```typescript @@ -406,7 +401,7 @@ type Total @entity { let total = Total.load('latest') if (total === null) { - total = new Total('latest') // initialise déjà les propriétés non-nullables + total = new Total('latest') // initialise déjà les propriétés non-nullables } total.amount = total.amount + BigInt.fromI32(1) @@ -488,12 +483,12 @@ Vous ne pouvez désormais plus définir de champs dans vos types qui sont des li ```graphql type Something @entity { - id: Bytes! + id: Bytes! } type MyEntity @entity { - id: Bytes! - invalidField: [Something]! # n'est plus valide + id: Bytes! + invalidField: [Something]! # n'est plus valide } ``` diff --git a/website/pages/fr/release-notes/graphql-validations-migration-guide.mdx b/website/pages/fr/release-notes/graphql-validations-migration-guide.mdx index 62e5435c0fc3..567231e0bedf 100644 --- a/website/pages/fr/release-notes/graphql-validations-migration-guide.mdx +++ b/website/pages/fr/release-notes/graphql-validations-migration-guide.mdx @@ -103,7 +103,7 @@ query myData { } query myData2 { - # renommer la deuxième requête + # renommer la deuxième requête name } ``` @@ -158,7 +158,7 @@ _Solution:_ ```graphql query myData($id: String) { - # conserver la variable pertinente (ici : `$id: String`) + # conserver la variable pertinente (ici : `$id: String`) id ...MyFields } @@ -259,7 +259,7 @@ query { ```graphql # Différents arguments peuvent conduire à des données différentes, -# donc nous ne pouvons pas supposer que les champs seront les mêmes. +# donc nous ne pouvons pas supposer que les champs seront les mêmes. query { dogs { doesKnowCommand(dogCommand: SIT) diff --git a/website/pages/fr/sps/introduction.mdx b/website/pages/fr/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/fr/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. 
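As a rough sketch of that difference (an editorial illustration: the module output type, import path, and field names are placeholders, and a complete worked example follows later in this guide), the triggers approach hands the module's raw bytes to a subgraph handler, while the Entity Changes approach needs no handler at all:

```ts
import { Protobuf } from 'as-proto/assembly'
import { log } from '@graphprotocol/graph-ts'
// Placeholder: the generated Protobuf type depends on your Substreams module.
import { Events } from './pb/example/v1/Events'

// Triggers: graph-node calls this handler linearly, block by block,
// and all of the entity-building logic lives in the subgraph.
export function handleTriggers(bytes: Uint8Array): void {
  const events = Protobuf.decode<Events>(bytes, Events.decode)
  log.info('Decoded {} events from the Substreams module', [events.data.length.toString()])
  // ...create and save your subgraph entities here...
}

// Entity Changes: no handler is required, because the Substreams module itself
// emits entity changes and graph-node writes them directly into the store.
```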
+ +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/fr/sps/triggers-example.mdx b/website/pages/fr/sps/triggers-example.mdx new file mode 100644 index 000000000000..69c8d90bdaf2 --- /dev/null +++ b/website/pages/fr/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## Conditions préalables + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into subgraph entities:

```ts
import { Protobuf } from 'as-proto/assembly'
import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
import { MyTransfer } from '../generated/schema'

export function handleTriggers(bytes: Uint8Array): void {
  const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)

  for (let i = 0; i < input.data.length; i++) {
    const event = input.data[i]

    if (event.transfer != null) {
      let entity_id: string = `${event.txnId}-${i}`
      const entity = new MyTransfer(entity_id)
      entity.amount = event.transfer!.instruction!.amount.toString()
      entity.source = event.transfer!.accounts!.source
      entity.designation = event.transfer!.accounts!.destination

      if (event.transfer!.accounts!.signer!.single != null) {
        entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
      } else if (event.transfer!.accounts!.signer!.multisig != null) {
        entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
      }
      entity.save()
    }
  }
}
```

## Step 5: Generate Protobuf Files

To generate Protobuf objects in AssemblyScript, run the following command:

```bash
npm run protogen
```

This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.

## Conclusion

You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.

For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).

diff --git a/website/pages/fr/sps/triggers.mdx b/website/pages/fr/sps/triggers.mdx
new file mode 100644
index 000000000000..ed19635d4768
--- /dev/null
+++ b/website/pages/fr/sps/triggers.mdx
@@ -0,0 +1,37 @@
+---
+title: Substreams Triggers
+---
+
+Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework.
+
+> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container.
+
+The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+  let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+  if (transactions.length == 0) {
+    log.info('No transactions found', [])
+    return
+  }
+
+  for (let i = 0; i < transactions.length; i++) {
+    // 2.
+    let transaction = transactions[i]
+
+    let entity = new Transaction(transaction.hash) // 3.
+    entity.from = transaction.from
+    entity.to = transaction.to
+    entity.save()
+  }
+}
+```
+
+Here's what you’re seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object
+2. 
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/fr/substreams.mdx b/website/pages/fr/substreams.mdx index 1797d3816bc2..8aff2b62c74d 100644 --- a/website/pages/fr/substreams.mdx +++ b/website/pages/fr/substreams.mdx @@ -4,9 +4,11 @@ title: Substreams ![Logo Substreams](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## Le fonctionnement de Substreams en 4 étapes @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Élargissez vos connaissances - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/fr/sunrise.mdx b/website/pages/fr/sunrise.mdx index 92fd7447ac3f..20f23e1a01f1 100644 --- a/website/pages/fr/sunrise.mdx +++ b/website/pages/fr/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## Quel est l'essor des données décentralisées ? +## What was the Sunrise of Decentralized Data? -Le lever du soleil sur les données décentralisées est une initiative lancée par Edge & Node. L'objectif est de permettre aux développeurs de subgraphs de passer en toute transparence au réseau décentralisé de The Graph. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -Ce plan s'appuie sur de nombreux développements antérieurs de l'écosystème The Graph, y compris un indexeur de mise à niveau pour servir des requêtes sur les subgraphs nouvellement publiés, et la capacité d'intégrer de nouveaux réseaux de blockchain à The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Mise à jour des subgraphs dans le réseau The Graph +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### Quand les subgraphs du service hébergé ne seront-ils plus disponibles ? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Mon subgraph de service hébergé sera-t-il pris en charge sur le réseau The Graph ? - -Oui, l'indexeur de mise à niveau prendra automatiquement en charge tous les subgraphs de services hébergés publiés sur le réseau The Graph pour une expérience de mise à niveau transparente. - -### Comment mettre à jour mon subgraph de services hébergés ? - -> Note : La mise à niveau d'un subgraph vers le réseau The Graph ne peut pas être annulée. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Sélectionnez le(s) subgraph(s) que vous souhaitez mettre à niveau. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. 
Cliquez sur le bouton "Mise à niveau". - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### Comment puis-je obtenir de l'aide pour le processus de mise à niveau ? - -La communauté Graph est là pour aider les développeurs à passer au réseau Graph. Rejoignez le [serveur Discord] de The Graph (https://discord.gg/vtvv7FP) et demandez de l'aide dans le #canal mise à niveau-du réseau-décentralisé. - -### Comment puis-je garantir une haute qualité de service et une redondance pour les subgraphs sur le réseau The Graph ? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Les membres de ces communautés blockchain sont encouragés à intégrer leur chaîne via le [Processus d'intégration de la chaîne](/chain-integration-overview/). - -### Comment publier les nouvelles versions sur le réseau ? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Mettez à niveau vers la dernière version de [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Mettez à jour votre commande de déploiement - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> La publication nécessite Arbitrum ETH - la mise à niveau de votre subgraph permet également de déposer une petite quantité pour faciliter vos premières interactions avec le protocole 🧑‍🚀 - -### J'utilise un subgraph développé par quelqu'un d'autre, comment puis-je m'assurer que mon service n'est pas interrompu ? 
- -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### Que se passe-t-il si je ne mets pas à jour mon subgraph ? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. 
If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? - -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### Comment puis-je commencer à interroger des subgraphs sur The Graph Network ? - -Vous pouvez explorer les subgraphs disponibles sur [Graph Explorer](https://thegraph.com/explorer). [En savoir plus sur l'interrogation des subgraphs sur The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## À propos de l'indexeur de mise à niveau -### Qu'est-ce que l'indexeur de mise à niveau ? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### Quelles chaînes l’indexeur de mise à niveau prend-il en charge ? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -L'indexeur de mise à niveau prend en charge les chaînes qui n'étaient auparavant disponibles que sur le service hébergé. +### What does the upgrade Indexer do? -Vous trouverez une liste complète des chaînes soutenues [ici] (/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Pourquoi Edge & Node exécutent-ils l'indexeur de mise à niveau ? 
-Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### Que signifie la mise à niveau de l'indexeur pour les indexeurs existants ? -Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -L'indexeur de mise à niveau fournit également à la communauté des indexeurs des informations sur la demande potentielle de subgraphs et de nouvelles chaînes sur le réseau de graphs. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -L'indexeur de mise à niveau offre une opportunité puissante pour les délégués. Au fur et à mesure que de nouveaux subgraphs sont transférés du service hébergé vers le réseau The Graph, les délégués devraient bénéficier de l'activité accrue du réseau. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### L'indexeur mis à niveau sera-t-il en concurrence avec les indexeurs existants pour l'obtention de récompenses ? +### Did the upgrade Indexer compete with existing Indexers for rewards? -Non, l'indexeur de mise à niveau allouera uniquement le montant minimum par subgraph et ne collectera pas de récompenses d'indexation. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### Comment cela affectera-t-il les développeurs de subgraphs ? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. 
+Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? L'indexeur de mise à niveau active les chaînes sur le réseau qui n'étaient auparavant prises en charge que sur le service hébergé. Par conséquent, il élargit la portée et la disponibilité des données pouvant être interrogées sur le réseau. -### Quel sera le prix des requêtes de l'indexeur de mise à niveau ? - -L'indexeur de mise à niveau fixera le prix des requêtes au taux du marché afin de ne pas influencer le marché des frais de requête. - -### Quels sont les critères pour que l'indexeur de mise à niveau cesse de prendre en charge un subgraph ? - -L'indexeur de mise à niveau servira un subgraph jusqu'à ce qu'il soit suffisamment et correctement servi avec des requêtes cohérentes servies par au moins 3 autres indexeurs. - -En outre, l'indexeur de mise à niveau cessera de prendre en charge un subgraph s'il n'a pas été intégré au cours des 30 derniers jours. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## À propos du réseau Graph - -### Dois-je gérer ma propre infrastructure ? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Une fois que votre subgraph a atteint un signal de curation adéquat et que d'autres indexeurs commencent à le soutenir, l'indexeur de mise à niveau se retire progressivement, ce qui permet aux autres indexeurs de percevoir des récompenses d'indexation et des frais d'interrogation. - -### Dois-je héberger ma propre infrastructure d'indexation ? 
- -L'exploitation d'une infrastructure pour votre propre projet est [nettement plus gourmande en ressources] (/network/benefits/) par rapport à l'utilisation du réseau Graph. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -Cela dit, si vous êtes toujours intéressé par l'exploitation d'un [nœud de graphe] (https://github.com/graphprotocol/graph-node), envisagez de rejoindre The Graph Network [en tant qu'indexeur] (https://thegraph.com/blog/how-to-become-indexer/) pour gagner des récompenses d'indexation et des frais de requête en servant des données sur votre subgraph et d'autres. - -### Dois-je utiliser un fournisseur d’indexation centralisé ? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Voici une description détaillée des avantages de The Graph par rapport à l'hébergement centralisé : +### How does the upgrade Indexer price queries? -- **Résilience et redondance** : Les systèmes décentralisés sont intrinsèquement plus robustes et résilients en raison de leur nature distribuée. Les données ne sont pas stockées sur un seul serveur ou emplacement. Au lieu de cela, elles sont servies par des centaines d'indexeurs indépendants répartis dans le monde entier. Cela réduit le risque de perte de données ou d'interruption de service en cas de défaillance d'un nœud, ce qui se traduit par un temps de disponibilité exceptionnel (99,99 %). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Qualité de service** : En plus d'un temps de disponibilité impressionnant, The Graph Network se caractérise par une vitesse médiane d'interrogation (latence) d'environ 106 ms et par des taux de réussite des requêtes plus élevés que ceux des autres solutions hébergées. Pour en savoir plus, consultez [ce blog] (https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. 
-Tout comme vous avez choisi votre réseau blockchain pour sa nature décentralisée, sa sécurité et sa transparence, opter pour The Graph Network est une extension de ces mêmes principes. En alignant votre infrastructure de données sur ces valeurs, vous garantissez un environnement de développement cohésif, résilient et axé sur la confiance. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/fr/supported-network-requirements.mdx b/website/pages/fr/supported-network-requirements.mdx index 40d53ab2c69b..8df2bb4bc660 100644 --- a/website/pages/fr/supported-network-requirements.mdx +++ b/website/pages/fr/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Réseau | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ | +| Réseau | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVMe preferred)<br/>
_last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 13 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 3 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
Ubuntu 22.04
32GB+ RAM
>= 10 TiB NVMe SSD
_last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
Debian 12
16GB+ RAM
>= 1 TiB NVMe SSD
_last updated 3rd April 2024_ | ✅ |
diff --git a/website/pages/fr/tap.mdx b/website/pages/fr/tap.mdx
new file mode 100644
index 000000000000..609191a78594
--- /dev/null
+++ b/website/pages/fr/tap.mdx
@@ -0,0 +1,197 @@
+---
+title: TAP Migration Guide
+---
+
+Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust.
+
+## Aperçu
+
+[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features:
+
+- Efficiently handles micropayments.
+- Adds a layer of consolidation to on-chain transactions and costs.
+- Allows Indexers to control receipts and payments, guaranteeing payment for queries.
+- Enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders.
+
+## Specifics
+
+TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which are aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
+
+For each query, the gateway will send you a `signed receipt` that is stored in your database. These receipts are then aggregated by `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts, which will generate a new RAV with an increased value.
+
+### RAV Details
+
+- A RAV represents money that is waiting to be sent to the blockchain.
+
+- `tap-agent` will continue to send aggregation requests to ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`.
+
+- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed.
+
+### Redeeming RAV
+
+As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process:
+
+1. An Indexer closes an allocation.
+
+2. During the `` period, `tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`.
+
+3. `indexer-agent` takes all the last RAVs and sends redeem requests to the blockchain, which will update the value of `redeem_at`.
+
+4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction.
+
+   - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`.
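+
+The aggregation described above can be pictured with a small, illustrative TypeScript sketch. It is not part of the TAP codebase (the real types live in the Rust `tap_core` crate), and the `Receipt` and `Rav` shapes below are hypothetical and heavily simplified; the point is only to show why a single RAV can be redeemed for the full value of every receipt it covers:
+
+```typescript
+// Hypothetical, simplified model of TAP receipts and a RAV (for illustration only).
+interface Receipt {
+  allocationId: string
+  value: bigint // query fee carried by one signed receipt
+}
+
+interface Rav {
+  allocationId: string
+  valueAggregate: bigint // running total of every receipt aggregated so far
+}
+
+// Folding newer receipts into an existing RAV only ever increases its value,
+// so one final RAV can be redeemed on-chain for the whole amount.
+function aggregate(previous: Rav | null, receipts: Receipt[], allocationId: string): Rav {
+  let total = previous ? previous.valueAggregate : 0n
+  for (const receipt of receipts) {
+    if (receipt.allocationId !== allocationId) continue // RAVs are per allocation
+    total += receipt.value
+  }
+  return { allocationId, valueAggregate: total }
+}
+
+// Two receipts worth 3 and 5 roll up into a RAV worth 8.
+const rav = aggregate(
+  null,
+  [
+    { allocationId: '0xabc', value: 3n },
+    { allocationId: '0xabc', value: 5n },
+  ],
+  '0xabc'
+)
+console.log(rav.valueAggregate) // 8n
+```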
+
+## Blockchain Addresses
+
+### Contracts
+
+| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) |
+| ------------------- | -------------------------------------------- | -------------------------------------------- |
+| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` |
+| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` |
+| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` |
+
+### Gateway
+
+| Component | Edge and Node Mainnet (Arbitrum Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) |
+| ---------- | --------------------------------------------- | --------------------------------------------- |
+| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` |
+| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
+| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
+
+### Exigences
+
+In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query it, or host it yourself on your `graph-node`.
+
+- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+
+> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+
+## Migration Guide
+
+### Software versions
+
+| Component | Version | Image Link |
+| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- |
+| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) |
+| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) |
+| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) |
+
+### Steps
+
+1. **Indexer Agent**
+
+   - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components).
+   - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs.
+
+2. **Indexer Service**
+
+   - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+   - Like the older version, you can easily scale Indexer Service horizontally. It is still stateless.
+
+3. **TAP Agent**
+
+   - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+
+4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Notez : + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/hi/about.mdx b/website/pages/hi/about.mdx index d410edbe8d76..be570e564599 100644 --- a/website/pages/hi/about.mdx +++ b/website/pages/hi/about.mdx @@ -2,46 +2,66 @@ title: ग्राफ के बारे में --- -यह पृष्ठ समझाएगा कि ग्राफ़ क्या है और आप कैसे आरंभ कर सकते हैं। - ## ग्राफ क्या है? -एथेरियम से शुरू होने वाले ब्लॉकचेन से डेटा को अनुक्रमित करने और क्वेरी करने के लिए ग्राफ़ एक विकेन्द्रीकृत प्रोटोकॉल है। यह डेटा को क्वेरी करना संभव बनाता है जिसे सीधे क्वेरी करना मुश्किल होता है। +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -[Uniswap](https://uniswap.org/) जैसे जटिल स्मार्ट अनुबंध वाले प्रोजेक्ट और [Bored Ape Yacht Club](https://boredapeyachtclub.com/) जैसे NFT पहल एथेरियम ब्लॉकचैन पर डेटा स्टोर करें, जिससे ब्लॉकचैन से सीधे बुनियादी डेटा के अलावा कुछ भी पढ़ना मुश्किल हो जाता है। +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -आप अपना स्वयं का सर्वर भी बना सकते हैं, लेनदेन को वहां संसाधित कर सकते हैं, उन्हें डेटाबेस में सहेज सकते हैं, और डेटा को क्वेरी करने के लिए इन सबसे ऊपर एक एपीआई एंडपॉइंट बना सकते हैं। हालाँकि, यह विकल्प संसाधन गहन है, रखरखाव की आवश्यकता है, विफलता का एक बिंदु प्रस्तुत करता है, और विकेंद्रीकरण के लिए आवश्यक महत्वपूर्ण सुरक्षा गुणों को तोड़ता है। +### How The Graph Functions -**ब्लॉकचेन डेटा को इंडेक्स करना वास्तव में कठिन है।** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). 
+- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## ग्राफ कैसे काम करता है +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -ग्राफ सीखता है कि सबग्राफ विवरण के आधार पर एथेरियम डेटा को क्या और कैसे अनुक्रमित किया जाए, जिसे सबग्राफ मेनिफेस्ट के रूप में जाना जाता है। सबग्राफ विवरण एक सबग्राफ के लिए ब्याज के स्मार्ट अनुबंधों को परिभाषित करता है, उन अनुबंधों की घटनाओं पर ध्यान देना है, और इवेंट डेटा को उस डेटा से कैसे मैप करना है जिसे ग्राफ़ अपने डेटाबेस में संग्रहीत करेगा। +- When creating a subgraph, you need to write a subgraph manifest. -एक बार जब आप `सबग्राफ मेनिफेस्ट` लिख लेते हैं, तो आप IPFS में परिभाषा को स्टोर करने के लिए ग्राफ़ सीएलआई का उपयोग करते हैं और इंडेक्सर को उस सबग्राफ के लिए इंडेक्सिंग डेटा शुरू करने के लिए कहते हैं। +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -एथेरियम लेनदेन से निपटने के लिए, एक बार सबग्राफ मेनिफेस्ट तैनात किए जाने के बाद, यह आरेख डेटा के प्रवाह के बारे में अधिक विवरण देता है: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![एक ग्राफ़िक समझाता है कि कैसे ग्राफ़ डेटा उपभोक्ताओं को क्वेरीज़ प्रदान करने के लिए ग्राफ़ नोड का उपयोग करता है](/img/graph-dataflow.png) प्रवाह इन चरणों का पालन करता है: -1. एक विकेंद्रीकृत एप्लिकेशन स्मार्ट अनुबंध पर लेनदेन के माध्यम से एथेरियम में डेटा जोड़ता है। -2. लेन-देन संसाधित करते समय स्मार्ट अनुबंध एक या अधिक घटनाओं का उत्सर्जन करता है। -3. ग्राफ़ नोड लगातार नए ब्लॉकों के लिए एथेरियम को स्कैन करता है और आपके सबग्राफ के डेटा में शामिल हो सकता है। -4. ग्राफ नोड इन ब्लॉकों में आपके सबग्राफ के लिए एथेरियम ईवेंट ढूंढता है और आपके द्वारा प्रदान किए गए मैपिंग हैंडलर को चलाता है। मैपिंग एक WASM मॉड्यूल है जो एथेरियम घटनाओं के जवाब में ग्राफ़ नोड द्वारा संग्रहीत डेटा संस्थाओं को बनाता या अपडेट करता है। -5. नोड के [GraphQL समापन बिंदु](https://graphql.org/learn/) का उपयोग करते हुए, विकेन्द्रीकृत एप्लिकेशन ब्लॉकचैन से अनुक्रमित डेटा के लिए ग्राफ़ नोड से पूछताछ करता है। ग्राफ़ नोड बदले में इस डेटा को प्राप्त करने के लिए, स्टोर की इंडेक्सिंग क्षमताओं का उपयोग करते हुए, अपने अंतर्निहित डेटा स्टोर के लिए ग्राफ़कॉल प्रश्नों का अनुवाद करता है। विकेंद्रीकृत एप्लिकेशन इस डेटा को एंड-यूजर्स के लिए एक समृद्ध यूआई में प्रदर्शित करता है, जिसका उपयोग वे एथेरियम पर नए लेनदेन जारी करने के लिए करते हैं। चक्र दोहराता है। +1. एक विकेंद्रीकृत एप्लिकेशन स्मार्ट अनुबंध पर लेनदेन के माध्यम से एथेरियम में डेटा जोड़ता है। +2. लेन-देन संसाधित करते समय स्मार्ट अनुबंध एक या अधिक घटनाओं का उत्सर्जन करता है। +3. ग्राफ़ नोड लगातार नए ब्लॉकों के लिए एथेरियम को स्कैन करता है और आपके सबग्राफ के डेटा में शामिल हो सकता है। +4. ग्राफ नोड इन ब्लॉकों में आपके सबग्राफ के लिए एथेरियम ईवेंट ढूंढता है और आपके द्वारा प्रदान किए गए मैपिंग हैंडलर को चलाता है। मैपिंग एक WASM मॉड्यूल है जो एथेरियम घटनाओं के जवाब में ग्राफ़ नोड द्वारा संग्रहीत डेटा संस्थाओं को बनाता या अपडेट करता है। +5. 
नोड के [GraphQL समापन बिंदु](https://graphql.org/learn/) का उपयोग करते हुए, विकेन्द्रीकृत एप्लिकेशन ब्लॉकचैन से अनुक्रमित डेटा के लिए ग्राफ़ नोड से पूछताछ करता है। ग्राफ़ नोड बदले में इस डेटा को प्राप्त करने के लिए, स्टोर की इंडेक्सिंग क्षमताओं का उपयोग करते हुए, अपने अंतर्निहित डेटा स्टोर के लिए ग्राफ़कॉल प्रश्नों का अनुवाद करता है। विकेंद्रीकृत एप्लिकेशन इस डेटा को एंड-यूजर्स के लिए एक समृद्ध यूआई में प्रदर्शित करता है, जिसका उपयोग वे एथेरियम पर नए लेनदेन जारी करने के लिए करते हैं। चक्र दोहराता है। ## अगले कदम -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/hi/arbitrum/arbitrum-faq.mdx b/website/pages/hi/arbitrum/arbitrum-faq.mdx index a64a009e8616..3fa4fafdd8c5 100644 --- a/website/pages/hi/arbitrum/arbitrum-faq.mdx +++ b/website/pages/hi/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum FAQ Click [here](#billing-on-arbitrum-faqs) if you would like to skip to the Arbitrum Billing FAQs. -## The Graph L2 Solution क्यों लागू कर रहा है? +## Why did The Graph implement an L2 Solution? -L2 पर Graph को scale करके, network participants उम्मीद कर सकते हैं: +By scaling The Graph on L2, network participants can now benefit from: - Upwards of 26x savings on gas fees @@ -14,7 +14,7 @@ L2 पर Graph को scale करके, network participants उम्मी - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers could open and close allocations to index a greater number of subgraphs with greater frequency, developers could deploy and update subgraphs with greater ease, Delegators could delegate GRT with increased frequency, and Curators could add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -41,27 +41,21 @@ L2 पर द ग्राफ़ का उपयोग करने का ल ## सबग्राफ डेवलपर, डेटा उपभोक्ता, इंडेक्सर, क्यूरेटर, या डेलिगेटर के रूप में, अब मुझे क्या करने की आवश्यकता है? 
-तत्काल कार्रवाई की आवश्यकता नहीं है, हालांकि, नेटवर्क प्रतिभागियों को एल2 के लाभों का लाभ उठाने के लिए आर्बिट्रम में जाना शुरू करने के लिए प्रोत्साहित किया जाता है। +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -कोर डेवलपर टीमें एल2 ट्रांसफर टूल बनाने के लिए काम कर रही हैं, जिससे डेलिगेशन, क्यूरेशन और सबग्राफ को आर्बिट्रम में स्थानांतरित करना काफी आसान हो जाएगा। नेटवर्क प्रतिभागी उम्मीद कर सकते हैं कि 2023 की गर्मियों तक एल2 ट्रांसफर टूल उपलब्ध हो जाएंगे। +All indexing rewards are now entirely on Arbitrum. -10 अप्रैल, 2023 तक, सभी इंडेक्सिंग पुरस्कारों का 5% आर्बिट्रम पर खनन किया जा रहा है। जैसे-जैसे नेटवर्क की भागीदारी बढ़ती है, और जैसे ही परिषद इसे मंजूरी देती है, अनुक्रमण पुरस्कार धीरे-धीरे एथेरियम से आर्बिट्रम में स्थानांतरित हो जाएंगे, अंततः पूरी तरह से आर्बिट्रम में चले जाएंगे। - -## यदि मैं L2 पर नेटवर्क में भाग लेना चाहता हूँ, तो मुझे क्या करना चाहिए? - -Please help [test the network](https://testnet.thegraph.com/explorer) on L2 and report feedback about your experience in [Discord](https://discord.gg/graphprotocol). - -## क्या नेटवर्क को L2 तक स्केल करने से जुड़े कोई जोखिम हैं? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). हर चीज़ का पूरी तरह से परीक्षण किया गया है, और एक सुरक्षित और निर्बाध संक्रमण सुनिश्चित करने के लिए एक आकस्मिक योजना बनाई गई है। विवरण यहां पाया जा सकता है [here] (https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and- सुरक्षा-विचार-20). -## क्या एथेरियम पर मौजूदा सबग्राफ काम करना जारी रखेंगे? +## Are existing subgraphs on Ethereum working? -हां, द ग्राफ नेटवर्क कॉन्ट्रैक्ट बाद की तारीख में पूरी तरह से आर्बिट्रम में जाने तक एथेरियम और आर्बिट्रम दोनों पर समानांतर रूप से काम करेगा। +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## क्या जीआरटी के पास आर्बिट्रम पर एक नया स्मार्ट अनुबंध होगा? +## Does GRT have a new smart contract deployed on Arbitrum? Yes, GRT has an additional [smart contract on Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). However, the Ethereum mainnet [GRT contract](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) will remain operational. diff --git a/website/pages/hi/billing.mdx b/website/pages/hi/billing.mdx index d6377dbf57be..4800696fb359 100644 --- a/website/pages/hi/billing.mdx +++ b/website/pages/hi/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. पृष्ठ के ऊपरी दाएं कोने पर "कनेक्ट वॉलेट" बटन पर क्लिक करें। आपको बटुआ चयन पृष्ठ पर पुनर्निर्देशित किया जाएगा। अपना बटुआ चुनें और "कनेक्ट" पर क्लिक करें। 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. 
Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. 
We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/hi/chain-integration-overview.mdx b/website/pages/hi/chain-integration-overview.mdx index 2fe6c2580909..a142f3f817f9 100644 --- a/website/pages/hi/chain-integration-overview.mdx +++ b/website/pages/hi/chain-integration-overview.mdx @@ -6,12 +6,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 1. Technical Integration -- Teams work on a Graph Node integration and Firehose for non-EVM based chains. [Here's how](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. ## Stage 2. Integration Validation -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON RPC or Firehose endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph Indexers test the integration on The Graph's testnet. - Core developers and Indexers monitor stability, performance, and data determinism. @@ -38,7 +38,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. 
Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. -### 3. How much time will this process take? +### 3. How much time will the process of reaching full protocol support take? The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. @@ -46,4 +46,4 @@ Protocol support for indexing rewards depends on the stakeholders' bandwidth to ### 4. How will priorities be handled? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/hi/cookbook/arweave.mdx b/website/pages/hi/cookbook/arweave.mdx index 3c79cdad4622..427918da1033 100644 --- a/website/pages/hi/cookbook/arweave.mdx +++ b/website/pages/hi/cookbook/arweave.mdx @@ -105,7 +105,7 @@ dataSources: इवेंट्स को प्रोसेस करने के लिए हैंडलर्स [असेंबली स्क्रिप्ट](https://www.assemblyscript.org/) में लिखे गए हैं| -आरवीव इंडेक्सिंग आरवीव-विशेष डाटा टाइप्स को [AssemblyScript API](/developing/assemblyscript-api/) में लाती है| +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/hi/cookbook/base-testnet.mdx b/website/pages/hi/cookbook/base-testnet.mdx index 186dd9481c97..cce1a7ce7e70 100644 --- a/website/pages/hi/cookbook/base-testnet.mdx +++ b/website/pages/hi/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ graph init --studio The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- स्कीमा (schema.graphql) - ग्राफक्यूएल स्कीमा परिभाषित करता है कि आप सबग्राफ से कौन सा डेटा प्राप्त करना चाहते हैं। - असेंबलीस्क्रिप्ट मैपिंग (mapping.ts) - यह वह कोड है जो स्कीमा में परिभाषित इकाई के लिए आपके डेटा सोर्स से डेटा का अनुवाद करता है। -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. 
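+
+As a rough illustration, assuming a hypothetical `Transfer` event in your contract's ABI and a matching `Transfer` entity in `schema.graphql`, the corresponding AssemblyScript mapping might look like the sketch below (the generated import paths depend on your own `graph codegen` output):
+
+```typescript
+// Hypothetical example: the entity fields and event parameters must match your own schema and ABI.
+import { Transfer as TransferEvent } from '../generated/Token/Token'
+import { Transfer } from '../generated/schema'
+
+export function handleTransfer(event: TransferEvent): void {
+  // Use something unique per event (transaction hash + log index) as the entity ID.
+  let entity = new Transfer(event.transaction.hash.toHex() + '-' + event.logIndex.toString())
+  entity.from = event.params.from
+  entity.to = event.params.to
+  entity.value = event.params.value
+  entity.blockNumber = event.block.number
+  entity.save()
+}
+```
+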
For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). diff --git a/website/pages/hi/cookbook/cosmos.mdx b/website/pages/hi/cookbook/cosmos.mdx index ae8af7e316b8..8d001fca8e95 100644 --- a/website/pages/hi/cookbook/cosmos.mdx +++ b/website/pages/hi/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and इवेंट्स को प्रोसेस करने के हैंडलर्स [असेंबली स्क्रिप्ट ](https://www.assemblyscript.org/) में लिखे गए हैं| -कॉसमॉस इंडेक्सिंग कॉसमॉस-विशिष्ट डाटा प्रकारो को [असेंबली स्क्रिप्ट ए पी आई](/developing/assemblyscript-api/) में ले कर आती है| +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/hi/cookbook/grafting.mdx b/website/pages/hi/cookbook/grafting.mdx index 88c95a19b144..90c6dd79a5be 100644 --- a/website/pages/hi/cookbook/grafting.mdx +++ b/website/pages/hi/cookbook/grafting.mdx @@ -22,7 +22,7 @@ title: एक कॉन्ट्रैक्ट बदलें और उसक - [ग्राफ्टिंग](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -इस अनुशिक्षण में हम एक बुनियादी उदहारण देखेंगे| हम एक मौजूदा कॉन्ट्रैक्ट को एक समान कॉन्ट्रैक्ट से बदल देंगे( नए एड्रेस के साथ, मगर सामान कोड). उसके बाद हम एक मौजूदा सब-ग्राफ एक "बेस" सब-ग्राफ में ग्राफ्ट कर देंगे नए कॉन्ट्रैक्ट की निगरानी करेगा| +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ title: एक कॉन्ट्रैक्ट बदलें और उसक ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - `Lock` डाटा सोर्स वह ऐ बी आई और कॉन्ट्रैक्ट एड्रेस है जो कि हमे तब मिलेगा जब हम अपना कॉन्ट्रैक्ट संकलित और तैनात करते हैं| -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - `mapping` सेक्शन उन ट्रिगर ऑफ़ इंटरेस्ट और उनके जवाब में चलने वाले फंक्शन्स को परिभासित करता है| इस स्थिति में, हम `Withdrawal` फंक्शन को सुनते हैं और `handleWithdrawal` फंक्शन को कॉल करते हैं| ## ग्राफ्टिंग मैनिफेस्ट की परिभाषा @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. 
## अतिरिक्त संसाधन -अगर आप ग्राफ्टिंग के साथ और अधिक अनुभव चाहते हैं तोह आपके लिए निम्न कुछ लोकप्रिय कॉन्ट्रैक्ट्स हैं: +If you want more experience with grafting, here are a few examples for popular contracts: - [कर्व](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [इ आर सी - 721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/hi/cookbook/near.mdx b/website/pages/hi/cookbook/near.mdx index 196558fc8c2a..47b83681983b 100644 --- a/website/pages/hi/cookbook/near.mdx +++ b/website/pages/hi/cookbook/near.mdx @@ -37,7 +37,7 @@ NEAR सबग्राफ डेवलपमेंट के लिए `graph-c **schema.graphql:** एक स्कीमा फ़ाइल जो परिभाषित करती है कि आपके सबग्राफ के लिए कौन सा डेटा इकट्ठा होगा, और इसे ग्राफ़क्यूएल के माध्यम से कैसे क्वेरी करें। NEAR सबग्राफ की आवश्यकताएं [मौजूदा दस्तावेज़ीकरण](/Developing/creating-a-subgraph#the-graphql-schema) द्वारा कवर की गई हैं। -**असेंबलीस्क्रिप्ट मैपिंग:** [असेंबलीस्क्रिप्ट कोड](/developing/assemblyscript-api) जो इवेंट डेटा से आपके स्कीमा में परिभाषित इकाइयों में अनुवाद करता है। NEAR समर्थन NEAR-विशिष्ट डेटा प्रकारों और नई JSON पार्सिंग कार्यक्षमता का परिचय देता है। +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. सबग्राफ डेवलपमेंट के दौरान दो प्रमुख कमांड होते हैं: @@ -98,7 +98,7 @@ NEAR डेटा स्रोत दो प्रकार के हैंड इवेंट को प्रोसेस करने के लिए हैंडलर [AssemblyScript](https://www.assemblyscript.org/) में लिखे होते हैं। -NEAR इंडेक्सिंग [AssemblyScript API](/Developing/assemblyscript-api) के लिए NEAR-विशिष्ट डेटा प्रकारों का परिचय देता है। +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ class ReceiptWithOutcome { - ब्लॉक हैंडलर्स को एक `ब्लॉक` प्राप्त होगा - रसीद संचालकों को `ReceiptWithOutcome` प्राप्त होगा -अन्यथा, शेष [AssemblyScript API](/Developing/assemblyscript-api) मैपिंग निष्पादन के दौरान NEAR सबग्राफ डेवलपर्स के लिए उपलब्ध है। +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -इसमें एक नया JSON पार्सिंग फ़ंक्शन शामिल है - NEAR पर logs अक्सर stringified JSON के रूप में उत्सर्जित होते हैं। एक नया `json.fromString(...)` फ़ंक्शन [JSON API](/developing/assemblyscript-api#json-api) के भाग के रूप में डेवलपर्स को अनुमति देने के लिए उपलब्ध है इन logs को आसानी से संसाधित करने के लिए। +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. 
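+
+For example, a minimal, hypothetical receipt handler could use `json.fromString(...)` roughly as follows. The `EVENT_JSON:` prefix and the payload shape are assumptions about this particular contract's logs, not something NEAR guarantees:
+
+```typescript
+import { near, json, JSONValueKind, log } from '@graphprotocol/graph-ts'
+
+export function handleReceipt(receiptWithOutcome: near.ReceiptWithOutcome): void {
+  const logs = receiptWithOutcome.outcome.logs
+  for (let i = 0; i < logs.length; i++) {
+    // Many NEAR contracts prefix structured logs with "EVENT_JSON:" (an assumption here).
+    const raw = logs[i].startsWith('EVENT_JSON:') ? logs[i].substring('EVENT_JSON:'.length) : logs[i]
+    // json.try_fromString is the non-aborting variant if a log may not be valid JSON.
+    const parsed = json.fromString(raw)
+    if (parsed.kind != JSONValueKind.OBJECT) {
+      log.info('Skipping non-JSON log: {}', [logs[i]])
+      continue
+    }
+    const eventField = parsed.toObject().get('event')
+    if (eventField != null) {
+      log.info('Handling NEAR event of type {}', [eventField.toString()])
+      // ...create or update entities from the parsed payload here
+    }
+  }
+}
+```
+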
## एक NEAR सबग्राफ की तैनाती diff --git a/website/pages/hi/cookbook/subgraph-debug-forking.mdx b/website/pages/hi/cookbook/subgraph-debug-forking.mdx index da72d452509e..89d0d58605ff 100644 --- a/website/pages/hi/cookbook/subgraph-debug-forking.mdx +++ b/website/pages/hi/cookbook/subgraph-debug-forking.mdx @@ -8,7 +8,7 @@ As with many systems processing large amounts of data, The Graph's Indexers (Gra **सबग्राफ फोर्किंग** आलसी ढंग से _दूसरे_ सबग्राफ के स्टोर (आमतौर पर एक परोक्ष सबग्राफ) से इकाइयां को लाने की प्रक्रिया है। -डिबगिंग के संदर्भ में, **सबग्राफ फोर्किंग** आपको ब्लॉक*X* को सिंक-अप करने के लिए बिना प्रतीक्षा किए ब्लॉक _X_ पर अपने विफल सबग्राफ को डीबग करने की अनुमति देता है । +डिबगिंग के संदर्भ में, **सबग्राफ फोर्किंग** आपको ब्लॉक_X_ को सिंक-अप करने के लिए बिना प्रतीक्षा किए ब्लॉक _X_ पर अपने विफल सबग्राफ को डीबग करने की अनुमति देता है । ## क्या?! कैसे? diff --git a/website/pages/hi/cookbook/subgraph-uncrashable.mdx b/website/pages/hi/cookbook/subgraph-uncrashable.mdx index 7038a99ce5ce..2e8489da36a2 100644 --- a/website/pages/hi/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/hi/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: सुरक्षित सबग्राफ कोड जेनरे - फ्रेमवर्क में इकाई वैरिएबल के समूहों के लिए कस्टम, लेकिन सुरक्षित, सेटर फ़ंक्शन बनाने का एक तरीका (कॉन्फिग फ़ाइल के माध्यम से) भी शामिल है। इस तरह उपयोगकर्ता के लिए एक पुरानी ग्राफ़ इकाई को लोड/उपयोग करना असंभव है और फ़ंक्शन द्वारा आवश्यक वैरिएबल को सहेजना या सेट करना भूलना भी असंभव है। -- चेतावनी लॉग्स लॉग्स के रूप में रिकॉर्ड किए जाते हैं जो इंगित करते हैं कि डेटा सटीकता सुनिश्चित करने के लिए समस्या को ठीक करने में मदद करने के लिए सबग्राफ लॉजिक का उल्लंघन कहां हुआ है। इन लॉग्स को 'लॉग्स' सेक्शन के तहत द ग्राफ की होस्टेड सेवा में देखा जा सकता है। +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. सबग्राफ अनक्रैशेबल को ग्राफ़ CLI codegen कमांड का उपयोग करके एक वैकल्पिक फ़्लैग के रूप में चलाया जा सकता है। diff --git a/website/pages/hi/cookbook/upgrading-a-subgraph.mdx b/website/pages/hi/cookbook/upgrading-a-subgraph.mdx index 3ea603cdfc7e..0152c7c54e84 100644 --- a/website/pages/hi/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/hi/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ Make sure **Update Subgraph Details in Explorer** is checked and click on **Save ## ग्राफ़ नेटवर्क पर एक सबग्राफ का बहिष्कार करना -Follow the steps [here](/managing/deprecating-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## ग्राफ़ नेटवर्क पर एक सबग्राफ + बिलिंग को क्वेरी करना diff --git a/website/pages/hi/deploying/multiple-networks.mdx b/website/pages/hi/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..959030355e0c --- /dev/null +++ b/website/pages/hi/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). 
+ +## सबग्राफ को कई नेटवर्क पर तैनात करना + +कुछ मामलों में, आप एक ही सबग्राफ को इसके सभी कोड को डुप्लिकेट किए बिना कई नेटवर्क पर तैनात करना चाहेंगे। इसके साथ आने वाली मुख्य चुनौती यह है कि इन नेटवर्कों पर अनुबंध के पते अलग-अलग हैं। + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +आपकी नेटवर्क कॉन्फ़िग फ़ाइल इस तरह दिखनी चाहिए: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +अब हम निम्न में से कोई एक कमांड चला सकते हैं: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Subgraph.yaml टेम्पलेट का उपयोग करना + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). 
+
+To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network:
+
+```json
+{
+  "network": "mainnet",
+  "address": "0x123..."
+}
+```
+
+और
+
+```json
+{
+  "network": "sepolia",
+  "address": "0xabc..."
+}
+```
+
+Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`:
+
+```yaml
+# ...
+dataSources:
+  - kind: ethereum/contract
+    name: Gravity
+    network: mainnet
+    network: {{network}}
+    source:
+      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC'
+      address: '{{address}}'
+      abi: Gravity
+    mapping:
+      kind: ethereum/events
+```
+
+In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`:
+
+```json
+{
+  ...
+  "scripts": {
+    ...
+    "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml",
+    "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml"
+  },
+  "devDependencies": {
+    ...
+    "mustache": "^3.1.0"
+  }
+}
+```
+
+To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands:
+
+```sh
+# Mainnet:
+yarn prepare:mainnet && yarn deploy
+
+# Sepolia:
+yarn prepare:sepolia && yarn deploy
+```
+
+A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759).
+
+**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs are also generated from templates.
+
+## सबग्राफ स्टूडियो सबग्राफ संग्रह नीति
+
+A subgraph version in Studio is archived if and only if it meets the following criteria:
+
+- The version is not published to the network (or pending publish)
+- The version was created 45 or more days ago
+- The subgraph hasn't been queried in 30 days
+
+In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
+
+इस नीति से प्रभावित प्रत्येक सबग्राफ के पास विचाराधीन संस्करण को वापस लाने का विकल्प है।
+
+## सबग्राफ स्वास्थ्य की जाँच करना
+
+यदि एक सबग्राफ सफलतापूर्वक सिंक हो जाता है, तो यह एक अच्छा संकेत है कि यह हमेशा के लिए अच्छी तरह से चलता रहेगा। हालांकि, नेटवर्क पर नए ट्रिगर्स के कारण आपका सबग्राफ एक अनुपयोगी त्रुटि स्थिति में आ सकता है या यह प्रदर्शन समस्याओं या नोड ऑपरेटरों के साथ समस्याओं के कारण पीछे पड़ना शुरू हो सकता है।
+
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default.
The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/hi/developing/creating-a-subgraph.mdx b/website/pages/hi/developing/creating-a-subgraph.mdx index e413825306fe..ba2a90b0285f 100644 --- a/website/pages/hi/developing/creating-a-subgraph.mdx +++ b/website/pages/hi/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: सबग्राफ बनाना --- -एक सबग्राफ एक ब्लॉकचेन से डेटा निकालता है, इसे प्रोसेस करता है और इसे स्टोर करता है ताकि इसे ग्राफक्यूएल के माध्यम से आसानी से क्वेरी किया जा सके। +This detailed guide provides instructions to successfully create a subgraph. -![एक सबग्राफ को परिभाषित करना](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -सबग्राफ की परिभाषा में कुछ फाइलें होती हैं: +![एक सबग्राफ को परिभाषित करना](/img/defining-a-subgraph.png) -- `subgraph.yaml`: एक YAML फ़ाइल जिसमें सबग्राफ मेनिफ़ेस्ट होता है +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: एक ग्राफक्यूएल स्कीमा जो परिभाषित करता है कि आपके सबग्राफ के लिए कौन सा डेटा संग्रहीत है, और इसे ग्राफक्यूएल के माध्यम से कैसे क्वेरी करें +## शुरू करना -- `AssemblyScript मैपिंग`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) कोड जो इवेंट डेटा से आपके स्कीमा में परिभाषित इकाइयों में अनुवाद करता है (उदाहरण के लिए `mapping.ts` इस ट्यूटोरियल में) +### . ग्राफ़ सीएलआई इनस्टॉल करें -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. 
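As a quick sanity check before installing, you can confirm the prerequisites from a terminal (this assumes `node` and your package manager are already on your `PATH`):

```sh
node --version
npm --version   # or: yarn --version
```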
Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## . ग्राफ़ सीएलआई इनस्टॉल करें +अपनी स्थानीय मशीन पर, निम्न आदेशों में से कोई एक चलाएँ: -ग्राफ़ सीएलआई जावास्क्रिप्ट में लिखा गया है, और इसका उपयोग करने के लिए आपको या तो `yarn` या `npm` स्थापित करना होगा; यह माना जाता है कि आपके पास निम्नलिखित में yarn है। +#### Using [npm](https://www.npmjs.com/) -एक बार जब आपके पास `yarn` हो जाए, तो चलाकर ग्राफ़ सीएलआई स्थापित करें +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Yarn के साथ स्थापित करें:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**एनपीएम के साथ स्थापित करें:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## एक मौजूदा कॉन्ट्रैक्ट से +### From an existing contract -निम्न आदेश एक सबग्राफ बनाता है जो मौजूदा अनुबंध की सभी घटनाओं को अनुक्रमित करता है। यह एथरस्कैन से अनुबंध एबीआई लाने का प्रयास करता है और स्थानीय फ़ाइल पथ का अनुरोध करने के लिए वापस आ जाता है। यदि कोई वैकल्पिक तर्क गायब है, तो यह आपको एक संवादात्मक रूप में ले जाता है। +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -`` सबग्राफ स्टूडियो में आपके सबग्राफ की आईडी है, यह आपके सबग्राफ विवरण पृष्ठ पर पाया जा सकता है। +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## एक उदाहरण सबग्राफ से +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -दूसरा मोड `graph init` सपोर्ट करता है, एक उदाहरण सबग्राफ से एक नया प्रोजेक्ट बना रहा है। निम्न आदेश यह करता है: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. 
+- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## मौजूदा सबग्राफ में नए डेटा स्रोत जोड़ें +## Add new `dataSources` to an existing subgraph -चूँकि `v0.31.0` `graph-cli` `graph add` कमांड के माध्यम से मौजूदा सबग्राफ में नए डेटा स्रोतों को जोड़ने का समर्थन करता है। +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
[] @@ -78,22 +88,45 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -The `add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option), and will create a new `dataSource` in the same way that `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- `--merge-entities` विकल्प यह बताता है कि डेवलपर `entity` और `event` नाम के विरोधों को कैसे हैंडल करना चाहता है: + + - अगर `सही`: नए `dataSource` को मौजूदा `eventHandlers` & `इकाइयां`। + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- संबंधित नेटवर्क के लिए `networks.json` को अनुबंध `पता` लिखा जाएगा। + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -`--merge-entities` विकल्प यह बताता है कि डेवलपर `entity` और `event` नाम के विरोधों को कैसे हैंडल करना चाहता है: +## Components of a subgraph -- अगर `सही`: नए `dataSource` को मौजूदा `eventHandlers` & `इकाइयां`। -- अगर `गलत`: एक नई इकाई & ईवेंट हैंडलर को `${dataSourceName}{EventName}` के साथ बनाया जाना चाहिए। +### द सबग्राफ मेनिफेस्ट -संबंधित नेटवर्क के लिए `networks.json` को अनुबंध `पता` लिखा जाएगा। +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **ध्यान दें:** इंटरैक्टिव क्ली का उपयोग करते समय, `ग्राफ़ इनिट` को सफलतापूर्वक चलाने के बाद, आपको एक नया `डेटा स्रोत` जोड़ने के लिए कहा जाएगा । +The **subgraph definition** consists of the following files: -## द सबग्राफ मेनिफेस्ट +- `subgraph.yaml`: Contains the subgraph manifest -सबग्राफ मेनिफेस्ट `subgraph.yaml` आपके सबग्राफ इंडेक्स के स्मार्ट कॉन्ट्रैक्ट्स को परिभाषित करता है, इन कॉन्ट्रैक्ट्स से किन इवेंट्स पर ध्यान देना है, और इवेंट डेटा को उन संस्थाओं से कैसे मैप करना है जो ग्राफ़ नोड स्टोर करता है और क्वेरी करने की अनुमति देता है। सबग्राफ मेनिफ़ेस्ट के लिए पूर्ण विशिष्टता [यहां](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md) पाई जा सकती है। +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -उदाहरण के सबग्राफ के लिए `subgraph.yaml` है: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
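As noted in the list above, indexing several contracts simply means adding one entry per contract to the `dataSources` array. A minimal sketch of what that could look like is shown below — the contract names, addresses, and start blocks are placeholders, and each mapping section is truncated:

```yaml
dataSources:
  - kind: ethereum/contract
    name: TokenA
    network: mainnet
    source:
      address: '0xaaa...'
      abi: TokenA
      startBlock: 15000000
    mapping:
      kind: ethereum/events
      # ... handlers, entities and ABIs for TokenA
  - kind: ethereum/contract
    name: TokenB
    network: mainnet
    source:
      address: '0xbbb...'
      abi: TokenB
      startBlock: 15200000
    mapping:
      kind: ethereum/events
      # ... handlers, entities and ABIs for TokenB
```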
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ dataSources: निम्नलिखित प्रक्रिया का उपयोग करके एक ब्लॉक के भीतर डेटा स्रोत के लिए ट्रिगर्स का आदेश दिया गया है: -1. ईवेंट और कॉल ट्रिगर्स को पहले ब्लॉक के भीतर ट्रांजैक्शन इंडेक्स द्वारा ऑर्डर किया जाता है। -2. एक ही लेन-देन के भीतर ईवेंट और कॉल ट्रिगर्स को एक कन्वेंशन का उपयोग करके ऑर्डर किया जाता है: ईवेंट पहले ट्रिगर करता है फिर ट्रिगर्स को कॉल करता है, प्रत्येक प्रकार के ऑर्डर का सम्मान करते हुए उन्हें मेनिफेस्ट में परिभाषित किया जाता है। -3. ब्लॉक ट्रिगर इवेंट और कॉल ट्रिगर के बाद चलाए जाते हैं, जिस क्रम में उन्हें मेनिफेस्ट में परिभाषित किया गया है। +1. ईवेंट और कॉल ट्रिगर्स को पहले ब्लॉक के भीतर ट्रांजैक्शन इंडेक्स द्वारा ऑर्डर किया जाता है। +2. एक ही लेन-देन के भीतर ईवेंट और कॉल ट्रिगर्स को एक कन्वेंशन का उपयोग करके ऑर्डर किया जाता है: ईवेंट पहले ट्रिगर करता है फिर ट्रिगर्स को कॉल करता है, प्रत्येक प्रकार के ऑर्डर का सम्मान करते हुए उन्हें मेनिफेस्ट में परिभाषित किया जाता है। +3. ब्लॉक ट्रिगर इवेंट और कॉल ट्रिगर के बाद चलाए जाते हैं, जिस क्रम में उन्हें मेनिफेस्ट में परिभाषित किया गया है। ये आदेश नियम परिवर्तन के अधीन हैं। @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. 
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| संस्करण | रिलीज नोट्स | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| संस्करण | रिलीज नोट्स | +|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### एबीआई प्राप्त करना @@ -442,16 +475,16 @@ type GravatarDeclined @entity { हम अपने ग्राफक्यूएल एपीआई में निम्नलिखित स्केलर्स का समर्थन करते हैं: -| प्रकार | विवरण | -| --- | --- | -| `Bytes` | बाइट सरणी, एक हेक्साडेसिमल स्ट्रिंग के रूप में दर्शाया गया है। आमतौर पर एथेरियम हैश और पतों के लिए उपयोग किया जाता है। | -| `String` | `स्ट्रिंग` मानों के लिए स्केलर। अशक्त वर्ण समर्थित नहीं हैं और स्वचालित रूप से हटा दिए जाते हैं। | -| `Boolean` | `boolean` मानों के लिए स्केलर। | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. 
| -| `BigInt` | बड़े पूर्णांक। एथेरियम के `uint32`, `int64`, `uint64`, ..., `uint256` प्रकारों के लिए उपयोग किया जाता है। नोट: `uint32` के नीचे सब कुछ, जैसे `int32`, `uint24` या `int8` को `i32` के रूप में दर्शाया गया है। | -| `BigDecimal` | `BigDecimal` उच्च परिशुद्धता दशमलव एक महत्व और एक प्रतिपादक के रूप में दर्शाया गया है। एक्सपोनेंट रेंज -6143 से +6144 तक है। 34 महत्वपूर्ण अंकों तक गोल। | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| प्रकार | विवरण | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | बाइट सरणी, एक हेक्साडेसिमल स्ट्रिंग के रूप में दर्शाया गया है। आमतौर पर एथेरियम हैश और पतों के लिए उपयोग किया जाता है। | +| `String` | `स्ट्रिंग` मानों के लिए स्केलर। अशक्त वर्ण समर्थित नहीं हैं और स्वचालित रूप से हटा दिए जाते हैं। | +| `Boolean` | `boolean` मानों के लिए स्केलर। | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | बड़े पूर्णांक। एथेरियम के `uint32`, `int64`, `uint64`, ..., `uint256` प्रकारों के लिए उपयोग किया जाता है। नोट: `uint32` के नीचे सब कुछ, जैसे `int32`, `uint24` या `int8` को `i32` के रूप में दर्शाया गया है। | +| `BigDecimal` | `BigDecimal` उच्च परिशुद्धता दशमलव एक महत्व और एक प्रतिपादक के रूप में दर्शाया गया है। एक्सपोनेंट रेंज -6143 से +6144 तक है। 34 महत्वपूर्ण अंकों तक गोल। | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enums @@ -593,7 +626,7 @@ query usersWithOrganizations { #### स्कीमा में टिप्पणियां जोड़ना -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **ध्यान दें:** एक नया डेटा स्रोत केवल उस ब्लॉक के लिए कॉल और ईवेंट को प्रोसेस करेगा जिसमें इसे बनाया गया था और सभी बाद के ब्लॉक, लेकिन ऐतिहासिक डेटा, यानी डेटा को प्रोसेस नहीं करेगा जो पिछले ब्लॉकों में निहित है। -> +> > यदि पिछले ब्लॉक में नए डेटा स्रोत के लिए प्रासंगिक डेटा है, तो उस डेटा को अनुबंध की वर्तमान स्थिति को पढ़कर और नए डेटा स्रोत के निर्माण के समय उस स्थिति का प्रतिनिधित्व करने वाली संस्थाओं का निर्माण करना सबसे अच्छा है। ### डेटा स्रोत प्रसंग @@ -930,7 +963,7 @@ dataSources: ``` > **ध्यान दें:** इथरस्कैन पर अनुबंध निर्माण ब्लॉक को जल्दी से देखा जा सकता है: -> +> > 1. खोज बार में उसका पता दर्ज करके अनुबंध की खोज करें। > 2. `अनुबंध निर्माता` अनुभाग में निर्माण लेनदेन हैश पर क्लिक करें। > 3. लेन-देन विवरण पृष्ठ लोड करें जहां आपको उस अनुबंध के लिए प्रारंभ ब्लॉक मिलेगा। @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. 
`"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. ``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### फ़ाइलों को संसाधित करने के लिए एक नया हैंडलर बनाएँ -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). पढ़ने योग्य स्ट्रिंग के रूप में फ़ाइल की CID को `dataSource` के माध्यम से निम्नानुसार एक्सेस किया जा सकता है: diff --git a/website/pages/hi/developing/developer-faqs.mdx b/website/pages/hi/developing/developer-faqs.mdx index ce378ef44162..9f517b3d9261 100644 --- a/website/pages/hi/developing/developer-faqs.mdx +++ b/website/pages/hi/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: डेवलपर अक्सर पूछे जाने वाले प्रश्न --- -## 1. सबग्राफ क्या है? +This page summarizes some of the most common questions for developers building on The Graph. -एक सबग्राफ ब्लॉकचैन डेटा पर निर्मित एक कस्टम एपीआई है। सबग्राफ को ग्राफ़क्यूएल क्वेरी भाषा का उपयोग करके पूछताछ की जाती है और ग्राफ़ सीएलआई का उपयोग करके ग्राफ़ नोड पर तैनात किया जाता है। एक बार द ग्राफ के विकेंद्रीकृत नेटवर्क पर तैनात और प्रकाशित होने के बाद, इंडेक्सर्स सबग्राफ को प्रोसेस करते हैं और उन्हें सबग्राफ उपभोक्ताओं द्वारा पूछे जाने के लिए उपलब्ध कराते हैं। +## Subgraph Related -## 2. क्या मैं अपना सबग्राफ मिटा सकता हूँ? +### 1. सबग्राफ क्या है? -सबग्राफ बनाने के बाद उन्हें हटाना संभव नहीं है। +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. क्या मैं अपना सबग्राफ नाम बदल सकता हूँ? +### 2. What is the first step to create a subgraph? 
-नहीं। एक बार सबग्राफ बन जाने के बाद, नाम बदला नहीं जा सकता। अपना सबग्राफ बनाने से पहले इस पर सावधानी से विचार करना सुनिश्चित करें ताकि यह आसानी से खोजा जा सके और अन्य डैप द्वारा पहचाना जा सके। +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. क्या मैं अपने सबग्राफ से जुड़े GitHub खाते को बदल सकता हूँ? +### 3. Can I still create a subgraph if my smart contracts don't have events? -नहीं। एक बार सबग्राफ बन जाने के बाद, संबंधित GitHub खाते को बदला नहीं जा सकता। अपना सबग्राफ बनाने से पहले इस पर ध्यान से विचार करना सुनिश्चित करें। +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. अगर मेरे स्मार्ट कॉन्ट्रैक्ट में इवेंट नहीं हैं तो क्या मैं अब भी सबग्राफ बना सकता हूं? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -यह अत्यधिक अनुशंसा की जाती है कि आप अपने स्मार्ट अनुबंधों को उस डेटा से संबंधित घटनाओं के लिए तैयार करें, जिसे आप क्वेरी करने में रुचि रखते हैं। सबग्राफ में ईवेंट हैंडलर अनुबंध की घटनाओं से ट्रिगर होते हैं और उपयोगी डेटा को पुनः प्राप्त करने का सबसे तेज़ तरीका हैं। +### 4. क्या मैं अपने सबग्राफ से जुड़े GitHub खाते को बदल सकता हूँ? -यदि आप जिन अनुबंधों के साथ काम कर रहे हैं, उनमें घटनाएँ नहीं हैं, तो आपका सबग्राफ इंडेक्सिंग को ट्रिगर करने के लिए कॉल और ब्लॉक हैंडलर का उपयोग कर सकता है। हालांकि यह अनुशंसित नहीं है, क्योंकि प्रदर्शन काफी धीमा होगा। +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. क्या कई नेटवर्क के लिए एक ही नाम के साथ एक सबग्राफ को तैनात करना संभव है? +### 5. How do I update a subgraph on mainnet? -आपको कई नेटवर्क के लिए अलग-अलग नामों की आवश्यकता होगी। जबकि आपके पास एक ही नाम के तहत अलग-अलग सबग्राफ नहीं हो सकते हैं, कई नेटवर्क के लिए एक ही कोडबेस रखने के सुविधाजनक तरीके हैं। हमारे दस्तावेज़ में इस पर अधिक जानकारी प्राप्त करें: [Redeploying-a-subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. टेम्प्लेट डेटा स्रोतों से कैसे भिन्न हैं? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -टेम्प्लेट आपको तुरंत डेटा स्रोत बनाने की अनुमति देते हैं, जबकि आपका सबग्राफ इंडेक्स कर रहा होता है। यह मामला हो सकता है कि आपका अनुबंध नए अनुबंधों को जन्म देगा क्योंकि लोग इसके साथ बातचीत करते हैं, और चूंकि आप उन अनुबंधों (एबीआई, घटनाओं, आदि) के आकार को जानते हैं, इसलिए आप परिभाषित कर सकते हैं कि आप उन्हें एक टेम्पलेट में कैसे अनुक्रमित करना चाहते हैं और जब वे आपका सबग्राफ अनुबंध के पते की आपूर्ति करके एक गतिशील डेटा स्रोत बनाएगा। +आपको सबग्राफ को फिर से तैनात करना होगा, लेकिन अगर सबग्राफ आईडी (आईपीएफएस हैश) नहीं बदलता है, तो इसे शुरुआत से सिंक नहीं करना पड़ेगा। + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? 
+ +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +एक सबग्राफ के भीतर, घटनाओं को हमेशा उसी क्रम में संसाधित किया जाता है जिस क्रम में वे ब्लॉक में दिखाई देते हैं, भले ही वह कई अनुबंधों में हो या नहीं। + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. "डेटा स्रोत टेम्प्लेट को तत्काल बनाना" अनुभाग देखें: [डेटा स्रोत टेम्प्लेट](/developing/creating-a-subgraph#data-source-templates)। -## 8. मैं कैसे सुनिश्चित करूं कि मैं अपने स्थानीय परिनियोजन के लिए ग्राफ-नोड के नवीनतम संस्करण का उपयोग कर रहा हूं? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -आप निम्न आदेश चला सकते हैं: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**ध्यान दें:** docker / docker-compose हमेशा किसी भी ग्राफ-नोड संस्करण का उपयोग करेगा जिसे आपने पहली बार चलाया था, इसलिए यह सुनिश्चित करने के लिए ऐसा करना महत्वपूर्ण है कि आप नवीनतम संस्करण के साथ अद्यतित हैं ग्राफ-नोड का। +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. मैं अपने सबग्राफ मैपिंग से किसी अनुबंध फ़ंक्शन को कैसे कॉल करूं या किसी सार्वजनिक स्थिति चर तक कैसे पहुंचूं? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. क्या दो अनुबंधों के साथ `graph-cli` से `graph init` का उपयोग करके एक सबग्राफ सेट करना संभव है? या `graph init` चलाने के बाद मुझे `subgraph.yaml` में मैन्युअल रूप से एक और डेटा स्रोत जोड़ना चाहिए? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. 
You can also use `graph add` command to add new datasource. +आप निम्न आदेश चला सकते हैं: -## 11. मैं गिटहब मुद्दे में योगदान देना चाहता हूं या जोड़ना चाहता हूं। मुझे ओपन सोर्स रिपॉजिटरी कहां मिल सकती है? +```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. घटनाओं को संभालते समय एक इकाई के लिए "ऑटोजेनरेटेड" आईडी बनाने का अनुशंसित तरीका क्या है? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? यदि घटना के दौरान केवल एक इकाई बनाई जाती है और यदि कुछ भी बेहतर उपलब्ध नहीं है, तो लेन-देन हैश + लॉग इंडेक्स अद्वितीय होगा। आप इन्हें बाइट्स में परिवर्तित करके और फिर इसे `crypto.keccak256` के माध्यम से पाइप करके अस्पष्ट कर सकते हैं, लेकिन यह इसे और अधिक विशिष्ट नहीं बनाएगा। -## 13. एकाधिक अनुबंधों को सुनते समय, क्या घटनाओं को सुनने के लिए अनुबंध आदेश का चयन करना संभव है? +### 15. Can I delete my subgraph? -एक सबग्राफ के भीतर, घटनाओं को हमेशा उसी क्रम में संसाधित किया जाता है जिस क्रम में वे ब्लॉक में दिखाई देते हैं, भले ही वह कई अनुबंधों में हो या नहीं। +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +आप समर्थित नेटवर्क की सूची [यहां](/Developing/supported-networks) प्राप्त कर सकते हैं। + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? हाँ। नीचे दिए गए उदाहरण के अनुसार आप `ग्राफ़-टीएस` आयात करके ऐसा कर सकते हैं: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. क्या मैं अपने सबग्राफ मैपिंग में ethers.js या अन्य JS लाइब्रेरी आयात कर सकता हूँ? - -वर्तमान में नहीं, क्योंकि मैपिंग असेंबलीस्क्रिप्ट में लिखे गए हैं। इसका एक संभावित वैकल्पिक समाधान संस्थाओं में कच्चे डेटा को स्टोर करना और तर्क करना है जिसके लिए क्लाइंट पर जेएस पुस्तकालयों की आवश्यकता होती है। +## Indexing & Querying Related -## 17. क्या यह निर्दिष्ट करना संभव है कि किस ब्लॉक पर अनुक्रमण शुरू करना है? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. 
क्या इंडेक्सिंग के प्रदर्शन को बढ़ाने के लिए कुछ टिप्स हैं? मेरा सबग्राफ सिंक होने में काफी समय ले रहा है +### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync -हां, आपको उस ब्लॉक से अनुक्रमण शुरू करने के लिए वैकल्पिक स्टार्ट ब्लॉक सुविधा पर एक नज़र डालनी चाहिए जिसे अनुबंध तैनात किया गया था: [स्टार्ट ब्लॉक](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. क्या इंडेक्स किए गए नवीनतम ब्लॉक नंबर को निर्धारित करने के लिए सीधे सबग्राफ से पूछताछ करने का कोई तरीका है? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? हाँ! निम्न आदेश का प्रयास करें, "संगठन/सबग्राफनाम" को उस संगठन के साथ प्रतिस्थापित करें जिसके अंतर्गत वह प्रकाशित है और आपके सबग्राफ का नाम: @@ -102,44 +121,27 @@ Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the n curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. ग्राफ़ द्वारा कौन से नेटवर्क समर्थित हैं? - -आप समर्थित नेटवर्क की सूची [यहां](/Developing/supported-networks) प्राप्त कर सकते हैं। - -## 21. क्या किसी सबग्राफ को किसी अन्य खाते या समापन बिंदु पर पुन: नियोजित किए बिना डुप्लिकेट करना संभव है? - -आपको सबग्राफ को फिर से तैनात करना होगा, लेकिन अगर सबग्राफ आईडी (आईपीएफएस हैश) नहीं बदलता है, तो इसे शुरुआत से सिंक नहीं करना पड़ेगा। - -## 22. क्या ग्राफ-नोड के शीर्ष पर अपोलो फेडरेशन का उपयोग करना संभव है? +### 22. Is there a limit to how many objects The Graph can return per query? -फेडरेशन अभी समर्थित नहीं है, हालांकि हम भविष्य में इसका समर्थन करना चाहते हैं। इस समय, आप जो कुछ कर सकते हैं वह क्लाइंट पर या प्रॉक्सी सेवा के माध्यम से स्कीमा सिलाई का उपयोग कर रहा है। - -## 23. क्या इसकी कोई सीमा है कि ग्राफ़ प्रति क्वेरी कितने ऑब्जेक्ट लौटा सकता है? - -डिफ़ॉल्ट रूप से, क्वेरी प्रतिसाद प्रति संग्रह 100 आइटम तक सीमित हैं। यदि आप अधिक प्राप्त करना चाहते हैं, तो आप प्रति संग्रह 1000 आइटम तक जा सकते हैं और उससे अधिक, आप पृष्ठांकित कर सकते हैं: +By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -## 24. यदि मेरा डैप फ़्रंटएंड क्वेरी करने के लिए ग्राफ़ का उपयोग करता है, तो क्या मुझे अपनी क्वेरी कुंजी को सीधे फ़्रंटेंड में लिखने की आवश्यकता है? क्या होगा यदि हम उपयोगकर्ताओं के लिए क्वेरी शुल्क का भुगतान करते हैं - क्या दुर्भावनापूर्ण उपयोगकर्ता हमारी क्वेरी फीस बहुत अधिक होने का कारण बनेंगे? - -वर्तमान में, डैप के लिए अनुशंसित तरीका फ्रंटएंड में कुंजी जोड़ना और अंतिम उपयोगकर्ताओं के लिए इसे उजागर करना है। उस ने कहा, आप उस कुंजी को होस्टनाम तक सीमित कर सकते हैं, जैसे _yourdapp.io_ और सबग्राफ। गेटवे वर्तमान में Edge & द्वारा चलाया जा रहा है; नोड। गेटवे की जिम्मेदारी का हिस्सा अपमानजनक व्यवहार की निगरानी करना और दुर्भावनापूर्ण ग्राहकों से आने वाले ट्रैफ़िक को ब्लॉक करना है। - -## 25. Where do I go to find my current subgraph on the hosted service? - -Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? +### 23. 
If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -The Graph will never charge for the hosted service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The hosted service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/hi/developing/graph-ts/api.mdx b/website/pages/hi/developing/graph-ts/api.mdx index 7c27b5762589..ce96d8fba2c1 100644 --- a/website/pages/hi/developing/graph-ts/api.mdx +++ b/website/pages/hi/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: असेंबलीस्क्रिप्ट एपीआई --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). 
-यह पृष्ठ दस्तावेज करता है कि सबग्राफ मैपिंग लिखते समय किन अंतर्निहित एपीआई का उपयोग किया जा सकता है। बॉक्स से बाहर दो प्रकार के एपीआई उपलब्ध हैं: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## एपीआई संदर्भ @@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. -| संस्करण | रिलीज नोट्स | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| संस्करण | रिलीज नोट्स | +| :-----: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
`etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### अंतर्निहित प्रकार @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -अन्य संस्थाओं के साथ टकराव से बचने के लिए प्रत्येक इकाई के पास एक विशिष्ट आईडी होनी चाहिए। ईवेंट पैरामीटर के लिए एक अद्वितीय पहचानकर्ता शामिल करना काफी सामान्य है जिसका उपयोग किया जा सकता है। नोट: आईडी के रूप में लेन-देन हैश का उपयोग करना मानता है कि एक ही लेन-देन में कोई अन्य घटना इस हैश के साथ आईडी के रूप में संस्था नहीं बनाती है। +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Loading entities from the store @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### एक ब्लॉक के साथ बनाई गई संस्थाओं को देखना As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -स्टोर एपीआई उन संस्थाओं की पुनर्प्राप्ति की सुविधा देता है जो वर्तमान ब्लॉक में बनाई या अपडेट की गई थीं। इसके लिए एक विशिष्ट स्थिति यह है कि एक हैंडलर कुछ ऑन-चेन ईवेंट से लेन-देन बनाता है, और बाद में हैंडलर मौजूद होने पर इस लेनदेन तक पहुंचना चाहता है। ऐसे मामले में जहां लेन-देन मौजूद नहीं है, सबग्राफ को केवल यह पता लगाने के लिए डेटाबेस में जाना होगा कि इकाई मौजूद नहीं है; अगर सबग्राफ लेखक पहले से ही जानता है कि इकाई को उसी ब्लॉक में बनाया जाना चाहिए, तो loadInBlock का उपयोग करके इस डेटाबेस राउंडट्रिप से बचा जाता है। कुछ सबग्राफ के लिए, ये छूटे हुए लुकअप इंडेक्सिंग समय में महत्वपूर्ण योगदान दे सकते हैं। +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some subgraphs, these missed lookups can contribute significantly to the indexing time. 
```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ As long as the `ERC20Contract` on Ethereum has a public read-only function calle #### रिवर्टेड कॉल्स को हैंडल करना -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -ध्यान दें कि Geth या Infura क्लाइंट से जुड़ा एक ग्राफ़ नोड सभी रिवर्ट का पता नहीं लगा सकता है, अगर आप इस पर भरोसा करते हैं तो हम पैरिटी क्लाइंट से जुड़े ग्राफ़ नोड का उपयोग करने की सलाह देते हैं। +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### एन्कोडिंग/डिकोडिंग एबीआई @@ -586,11 +595,7 @@ The `log` API includes the following functions: The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. ```typescript -log.info('संदेश प्रदर्शित किया जाना है: {}, {}, {}', [ - value.toString(), - OtherValue.toString(), - 'पहले से ही एक स्ट्रिंग', -]) +log.info ('संदेश प्रदर्शित किया जाना है: {}, {}, {}', [value.toString (), OtherValue.toString (), 'पहले से ही एक स्ट्रिंग']) ``` #### एक या अधिक मान लॉग करना diff --git a/website/pages/hi/developing/supported-networks.mdx b/website/pages/hi/developing/supported-networks.mdx index 7c2d8d858261..797202065e99 100644 --- a/website/pages/hi/developing/supported-networks.mdx +++ b/website/pages/hi/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). 
diff --git a/website/pages/hi/developing/unit-testing-framework.mdx b/website/pages/hi/developing/unit-testing-framework.mdx index 7f07a77b550c..9b3cd145d997 100644 --- a/website/pages/hi/developing/unit-testing-framework.mdx +++ b/website/pages/hi/developing/unit-testing-framework.mdx @@ -411,7 +411,7 @@ describe('handleUpdatedGravatars', () => { उदाहरण: -प्रत्येक परीक्षण के बाद `afterEach` के अंदर का कोड निष्पादित होगा। +प्रत्येक परीक्षण के बाद ` afterEach ` के अंदर का कोड निष्पादित होगा। ```typescript import { describe, test, beforeEach, afterEach } from "matchstick-as/assembly/index" @@ -450,7 +450,7 @@ describe("handleUpdatedGravatar", () => { }) ``` -उस वर्णन में प्रत्येक परीक्षण के बाद `afterEach` के अंदर का कोड निष्पादित होगा +उस वर्णन में प्रत्येक परीक्षण के बाद ` afterEach ` के अंदर का कोड निष्पादित होगा ```typescript import { describe, test, beforeEach, afterEach } from "matchstick-as/assembly/index" @@ -1368,18 +1368,18 @@ Global test coverage: 22.2% (2/9 handlers). > गंभीर: संदर्भ के साथ मान्य मॉड्यूल से WasmInstance नहीं बना सका: अज्ञात आयात: wasi_snapshot_preview1::fd_write परिभाषित नहीं किया गया है -इसका अर्थ है कि आपने अपने कोड में `console.log` का उपयोग किया है, जो कि असेंबलीस्क्रिप्ट द्वारा समर्थित नहीं है। कृपया [लॉगिंग API](/Developing/assemblyscript-api/#logging-api) का उपयोग करने पर विचार करें +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > त्रुटि TS2554: अपेक्षित? तर्क, लेकिन मिला ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > त्रुटि TS2554: अपेक्षित? तर्क, लेकिन मिला ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) तर्कों में बेमेल `ग्राफ़-टीएस` और `मैचस्टिक-एज़` में बेमेल होने के कारण होता है। इस तरह की समस्याओं को ठीक करने का सबसे अच्छा तरीका है कि सभी चीज़ों को नवीनतम रिलीज़ किए गए संस्करण में अपडेट कर दिया जाए. diff --git a/website/pages/hi/glossary.mdx b/website/pages/hi/glossary.mdx index aa9f07691bff..01c18acfc249 100644 --- a/website/pages/hi/glossary.mdx +++ b/website/pages/hi/glossary.mdx @@ -10,11 +10,9 @@ title: शब्दकोष - **समाप्ति बिंदु**: एक URL जिसका उपयोग किसी सबग्राफ को क्वेरी करने के लिए किया जा सकता है। सबग्राफ स्टूडियो के लिए टेस्टिंग एंडपॉइंट `https://api.studio.thegraph.com/query///` है और ग्राफ एक्सप्लोरर एंडपॉइंट `https है: //gateway.thegraph.com/api//subgraphs/id/`। ग्राफ़ एक्सप्लोरर समापन बिंदु का उपयोग ग्राफ़ के विकेन्द्रीकृत नेटवर्क पर उप-अनुच्छेदों को क्वेरी करने के लिए किया जाता है। -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. 
Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **इंडेक्सर्स**: नेटवर्क प्रतिभागी जो ब्लॉकचेन से डेटा को इंडेक्स करने के लिए इंडेक्सिंग नोड्स चलाते हैं और ग्राफक्यूएल क्वेरीज सर्व करते हैं। +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **इंडेक्सर रेवेन्यू स्ट्रीम**: इंडेक्सर्स को जीआरटी में दो घटकों के साथ पुरस्कृत किया जाता है: क्वेरी शुल्क छूट और इंडेक्सिंग पुरस्कार। @@ -24,17 +22,17 @@ title: शब्दकोष - **इंडेक्सर का सेल्फ स्टेक**: GRT की वह राशि जो इंडेक्सर्स विकेंद्रीकृत नेटवर्क में भाग लेने के लिए दांव पर लगाते हैं। न्यूनतम 100,000 जीआरटी है, और कोई ऊपरी सीमा नहीं है। -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **डेलीगेटर्स**: नेटवर्क प्रतिभागी जो GRT के मालिक हैं और अपने GRT को इंडेक्सर्स को सौंपते हैं। यह इंडेक्सर्स को नेटवर्क पर सबग्राफ में अपनी हिस्सेदारी बढ़ाने की अनुमति देता है। बदले में, डेलिगेटर्स को इंडेक्सिंग रिवॉर्ड्स का एक हिस्सा मिलता है जो इंडेक्सर्स को सबग्राफ प्रोसेसिंग के लिए मिलता है। +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **प्रत्यायोजन कर**: प्रतिनिधि द्वारा 0.5% शुल्क का भुगतान किया जाता है, जब वे अनुक्रमणकों को GRT प्रत्यायोजित करते हैं. शुल्क का भुगतान करने के लिए प्रयुक्त GRT जल गया है। -- **क्यूरेटर**: नेटवर्क प्रतिभागी जो उच्च-गुणवत्ता वाले सबग्राफ की पहचान करते हैं, और क्यूरेशन शेयरों के बदले उन्हें "क्यूरेट" करते हैं (यानी, उन पर जीआरटी का संकेत देते हैं)। जब इंडेक्सर्स एक सबग्राफ पर क्वेरी फीस का दावा करते हैं, तो उस सबग्राफ के क्यूरेटर को 10% वितरित किया जाता है। इंडेक्सर्स एक सबग्राफ पर संकेत के अनुपात में इंडेक्सिंग पुरस्कार अर्जित करते हैं। हम संकेतित GRT की मात्रा और किसी सबग्राफ को अनुक्रमणित करने वाले अनुक्रमणकों की संख्या के बीच सहसंबंध देखते हैं। +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **क्यूरेशन टैक्स**: क्यूरेटर द्वारा सबग्राफ पर GRT का संकेत देने पर 1% शुल्क का भुगतान किया जाता है। शुल्क का भुगतान करने के लिए प्रयुक्त GRT जल गया है। -- **सबग्राफ उपभोक्ता**: कोई भी एप्लिकेशन या उपयोगकर्ता जो सबग्राफ पर सवाल उठाता है। +- **Data Consumer**: Any application or user that queries a subgraph. - **सबग्राफ डेवलपर**: एक डेवलपर जो ग्राफ़ के विकेंद्रीकृत नेटवर्क के लिए एक सबग्राफ़ बनाता और तैनात करता है। @@ -46,11 +44,11 @@ title: शब्दकोष 1. 
**सक्रिय**: एक आवंटन को तब सक्रिय माना जाता है जब इसे ऑन-चेन बनाया जाता है। इसे ओपनिंग आबंटन कहा जाता है, और यह नेटवर्क को इंगित करता है कि इंडेक्सर सक्रिय रूप से अनुक्रमण कर रहा है और किसी विशेष सबग्राफ के लिए प्रश्नों की सेवा कर रहा है। सक्रिय आबंटन उप-अनुच्छेद पर संकेत के अनुपात में अनुक्रमित पुरस्कार अर्जित करते हैं, और आवंटित जीआरटी की राशि। - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **सबग्राफ स्टूडियो**: सबग्राफ बनाने, लगाने और प्रकाशित करने के लिए एक शक्तिशाली डैप। -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. 
@@ -62,11 +60,11 @@ title: शब्दकोष - **GRT**: ग्राफ़ का कार्य उपयोगिता टोकन। जीआरटी नेटवर्क प्रतिभागियों को नेटवर्क में योगदान करने के लिए आर्थिक प्रोत्साहन प्रदान करता है। -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **ग्राफ़ नोड**: ग्राफ़ नोड वह घटक है जो सबग्राफ़ को अनुक्रमित करता है, और परिणामी डेटा को ग्राफ़क्यूएल एपीआई के माध्यम से क्वेरी के लिए उपलब्ध कराता है। इस तरह यह इंडेक्सर स्टैक के लिए केंद्रीय है, और एक सफल इंडेक्सर चलाने के लिए ग्राफ नोड का सही संचालन महत्वपूर्ण है। +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **इंडेक्सर एजेंट**: इंडेक्सर एजेंट इंडेक्सर स्टैक का हिस्सा है। यह श्रृंखला पर अनुक्रमणिका के अन्योन्यक्रियाओं की सुविधा प्रदान करता है, जिसमें नेटवर्क पर पंजीकरण, इसके ग्राफ़ नोड(ओं) में सबग्राफ परिनियोजन प्रबंधित करना और आवंटन प्रबंधित करना शामिल है। +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **द ग्राफ़ क्लाइंट**: विकेंद्रीकृत तरीके से ग्राफ़कॉल-आधारित डैप बनाने के लिए एक लाइब्रेरी। @@ -78,10 +76,6 @@ title: शब्दकोष - **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. -- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network. - -- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. 
diff --git a/website/pages/hi/index.json b/website/pages/hi/index.json index 91d40099e248..a0eb922b9ea7 100644 --- a/website/pages/hi/index.json +++ b/website/pages/hi/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "एक सबग्राफ बनाएं", "description": "सबग्राफ बनाने के लिए स्टूडियो का प्रयोग करें" - }, - "migrateFromHostedService": { - "title": "Upgrade from the hosted service", - "description": "Upgrading subgraphs to The Graph Network" } }, "networkRoles": { @@ -60,10 +56,6 @@ "graphExplorer": { "title": "Graph Explorer", "description": "Explore subgraphs and interact with the protocol" - }, - "hostedService": { - "title": "Hosted Service", - "description": "Create and explore subgraphs on the hosted service" } } }, diff --git a/website/pages/hi/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/hi/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..b5cd0449a682 --- /dev/null +++ b/website/pages/hi/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## सबग्राफ का स्वामित्व स्थानांतरित करना + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. + +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-address +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + +   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2. Choose the address that you would like to transfer the subgraph to: + +   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) + +Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: + +![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) + +## Deprecating a subgraph + +Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. + +### Step-by-Step + +To deprecate your subgraph, do the following: + +1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). +2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. +3. Your subgraph will no longer appear in searches on Graph Explorer. + +**Please note the following:** + +- The owner's wallet should call the `deprecateSubgraph` function. +- क्यूरेटर अब सबग्राफ पर संकेत नहीं दे पाएंगे। +- Curators that already signaled on the subgraph can withdraw their signal at an average share price. +- Deprecated subgraphs will show an error message. + +> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively.
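If you prefer to script the deprecation call instead of using Arbiscan's write interface, the sketch below shows one way to do it with ethers.js (v6 assumed). Only the contract address and the `deprecateSubgraph` function name come from the steps above; the one-line ABI and the `deprecate` helper are illustrative assumptions, not an official SDK.

```typescript
import { Contract, Signer } from "ethers"

// Address from the Arbiscan link above (Arbitrum One). The minimal ABI below is an
// assumption that covers only the single function named in this guide.
const GNS_PROXY = "0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec"
const gnsAbi = ["function deprecateSubgraph(uint256 _subgraphID)"]

// Hypothetical helper: the transaction must be sent from the subgraph owner's wallet.
async function deprecate(subgraphId: bigint, owner: Signer): Promise<void> {
  const gns = new Contract(GNS_PROXY, gnsAbi, owner)
  const tx = await gns.deprecateSubgraph(subgraphId)
  await tx.wait() // once mined, the subgraph stops appearing in Graph Explorer searches
}
```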
diff --git a/website/pages/hi/mips-faqs.mdx b/website/pages/hi/mips-faqs.mdx index de45376c7e5c..d38469e543e6 100644 --- a/website/pages/hi/mips-faqs.mdx +++ b/website/pages/hi/mips-faqs.mdx @@ -6,10 +6,6 @@ title: एमआईपी अक्सर पूछे जाने वाले > Note: the MIPs program is closed as of May 2023. Thank you to all the Indexers who participated! -It's an exciting time to be participating in The Graph ecosystem! During [Graph Day 2022](https://thegraph.com/graph-day/2022/) Yaniv Tal announced the [sunsetting of the hosted service](https://thegraph.com/blog/sunsetting-hosted-service/), a moment The Graph ecosystem has been working towards for many years. - -To support the sunsetting of the hosted service and the migration of all of it's activity to the decentralized network, The Graph Foundation has announced the [Migration Infrastructure Providers (MIPs) program](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program). - The MIPs program is an incentivization program for Indexers to support them with resources to index chains beyond Ethereum mainnet and help The Graph protocol expand the decentralized network into a multi-chain infrastructure layer. The MIPs program has allocated 0.75% of the GRT supply (75M GRT), with 0.5% to reward Indexers who contribute to bootstrapping the network and 0.25% allocated to Network Grants for subgraph developers using multi-chain subgraphs. diff --git a/website/pages/hi/network/benefits.mdx b/website/pages/hi/network/benefits.mdx index 910f3e18dff7..09fa69e7893d 100644 --- a/website/pages/hi/network/benefits.mdx +++ b/website/pages/hi/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| लागत तुलना | स्वयं होस्ट किया गया | The Graph Network | -| :-: | :-: | :-: | -| मासिक सर्वर लागत\* | $350 प्रति माह | $0 | -| पूछताछ लागत | $0+ | $0 per month | -| इंजीनियरिंग का समय | $ 400 प्रति माह | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | -| प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | 100,000 (Free Plan) | -| लागत प्रति क्वेरी | $0 | $0 | -| आधारभूत संरचना | केंद्रीकृत | विकेन्द्रीकृत | -| भौगोलिक अतिरेक | $750+ प्रति अतिरिक्त नोड | शामिल | -| अपटाइम | भिन्न | 99.9%+ | -| कुल मासिक लागत | $750+ | $0 | +| लागत तुलना | स्वयं होस्ट किया गया | The Graph Network | +|:------------------------------:|:---------------------------------------:|:----------------------------------------------------------------------:| +| मासिक सर्वर लागत\* | $350 प्रति माह | $0 | +| पूछताछ लागत | $0+ | $0 per month | +| इंजीनियरिंग का समय | $ 400 प्रति माह | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | +| प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | 100,000 (Free Plan) | +| लागत प्रति क्वेरी | $0 | $0 | +| आधारभूत संरचना | केंद्रीकृत | विकेन्द्रीकृत | +| भौगोलिक अतिरेक | $750+ प्रति अतिरिक्त नोड | शामिल | +| अपटाइम | भिन्न | 99.9%+ | +| कुल मासिक लागत | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| लागत तुलना | स्वयं होस्ट किया गया | The Graph Network | -| :-: | :-: | :-: | -| मासिक सर्वर लागत\* | $350 प्रति माह | $0 | -| पूछताछ लागत | $ 500 प्रति माह | $120 per month | -| इंजीनियरिंग का समय | $800 प्रति माह | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | -| प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | ~3,000,000 | -| लागत प्रति क्वेरी | $0 | $0.00004 | -| आधारभूत संरचना | केंद्रीकृत | विकेन्द्रीकृत | -| इंजीनियरिंग खर्च | $ 200 प्रति घंटा | शामिल | 
-| भौगोलिक अतिरेक | प्रति अतिरिक्त नोड कुल लागत में $1,200 | शामिल | -| अपटाइम | भिन्न | 99.9%+ | -| कुल मासिक लागत | $1,650+ | $120 | +| लागत तुलना | स्वयं होस्ट किया गया | The Graph Network | +|:------------------------------:|:------------------------------------------:|:----------------------------------------------------------------------:| +| मासिक सर्वर लागत\* | $350 प्रति माह | $0 | +| पूछताछ लागत | $ 500 प्रति माह | $120 per month | +| इंजीनियरिंग का समय | $800 प्रति माह | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | +| प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | ~3,000,000 | +| लागत प्रति क्वेरी | $0 | $0.00004 | +| आधारभूत संरचना | केंद्रीकृत | विकेन्द्रीकृत | +| इंजीनियरिंग खर्च | $ 200 प्रति घंटा | शामिल | +| भौगोलिक अतिरेक | प्रति अतिरिक्त नोड कुल लागत में $1,200 | शामिल | +| अपटाइम | भिन्न | 99.9%+ | +| कुल मासिक लागत | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| लागत तुलना | स्वयं होस्ट किया गया | The Graph Network | -| :-: | :-: | :-: | -| मासिक सर्वर लागत\* | $1100 प्रति माह, प्रति नोड | $0 | -| पूछताछ लागत | $4000 | $1,200 per month | -| आवश्यक नोड्स की संख्या | 10 | Not applicable | -| इंजीनियरिंग का समय | $6,000 or more per month | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | -| प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | ~30,000,000 | -| लागत प्रति क्वेरी | $0 | $0.00004 | -| आधारभूत संरचना | केंद्रीकृत | विकेन्द्रीकृत | -| भौगोलिक अतिरेक | प्रति अतिरिक्त नोड कुल लागत में $1,200 | शामिल | -| अपटाइम | भिन्न | 99.9%+ | -| कुल मासिक लागत | $11,000+ | $1,200 | +| लागत तुलना | स्वयं होस्ट किया गया | The Graph Network | +|:------------------------------:|:-------------------------------------------:|:----------------------------------------------------------------------:| +| मासिक सर्वर लागत\* | $1100 प्रति माह, प्रति नोड | $0 | +| पूछताछ लागत | $4000 | $1,200 per month | +| आवश्यक नोड्स की संख्या | 10 | Not applicable | +| इंजीनियरिंग का समय | $6,000 or more per month | कोई नहीं, विश्व स्तर पर वितरित इंडेक्सर्स के साथ नेटवर्क में बनाया गया | +| प्रति माह प्रश्न | इन्फ्रा क्षमताओं तक सीमित | ~30,000,000 | +| लागत प्रति क्वेरी | $0 | $0.00004 | +| आधारभूत संरचना | केंद्रीकृत | विकेन्द्रीकृत | +| भौगोलिक अतिरेक | प्रति अतिरिक्त नोड कुल लागत में $1,200 | शामिल | +| अपटाइम | भिन्न | 99.9%+ | +| कुल मासिक लागत | $11,000+ | $1,200 | \*बैकअप की लागत सहित: $50-$100 प्रति माह diff --git a/website/pages/hi/network/curating.mdx b/website/pages/hi/network/curating.mdx index e9bf6371f0c3..3d50a73052be 100644 --- a/website/pages/hi/network/curating.mdx +++ b/website/pages/hi/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. 
Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Within the Curator tab in Graph Explorer, curators will be able to signal and un अपने सिग्नल को स्वचालित रूप से नवीनतम उत्पादन बिल्ड में माइग्रेट करना यह सुनिश्चित करने के लिए मूल्यवान हो सकता है कि आप क्वेरी शुल्क अर्जित करते रहें। हर बार जब आप क्यूरेट करते हैं, तो 1% क्यूरेशन टैक्स लगता है। आप हर माइग्रेशन पर 0.5% क्यूरेशन टैक्स भी देंगे। सबग्राफ डेवलपर्स को बार-बार नए संस्करण प्रकाशित करने से हतोत्साहित किया जाता है - उन्हें सभी ऑटो-माइग्रेटेड क्यूरेशन शेयरों पर 0.5% क्यूरेशन टैक्स देना पड़ता है। -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## जोखिम 1. क्वेरी बाजार द ग्राफ में स्वाभाविक रूप से युवा है और इसमें जोखिम है कि नवजात बाजार की गतिशीलता के कारण आपका %APY आपकी अपेक्षा से कम हो सकता है। -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. 
Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. बग के कारण सबग्राफ विफल हो सकता है। एक विफल सबग्राफ क्वेरी शुल्क अर्जित नहीं करता है। नतीजतन, आपको तब तक इंतजार करना होगा जब तक कि डेवलपर बग को ठीक नहीं करता है और एक नया संस्करण तैनात करता है। - यदि आपने सबग्राफ के नवीनतम संस्करण की सदस्यता ली है, तो आपके शेयर उस नए संस्करण में स्वत: माइग्रेट हो जाएंगे। इस पर 0.5% क्यूरेशन टैक्स लगेगा। @@ -65,7 +65,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th उच्च-गुणवत्ता वाले सबग्राफ ढूँढना एक जटिल कार्य है, लेकिन इसे कई अलग-अलग तरीकों से संपर्क किया जा सकता है। क्यूरेटर के रूप में, आप भरोसेमंद सबग्राफ देखना चाहते हैं जो क्वेरी वॉल्यूम बढ़ा रहे हैं। एक भरोसेमंद सबग्राफ मूल्यवान हो सकता है यदि यह पूर्ण, सटीक है और डीएपी की डेटा जरूरतों का समर्थन करता है। खराब ढंग से तैयार किए गए सबग्राफ को संशोधित करने या फिर से प्रकाशित करने की आवश्यकता हो सकती है, और यह विफल भी हो सकता है। सबग्राफ मूल्यवान है या नहीं, इसका आकलन करने के लिए क्यूरेटर के लिए सबग्राफ के आर्किटेक्चर या कोड की समीक्षा करना महत्वपूर्ण है। नतीजतन: -- क्यूरेटर नेटवर्क की अपनी समझ का उपयोग करके यह अनुमान लगाने की कोशिश कर सकते हैं कि कैसे एक व्यक्तिगत सबग्राफ भविष्य में उच्च या निम्न क्वेरी मात्रा उत्पन्न कर सकता है। +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. What’s the cost of updating a subgraph? @@ -78,50 +78,14 @@ It’s suggested that you don’t update your subgraphs too frequently. See the ### 5. क्या मैं अपने क्यूरेशन शेयर बेच सकता हूँ? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. 
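As a rough illustration of the 1% curation tax described above, the sketch below restates the arithmetic in TypeScript. It is illustrative only; the constant and function are hypothetical and not part of any protocol SDK.

```typescript
const CURATION_TAX = 0.01 // 1% of the GRT used to signal is burned

// Signaling 1,000 GRT burns 10 GRT, so 990 GRT ends up as signal on the subgraph.
function signalAfterTax(grtSignaled: number): number {
  return grtSignaled * (1 - CURATION_TAX)
}

// On Arbitrum, unsignaling later returns the amount you deposited minus this tax.
const returnedOnArbitrum = signalAfterTax(1_000) // 990 GRT
```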
-## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## बॉन्डिंग कर्व 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![मूल्य प्रति शेयर](/img/price-per-share.png) - -नतीजतन, मूल्य रैखिक रूप से बढ़ता है, जिसका अर्थ है कि समय के साथ शेयर खरीदना अधिक महंगा हो जाएगा। यहाँ एक उदाहरण है कि हमारा क्या मतलब है, नीचे बॉन्डिंग कर्व देखें: - -![बंधन वक्र](/img/bonding-curve.png) - -विचार करें कि हमारे पास दो क्यूरेटर हैं जो मिंट एक सबग्राफ के लिए शेयर करते हैं: - -- क्यूरेटर ए सबसे पहले सबग्राफ पर संकेत देता है। कर्व में 120,000 जीआरटी जोड़कर, वे 2000 शेयरों का खनन करने में सक्षम हैं। -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- चूंकि दोनों क्यूरेटर कुल क्यूरेशन शेयरों का आधा हिस्सा रखते हैं, इसलिए उन्हें क्यूरेटर रॉयल्टी की समान राशि प्राप्त होगी। -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- शेष क्यूरेटर अब उस सबग्राफ के लिए सभी क्यूरेटर रॉयल्टी प्राप्त करेंगे। यदि वे GRT निकालने के लिए अपने शेयर जलाते हैं, तो उन्हें 120,000 GRT प्राप्त होंगे। -- **TLDR:** क्यूरेशन शेयरों का GRT मूल्यांकन बॉन्डिंग कर्व द्वारा निर्धारित किया जाता है और अस्थिर हो सकता है। बड़ा नुकसान होने की संभावना है। जल्दी संकेत देने का मतलब है कि आप प्रत्येक शेयर के लिए कम जीआरटी डालते हैं। विस्तार से, इसका मतलब है कि आप उसी सबग्राफ के लिए बाद के क्यूरेटरों की तुलना में प्रति जीआरटी अधिक क्यूरेटर रॉयल्टी अर्जित करते हैं। - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. 
In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -द ग्राफ़ के मामले में, [बैंकर का बॉन्डिंग कर्व फ़ॉर्मूला लागू करना](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) लीवरेज्ड है। - अभी भी उलझन में? नीचे हमारे क्यूरेशन वीडियो गाइड देखें: diff --git a/website/pages/hi/network/delegating.mdx b/website/pages/hi/network/delegating.mdx index a9785500c51a..32a08b134eb9 100644 --- a/website/pages/hi/network/delegating.mdx +++ b/website/pages/hi/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegating --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## प्रतिनिधि गाइड -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,64 +34,86 @@ There are three sections in this guide: प्रतिनिधियों को खराब व्यवहार के लिए कम नहीं किया जा सकता है, लेकिन खराब निर्णय लेने को हतोत्साहित करने के लिए प्रतिनिधियों पर एक कर है जो नेटवर्क की अखंडता को नुकसान पहुंचा सकता है। -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. 
For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### प्रतिनिधिमंडल बंधन अवधि Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
- ![डेलिगेशन अनबॉन्डिंग](/img/Delegation-Unbonding.png) _डेलीगेशन UI में 0.5% शुल्क और साथ ही 28 दिन पर ध्यान दें बंधन - अवधि._ + ![डेलिगेशन अनबॉन्डिंग](/img/Delegation-Unbonding.png) _डेलीगेशन UI में 0.5% शुल्क और साथ ही 28 दिन पर ध्यान दें + बंधन अवधि._
### डेलीगेटर्स के लिए उचित इनाम भुगतान के साथ एक भरोसेमंद इंडेक्सर चुनना -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
- ![इंडेक्सिंग रिवॉर्ड कट](/img/Indexing-Reward-Cut.png) *शीर्ष इंडेक्सर डेलीगेटर्स को पुरस्कारों का 90% दे रहा है। बीच - वाला प्रतिनिधि को 20% दे रहा है। नीचे वाला डेलिगेटरों को ~83% दे रहा है।* + ![इंडेक्सिंग रिवॉर्ड कट](/img/Indexing-Reward-Cut.png) *शीर्ष इंडेक्सर डेलीगेटर्स को पुरस्कारों का 90% दे रहा है। + बीच वाला प्रतिनिधि को 20% दे रहा है। नीचे वाला डेलिगेटरों को ~83% दे रहा है।*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant. +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. + +As you can see, in order to choose the right Indexer, you must consider multiple things. -As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. +- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently. +- Many Indexers are very active in Discord and will be happy to answer your questions. +- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. -### डेलीगेटर्स की अपेक्षित वापसी की गणना +## Calculating Delegators' Expected Return -A Delegator must consider a lot of factors when determining the return. These include: +A Delegator must consider the following factors to determine a return: -- एक तकनीकी डेलीगेटर इंडेक्सर की उनके लिए उपलब्ध प्रत्यायोजित टोकन का उपयोग करने की क्षमता को भी देख सकता है। यदि कोई इंडेक्सर उपलब्ध सभी टोकन आवंटित नहीं कर रहा है, तो वे स्वयं या उनके प्रतिनिधियों के लिए अधिकतम लाभ अर्जित नहीं कर रहे हैं। -- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Consider an Indexer's ability to use the Delegated tokens available to them. + - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Pay attention to the first few days of delegating. + - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low.
### प्रश्न शुल्क में कटौती और अनुक्रमण शुल्क में कटौती को ध्यान में रखते हुए -जैसा कि ऊपर दिए गए अनुभागों में बताया गया है, आपको एक ऐसा इंडेक्सर चुनना चाहिए जो पारदर्शी हो और उनके प्रश्न शुल्क कट और इंडेक्सिंग शुल्क कटौती को सेट करने के बारे में ईमानदार हो। प्रतिनिधि को यह देखने के लिए कि उनके पास कितना समय बफर है, पैरामीटर्स कूलडाउन समय को भी देखना चाहिए। उसके बाद, प्रतिनिधियों को मिलने वाले पुरस्कारों की मात्रा की गणना करना काफी सरल है। सूत्र है: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![प्रतिनिधिमंडल छवि 3](/img/Delegation-Reward-Formula.png) ### अनुक्रमणिका के प्रतिनिधिमंडल पूल को ध्यान में रखते हुए -एक अन्य बात पर एक प्रतिनिधि को विचार करना होता है कि वह प्रतिनिधिमंडल पूल का कितना अनुपात रखता है। सभी प्रतिनिधि पुरस्कारों को समान रूप से साझा किया जाता है, पूल के एक साधारण पुनर्संतुलन के साथ, प्रतिनिधि द्वारा पूल में जमा की गई राशि द्वारा निर्धारित किया जाता है। यह प्रतिनिधि को पूल का एक हिस्सा देता है: +Delegators should consider the proportion of the Delegation Pool they own. -![साझा सूत्र](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![साझा सूत्र](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### प्रतिनिधिमंडल की क्षमता को ध्यान में रखते हुए -एक और बात पर विचार करना प्रतिनिधिमंडल की क्षमता है। वर्तमान में, प्रत्यायोजन अनुपात 16 पर सेट है। इसका अर्थ है कि यदि किसी अनुक्रमणिका ने 1,000,000 GRT दांव पर लगा दिया है, तो उनकी प्रत्यायोजन क्षमता प्रत्यायोजित टोकन की 16,000,000 GRT है जिसे वे प्रोटोकॉल में उपयोग कर सकते हैं। इस राशि से अधिक का कोई भी प्रत्यायोजित टोकन सभी प्रतिनिधि पुरस्कारों को कम कर देगा। +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. 
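To make the figures on this page concrete (the 0.5% delegation tax and the delegation ratio of 16), the sketch below works through the same examples in TypeScript. It is illustrative only, not a protocol API, and the daily reward rate used in the break-even estimate is a hypothetical input you would have to estimate yourself.

```typescript
const DELEGATION_TAX = 0.005 // 0.5% burned each time you delegate
const DELEGATION_RATIO = 16 // delegation capacity = 16 x the Indexer's self-stake

// Delegating 1,000 GRT burns 5 GRT, leaving 995 GRT actually delegated.
function grtAfterDelegationTax(delegated: number): number {
  return delegated * (1 - DELEGATION_TAX)
}

// Hypothetical break-even estimate: days of rewards needed to earn back the tax.
function daysToRecoverTax(delegated: number, estimatedDailyRewards: number): number {
  return (delegated * DELEGATION_TAX) / estimatedDailyRewards
}

// Delegation above an Indexer's capacity earns nothing extra.
function usableDelegation(selfStake: number, totalDelegated: number): number {
  return Math.min(totalDelegated, selfStake * DELEGATION_RATIO)
}

// Example from this page: a 1,000,000 GRT self-stake gives 16,000,000 GRT of capacity,
// so 100,000,000 GRT of delegation leaves 84,000,000 GRT earning nothing.
const unusedDelegation = 100_000_000 - usableDelegation(1_000_000, 100_000_000) // 84,000,000
```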
@@ -89,16 +121,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### मेटामास्क "लंबित लेन-देन" बग -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. + +#### उदाहरण -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. -For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will not be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transaction attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## नेटवर्क यूआई के लिए वीडियो गाइड +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/hi/network/developing.mdx b/website/pages/hi/network/developing.mdx index bb2091a777df..abc2f76e38c4 100644 --- a/website/pages/hi/network/developing.mdx +++ b/website/pages/hi/network/developing.mdx @@ -2,52 +2,88 @@ title: विकसित होना --- -डेवलपर्स द ग्राफ इकोसिस्टम के डिमांड साइड हैं। डेवलपर्स सबग्राफ बनाते हैं और उन्हें ग्राफ़ नेटवर्क पर प्रकाशित करते हैं। फिर, वे अपने अनुप्रयोगों को शक्ति प्रदान करने के लिए ग्राफकलाइन के साथ लाइव सबग्राफ को क्वेरी करते हैं। +To start coding right away, go to [Developer Quick Start](/quick-start/). + +## अवलोकन + +As a developer, you need data to build and power your dapp.
Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. + +On The Graph, you can: + +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. + +### What is GraphQL? + +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? + +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `schema.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). +- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics of how to [create a subgraph](/developing/creating-a-subgraph/). ## सबग्राफ जीवनचक्र -नेटवर्क में परिनियोजित सबग्राफ का एक परिभाषित जीवनचक्र होता है। +Here is a general overview of a subgraph’s lifecycle: -### स्थानीय रूप से बनाएँ +![सबग्राफ जीवनचक्र](/img/subgraph-lifecycle.png) -जैसा कि सभी सबग्राफ विकास के साथ होता है, यह स्थानीय विकास और परीक्षण से शुरू होता है। डेवलपर उसी स्थानीय सेटअप का उपयोग कर सकते हैं, चाहे वे ग्राफ़ नेटवर्क, होस्ट की गई सेवा या स्थानीय ग्राफ़ नोड के लिए बना रहे हों, अपने निर्माण के लिए `ग्राफ़-क्ली` और `ग्राफ़-टीएस` का लाभ उठा रहे हों सबग्राफ। डेवलपर्स को अपने सबग्राफ की मजबूती में सुधार करने के लिए इकाई परीक्षण के लिए [मैचस्टिक](https://github.com/LimeChain/matchstick) जैसे उपकरणों का उपयोग करने के लिए प्रोत्साहित किया जाता है। +### स्थानीय रूप से बनाएँ -> फीचर और नेटवर्क सपोर्ट के मामले में द ग्राफ नेटवर्क पर कुछ बाधाएं हैं। केवल [समर्थित नेटवर्क](/developing/supported-networks) पर सबग्राफ इंडेक्सिंग पुरस्कार अर्जित करेंगे, और IPFS से डेटा प्राप्त करने वाले सबग्राफ भी योग्य नहीं हैं। +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Deploy to Subgraph Studio -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected.
- -### नेटवर्क पर प्रकाशित करें +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -जब डेवलपर अपने सबग्राफ से खुश होता है, तो वे इसे द ग्राफ़ नेटवर्क पर प्रकाशित कर सकते हैं। यह एक ऑन-चेन एक्शन है, जो सबग्राफ को पंजीकृत करता है ताकि यह इंडेक्सर्स द्वारा खोजा जा सके। प्रकाशित सबग्राफ में संबंधित एनएफटी होता है, जो तब आसानी से हस्तांतरणीय होता है। प्रकाशित सबग्राफ में मेटाडेटा जुड़ा हुआ है, जो अन्य नेटवर्क प्रतिभागियों को उपयोगी संदर्भ और जानकारी प्रदान करता है। +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. -### इंडेक्सिंग को प्रोत्साहित करने के लिए सिग्नल +### नेटवर्क पर प्रकाशित करें -प्रकाशित उपग्राफों को संकेत जोड़े बिना अनुक्रमणकों द्वारा उठाए जाने की संभावना नहीं है। सिग्नल किसी दिए गए सबग्राफ से जुड़ा GRT लॉक है, जो इंडेक्सर्स को इंगित करता है कि एक दिया गया सबग्राफ क्वेरी वॉल्यूम प्राप्त करेगा, और इसे प्रोसेस करने के लिए उपलब्ध इंडेक्सिंग रिवार्ड्स में भी योगदान देता है। इंडेक्सिंग को प्रोत्साहित करने के लिए सबग्राफ डेवलपर्स आमतौर पर अपने सबग्राफ में सिग्नल जोड़ते हैं। तीसरे पक्ष के क्यूरेटर किसी दिए गए सबग्राफ पर भी संकेत दे सकते हैं, अगर उन्हें लगता है कि सबग्राफ से क्वेरी वॉल्यूम बढ़ने की संभावना है। +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### पूछताछ & एप्लीकेशन का विकास +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -एक बार एक सबग्राफ इंडेक्सर्स द्वारा संसाधित किया गया है और पूछताछ के लिए उपलब्ध है, डेवलपर्स अपने अनुप्रयोगों में सबग्राफ का उपयोग करना शुरू कर सकते हैं। विकासकर्ता एक गेटवे के माध्यम से सबग्राफ को क्वेरी करते हैं, जो उनके प्रश्नों को एक इंडेक्सर को अग्रेषित करता है जिसने सबग्राफ को संसाधित किया है, जीआरटी में क्वेरी शुल्क का भुगतान करता है। +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Updating Subgraphs +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. 
-After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### पूछताछ & एप्लीकेशन का विकास -Once the Subgraph Developer is ready to update, they can initiate a transaction to point their subgraph at the new version. Updating the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### सबग्राफ का बहिष्कार करना +Learn more about [querying subgraphs](/querying/querying-the-graph/). -किसी बिंदु पर एक डेवलपर यह तय कर सकता है कि उन्हें अब प्रकाशित सबग्राफ की आवश्यकता नहीं है। उस बिंदु पर वे सबग्राफ को बहिष्कृत कर सकते हैं, जो किसी भी संकेतित जीआरटी को क्यूरेटर को लौटाता है। +### Updating Subgraphs -### विविध डेवलपर भूमिकाएँ +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -कुछ डेवलपर नेटवर्क पर पूर्ण सबग्राफ जीवनचक्र के साथ संलग्न होंगे, अपने स्वयं के सबग्राफ पर प्रकाशन, पूछताछ और पुनरावृति करेंगे। कुछ सबग्राफ विकास पर ध्यान केंद्रित कर सकते हैं, खुले एपीआई का निर्माण कर सकते हैं, जिस पर अन्य निर्माण कर सकते हैं। कुछ एप्लिकेशन केंद्रित हो सकते हैं, दूसरों द्वारा तैनात किए गए सबग्राफ को क्वेरी कर सकते हैं। +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### डेवलपर्स और नेटवर्क अर्थशास्त्र +### Deprecating & Transferring Subgraphs -Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is updated. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/hi/network/explorer.mdx b/website/pages/hi/network/explorer.mdx index 7732e274ec69..2b63819f0219 100644 --- a/website/pages/hi/network/explorer.mdx +++ b/website/pages/hi/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. 
+
+## Video Guide
+
+For a general overview of Graph Explorer, check out the video below:

 ## सबग्राफ

-First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name.
+After you finish deploying and publishing your subgraph in Subgraph Studio, click on the “Subgraphs” tab at the top of the navigation bar to access the following:
+
+- Your own finished subgraphs
+- Subgraphs published by others
+- The exact subgraph you want (based on the date created, signal amount, or name).

 ![एक्सप्लोरर छवि 1](/img/Subgraphs-Explorer-Landing.png)

-जब आप एक सबग्राफ में क्लिक करते हैं, तो आप खेल के मैदान में प्रश्नों का परीक्षण कर पाएंगे और सूचित निर्णय लेने के लिए नेटवर्क विवरण का लाभ उठाने में सक्षम होंगे। आप अपने खुद के सबग्राफ या दूसरों के सबग्राफ पर जीआरटी का संकेत देने में भी सक्षम होंगे ताकि इंडेक्सर्स को इसके महत्व और गुणवत्ता से अवगत कराया जा सके। यह महत्वपूर्ण है क्योंकि एक सबग्राफ पर संकेतन इसे अनुक्रमित करने के लिए प्रोत्साहित करता है, जिसका अर्थ है कि यह अंततः प्रश्नों को पूरा करने के लिए नेटवर्क पर सतह पर आ जाएगा।
+When you click into a subgraph, you will be able to do the following:
+
+- Test queries in the playground and be able to leverage network details to make informed decisions.
+- Signal GRT on your own subgraph or the subgraphs of others to make Indexers aware of its importance and quality.
+- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.

 ![एक्सप्लोरर छवि 2](/img/Subgraph-Details.png)

-प्रत्येक सबग्राफ के समर्पित पृष्ठ पर, कई विवरण सामने आते हैं। इसमे शामिल है:
+On each subgraph’s dedicated page, you can do the following:

 - सबग्राफ पर सिग्नल/अन-सिग्नल
 - चार्ट, वर्तमान परिनियोजन आईडी और अन्य मेटाडेटा जैसे अधिक विवरण देखें
@@ -31,26 +45,32 @@ First things first, if you just finished deploying and publishing your subgraph

 ## प्रतिभागियों

-इस टैब के भीतर, आपको उन सभी लोगों का एक विहंगम दृश्य मिलेगा जो नेटवर्क गतिविधियों में भाग ले रहे हैं, जैसे कि इंडेक्सर्स, डेलिगेटर्स और क्यूरेटर। नीचे, हम इसकी गहन समीक्षा करेंगे कि प्रत्येक टैब आपके लिए क्या मायने रखता है।
+This section provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.

 ### 1. इंडेक्सर्स

 ![एक्सप्लोरर छवि 4](/img/Indexer-Pane.png)

-इंडेक्सर्स से शुरू करते हैं। इंडेक्सर्स प्रोटोकॉल की रीढ़ हैं, वे हैं जो सबग्राफ पर दांव लगाते हैं, उन्हें इंडेक्स करते हैं, और सबग्राफ का उपभोग करने वाले किसी भी व्यक्ति को पूछताछ करते हैं। इंडेक्सर्स टेबल में, आप इंडेक्सर्स के डेलिगेशन पैरामीटर्स, उनकी हिस्सेदारी, उन्होंने प्रत्येक सबग्राफ में कितना दांव लगाया है, और उन्होंने क्वेरी फीस और इंडेक्सिंग रिवार्ड्स से कितना राजस्व कमाया है, यह देखने में सक्षम होंगे। नीचे गहरा गोता:
+Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
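The data surfaced in the Indexers table can also be retrieved with GraphQL. As a rough sketch (the entity and field names below are assumptions for illustration, not a guaranteed schema), a query for the top Indexers by stake might look like this:

```graphql
# Hedged sketch: list a few Indexers with their stake and earnings.
# Entity and field names are assumptions for illustration only.
{
  indexers(first: 5, orderBy: stakedTokens, orderDirection: desc) {
    id
    stakedTokens
    delegatedTokens
    allocatedTokens
    queryFeesCollected
    rewardsEarned
  }
}
```

This mirrors the columns described below: owned stake, delegated stake, allocated stake, query fees, and indexing rewards.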
-- क्वेरी शुल्क में कटौती - क्वेरी शुल्क का % छूट देता है कि इंडेक्सर डेलिगेटर के साथ विभाजित होने पर रखता है -- प्रभावी रिवार्ड कट - इंडेक्सिंग रिवॉर्ड कट डेलिगेशन पूल पर लागू होता है। यदि यह ऋणात्मक है, तो इसका अर्थ है कि अनुक्रमणक अपने पुरस्कारों का एक भाग दे रहा है। यदि यह धनात्मक है, तो इसका अर्थ है कि अनुक्रमणिका अपने कुछ पुरस्कार रख रहा है -- कूलडाउन शेष - शेष समय जब तक अनुक्रमणिका उपरोक्त प्रत्यायोजन पैरामीटरों को परिवर्तित नहीं कर सकता। जब इंडेक्सर्स अपने डेलिगेशन पैरामीटर्स को अपडेट करते हैं तो कूलडाउन पीरियड्स को इंडेक्सर्स द्वारा सेट किया जाता है -- स्वामित्व - यह इंडेक्सर की जमा हिस्सेदारी है, जिसे दुर्भावनापूर्ण या गलत व्यवहार के लिए घटाया जा सकता है -- प्रत्यायोजित - डेलिगेटर्स से हिस्सेदारी जिसे इंडेक्सर द्वारा आवंटित किया जा सकता है, लेकिन इसे घटाया नहीं जा सकता -- आबंटित - हिस्सेदारी जो इंडेक्सर सक्रिय रूप से उन सबग्राफ के लिए आवंटित कर रहे हैं जिन्हें वे इंडेक्स कर रहे हैं -- उपलब्ध प्रत्यायोजित क्षमता - प्रत्यायोजित हिस्सेदारी की राशि जो सूचकांककर्ता अति प्रत्यायोजित होने से पहले प्राप्त कर सकते हैं +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. + +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - अधिकतम प्रत्यायोजन क्षमता - प्रत्यायोजित हिस्सेदारी की अधिकतम राशि जिसे इंडेक्सर उत्पादक रूप से स्वीकार कर सकता है। आवंटन या पुरस्कार गणना के लिए एक अतिरिक्त प्रत्यायोजित हिस्सेदारी का उपयोग नहीं किया जा सकता है। -- प्रश्न शुल्क - यह वह कुल शुल्क है जो अंतिम उपयोगकर्ताओं ने एक अनुक्रमणिका से प्रश्नों के लिए भुगतान किया है +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - इंडेक्सर रिवार्ड्स - यह इंडेक्सर और उनके प्रतिनिधियों द्वारा हर समय अर्जित किए गए कुल इंडेक्सर पुरस्कार हैं। इंडेक्सर पुरस्कार का भुगतान जीआरटी जारी करने के माध्यम से किया जाता है। -इंडेक्सर्स क्वेरी फीस और इंडेक्सिंग पुरस्कार दोनों अर्जित कर सकते हैं। कार्यात्मक रूप से, ऐसा तब होता है जब नेटवर्क प्रतिभागी किसी अनुक्रमणिका को GRT सौंपते हैं। यह इंडेक्सर्स को उनके इंडेक्सर पैरामीटर के आधार पर क्वेरी फीस और पुरस्कार प्राप्त करने में सक्षम बनाता है। तालिका के दायीं ओर क्लिक करके, या अनुक्रमणिका के प्रोफ़ाइल में जाकर और "प्रतिनिधि" बटन पर क्लिक करके अनुक्रमण पैरामीटर सेट किए जाते हैं। +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. 
+ +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. इंडेक्सर कैसे बनें, इस बारे में अधिक जानने के लिए, आप [आधिकारिक दस्तावेज़ीकरण](/network/indexing) या [द ग्राफ एकेडमी इंडेक्सर गाइड्स।](https://thegraph.academy/delegators/ पर नज़र डाल सकते हैं चूइंग-इंडेक्सर्स/) @@ -58,9 +78,13 @@ First things first, if you just finished deploying and publishing your subgraph ### 2. क्यूरेटर -क्यूरेटर सबग्राफ का विश्लेषण यह पहचानने के लिए करते हैं कि कौन से सबग्राफ उच्चतम गुणवत्ता वाले हैं। एक बार क्यूरेटर को संभावित रूप से आकर्षक सबग्राफ मिल जाने के बाद, वे इसके बॉन्डिंग कर्व पर संकेत देकर इसे क्यूरेट कर सकते हैं। ऐसा करने में, क्यूरेटर इंडेक्सर्स को बताते हैं कि कौन से सबग्राफ उच्च गुणवत्ता वाले हैं और उन्हें इंडेक्स किया जाना चाहिए। +Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -क्यूरेटर समुदाय के सदस्य, डेटा उपभोक्ता या यहां तक कि सबग्राफ डेवलपर भी हो सकते हैं, जो बॉन्डिंग कर्व में जीआरटी टोकन जमा करके अपने स्वयं के सबग्राफ पर संकेत देते हैं। जीआरटी जमा करके, क्यूरेटर एक सबग्राफ के क्यूरेशन शेयरों का निर्माण करते हैं। नतीजतन, क्यूरेटर क्वेरी फीस के एक हिस्से को अर्जित करने के लिए पात्र हैं, जो उपग्राफ उत्पन्न करता है, जिस पर उन्होंने संकेत दिया है। बॉन्डिंग कर्व क्यूरेटर को उच्चतम गुणवत्ता वाले डेटा स्रोतों को क्यूरेट करने के लिए प्रोत्साहित करता है। इस खंड में क्यूरेटर तालिका आपको देखने की अनुमति देगी: +In the The Curator table listed below you can see: - क्यूरेटर द्वारा क्यूरेट करना शुरू करने की तारीख - जमा किए गए जीआरटी की संख्या @@ -68,34 +92,36 @@ First things first, if you just finished deploying and publishing your subgraph ![एक्सप्लोरर छवि 6](/img/Curation-Overview.png) -यदि आप क्यूरेटर की भूमिका के बारे में अधिक जानना चाहते हैं, तो आप [द ग्राफ़ अकादमी](https://thegraph.academy/curators/) या [आधिकारिक दस्तावेज़ीकरण](/network/curating) के निम्न लिंक पर जाकर ऐसा कर सकते हैं। +If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. प्रतिनिधि -द ग्राफ नेटवर्क की सुरक्षा और विकेंद्रीकरण को बनाए रखने में प्रतिनिधि महत्वपूर्ण भूमिका निभाते हैं। वे एक या एक से अधिक इंडेक्सर्स को GRT टोकन सौंपकर (यानी, "स्टेकिंग") नेटवर्क में भाग लेते हैं। डेलीगेटर्स के बिना, इंडेक्सर्स के महत्वपूर्ण पुरस्कार और शुल्क अर्जित करने की संभावना कम होती है। इसलिए, इंडेक्सर्स डेलिगेटर्स को इंडेक्सिंग रिवार्ड्स और उनके द्वारा अर्जित क्वेरी फीस के एक हिस्से की पेशकश करके आकर्षित करना चाहते हैं। +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. 
-Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! ![एक्सप्लोरर छवि 7](/img/Delegation-Overview.png) -प्रतिनिधि तालिका आपको समुदाय में सक्रिय प्रतिनिधियों को देखने की अनुमति देगी, साथ ही मेट्रिक्स जैसे कि: +In the Delegators table you can see the active Delegators in the community and important metrics: - एक डेलीगेटर कितने इंडेक्सर्स की ओर डेलिगेट कर रहा है - एक प्रतिनिधि का मूल प्रतिनिधिमंडल - उन्होंने जो पुरस्कार जमा किए हैं, लेकिन प्रोटोकॉल से वापस नहीं लिए हैं - एहसास हुआ पुरस्कार वे प्रोटोकॉल से वापस ले लिया - वर्तमान में उनके पास प्रोटोकॉल में जीआरटी की कुल राशि है -- जिस तारीख को उन्होंने आखिरी बार प्रत्यायोजित किया था +- The date they last delegated -यदि आप प्रतिनिधि बनने के तरीके के बारे में अधिक जानना चाहते हैं, तो आगे मत देखिए! आपको बस इतना करना है कि [आधिकारिक दस्तावेज़ीकरण](/network/delegating) या [द ग्राफ़ अकादमी](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers) पर जाना है। +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## नेटवर्क -नेटवर्क अनुभाग में, आप वैश्विक KPI के साथ-साथ प्रति युग के आधार पर स्विच करने की क्षमता देखेंगे और नेटवर्क मेट्रिक्स का अधिक विस्तार से विश्लेषण करेंगे। ये विवरण आपको इस बात का बोध कराएंगे कि समय के साथ नेटवर्क कैसा प्रदर्शन कर रहा है। +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. ### अवलोकन -The overview section has all the current network metrics as well as some cumulative metrics over time. 
Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - वर्तमान कुल नेटवर्क हिस्सेदारी - इंडेक्सर्स और उनके प्रतिनिधियों के बीच हिस्सेदारी विभाजित हो गई @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - प्रोटोकॉल पैरामीटर जैसे क्यूरेशन रिवॉर्ड, महंगाई दर और बहुत कुछ - वर्तमान युग पुरस्कार और शुल्क -कुछ प्रमुख विवरण जो ध्यान देने योग्य हैं: +A few key details to note: -- **क्वेरी शुल्क उपभोक्ताओं द्वारा उत्पन्न शुल्क का प्रतिनिधित्व करते हैं**, और सबग्राफ के लिए उनके आवंटन बंद होने के बाद कम से कम 7 युगों (नीचे देखें) की अवधि के बाद इंडेक्सर्स द्वारा उनका दावा (या नहीं) किया जा सकता है। और उनके द्वारा प्रदान किए गए डेटा को उपभोक्ताओं द्वारा मान्य किया गया है। -- **इंडेक्सिंग रिवार्ड्स युग के दौरान नेटवर्क जारी करने से इंडेक्सर्स द्वारा दावा किए गए पुरस्कारों की राशि का प्रतिनिधित्व करते हैं।** हालांकि प्रोटोकॉल जारी करना तय है, इंडेक्सर्स द्वारा अपने आवंटन को बंद करने के बाद ही पुरस्कार प्राप्त होते हैं। उन उप-अनुच्छेदों की ओर जिन्हें वे अनुक्रमित कर रहे हैं। इस प्रकार पुरस्कारों की प्रति-युग संख्या भिन्न होती है (अर्थात। कुछ युगों के दौरान, इंडेक्सर्स सामूहिक रूप से उन आवंटनों को बंद कर सकते हैं जो कई दिनों से खुले हैं)। +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![एक्सप्लोरर छवि 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ The overview section has all the current network metrics as well as some cumulat - सक्रिय युग वह है जिसमें इंडेक्सर्स वर्तमान में हिस्सेदारी आवंटित कर रहे हैं और क्वेरी फीस जमा कर रहे हैं - बसने वाले युग वे हैं जिनमें राज्य चैनलों को बसाया जा रहा है। इसका मतलब यह है कि अगर उपभोक्ता उनके खिलाफ विवाद खोलते हैं तो इंडेक्सर्स स्लैशिंग के अधीन हैं। - वितरण युग ऐसे युग हैं जिनमें युगों के लिए राज्य चैनल तय किए जा रहे हैं और अनुक्रमणकर्ता अपनी क्वेरी शुल्क छूट का दावा कर सकते हैं। - - अंतिम रूप दिए गए युग वे युग हैं जिनमें अनुक्रमणकों द्वारा दावा करने के लिए कोई क्वेरी शुल्क छूट नहीं बची है, इस प्रकार अंतिम रूप दिया जा रहा है। + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![एक्सप्लोरर छवि 9](/img/Epoch-Stats.png) ## आपका उपयोगकर्ता प्रोफ़ाइल -अब जबकि हमने नेटवर्क आँकड़ों के बारे में बात कर ली है, चलिए आपकी व्यक्तिगत प्रोफ़ाइल पर चलते हैं। आपकी व्यक्तिगत प्रोफ़ाइल आपके लिए आपकी नेटवर्क गतिविधि देखने का स्थान है, चाहे आप नेटवर्क पर कैसे भी भाग ले रहे हों। आपका एथेरियम वॉलेट आपके उपयोगकर्ता प्रोफ़ाइल के रूप में कार्य करेगा, और उपयोगकर्ता डैशबोर्ड के साथ, आप देख पाएंगे: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. 
Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### प्रोफ़ाइल अवलोकन -यह वह जगह है जहाँ आप अपने द्वारा की गई कोई भी वर्तमान कार्रवाई देख सकते हैं। यह वह जगह भी है जहां आप अपनी प्रोफ़ाइल जानकारी, विवरण और वेबसाइट (यदि आपने एक जोड़ा है) पा सकते हैं। +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![एक्सप्लोरर छवि 10](/img/Profile-Overview.png) ### सबग्राफ टैब -यदि आप सबग्राफ टैब में क्लिक करते हैं, तो आप अपने प्रकाशित सबग्राफ देखेंगे। इसमें परीक्षण उद्देश्यों के लिए सीएलआई के साथ तैनात कोई सबग्राफ शामिल नहीं होगा - सबग्राफ केवल तभी दिखाई देंगे जब वे विकेंद्रीकृत नेटवर्क पर प्रकाशित होंगे। +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![एक्सप्लोरर छवि 11](/img/Subgraphs-Overview.png) ### अनुक्रमण टैब -यदि आप इंडेक्सिंग टैब में क्लिक करते हैं, तो आपको सबग्राफ के लिए सभी सक्रिय और ऐतिहासिक आवंटन के साथ एक तालिका मिलेगी, साथ ही चार्ट भी मिलेंगे जिनका आप विश्लेषण कर सकते हैं और एक इंडेक्सर के रूप में अपने पिछले प्रदर्शन को देख सकते हैं। +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. इस खंड में आपके नेट इंडेक्सर रिवार्ड्स और नेट क्वेरी फीस के विवरण भी शामिल होंगे। आपको ये मेट्रिक दिखाई देंगे: @@ -158,7 +189,9 @@ The overview section has all the current network metrics as well as some cumulat ### प्रतिनिधि टैब -प्रतिनिधि ग्राफ़ नेटवर्क के लिए महत्वपूर्ण हैं। एक प्रतिनिधि को एक इंडेक्सर चुनने के लिए अपने ज्ञान का उपयोग करना चाहिए जो पुरस्कारों पर एक स्वस्थ रिटर्न प्रदान करेगा। यहां आप अपने सक्रिय और ऐतिहासिक प्रतिनिधिमंडलों का विवरण पा सकते हैं, साथ ही उन इंडेक्सर्स के मेट्रिक्स के साथ जिन्हें आपने प्रत्यायोजित किया है। +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. पृष्ठ के पहले भाग में, आप अपना प्रतिनिधिमंडल चार्ट और साथ ही केवल-पुरस्कार चार्ट देख सकते हैं। बाईं ओर, आप वे KPI देख सकते हैं जो आपके वर्तमान डेलिगेशन मेट्रिक्स को दर्शाते हैं। diff --git a/website/pages/hi/network/indexing.mdx b/website/pages/hi/network/indexing.mdx index 2acc45128080..494b9ea49b95 100644 --- a/website/pages/hi/network/indexing.mdx +++ b/website/pages/hi/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap कई समुदाय-निर्मित डैशबोर्ड में लंबित पुरस्कार मान शामिल हैं और इन चरणों का पालन करके उन्हें आसानी से मैन्युअल रूप से चेक किया जा सकता है: -1. सभी सक्रिय आवंटनों के लिए आईडी प्राप्त करने के लिए [मेननेट सबग्राफ](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) को क्वेरी करें: +1. 
Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -112,12 +112,12 @@ Once an allocation has been closed the rebates are available to be claimed by th - **माध्यम** - प्रोडक्शन इंडेक्सर प्रति सेकंड 100 सबग्राफ और 200-500 अनुरोधों का समर्थन करता है। - **बड़ा** - वर्तमान में उपयोग किए जाने वाले सभी सबग्राफ को अनुक्रमित करने और संबंधित ट्रैफ़िक के अनुरोधों को पूरा करने के लिए तैयार है। -| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
(CPUs) | Postgres
(memory in GBs) | Postgres
(disk in TBs) | VMs
(CPUs) | VMs
(memory in GBs) | +| -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | ### एक इंडेक्सर को कौन सी बुनियादी सुरक्षा सावधानियां बरतनी चाहिए? @@ -149,20 +149,20 @@ Once an allocation has been closed the rebates are available to be claimed by th #### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
(सबग्राफ प्रश्नों के लिए) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
(सबग्राफ सब्सक्रिप्शन के लिए) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
(तैनाती के प्रबंधन के लिए) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | -------------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
(सबग्राफ प्रश्नों के लिए) | /subgraphs/id/...
/subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
(सबग्राफ सब्सक्रिप्शन के लिए) | /subgraphs/id/...
/subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
(तैनाती के प्रबंधन के लिए) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
(सबग्राफ प्रश्नों के लिए) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | -------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
(सबग्राफ प्रश्नों के लिए) | /subgraphs/id/...
/status
/channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -545,7 +545,7 @@ graph indexer status - `ग्राफ़ अनुक्रमणिका नियम हो सकता है [विकल्प] ` — परिनियोजन के लिए `निर्णय आधार` को `नियमों` पर सेट करें, ताकि अनुक्रमणिका एजेंट अनुक्रमण नियमों का उपयोग करेगा यह तय करने के लिए कि इस परिनियोजन को अनुक्रमित करना है या नहीं। -- `ग्राफ़ अनुक्रमणिका क्रियाओं को [विकल्प] <कार्रवाई-आईडी>` मिलता है - `सभी` का उपयोग करके एक या अधिक क्रियाएं प्राप्त करें या प्राप्त करने के लिए `कार्रवाई-आईडी` खाली छोड़ दें सभी क्रियाएं। एक अतिरिक्त तर्क `--status` का उपयोग किसी निश्चित स्थिति के सभी कार्यों को प्रिंट करने के लिए किया जा सकता है। +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `ग्राफ इंडेक्सर एक्शन कतार आवंटन ` - कतार आवंटन कार्रवाई @@ -559,7 +559,7 @@ graph indexer status - `ग्राफ़ अनुक्रमणिका क्रियाएँ स्वीकृत निष्पादित करती हैं` - कार्यकर्ता को स्वीकृत क्रियाओं को तुरंत निष्पादित करने के लिए बाध्य करें -सभी आदेश जो आउटपुट में नियम प्रदर्शित करते हैं, समर्थित आउटपुट स्वरूपों (`तालिका`, `yaml`, और `json`) के बीच `का उपयोग करके चुन सकते हैं - आउटपुट` तर्क। +सभी आदेश जो आउटपुट में नियम प्रदर्शित करते हैं, समर्थित आउटपुट स्वरूपों (`तालिका`, `yaml`, और `json`) के बीच ` का उपयोग करके चुन सकते हैं - आउटपुट ` तर्क। #### Indexing rules @@ -623,7 +623,7 @@ graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK - सभी कतारबद्ध क्रियाओं को देखने के लिए इंडेक्सर `indexer-cli` का उपयोग कर सकता है - इंडेक्सर (या अन्य सॉफ़्टवेयर) `इंडेक्सर-क्ली` का उपयोग करके कतार में क्रियाओं को स्वीकृत या रद्द कर सकता है। स्वीकृति और रद्द करने के आदेश इनपुट के रूप में क्रिया आईडी की एक सरणी लेते हैं। - निष्पादन कार्यकर्ता नियमित रूप से स्वीकृत कार्यों के लिए कतार का चुनाव करता है। यह कतार से `अनुमोदित` कार्यों को पकड़ लेगा, उन्हें निष्पादित करने का प्रयास करेगा, और निष्पादन की स्थिति के आधार पर डीबी में मूल्यों को `सफलता` या `विफल< पर अपडेट करेगा। /0>. -
  • If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in auto` or `oversight` mode. +
  • If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in auto` or `oversight` mode. - अनुक्रमणक क्रिया निष्पादन के इतिहास को देखने के लिए क्रिया कतार की निगरानी कर सकता है और यदि आवश्यक हो तो निष्पादन विफल होने पर क्रिया आइटम को पुन: अनुमोदित और अद्यतन कर सकता है। क्रिया कतार पंक्तिबद्ध और की गई सभी कार्रवाइयों का इतिहास प्रदान करती है। डेटा मॉडल: diff --git a/website/pages/hi/network/overview.mdx b/website/pages/hi/network/overview.mdx index b9ff51cb4f59..9fd38d7f052d 100644 --- a/website/pages/hi/network/overview.mdx +++ b/website/pages/hi/network/overview.mdx @@ -2,14 +2,20 @@ title: नेटवर्क अवलोकन --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## अवलोकन +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![टोकन अर्थशास्त्र](/img/Network-roles@2x.png) -द ग्राफ़ नेटवर्क की आर्थिक सुरक्षा और क्वेरी किए जा रहे डेटा की अखंडता सुनिश्चित करने के लिए, प्रतिभागी ग्राफ़ टोकन ([GRT](/tokenomics)) को दांव पर लगाते हैं और उनका उपयोग करते हैं। GRT एक वर्क यूटिलिटी टोकन है जो एक ERC-20 है जिसका उपयोग नेटवर्क में संसाधन आवंटित करने के लिए किया जाता है। प्रसंग +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. diff --git a/website/pages/hi/new-chain-integration.mdx b/website/pages/hi/new-chain-integration.mdx index 35b2bc7c8b4a..534d6701efdb 100644 --- a/website/pages/hi/new-chain-integration.mdx +++ b/website/pages/hi/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integrating New Networks +title: New Chain Integration --- -Graph Node can currently index data from the following chain types: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. 
If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via a [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via a [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via an [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -If you are interested in any of those chains, integration is a matter of Graph Node configuration and testing. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. For more information, refer to [Testing an EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Testing an EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Difference between EVM JSON-RPC & Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -While the two are suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](substreams/), like building [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). In addition, Firehose allows for improved indexing speeds when compared to JSON-RPC. +### 2. Firehose Integration -New EVM chain integrators may also consider the Firehose-based approach, given the benefits of substreams and its massive parallelized indexing capabilities. Supporting both allows developers to choose between building substreams or subgraphs for the new chain. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. 
This helps increase the speed of syncing and indexing. -> **NOTE**: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that eth_calls are [not a good practice for developers](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Testing an EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON RPC methods: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. -- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(optionally required for Graph Node to support call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graph Node Configuration +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Start by preparing your local environment** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). 
Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graph Node Configuration + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON RPC compliant URL - > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ -**Test the integration by locally deploying a subgraph** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Create a simple example subgraph. Some options are below: - 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point - 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integrating a new Firehose-enabled chain +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Create a simple example subgraph. Some options are below: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. 
Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. -Integrating a new chain is also possible using the Firehose approach. This is currently the best option for non-EVM chains and a requirement for substreams support. Additional documentation focuses on how Firehose works, adding Firehose support for a new chain and integrating it with Graph Node. Recommended docs for integrators: +## Substreams-powered Subgraphs -1. [General docs on Firehose](firehose/) -2. [Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrating Graph Node with a new chain via Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/hi/operating-graph-node.mdx b/website/pages/hi/operating-graph-node.mdx index d8707dae894c..706cfcb7f112 100644 --- a/website/pages/hi/operating-graph-node.mdx +++ b/website/pages/hi/operating-graph-node.mdx @@ -77,13 +77,13 @@ cargo run -p graph-node --release -- \ जब यह चल रहा होता है तो ग्राफ़ नोड निम्नलिखित पोर्ट को उजागर करता है: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (सबग्राफ प्रश्नों के लिए) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (सबग्राफ सब्सक्रिप्शन के लिए) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (तैनाती के प्रबंधन के लिए) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | -------------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (सबग्राफ प्रश्नों के लिए) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (सबग्राफ सब्सक्रिप्शन के लिए) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (तैनाती के प्रबंधन के लिए) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | > **महत्वपूर्ण**: बंदरगाहों को सार्वजनिक रूप से उजागर करने के बारे में सावधान रहें - **प्रशासन बंदरगाहों** को बंद रखा जाना चाहिए। इसमें ग्राफ़ नोड JSON-RPC समापन बिंदु शामिल है। diff --git a/website/pages/hi/querying/graphql-api.mdx b/website/pages/hi/querying/graphql-api.mdx index f4fbbeaac0e1..4f7873aeafcd 100644 --- a/website/pages/hi/querying/graphql-api.mdx +++ b/website/pages/hi/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: ग्राफक्यूएल एपीआई --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Queries +## What is GraphQL? -अपने सबग्राफ स्कीमा में आप `Entities` नामक प्रकारों को परिभाषित करते हैं। प्रत्येक `एंटिटी` प्रकार के लिए, एक `एंटिटी` और `एंटियां` फ़ील्ड शीर्ष-स्तरीय `क्वेरी` प्रकार पर जेनरेट की जाएंगी। ध्यान दें कि ग्राफ़ का उपयोग करते समय `क्वेरी` को `graphql` क्वेरी के शीर्ष पर शामिल करने की आवश्यकता नहीं है। +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### उदाहरण @@ -21,7 +29,7 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. } ``` -> **ध्यान दें:** किसी एक इकाई के लिए क्वेरी करते समय, `id` फ़ील्ड की आवश्यकता होती है, और यह एक स्ट्रिंग होना चाहिए। +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. सभी `टोकन` संस्थाओं को क्वेरी करें: @@ -36,7 +44,10 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. ### Sorting -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### उदाहरण @@ -53,7 +64,7 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe ग्राफ़ नोड के अनुसार [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) संस्थाओं को क्रमबद्ध किया जा सकता है नेस्टेड संस्थाओं के आधार पर। -निम्नलिखित उदाहरण में, हम टोकन को उनके स्वामी के नाम से क्रमित करते हैं: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe ### पृष्ठ पर अंक लगाना -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. 
- -इसके अलावा, `छोड़ें` पैरामीटर का उपयोग इकाइयों को छोड़ने और पेजिनेट करने के लिए किया जा सकता है। उदा. `first:100` पहले 100 इकाइयां दिखाता है और `first:100, skip:100` अगली 100 इकाइयां दिखाता है। +When querying a collection, it's best to: -प्रश्नों को बहुत बड़े उपयोग से बचना चाहिए `छोड़ें` मान क्योंकि वे आम तौर पर खराब प्रदर्शन करते हैं। बड़ी संख्या में आइटम प्राप्त करने के लिए, पिछले उदाहरण में दिखाए गए विशेषता के आधार पर संस्थाओं के माध्यम से पृष्ठ बनाना बेहतर है। +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### उदाहरण का उपयोग करना `पहले` @@ -93,7 +105,7 @@ When querying a collection, the `first` parameter can be used to paginate from t #### उदाहरण का उपयोग करना `पहले` और `छोड़ें` -क्वेरी 10 `टोकन` इकाइयां, संग्रह की शुरुआत से 10 स्थानों से ऑफसेट: +क्वेरी 10 ` टोकन ` इकाइयां, संग्रह की शुरुआत से 10 स्थानों से ऑफसेट: ```graphql { @@ -106,7 +118,7 @@ When querying a collection, the `first` parameter can be used to paginate from t #### उदाहरण का उपयोग करना `पहले` और ` id_ge` -यदि किसी ग्राहक को बड़ी संख्या में संस्थाओं को पुनः प्राप्त करने की आवश्यकता होती है, तो यह एक विशेषता पर आधार क्वेरी और उस विशेषता द्वारा फ़िल्टर करने के लिए बहुत अधिक प्रदर्शनकारी होता है। उदाहरण के लिए, क्लाइंट इस क्वेरी का उपयोग करके बड़ी संख्या में टोकन प्राप्त करेगा: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### छनन -आप अपनी क्वेरी में `कहाँ` पैरामीटर का उपयोग विभिन्न गुणों के लिए फ़िल्टर करने के लिए कर सकते हैं। आप `जहां` पैरामीटर के भीतर एकाधिक मानों पर फ़िल्टर कर सकते हैं। +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### उदाहरण का उपयोग करना `कहाँ` @@ -155,7 +168,7 @@ The first time, it would send the query with `lastID = ""`, and for subsequent r #### ब्लॉक फ़िल्टरिंग के लिए उदाहरण -आप `_change_block(number_gte: Int)` द्वारा संस्थाओं को भी फ़िल्टर कर सकते हैं - यह उन संस्थाओं को फ़िल्टर करता है जिन्हें निर्दिष्ट ब्लॉक में या उसके बाद अपडेट किया गया था। +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. 
Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). @@ -193,7 +206,7 @@ This can be useful if you are looking to fetch only entities which have changed, ##### `AND` Operator -निम्नलिखित उदाहरण में, हम `परिणाम` `सफल` और `number` `100` से अधिक या उसके बराबर वाली चुनौतियों के लिए फ़िल्टर कर रहे हैं। +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. ```graphql { @@ -208,7 +221,7 @@ This can be useful if you are looking to fetch only entities which have changed, ``` > **सिंटैक्टिक शुगर:** आप `और` ऑपरेटर को कॉमा द्वारा अलग किए गए सब-एक्सप्रेशन को पास करके उपरोक्त क्वेरी को सरल बना सकते हैं। -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ This can be useful if you are looking to fetch only entities which have changed, ##### `OR` Operator -निम्नलिखित उदाहरण में, हम `परिणाम` `सफल` या `number` `100` से अधिक या उसके बराबर वाली चुनौतियों के लिए फ़िल्टर कर रहे हैं। +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. 
They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. #### उदाहरण @@ -322,12 +335,12 @@ Fulltext search query fields provide an expressive text search API that can be a पूर्ण पाठ खोज ऑपरेटर: -| प्रतीक | ऑपरेटर | विवरण | -| --- | --- | --- | -| `&` | `And` | सभी प्रदान किए गए शब्दों को शामिल करने वाली संस्थाओं के लिए एक से अधिक खोज शब्दों को फ़िल्टर में संयोजित करने के लिए | -| | | `Or` | या ऑपरेटर द्वारा अलग किए गए एकाधिक खोज शब्दों वाली क्वेरी सभी संस्थाओं को प्रदान की गई शर्तों में से किसी से मेल के साथ वापस कर देगी | -| `<->` | `Follow by` | दो शब्दों के बीच की दूरी निर्दिष्ट करें। | -| `:*` | `Prefix` | उन शब्दों को खोजने के लिए उपसर्ग खोज शब्द का उपयोग करें जिनके उपसर्ग मेल खाते हैं (2 वर्ण आवश्यक हैं।) | +| प्रतीक | ऑपरेटर | विवरण | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | सभी प्रदान किए गए शब्दों को शामिल करने वाली संस्थाओं के लिए एक से अधिक खोज शब्दों को फ़िल्टर में संयोजित करने के लिए | +| | | `Or` | या ऑपरेटर द्वारा अलग किए गए एकाधिक खोज शब्दों वाली क्वेरी सभी संस्थाओं को प्रदान की गई शर्तों में से किसी से मेल के साथ वापस कर देगी | +| `<->` | `Follow by` | दो शब्दों के बीच की दूरी निर्दिष्ट करें। | +| `:*` | `Prefix` | उन शब्दों को खोजने के लिए उपसर्ग खोज शब्द का उपयोग करें जिनके उपसर्ग मेल खाते हैं (2 वर्ण आवश्यक हैं।) | #### उदाहरण @@ -376,11 +389,11 @@ Combine fulltext operators to make more complex filters. With a pretext search o ## योजना -The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -ग्राफक्यूएल स्कीमा आम तौर पर `क्वेरी`, `सदस्यता` और `म्यूटेशन` के रूट प्रकारों को परिभाषित करते हैं। ग्राफ़ केवल `क्वेरी` का समर्थन करता है। आपके सबग्राफ के लिए रूट `क्वेरी` प्रकार स्वचालित रूप से आपके सबग्राफ मेनिफ़ेस्ट में शामिल ग्राफ़क्यूएल स्कीमा से उत्पन्न होता है। +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **ध्यान दें:** हमारा एपीआई म्यूटेशन को उजागर नहीं करता है क्योंकि डेवलपर्स से उम्मीद की जाती है कि वे अपने एप्लिकेशन से अंतर्निहित ब्लॉकचेन के खिलाफ सीधे लेनदेन जारी करेंगे। +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. 
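As a minimal sketch of what such a schema definition can look like (the `Token` entity and its fields are illustrative placeholders, not taken from a specific subgraph):

```graphql
# Illustrative entity type written in the GraphQL IDL.
# For this type, `token` and `tokens` fields are generated on the root Query type.
type Token @entity {
  id: ID!
  owner: Bytes!
  mintedAt: BigInt!
}
```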
### Entities diff --git a/website/pages/hi/querying/querying-best-practices.mdx b/website/pages/hi/querying/querying-best-practices.mdx index d78720b40001..51a680aacb83 100644 --- a/website/pages/hi/querying/querying-best-practices.mdx +++ b/website/pages/hi/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: सर्वोत्तम प्रथाओं को क्वेरी करना --- -ग्राफ़ ब्लॉकचेन से डेटा क्वेरी करने के लिए विकेंद्रीकृत तरीका प्रदान करता है। +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. -ग्राफ़ नेटवर्क के डेटा को ग्राफ़िकल एपीआई के माध्यम से उजागर किया जाता है, जिससे ग्राफ़िकल भाषा के साथ डेटा को क्वेरी करना आसान हो जाता है। - -यह पृष्ठ आवश्यक ग्राफ़िकल भाषा नियमों और ग्राफ़कॉल प्रश्नों के सर्वोत्तम अभ्यासों के माध्यम से आपका मार्गदर्शन करेगा। +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQL is a language and set of conventions that transport over HTTP. It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). -However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), we recommend you to use our `graph-client` that supports unique features such as: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - क्रॉस-चेन सबग्राफ हैंडलिंग: एक ही क्वेरी में कई सबग्राफ से पूछताछ - [स्वचालित ब्लॉक ट्रैकिंग](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() More GraphQL client alternatives are covered in ["Querying from an Application"](/querying/querying-from-an-application). -Now that we covered the basic rules of GraphQL queries syntax, let's now look at the best practices of GraphQL query writing. - --- ## Best Practices @@ -162,11 +158,11 @@ Doing so brings **many advantages**: - सर्वर-स्तर पर **वैरिएबल को कैश किया जा सकता है** - **क्वेरी का सांख्यिकीय रूप से विश्लेषण टूल द्वारा किया जा सकता है** (निम्न अनुभागों में इस पर अधिक) -**Note: How to include fields conditionally in static queries** +### How to include fields conditionally in static queries -We might want to include the `owner` field only on a particular condition. +You might want to include the `owner` field only on a particular condition. -For this, we can leverage the `@include(if:...)` directive as follows: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -189,7 +185,7 @@ const result = await execute(query, { }) ``` -Note: The opposite directive is `@skip(if: ...)`. +> Note: The opposite directive is `@skip(if: ...)`. ### Ask for what you want @@ -197,9 +193,8 @@ GraphQL became famous for its "Ask for what you want" tagline. For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. -When querying GraphQL APIs, always think of querying only the fields that will be actually used. - -A common cause of over-fetching is collections of entities. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. Queries should therefore almost always set first explicitly, and make sure they only fetch as many entities as they actually need. 
This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- When querying GraphQL APIs, always think of querying only the fields that will be actually used. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. For example, in the following query: @@ -335,8 +330,8 @@ query { Such repeated fields (`id`, `active`, `status`) bring many issues: -- अधिक व्यापक प्रश्नों के लिए पढ़ना कठिन है -- प्रश्नों के आधार पर टाइपस्क्रिप्ट प्रकार उत्पन्न करने वाले टूल का उपयोग करते समय (_उस पर अधिक पिछले खंड में_), `newDelegate` और `oldDelegate` के परिणामस्वरूप दो अलग-अलग इनलाइन होंगे इंटरफेस। +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. A refactored version of the query would be the following: @@ -362,13 +357,13 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) but also will result in better TypeScript types generation. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). ### ग्राफकॉल फ्रैगमेंट क्या करें और क्या न करें -**Fragment base must be a type** +### Fragment base must be a type A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: @@ -380,7 +375,7 @@ fragment MyFragment on BigInt { `BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. -**How to spread a Fragment** +#### How to spread a Fragment Fragments are defined on specific types and should be used accordingly in queries. @@ -409,16 +404,16 @@ fragment VoteItem on Vote { It is not possible to spread a fragment of type `Vote` here. -**Define Fragment as an atomic business unit of data** +#### Define Fragment as an atomic business unit of data -GraphQL Fragment must be defined based on their usage. +GraphQL `Fragment`s must be defined based on their usage. For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. -Here is a rule of thumb for using Fragment: +Here is a rule of thumb for using fragments: -- जब किसी क्वेरी में एक ही प्रकार के फ़ील्ड दोहराए जाते हैं, तो उन्हें एक फ़्रैगमेंट में समूहित करें -- जब समान लेकिन समान फ़ील्ड दोहराए जाते हैं, तो एकाधिक टुकड़े बनाएं, उदा: +- When fields of the same type are repeated in a query, group them in a `Fragment`. 
+- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -441,7 +436,7 @@ fragment VoteWithPoll on Vote { --- -## आवश्यक उपकरण +## The Essential Tools ### ग्राफक्यूएल वेब-आधारित खोजकर्ता @@ -471,11 +466,11 @@ This will allow you to **catch errors without even testing queries** on the play The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: -- वाक्य - विन्यास पर प्रकाश डालना -- स्वत: पूर्ण सुझाव -- स्कीमा के खिलाफ सत्यापन -- snippets -- अंशों और इनपुट प्रकारों के लिए परिभाषा पर जाएं +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. @@ -483,9 +478,9 @@ If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketp The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: -- वाक्य - विन्यास पर प्रकाश डालना -- स्वत: पूर्ण सुझाव -- स्कीमा के खिलाफ सत्यापन -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -More information on this [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) that showcases all the plugin's main features. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/hi/quick-start.mdx b/website/pages/hi/quick-start.mdx index ad090e4226e8..4c586230c3b7 100644 --- a/website/pages/hi/quick-start.mdx +++ b/website/pages/hi/quick-start.mdx @@ -2,24 +2,18 @@ title: जल्दी शुरू --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Ensure that your subgraph will be indexing data from a [supported network](/developing/supported-networks). - -यह मार्गदर्शिका यह मानते हुए लिखी गई है कि आपके पास: +## Prerequisites for this guide - एक क्रिप्टो वॉलेट -- आपकी पसंद के नेटवर्क पर एक स्मार्ट अनुबंध पता - -## 1. सबग्राफ स्टूडियो पर एक सबग्राफ बनाएं - -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. ग्राफ़ सीएलआई स्थापित करें +### 1. ग्राफ़ सीएलआई इनस्टॉल करें -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. 
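As a quick sanity check before installing, you can confirm the toolchain is available. This is only a sketch and assumes `node` and `npm` are already on your PATH:

```sh
# Print the installed Node.js and npm versions
node --version
npm --version
```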
अपनी स्थानीय मशीन पर, निम्न आदेशों में से कोई एक चलाएँ: @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -जब आप अपना सबग्राफ इनिशियलाइज़ करते हैं, तो सीएलआई टूल आपसे निम्नलिखित जानकारी मांगेगा: +When you initialize your subgraph, the CLI will ask you for the following information: -- प्रोटोकॉल: वह प्रोटोकॉल चुनें जिससे आपका सबग्राफ डेटा को अनुक्रमित करेगा -- सबग्राफ स्लग: अपने सबग्राफ के लिए एक नाम बनाएं। आपका सबग्राफ स्लग आपके सबग्राफ के लिए एक पहचानकर्ता है। -- सबग्राफ बनाने के लिए निर्देशिका: अपनी स्थानीय निर्देशिका चुनें -- एथेरियम नेटवर्क (वैकल्पिक): आपको यह निर्दिष्ट करने की आवश्यकता हो सकती है कि आपका सबग्राफ किस ईवीएम-संगत नेटवर्क से डेटा को अनुक्रमित करेगा -- अनुबंध का पता: उस स्मार्ट अनुबंध के पते का पता लगाएं, जिससे आप डेटा की क्वेरी करना चाहते हैं -- ABI: यदि ABI ऑटोपॉप्युलेटेड नहीं है, तो आपको इसे JSON फ़ाइल के रूप में मैन्युअल रूप से इनपुट करना होगा -- स्टार्ट ब्लॉक: यह सुझाव दिया जाता है कि आप समय बचाने के लिए स्टार्ट ब्लॉक इनपुट करें जबकि आपका सबग्राफ ब्लॉकचैन डेटा को अनुक्रमित करता है। आप उस ब्लॉक को ढूंढकर स्टार्ट ब्लॉक का पता लगा सकते हैं जहां आपका अनुबंध तैनात किया गया था। -- अनुबंध का नाम: अपने अनुबंध का नाम इनपुट करें -- इकाइयों के रूप में अनुक्रमणिका अनुबंध ईवेंट: यह सुझाव दिया जाता है कि आप इसे सही पर सेट करें क्योंकि यह प्रत्येक उत्सर्जित ईवेंट के लिए स्वचालित रूप से आपके सबग्राफ में मैपिंग जोड़ देगा -- दूसरा अनुबंध जोड़ें (वैकल्पिक): आप एक और अनुबंध जोड़ सकते हैं +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. 
+- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. अपने सबग्राफ को इनिशियलाइज़ करते समय क्या अपेक्षा की जाए, इसके उदाहरण के लिए निम्न स्क्रीनशॉट देखें: ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -पिछले आदेश एक मचान सबग्राफ बनाते हैं जिसका उपयोग आप अपने सबग्राफ के निर्माण के लिए शुरुआती बिंदु के रूप में कर सकते हैं। सबग्राफ में बदलाव करते समय, आप मुख्य रूप से तीन फाइलों के साथ काम करेंगे: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -एक बार आपका सबग्राफ लिखे जाने के बाद, निम्नलिखित कमांड चलाएँ: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. एक बार आपका सबग्राफ लिखे जाने के बाद, निम्नलिखित कमांड चलाएँ: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- अपने सबग्राफ को प्रमाणित और तैनात करें। तैनाती key सबग्राफ स्टूडियो में सबग्राफ पेज पर पाई जा सकती है। +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. अपने सबग्राफ का परीक्षण करें - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -लॉग आपको बताएंगे कि क्या आपके सबग्राफ में कोई त्रुटि है। एक ऑपरेशनल सबग्राफ के लॉग इस तरह दिखेंगे: - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. 
In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -गैस की लागत बचाने के लिए, जब आप ग्राफ़ के विकेंद्रीकृत नेटवर्क पर अपना सबग्राफ प्रकाशित करते हैं, तो आप अपने सबग्राफ को उसी लेन-देन में क्यूरेट कर सकते हैं, जिसे आपने इस बटन का चयन करके प्रकाशित किया था: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. 
+ +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -अब, आप अपने सबग्राफ को अपने सबग्राफ के क्वेरी URL पर ग्राफ़क्यूएल क्वेरी भेजकर क्वेरी कर सकते हैं, जिसे आप क्वेरी बटन पर क्लिक करके पा सकते हैं। +### 8. Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/hi/release-notes/assemblyscript-migration-guide.mdx b/website/pages/hi/release-notes/assemblyscript-migration-guide.mdx index 6bc091a4083a..031b1d2dd195 100644 --- a/website/pages/hi/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/hi/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - यदि आपके पास वेरिएबल शैडोइंग है, तो आपको अपने डुप्लिकेट वेरिएबल्स का नाम बदलने की आवश्यकता होगी। - ### Null Comparisons - अपने सबग्राफ पर अपग्रेड करने से, कभी-कभी आपको इस तरह की त्रुटियाँ मिल सकती हैं: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - हल करने के लिए आप केवल `if` कथन को कुछ इस तरह से बदल सकते हैं: ```typescript @@ -287,7 +283,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ``` - इस समस्या को ठीक करने के लिए, आप उस प्रॉपर्टी एक्सेस के लिए एक वेरिएबल बना सकते हैं ताकि कंपाइलर अशक्तता जांच जादू कर सके: ```typescript diff --git a/website/pages/hi/sps/introduction.mdx b/website/pages/hi/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/hi/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. 
This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. + +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/hi/sps/triggers-example.mdx b/website/pages/hi/sps/triggers-example.mdx new file mode 100644 index 000000000000..da797598b050 --- /dev/null +++ b/website/pages/hi/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## आवश्यक शर्तें + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. 
Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: + +```ts +import { Protobuf } from 'as-proto/assembly' +import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events' +import { MyTransfer } from '../generated/schema' + +export function handleTriggers(bytes: Uint8Array): void { + const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode) + + for (let i = 0; i < input.data.length; i++) { + const event = input.data[i] + + if (event.transfer != null) { + let entity_id: string = `${event.txnId}-${i}` + const entity = new MyTransfer(entity_id) + entity.amount = event.transfer!.instruction!.amount.toString() + entity.source = event.transfer!.accounts!.source + entity.designation = event.transfer!.accounts!.destination + + if (event.transfer!.accounts!.signer!.single != null) { + entity.signers = [event.transfer!.accounts!.signer!.single!.signer] + } else if (event.transfer!.accounts!.signer!.multisig != null) { + entity.signers = event.transfer!.accounts!.signer!.multisig!.signers + } + entity.save() + } + } +} +``` + +## Step 5: Generate Protobuf Files + +To generate Protobuf objects in AssemblyScript, run the following command: + +```bash +npm run protogen +``` + +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. + +## Conclusion + +You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case. + +For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana). diff --git a/website/pages/hi/sps/triggers.mdx b/website/pages/hi/sps/triggers.mdx new file mode 100644 index 000000000000..ed19635d4768 --- /dev/null +++ b/website/pages/hi/sps/triggers.mdx @@ -0,0 +1,37 @@ +--- +title: Substreams Triggers +--- + +Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework. + +> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container. + +The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. + +```tsx +export function handleTransactions(bytes: Uint8Array): void { + let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).trasanctions // 1. 
+ if (transactions.length == 0) { + log.info('No transactions found', []) + return + } + + for (let i = 0; i < transactions.length; i++) { + // 2. + let transaction = transactions[i] + + let entity = new Transaction(transaction.hash) // 3. + entity.from = transaction.from + entity.to = transaction.to + entity.save() + } +} +``` + +Here's what you’re seeing in the `mappings.ts` file: + +1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object +2. Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/hi/substreams.mdx b/website/pages/hi/substreams.mdx index e88c7d5a3a93..9034fd6fa4cf 100644 --- a/website/pages/hi/substreams.mdx +++ b/website/pages/hi/substreams.mdx @@ -4,9 +4,11 @@ title: सबस्ट्रीम ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## How Substreams Works in 4 Steps @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/hi/sunrise.mdx b/website/pages/hi/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/hi/sunrise.mdx +++ b/website/pages/hi/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). 
If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? 
- -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? 
- -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. 
If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? - -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. 
However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? -Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? 
+### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? - -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. 
- -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/hi/supported-network-requirements.mdx b/website/pages/hi/supported-network-requirements.mdx index b5bb4a1ec8d5..1a695a5bcbb4 100644 --- a/website/pages/hi/supported-network-requirements.mdx +++ b/website/pages/hi/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| नेटवर्क | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| नेटवर्क | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVMe preferred)
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ |
diff --git a/website/pages/hi/tap.mdx b/website/pages/hi/tap.mdx
new file mode 100644
index 000000000000..8e55de5a9b7f
--- /dev/null
+++ b/website/pages/hi/tap.mdx
@@ -0,0 +1,197 @@
+---
+title: TAP Migration Guide
+---
+
+Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust.
+
+## अवलोकन
+
+[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features:
+
+- Efficiently handles micropayments.
+- Adds a layer of consolidation to on-chain transactions and costs.
+- Gives Indexers control of receipts and payments, guaranteeing payment for queries.
+- Enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders.
+
+## Specifics
+
+TAP allows a sender to make multiple payments to a receiver as **TAP Receipts**, which are then aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
+
+For each query, the gateway will send you a `signed receipt` that is stored in your database. Then, these receipts will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts, which will generate a new RAV with an increased value.
+
+### RAV Details
+
+- A RAV is money that is waiting to be sent to the blockchain.
+
+- The `tap-agent` will keep requesting aggregations to ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`.
+
+- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed.
+
+### Redeeming RAV
+
+As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process:
+
+1. An Indexer closes an allocation.
+
+2. During the `` period, `tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`.
+
+3. `indexer-agent` takes all the last RAVs and sends redeem requests to the blockchain, which will update the value of `redeem_at`.
+
+4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction.
+
+   - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`.
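+> To make the receipt-to-RAV flow above more concrete, here is a minimal illustrative sketch. It is **not** the actual `tap-agent` implementation: the type shapes, field names, and the 20 GRT threshold (mirroring the `max_amount_willing_to_lose_grt` setting shown later in this guide) are simplified assumptions for explanation only.
+
+```typescript
+// Illustrative model of TAP receipts accumulating and being aggregated into a RAV.
+// Field names and types are simplified assumptions, not the real tap-core types.
+interface Receipt {
+  allocationId: string
+  value: bigint // fee for a single query, in GRT wei
+}
+
+interface Rav {
+  allocationId: string
+  valueAggregate: bigint // total value of all receipts aggregated so far
+}
+
+// Mirrors the `amount willing to lose` idea: once un-aggregated fees approach
+// this value, an aggregation request should be made.
+const MAX_UNAGGREGATED: bigint = 20n * 10n ** 18n
+
+function shouldAggregate(pending: Receipt[]): boolean {
+  const unaggregated = pending.reduce((sum, r) => sum + r.value, 0n)
+  return unaggregated >= MAX_UNAGGREGATED
+}
+
+// Each aggregation folds the pending receipts into the previous RAV, so the
+// RAV's value can only ever increase.
+function aggregate(previous: Rav | null, pending: Receipt[]): Rav {
+  const added = pending.reduce((sum, r) => sum + r.value, 0n)
+  return {
+    allocationId: pending[0]?.allocationId ?? previous?.allocationId ?? '',
+    valueAggregate: (previous?.valueAggregate ?? 0n) + added,
+  }
+}
+```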
+
+## Blockchain Addresses
+
+### Contracts
+
+| Contract            | Arbitrum Sepolia (421614)                    | Arbitrum Mainnet (42161)                     |
+| ------------------- | -------------------------------------------- | -------------------------------------------- |
+| TAP Verifier        | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` |
+| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` |
+| Escrow              | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` |
+
+### Gateway
+
+| Component  | Edge and Node Mainnet (Arbitrum Mainnet)      | Edge and Node Testnet (Arbitrum Sepolia)      |
+| ---------- | --------------------------------------------- | --------------------------------------------- |
+| Sender     | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467`  | `0xC3dDf37906724732FfD748057FEBe23379b0710D`  |
+| Signers    | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211`  | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE`  |
+| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
+
+### Requirements
+
+In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can query it using The Graph Network or host it yourself on your `graph-node`.
+
+- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+
+> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+
+## Migration Guide
+
+### Software versions
+
+| Component       | संस्करण     | Image Link                                                                                                                 |
+| --------------- | ----------- | -------------------------------------------------------------------------------------------------------------------------- |
+| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) |
+| indexer-agent   | PR #995     | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80)          |
+| tap-agent       | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6)        |
+
+### Steps
+
+1. **Indexer Agent**
+
+   - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components).
+   - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs.
+
+2. **Indexer Service**
+
+   - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+   - Like the older version, you can easily scale Indexer Service horizontally. It is still stateless.
+
+3. **TAP Agent**
+
+   - Run a _single_ instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+
+4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +टिप्पणियाँ: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/it/about.mdx b/website/pages/it/about.mdx index 6c93da5503e5..d1cfbfad3b82 100644 --- a/website/pages/it/about.mdx +++ b/website/pages/it/about.mdx @@ -2,46 +2,66 @@ title: Informazioni su The Graph --- -Questa pagina spiega cos'è iThe Graph e come si può iniziare. - ## Che cos'è The Graph? -Il Graph è un protocollo decentralizzato per l'indicizzazione e query delle dati della blockchain. Il Graph permette di effettuare query dei dati difficili da effettuare query direttamente. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -I progetti con smart contract complessi come [Uniswap](https://uniswap.org/) e le iniziative NFT come [Bored Ape Yacht Club](https://boredapeyachtclub.com/) memorizzano i dati sulla blockchain di Ethereum, rendendo davvero difficile leggere qualcosa di diverso dai dati di base direttamente dalla blockchain. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -Si potrebbe anche creare un proprio server, elaborare le transazioni, salvarle in un database e creare un endpoint API per effettuare query dai dati. Tuttavia, questa opzione richiede [molte risorse](/network/benefits/), necessita di manutenzione, presenta un singolo punto di guasto e infrange importanti proprietà di sicurezza necessarie per la decentralizzazione. +### How The Graph Functions -**L'indicizzazione dei dati della blockchain è molto, molto difficile.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. 
Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## Come funziona il Graph +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graph impara cosa e come indicizzare i dati di Ethereum in base alle descrizioni dei subgraph, note come manifesto del subgraph. La descrizione del subgraph definisce gli smart contract di interesse per un subgraph, gli eventi di quei contratti a cui prestare attenzione e come mappare i dati degli eventi ai dati che The Graph memorizzerà nel suo database. +- When creating a subgraph, you need to write a subgraph manifest. -Una volta scritto un `subgraph manifest`, si usa la Graph CLI per memorizzare la definizione in IPFS e dire all'Indexer di iniziare l'indicizzazione dei dati per quel subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -Questo diagramma fornisce maggiori dettagli sul flusso di dati una volta che è stato distribuito un subgraph manifest, che tratta le transazioni Ethereum: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![Un grafico che spiega come The Graph utilizza Graph Node per servire le query ai consumatori di dati](/img/graph-dataflow.png) Il flusso segue questi passi: -1. Una dapp aggiunge dati a Ethereum attraverso una transazione su uno smart contract. -2. Lo smart contract emette uno o più eventi durante l'elaborazione della transazione. -3. Graph Node scansiona continuamente Ethereum alla ricerca di nuovi blocchi e dei dati del vostro subgraph che possono contenere. -4. Graph Node trova gli eventi Ethereum per il vostro subgraph in questi blocchi ed esegue i gestori di mappatura che avete fornito. La mappatura è un modulo WASM che crea o aggiorna le entità di dati che Graph Node memorizza in risposta agli eventi Ethereum. -5. La dapp effettua query del Graph Node per ottenere dati indicizzati dalla blockchain, utilizzando il [ GraphQL endpoint del nodo](https://graphql.org/learn/). Il Graph Node a sua volta traduce le query GraphQL in query per il suo archivio dati sottostante, al fine di recuperare questi dati, sfruttando le capacità di indicizzazione dell'archivio. La dapp visualizza questi dati in una ricca interfaccia utente per gli utenti finali, che li utilizzano per emettere nuove transazioni su Ethereum. Il ciclo si ripete. +1. Una dapp aggiunge dati a Ethereum attraverso una transazione su uno smart contract. +2. Lo smart contract emette uno o più eventi durante l'elaborazione della transazione. +3. Graph Node scansiona continuamente Ethereum alla ricerca di nuovi blocchi e dei dati del vostro subgraph che possono contenere. +4. Graph Node trova gli eventi Ethereum per il vostro subgraph in questi blocchi ed esegue i gestori di mappatura che avete fornito. La mappatura è un modulo WASM che crea o aggiorna le entità di dati che Graph Node memorizza in risposta agli eventi Ethereum. +5. La dapp effettua query del Graph Node per ottenere dati indicizzati dalla blockchain, utilizzando il [ GraphQL endpoint del nodo](https://graphql.org/learn/). 
Il Graph Node a sua volta traduce le query GraphQL in query per il suo archivio dati sottostante, al fine di recuperare questi dati, sfruttando le capacità di indicizzazione dell'archivio. La dapp visualizza questi dati in una ricca interfaccia utente per gli utenti finali, che li utilizzano per emettere nuove transazioni su Ethereum. Il ciclo si ripete. ## I prossimi passi -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/it/arbitrum/arbitrum-faq.mdx b/website/pages/it/arbitrum/arbitrum-faq.mdx index bff15519f682..220dbe4480c8 100644 --- a/website/pages/it/arbitrum/arbitrum-faq.mdx +++ b/website/pages/it/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum FAQ Clicca [qui](#billing-on-arbitrum-faqs) se desideri saltare alle domande frequenti sulla fatturazione su Arbitrum. -## Perché The Graph sta implementando una soluzione L2? +## Why did The Graph implement an L2 Solution? -Scalando su L2, i partecipanti alla rete The Graph possono aspettarsi: +By scaling The Graph on L2, network participants can now benefit from: - Risparmi fino a 26 volte sulle commissioni di gas @@ -14,7 +14,7 @@ Scalando su L2, i partecipanti alla rete The Graph possono aspettarsi: - Sicurezza ereditata da Ethereum -La scalabilità degli smart contract del protocollo su L2 consente ai partecipanti della rete di interagire più frequentemente a un costo ridotto delle commissioni di gas. Ad esempio, gli Indexer potrebbero aprire e chiudere allocazioni per indicizzare un maggior numero di subgraph con maggiore frequenza, gli sviluppatori potrebbero distribuire e aggiornare i subgraph con maggiore facilità, i Delegator potrebbero delegare GRT con maggiore frequenza e i Curator potrebbero aggiungere o rimuovere segnali a un maggior numero di subgraph - azioni che in passato erano considerate troppo costose da eseguire frequentemente a causa delle commissioni di gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. La comunità di The Graph ha deciso di procedere con Arbitrum l'anno scorso dopo l'esito della discussione [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -41,27 +41,21 @@ Per sfruttare l'utilizzo di The Graph su L2, utilizza il selettore a discesa per ## In quanto sviluppatore di subgraph, consumatore di dati, Indexer, Curator o Delegator, cosa devo fare ora? 
-Non è richiesta alcuna azione immediata, tuttavia si incoraggiano i partecipanti della rete a iniziare a passare ad Arbitrum per beneficiare dei vantaggi di L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -I team di sviluppatori principali stanno lavorando per creare strumenti di trasferimento a L2 che faciliteranno notevolmente il passaggio di deleghe, cure e subgraph su Arbitrum. Ci si aspetta che gli strumenti di trasferimento a L2 siano disponibili entro l'estate del 2023. +All indexing rewards are now entirely on Arbitrum. -I principali team di sviluppo stanno lavorando per creare strumenti di trasferimento a L2 che faciliteranno notevolmente il passaggio dei GRT delegati, curati, e dei subgraph su Arbitrum. Ci si aspetta che gli strumenti di trasferimento a L2 siano disponibili entro l'estate del 2023. - -## Se desiderassi partecipare alla rete su L2, cosa devo fare? - -Please help [test the network](https://testnet.thegraph.com/explorer) on L2 and report feedback about your experience in [Discord](https://discord.gg/graphprotocol). - -## Ci sono rischi associati alla scalabilità della rete su L2? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Tutto è stato testato accuratamente e un piano di contingenza è in atto per garantire una transizione sicura e senza intoppi. I dettagli possono essere trovati [qui](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## I subgraph esistenti su Ethereum continueranno a funzionare? +## Are existing subgraphs on Ethereum working? -Sì, gli smart contract di The Graph Network opereranno parallelamente su entrambe le reti Ethereum e Arbitrum fino a quando non si sposteranno completamente su Arbitrum in una data successiva. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## Verrà implementato un nuovo smart contract per i GRT su Arbitrum? +## Does GRT have a new smart contract deployed on Arbitrum? Sì, GRT avrà un nuovo [smart contract su Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). Tuttavia, il contratto principale [GRT](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) su Ethereum continuerà a essere operativo. diff --git a/website/pages/it/billing.mdx b/website/pages/it/billing.mdx index 37f9c840d00b..dec5cfdadc12 100644 --- a/website/pages/it/billing.mdx +++ b/website/pages/it/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". 3. 
Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. 
+You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/it/chain-integration-overview.mdx b/website/pages/it/chain-integration-overview.mdx index 2fe6c2580909..a142f3f817f9 100644 --- a/website/pages/it/chain-integration-overview.mdx +++ b/website/pages/it/chain-integration-overview.mdx @@ -6,12 +6,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 1. Technical Integration -- Teams work on a Graph Node integration and Firehose for non-EVM based chains. [Here's how](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. ## Stage 2. Integration Validation -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON RPC or Firehose endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph Indexers test the integration on The Graph's testnet. - Core developers and Indexers monitor stability, performance, and data determinism. @@ -38,7 +38,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. 
The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. -### 3. How much time will this process take? +### 3. How much time will the process of reaching full protocol support take? The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. @@ -46,4 +46,4 @@ Protocol support for indexing rewards depends on the stakeholders' bandwidth to ### 4. How will priorities be handled? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/it/cookbook/arweave.mdx b/website/pages/it/cookbook/arweave.mdx index 15538454e3ff..b079da30a013 100644 --- a/website/pages/it/cookbook/arweave.mdx +++ b/website/pages/it/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/it/cookbook/base-testnet.mdx b/website/pages/it/cookbook/base-testnet.mdx index 3a1d98a44103..0cc5ad365dfd 100644 --- a/website/pages/it/cookbook/base-testnet.mdx +++ b/website/pages/it/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ Your subgraph slug is an identifier for your subgraph. The CLI tool will walk yo The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph. - AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. 
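+> As a quick illustration of the mapping file described in the list above, here is a hedged sketch of a handler. The `Transfer` event, `Token` entity, and generated import paths are hypothetical placeholders; the real names come from your own contract ABI and `schema.graphql` after running `graph codegen`.
+
+```typescript
+// AssemblyScript mapping sketch (hypothetical names, for illustration only).
+import { Transfer } from '../generated/MyContract/MyContract' // generated from the ABI
+import { Token } from '../generated/schema' // generated from schema.graphql
+
+export function handleTransfer(event: Transfer): void {
+  // Load the entity if it exists, otherwise create it, then update and save.
+  let id = event.params.tokenId.toString()
+  let token = Token.load(id)
+  if (token == null) {
+    token = new Token(id)
+  }
+  token.owner = event.params.to
+  token.save()
+}
+```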
-If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). diff --git a/website/pages/it/cookbook/cosmos.mdx b/website/pages/it/cookbook/cosmos.mdx index 5e9edfd82931..a8c359b3098c 100644 --- a/website/pages/it/cookbook/cosmos.mdx +++ b/website/pages/it/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/it/cookbook/grafting.mdx b/website/pages/it/cookbook/grafting.mdx index 5137472eca06..d6c443a46538 100644 --- a/website/pages/it/cookbook/grafting.mdx +++ b/website/pages/it/cookbook/grafting.mdx @@ -22,7 +22,7 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic usecase. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ In this tutorial, we will be covering a basic usecase. We will replace an existi ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. ## Grafting Manifest Definition @@ -191,7 +191,7 @@ Congrats! 
You have successfully grafted a subgraph onto another subgraph. ## Additional Resources -If you want more experience with grafting, here's a few examples for popular contracts: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/it/cookbook/near.mdx b/website/pages/it/cookbook/near.mdx index 5336c8c0c1c3..36d5ba913b3d 100644 --- a/website/pages/it/cookbook/near.mdx +++ b/website/pages/it/cookbook/near.mdx @@ -37,7 +37,7 @@ There are three aspects of subgraph definition: **schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph#the-graphql-schema). -**AssemblyScript Mappings:** [AssemblyScript code](/developing/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. During subgraph development there are two key commands: @@ -98,7 +98,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/assemblyscript-api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/developing/assemblyscript-api) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/assemblyscript-api#json-api) to allow developers to easily process these logs. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph diff --git a/website/pages/it/cookbook/subgraph-uncrashable.mdx b/website/pages/it/cookbook/subgraph-uncrashable.mdx index 989310a3f9a0..0cc91a0fa2c3 100644 --- a/website/pages/it/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/it/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Safe Subgraph Code Generator - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. 
This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. These logs can be viewed in the The Graph's hosted service under the 'Logs' section. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. diff --git a/website/pages/it/cookbook/upgrading-a-subgraph.mdx b/website/pages/it/cookbook/upgrading-a-subgraph.mdx index 60165285850e..c7ff2b1213f0 100644 --- a/website/pages/it/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/it/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ Make sure **Update Subgraph Details in Explorer** is checked and click on **Save ## Deprecating a Subgraph on The Graph Network -Follow the steps [here](/managing/deprecating-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## Querying a Subgraph + Billing on The Graph Network diff --git a/website/pages/it/deploying/multiple-networks.mdx b/website/pages/it/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..a9385c5e1509 --- /dev/null +++ b/website/pages/it/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## Distribuzione del subgraph su più reti + +In alcuni casi, si desidera distribuire lo stesso subgraph su più reti senza duplicare tutto il suo codice. Il problema principale è che gli indirizzi dei contratti su queste reti sono diversi. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // il nome della rete + "dataSource1": { // il nome del dataSource + "address": "0xabc...", // l'indirizzo del contratto (opzionale) + "startBlock": 123456 // il startBlock (opzionale) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... 
+} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Questo è l'aspetto del file di configurazione delle reti: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Ora possiamo eseguire uno dei seguenti comandi: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Utilizzo del template subgraph.yaml + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +e + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... 
+    "mustache": "^3.1.0"
+  }
+}
+```
+
+To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands:
+
+```sh
+# Mainnet:
+yarn prepare:mainnet && yarn deploy
+
+# Sepolia:
+yarn prepare:sepolia && yarn deploy
+```
+
+A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759).
+
+**Note:** This approach can also be applied to more complex situations where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs also need to be generated from templates.
+
+## Politica di archiviazione dei subgraph di Subgraph Studio
+
+A subgraph version in Studio is archived if and only if it meets the following criteria:
+
+- The version is not published to the network (or pending publish)
+- The version was created 45 or more days ago
+- The subgraph hasn't been queried in 30 days
+
+In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
+
+Ogni subgraph colpito da questa politica ha un'opzione per recuperare la versione in questione.
+
+## Verifica dello stato di salute del subgraph
+
+Se un subgraph si sincronizza con successo, è un buon segno che continuerà a funzionare bene per sempre. Tuttavia, nuovi trigger sulla rete potrebbero far sì che il subgraph si trovi in una condizione di errore non testata o che inizi a rimanere indietro a causa di problemi di prestazioni o di problemi con gli operatori dei nodi.
+
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph:
+
+```graphql
+{
+  indexingStatusForCurrentVersion(subgraphName: "org/subgraph") {
+    synced
+    health
+    fatalError {
+      message
+      block {
+        number
+        hash
+      }
+      handler
+    }
+    chains {
+      chainHeadBlock {
+        number
+      }
+      latestBlock {
+        number
+      }
+    }
+  }
+}
+```
+
+This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
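+As a quick illustration of how this endpoint might be consumed, below is a minimal TypeScript sketch that runs the query above against a local Graph Node (port `8030`) and reports how far behind the chain head the subgraph is. The endpoint URL, the `org/subgraph` name, and the use of the built-in `fetch` API (Node 18+) are assumptions to adapt to your own setup.
+
+```typescript
+// Minimal sketch: query a Graph Node index-node endpoint and report indexing lag.
+// Assumes a local Graph Node on port 8030 and a subgraph named "org/subgraph".
+const INDEX_NODE_URL = "http://localhost:8030/graphql";
+
+const STATUS_QUERY = `{
+  indexingStatusForCurrentVersion(subgraphName: "org/subgraph") {
+    synced
+    health
+    fatalError { message }
+    chains {
+      chainHeadBlock { number }
+      latestBlock { number }
+    }
+  }
+}`;
+
+async function checkSubgraphHealth(): Promise<void> {
+  const response = await fetch(INDEX_NODE_URL, {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify({ query: STATUS_QUERY }),
+  });
+
+  const { data } = await response.json();
+  const status = data.indexingStatusForCurrentVersion;
+
+  if (status.health === "failed") {
+    // fatalError holds details about the error that halted indexing
+    console.error("Subgraph failed:", status.fatalError?.message);
+    return;
+  }
+
+  // Compare the chain head with the latest indexed block to see how far behind the subgraph is
+  const chain = status.chains[0];
+  const lag = Number(chain.chainHeadBlock.number) - Number(chain.latestBlock.number);
+  console.log(`synced: ${status.synced}, health: ${status.health}, blocks behind: ${lag}`);
+}
+
+checkSubgraphHealth().catch(console.error);
+```
+
+A lag that keeps growing, or a `failed` health value, is the signal to inspect the `fatalError` details described above.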
diff --git a/website/pages/it/developing/creating-a-subgraph.mdx b/website/pages/it/developing/creating-a-subgraph.mdx index 39ed1e67aa19..df991bce6fc2 100644 --- a/website/pages/it/developing/creating-a-subgraph.mdx +++ b/website/pages/it/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: Creare un subgraph --- -Un subgraph estrae i dati da una blockchain, li elabora e li memorizza in modo che possano essere facilmente interrogati tramite GraphQL. +This detailed guide provides instructions to successfully create a subgraph. -![Definizione di un Subgraph](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -La definizione del subgraph consiste in alcuni file: +![Definizione di un Subgraph](/img/defining-a-subgraph.png) -- `subgraph.yaml`: un file YAML contenente il manifesto del subgraph +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: uno schema GraphQL che definisce quali dati sono memorizzati per il subgraph e come interrogarli via GraphQL +## Getting Started -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) codice che traduce i dati dell'evento nelle entità definite nello schema (ad esempio `mapping.ts` in questo tutorial) +### Installare the Graph CLI -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## Installare the Graph CLI +On your local machine, run one of the following commands: -The Graph CLI è scritta in JavaScript e per utilizzarla è necessario installare `yarn` oppure `npm`; in quanto segue si presume che si disponga di yarn. +#### Using [npm](https://www.npmjs.com/) -Una volta che si dispone di `yarn`, installare the Graph CLI eseguendo +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Installare con yarn:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**Installare con npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. 
+ +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## Da un contratto esistente +### From an existing contract -Il comando seguente crea un subgraph che indicizza tutti gli eventi di un contratto esistente. Tenta di recuperare l'ABI del contratto da Etherscan e torna a richiedere il percorso di un file locale. Se manca uno qualsiasi degli argomenti opzionali, il comando viene eseguito attraverso un modulo interattivo. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -Il `` è l'ID del subgraph in Subgraph Studio, che si trova nella pagina dei dettagli del subgraph. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## Da un subgraph di esempio +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -La seconda modalità supportata da `graph init` è la creazione di un nuovo progetto a partire da un subgraph di esempio. Il comando seguente esegue questa operazione: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Aggiungere nuove data sources a un subgraph esistente +## Add new `dataSources` to an existing subgraph -Dalla `v0.31.0` il `graph-cli` supporta l'aggiunta di nuove sorgenti di dati a un subgraph esistente tramite il comando `graph add`. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
    [] @@ -78,22 +88,45 @@ Opzioni: --network-file Percorso del file di configurazione della rete (predefinito: "./networks.json") ``` -Il comando `add` recupera l'ABI da Etherscan (a meno che non sia specificato un percorso ABI con l'opzione `--abi`) e crea una nuova `dataSource` nello stesso modo in cui il comando `graph init` crea una `dataSource` `-from-contract`, aggiornando di conseguenza lo schema e le mappature. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- L'opzione `--merge-entities` identifica il modo in cui lo sviluppatore desidera gestire i conflitti tra i nomi di `entità` e `evento`: + + - If `true`: il nuovo `dataSource` dovrebbe utilizzare gli `eventHandler` & `entità` esistenti. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- Il contratto `address` sarà scritto in `networks.json` per la rete rilevante. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -L'opzione `--merge-entities` identifica il modo in cui lo sviluppatore desidera gestire i conflitti tra i nomi di `entità` e `evento`: +## Components of a subgraph -- If `true`: il nuovo `dataSource` dovrebbe utilizzare gli `eventHandler` & `entità` esistenti. -- If `false`: una nuova entità & il gestore dell'evento deve essere creato con `${dataSourceName}{EventName}`. +### Manifesto di Subgraph -Il contratto `address` sarà scritto in `networks.json` per la rete rilevante. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **Nota:** Quando si utilizza il cli interattivo, dopo aver eseguito con successo `graph init`, verrà richiesto di aggiungere un nuovo `dataSource`. +The **subgraph definition** consists of the following files: -## Manifesto di Subgraph +- `subgraph.yaml`: Contains the subgraph manifest -Il manifesto del subgraph `subgraph.yaml` definisce gli smart contract che il subgraph indicizza, a quali eventi di questi contratti prestare attenzione e come mappare i dati degli eventi alle entità che Graph Node memorizza e permette di effettuare query. Le specifiche complete dei manifesti dei subgraph sono disponibili [qui](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -Per il subgraph di esempio, `subgraph.yaml` è: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
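+
+Before looking at the manifest for the example subgraph below, here is a minimal sketch of the mappings component, using the Gravatar example introduced earlier. The import paths assume the files produced by `graph codegen`, and the entity fields follow the example subgraph's schema; treat it as an illustration rather than a drop-in implementation.
+
+```typescript
+// Minimal mapping sketch for the Gravatar example (assumes `graph codegen` output).
+import { NewGravatar } from '../generated/Gravity/Gravity' // generated event class
+import { Gravatar } from '../generated/schema' // generated entity class
+
+export function handleNewGravatar(event: NewGravatar): void {
+  // Create a new entity keyed by the Gravatar ID emitted with the event
+  let gravatar = new Gravatar(event.params.id.toHex())
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  // Persist the entity to the Graph Node store
+  gravatar.save()
+}
+```
+
+The manifest described next declares which contract and events feed handlers like this one.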
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ Un singolo subgraph può indicizzare i dati di più smart contract. Aggiungere a I trigger per una data source all'interno di un blocco sono ordinati secondo il seguente processo: -1. I trigger di eventi e chiamate sono ordinati prima per indice di transazione all'interno del blocco. -2. I trigger di eventi e chiamate all'interno della stessa transazione sono ordinati secondo una convenzione: prima i trigger di eventi e poi quelli di chiamate, rispettando l'ordine in cui sono definiti nel manifesto. -3. I trigger di blocco vengono eseguiti dopo i trigger di evento e di chiamata, nell'ordine in cui sono definiti nel manifesto. +1. I trigger di eventi e chiamate sono ordinati prima per indice di transazione all'interno del blocco. +2. I trigger di eventi e chiamate all'interno della stessa transazione sono ordinati secondo una convenzione: prima i trigger di eventi e poi quelli di chiamate, rispettando l'ordine in cui sono definiti nel manifesto. +3. I trigger di blocco vengono eseguiti dopo i trigger di evento e di chiamata, nell'ordine in cui sono definiti nel manifesto. Queste regole di ordinazione sono soggette a modifiche. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. 
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| Version | Release notes | +|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### Ottenere gli ABI @@ -442,16 +475,16 @@ Per alcuni tipi di entità, l'`id` è costruito a partire dagli id di altre due Nella nostra API GraphQL supportiamo i seguenti scalari: -| Tipo | Descrizione | -| --- | --- | -| `Bytes` | Byte array, rappresentato come una stringa esadecimale. Comunemente utilizzato per gli hash e gli indirizzi di Ethereum. | -| `String` | Scalare per valori `string`. I caratteri nulli non sono supportati e vengono rimossi automaticamente. | -| `Boolean` | Scalare per valori `boolean`. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | Un intero firmato a 8 byte, noto anche come intero firmato a 64 bit, può memorizzare valori nell'intervallo da -9,223,372,036,854,775,808 a 9,223,372,036,854,775,807. È preferibile utilizzare questo per rappresentare `i64` da ethereum. | -| `BigInt` | Numeri interi grandi. Utilizzati per i tipi `uint32`, `int64`, `uint64`, ..., `uint256` di Ethereum. 
Nota: Tutto ciò che è inferiore a `uint32` come `int32`, `uint24` oppure `int8` è rappresentato come `i32`. | -| `BigDecimal` | `BigDecimal` Decimali ad alta precisione rappresentati come un significante e un esponente. L'intervallo degli esponenti va da -6143 a +6144. Arrotondato a 34 cifre significative. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| Tipo | Descrizione | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, rappresentato come una stringa esadecimale. Comunemente utilizzato per gli hash e gli indirizzi di Ethereum. | +| `String` | Scalare per valori `string`. I caratteri nulli non sono supportati e vengono rimossi automaticamente. | +| `Boolean` | Scalare per valori `boolean`. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | Un intero firmato a 8 byte, noto anche come intero firmato a 64 bit, può memorizzare valori nell'intervallo da -9,223,372,036,854,775,808 a 9,223,372,036,854,775,807. È preferibile utilizzare questo per rappresentare `i64` da ethereum. | +| `BigInt` | Numeri interi grandi. Utilizzati per i tipi `uint32`, `int64`, `uint64`, ..., `uint256` di Ethereum. Nota: Tutto ciò che è inferiore a `uint32` come `int32`, `uint24` oppure `int8` è rappresentato come `i32`. | +| `BigDecimal` | `BigDecimal` Decimali ad alta precisione rappresentati come un significante e un esponente. L'intervallo degli esponenti va da -6143 a +6144. Arrotondato a 34 cifre significative. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enum @@ -593,7 +626,7 @@ Questo modo più elaborato di memorizzare le relazioni molti-a-molti si traduce #### Aggiungere commenti allo schema -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Nota:** Una nuova data source elaborerà solo le chiamate e gli eventi del blocco in cui è stata creata e di tutti i blocchi successivi, ma non elaborerà i dati storici, cioè quelli contenuti nei blocchi precedenti. -> +> > Se i blocchi precedenti contengono dati rilevanti per la nuova data source, è meglio indicizzare tali dati leggendo lo stato attuale del contratto e creando entità che rappresentino tale stato al momento della creazione della nuova data source. ### Contesto del Data Source @@ -930,7 +963,7 @@ dataSources: ``` > **Nota:** Il blocco di creazione del contratto può essere rapidamente consultato su Etherscan: -> +> > 1. Cercare il contratto inserendo l'indirizzo nella barra di ricerca. > 2. Fare clic sull'hash della transazione di creazione nella sezione `Contract Creator`. > 3. Caricare la pagina dei dettagli della transazione, dove si trova il blocco iniziale per quel contratto. @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. 
Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. ``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### Creare un nuovo gestore per elaborare i file -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). Il CID del file, come stringa leggibile, è accessibile tramite `dataSource` come segue: diff --git a/website/pages/it/developing/developer-faqs.mdx b/website/pages/it/developing/developer-faqs.mdx index b4af2c711bc8..c8906615c081 100644 --- a/website/pages/it/developing/developer-faqs.mdx +++ b/website/pages/it/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: Developer FAQs --- -## 1. What is a subgraph? +This page summarizes some of the most common questions for developers building on The Graph. -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using the Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available to be queried by subgraph consumers. +## Subgraph Related -## 2. Can I delete my subgraph? +### 1. What is a subgraph? -It is not possible to delete subgraphs once they are created. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Can I change my subgraph name? +### 2. What is the first step to create a subgraph? -No. 
Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Can I change the GitHub account associated with my subgraph? +### 3. Can I still create a subgraph if my smart contracts don't have events? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. Am I still able to create a subgraph if my smart contracts don't have events? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are by far the fastest way to retrieve useful data. +### 4. Can I change the GitHub account associated with my subgraph? -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended, as performance will be significantly slower. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. How do I update a subgraph on mainnet? -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. How are templates different from data sources? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) upfront you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? 
+ +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates). -## 8. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -You can run the following command: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. How do I call a contract function or access a public state variable from my subgraph mappings? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. 
+You can run the following command: -## 11. I want to contribute or add a GitHub issue. Where can I find the open source repositories? +```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -## 13. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 15. Can I delete my subgraph? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +You can find the list of the supported networks [here](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Yes. You can do this by importing `graph-ts` as per the example below: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Can I import ethers.js or other JS libraries into my subgraph mappings? - -Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +## Indexing & Querying Related -## 17. Is it possible to specify what block to start indexing on? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Are there some tips to increase the performance of indexing? 
My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -102,19 +121,7 @@ Yes! Try the following command, substituting "organization/subgraphName" with th curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. What networks are supported by The Graph? - -You can find the list of the supported networks [here](/developing/supported-networks). - -## 21. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? - -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. - -## 22. Is this possible to use Apollo Federation on top of graph-node? - -Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. - -## 23. Is there a limit to how many objects The Graph can return per query? +### 22. Is there a limit to how many objects The Graph can return per query? By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: @@ -122,24 +129,19 @@ By default, query responses are limited to 100 items per collection. If you want someCollection(first: 1000, skip: ) { ... } ``` -## 24. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 25. Where do I go to find my current subgraph on the hosted service? - -Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? - -The Graph will never charge for the hosted service. 
The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The hosted service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. - -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/it/developing/graph-ts/api.mdx b/website/pages/it/developing/graph-ts/api.mdx index 012b07655db8..4edadedcff5e 100644 --- a/website/pages/it/developing/graph-ts/api.mdx +++ b/website/pages/it/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: API AssemblyScript --- -> Nota: se si hai creato un subgraph prima di `graph-cli`/`graph-ts` versione `0.22.0`, stai usando una versione precedente di AssemblyScript. Consigliamo di dare un'occhiata alla [`Guida alla migrazione`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -Questa pagina documenta quali API integrate possono essere utilizzate per scrivere mappature di subgraph. Sono disponibili due tipi di API: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- codice generato da file di subgraph da `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -È anche possibile aggiungere altre librerie come dipendenze, purché siano compatibili con [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). 
Poiché questo è il linguaggio in cui sono scritte le mappature, il wiki [AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) è una buona fonte per le caratteristiche del linguaggio e delle librerie standard. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## Riferimento API @@ -27,16 +29,16 @@ La libreria `@graphprotocol/graph-ts` fornisce le seguenti API: La `apiVersion` nel manifest del subgraph specifica la versione dell'API di mappatura che viene eseguita da the Graph Node per un dato subgraph. -| Versione | Note di rilascio | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Aggiunte le classi `TransactionReceipt` e `Log` ai tipi di Ethereum
    Aggiunto il campo `receipt` all'oggetto Ethereum Event | -| 0.0.6 | Aggiunto il campo `nonce` all'oggetto Ethereum Transaction
    Aggiunto `baseFeePerGas` all'oggetto Ethereum Block | -| 0.0.5 | AssemblyScript aggiornato alla versione 0.19.10 (questo include modifiche di rottura, consultare la [`Guida alla migrazione`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` rinominato in `ethereum.transaction.gasLimit` | -| 0.0.4 | Aggiunto il campo `functionSignature` all'oggetto Ethereum SmartContractCall | -| 0.0.3 | Aggiunto il campo `from` all'oggetto Ethereum Call
    `etherem.call.address` rinominato in `ethereum.call.to` | -| 0.0.2 | Aggiunto il campo `input` all'oggetto Ethereum Transaction | +| Versione | Note di rilascio | +| :------: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Aggiunte le classi `TransactionReceipt` e `Log` ai tipi di Ethereum
    Aggiunto il campo `receipt` all'oggetto Ethereum Event | +| 0.0.6 | Aggiunto il campo `nonce` all'oggetto Ethereum Transaction
    Aggiunto `baseFeePerGas` all'oggetto Ethereum Block | +| 0.0.5 | AssemblyScript aggiornato alla versione 0.19.10 (questo include modifiche di rottura, consultare la [`Guida alla migrazione`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` rinominato in `ethereum.transaction.gasLimit` | +| 0.0.4 | Aggiunto il campo `functionSignature` all'oggetto Ethereum SmartContractCall | +| 0.0.3 | Aggiunto il campo `from` all'oggetto Ethereum Call
    `etherem.call.address` rinominato in `ethereum.call.to` | +| 0.0.2 | Aggiunto il campo `input` all'oggetto Ethereum Transaction | ### Tipi integrati @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { Quando un evento `Transfer` viene incontrato durante l'elaborazione della chain, viene passato al gestore dell'evento `handleTransfer` usando il tipo `Transfer` generato (qui alias `TransferEvent` per evitare un conflitto di nomi con il tipo di entità). Questo tipo consente di accedere a dati quali la transazione genitore dell'evento e i suoi parametri. -Ogni entità deve avere un ID univoco per evitare collisioni con altre entità. È abbastanza comune che i parametri degli eventi includano un identificatore unico che può essere utilizzato. Nota: l'uso dell'hash della transazione come ID presuppone che nessun altro evento della stessa transazione crei entità con questo hash come ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Caricare le entità dallo store @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -Poiché l'entità potrebbe non esistere ancora nel negozio, il metodo `load` restituisce un valore di tipo `Transfer | null`. Potrebbe quindi essere necessario verificare il caso `null` prima di utilizzare il valore. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Nota: ** Il caricamento delle entità è necessario solo se le modifiche apportate alla mappatura dipendono dai dati precedenti di un'entità. Vedere la sezione successiva per i due modi di aggiornare le entità esistenti. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Ricerca delle entità create all'interno di un blocco A partire da `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 e `@graphprotocol/graph-cli` v0.49.0 il metodo `loadInBlock` è disponibile per tutti i tipi di entità. -L'API Store facilita il recupero delle entità create o aggiornate nel blocco corrente. Una situazione tipica è quella in cui un gestore crea una transazione da qualche evento sulla catena e un gestore successivo vuole accedere a questa transazione, se esiste. Nel caso in cui la transazione non esista, il subgraph dovrà andare nel database solo per scoprire che l'entità non esiste; se l'autore del subgraph sa già che l'entità deve essere stata creata nello stesso blocco, l'uso di loadInBlock evita questo viaggio nel database. Per alcuni subgraph, queste ricerche mancate possono contribuire in modo significativo al tempo di indicizzazione. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. 
If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Qualsiasi altro contratto che faccia parte del subgraph può essere importato da #### Gestione delle chiamate annullate -Se i metodi di sola lettura del contratto possono essere annullati, si deve gestire la situazione chiamando il metodo del contratto generato con il prefisso `try_`. Per esempio, il contratto Gravity espone il metodo `gravatarToOwner`. Questo codice sarebbe in grado di gestire un revert in quel metodo: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Si noti che un Graph node collegato a un client Geth o Infura potrebbe non rilevare tutti i reverts; se si fa affidamento su questo si consiglia di utilizzare un Graph node collegato a un client Parity. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Codifica/decodifica ABI @@ -761,44 +770,44 @@ Quando il tipo di un valore è certo, può essere convertito in un [tipo incorpo ### Riferimento alle conversioni di tipo -| Fonte(i) | Destinazione | Funzione di conversione | -| -------------------- | -------------------- | --------------------------- | -| Address | Bytes | none | -| Address | String | s.toHexString() | -| BigDecimal | String | s.toString() | -| BigInt | BigDecimal | s.toBigDecimal() | -| BigInt | String (hexadecimal) | s.toHexString() o s.toHex() | -| BigInt | String (unicode) | s.toString() | -| BigInt | i32 | s.toI32() | -| Boolean | Boolean | none | -| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | -| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | -| Bytes | String (hexadecimal) | s.toHexString() o s.toHex() | -| Bytes | String (unicode) | s.toString() | -| Bytes | String (base58) | s.toBase58() | -| Bytes | i32 | s.toI32() | -| Bytes | u32 | s.toU32() | -| Bytes | JSON | json.fromBytes(s) | -| int8 | i32 | none | -| int32 | i32 | none | -| int32 | BigInt | BigInt.fromI32(s) | -| uint24 | i32 | none | -| int64 - int256 | BigInt | none | -| uint32 - uint256 | BigInt | none | -| JSON | boolean | s.toBool() | -| JSON | i64 | s.toI64() | -| JSON | u64 | s.toU64() | -| JSON | f64 | s.toF64() | -| JSON | BigInt | s.toBigInt() | -| JSON | string | s.toString() | -| JSON | Array | s.toArray() | -| JSON | Object | s.toObject() | -| String | Address | Address.fromString(s) | -| Bytes | Address | Address.fromBytes(s) | -| String | BigInt | BigInt.fromString(s) | -| String | BigDecimal | BigDecimal.fromString(s) | -| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | -| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | +| Fonte(i) | Destinazione | Funzione di conversione | +| -------------------- | --------------------- | -------------------------------- | +| Address | Bytes | none | +| Address | String | s.toHexString() | +| BigDecimal | String | s.toString() | +| BigInt | BigDecimal | s.toBigDecimal() 
| +| BigInt | String (hexadecimal) | s.toHexString() o s.toHex() | +| BigInt | String (unicode) | s.toString() | +| BigInt | i32 | s.toI32() | +| Boolean | Boolean | none | +| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | +| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | +| Bytes | String (hexadecimal) | s.toHexString() o s.toHex() | +| Bytes | String (unicode) | s.toString() | +| Bytes | String (base58) | s.toBase58() | +| Bytes | i32 | s.toI32() | +| Bytes | u32 | s.toU32() | +| Bytes | JSON | json.fromBytes(s) | +| int8 | i32 | none | +| int32 | i32 | none | +| int32 | BigInt | BigInt.fromI32(s) | +| uint24 | i32 | none | +| int64 - int256 | BigInt | none | +| uint32 - uint256 | BigInt | none | +| JSON | boolean | s.toBool() | +| JSON | i64 | s.toI64() | +| JSON | u64 | s.toU64() | +| JSON | f64 | s.toF64() | +| JSON | BigInt | s.toBigInt() | +| JSON | string | s.toString() | +| JSON | Array | s.toArray() | +| JSON | Object | s.toObject() | +| String | Address | Address.fromString(s) | +| Bytes | Address | Address.fromBytes(s) | +| String | BigInt | BigInt.fromString(s) | +| String | BigDecimal | BigDecimal.fromString(s) | +| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | +| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | ### Metadati della Data Source diff --git a/website/pages/it/developing/supported-networks.mdx b/website/pages/it/developing/supported-networks.mdx index 7c2d8d858261..797202065e99 100644 --- a/website/pages/it/developing/supported-networks.mdx +++ b/website/pages/it/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/it/developing/unit-testing-framework.mdx b/website/pages/it/developing/unit-testing-framework.mdx index f826a5ccb209..308135181ccb 100644 --- a/website/pages/it/developing/unit-testing-framework.mdx +++ b/website/pages/it/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ The log output includes the test run duration. Here's an example: > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -This means you have used `console.log` in your code, which is not supported by AssemblyScript. 
Please consider using the [Logging API](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version. diff --git a/website/pages/it/glossary.mdx b/website/pages/it/glossary.mdx index cd24a22fd4d5..2978ecce3561 100644 --- a/website/pages/it/glossary.mdx +++ b/website/pages/it/glossary.mdx @@ -10,11 +10,9 @@ title: Glossary - **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **Indexers**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. @@ -24,17 +22,17 @@ title: Glossary - **Indexer's Self Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. 
+- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegators**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curators**: Network participants that identify high-quality subgraphs, and “curate” them (i.e., signal GRT on them) in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. Indexers earn indexing rewards proportional to the signal on a subgraph. We see a correlation between the amount of GRT signalled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. -- **Subgraph Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a subgraph. - **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. @@ -46,11 +44,11 @@ title: Glossary 1. **Active**: An allocation is considered active when it is created on-chain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. 
- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: Glossary - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. 
It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. @@ -78,10 +76,6 @@ title: Glossary - **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. -- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network. - -- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. diff --git a/website/pages/it/index.json b/website/pages/it/index.json index 498fe1ae6b91..ae779150a432 100644 --- a/website/pages/it/index.json +++ b/website/pages/it/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "Create a Subgraph", "description": "Use Studio to create subgraphs" - }, - "migrateFromHostedService": { - "title": "Upgrade from the hosted service", - "description": "Upgrading subgraphs to The Graph Network" } }, "networkRoles": { diff --git a/website/pages/it/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/it/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..d2c2280f924a --- /dev/null +++ b/website/pages/it/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## Transferring ownership of a subgraph + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. + +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-addres +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2. 
Choose the address that you would like to transfer the subgraph to:
+
+   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png)
+
+Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea:
+
+![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png)
+
+## Deprecating a subgraph
+
+Although you cannot delete a subgraph, you can deprecate it on Graph Explorer.
+
+### Step-by-Step
+
+To deprecate your subgraph, do the following:
+
+1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract).
+2. Call `deprecateSubgraph` with your `SubgraphID` as your argument.
+3. Your subgraph will no longer appear in searches on Graph Explorer.
+
+**Please note the following:**
+
+- The owner's wallet should call the `deprecateSubgraph` function.
+- Curators will no longer be able to signal on the subgraph.
+- Curators that already signaled on the subgraph can withdraw their signal at an average share price.
+- Deprecated subgraphs will show an error message.
+
+> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively.
diff --git a/website/pages/it/mips-faqs.mdx b/website/pages/it/mips-faqs.mdx
index 69bc785ee5ef..85cbe010d47a 100644
--- a/website/pages/it/mips-faqs.mdx
+++ b/website/pages/it/mips-faqs.mdx
@@ -6,10 +6,6 @@ title: MIPs FAQs
 
 > Note: the MIPs program is closed as of May 2023. Thank you to all the Indexers who participated!
 
-It's an exciting time to be participating in The Graph ecosystem! During [Graph Day 2022](https://thegraph.com/graph-day/2022/) Yaniv Tal announced the [sunsetting of the hosted service](https://thegraph.com/blog/sunsetting-hosted-service/), a moment The Graph ecosystem has been working towards for many years.
-
-To support the sunsetting of the hosted service and the migration of all of it's activity to the decentralized network, The Graph Foundation has announced the [Migration Infrastructure Providers (MIPs) program](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program).
-
 The MIPs program is an incentivization program for Indexers to support them with resources to index chains beyond Ethereum mainnet and help The Graph protocol expand the decentralized network into a multi-chain infrastructure layer.
 
 The MIPs program has allocated 0.75% of the GRT supply (75M GRT), with 0.5% to reward Indexers who contribute to bootstrapping the network and 0.25% allocated to Network Grants for subgraph developers using multi-chain subgraphs.
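Returning to the `deprecateSubgraph` step in the transfer-and-deprecate guide above: besides using the Arbiscan write-proxy UI, the owner can send the same call from a script. The sketch below is illustrative only. It assumes ethers v6, hypothetical `ARBITRUM_RPC_URL` and `OWNER_PRIVATE_KEY` environment variables, a hand-written single-function ABI fragment, and a placeholder subgraph ID; the `uint256` parameter type is an assumption, so verify it against the contract ABI on Arbiscan before relying on it.

```typescript
import { ethers } from "ethers"

// GNS proxy address on Arbitrum One, taken from the guide above.
const GNS_ADDRESS = "0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec"

// Hand-written ABI fragment covering only this call; the uint256 type is assumed.
const GNS_ABI = ["function deprecateSubgraph(uint256 _subgraphID)"]

async function deprecateSubgraph(subgraphID: bigint): Promise<void> {
  // Hypothetical environment variables for the RPC endpoint and the owner's key.
  const provider = new ethers.JsonRpcProvider(process.env.ARBITRUM_RPC_URL)
  const owner = new ethers.Wallet(process.env.OWNER_PRIVATE_KEY!, provider)
  const gns = new ethers.Contract(GNS_ADDRESS, GNS_ABI, owner)

  // The transaction must come from the wallet that owns the subgraph NFT.
  const tx = await gns.deprecateSubgraph(subgraphID)
  const receipt = await tx.wait()
  console.log(`deprecateSubgraph mined in block ${receipt?.blockNumber}`)
}

// Placeholder ID: replace with your own SubgraphID.
deprecateSubgraph(1n).catch(console.error)
```

Functionally this is the same as steps 1 and 2 above; once the transaction is confirmed, the subgraph no longer appears in searches on Graph Explorer.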
diff --git a/website/pages/it/network/benefits.mdx b/website/pages/it/network/benefits.mdx index f0542f2b5731..7456efc877a0 100644 --- a/website/pages/it/network/benefits.mdx +++ b/website/pages/it/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Confronto costi | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Costo mensile del server\* | $350 al mese | $0 | -| Costi di query | $0+ | $0 per month | -| Tempo di progettazione | $400 al mese | Nessuno, integrato nella rete con indicizzatori distribuiti a livello globale | -| Query al mese | Limitato alle capacità di infra | 100,000 (Free Plan) | -| Costo per query | $0 | $0 | -| Infrastruttura | Centralizzato | Decentralizzato | -| Ridondanza geografica | $750+ per nodo aggiuntivo | Incluso | -| Tempo di attività | Variabile | 99.9%+ | -| Costo totale mensile | $750+ | $0 | +| Confronto costi | Self Hosted | The Graph Network | +|:----------------------------------:|:---------------------------------------:|:-----------------------------------------------------------------------------:| +| Costo mensile del server\* | $350 al mese | $0 | +| Costi di query | $0+ | $0 per month | +| Tempo di progettazione | $400 al mese | Nessuno, integrato nella rete con indicizzatori distribuiti a livello globale | +| Query al mese | Limitato alle capacità di infra | 100,000 (Free Plan) | +| Costo per query | $0 | $0 | +| Infrastruttura | Centralizzato | Decentralizzato | +| Ridondanza geografica | $750+ per nodo aggiuntivo | Incluso | +| Tempo di attività | Variabile | 99.9%+ | +| Costo totale mensile | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Confronto dei costi | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Costo mensile del server\* | $350 al mese | $0 | -| Costi di query | $500 al mese | $120 per month | -| Tempo di progettazione | $800 al mese | Nessuno, integrato nella rete con indicizzatori distribuiti a livello globale | -| Query al mese | Limitato alle capacità dell'infrastruttura | ~3,000,000 | -| Costo per query | $0 | $0.00004 | -| Infrastruttura | Centralizzato | Decentralizzato | -| Costi di ingegneria | $200 all'ora | Incluso | -| Ridondanza geografica | $1.200 di costi totali per nodo aggiuntivo | Incluso | -| Tempo di attività | Variabile | 99.9%+ | -| Costo totale mensile | $1,650+ | $120 | +| Confronto dei costi | Self Hosted | The Graph Network | +|:----------------------------------:|:------------------------------------------:|:-----------------------------------------------------------------------------:| +| Costo mensile del server\* | $350 al mese | $0 | +| Costi di query | $500 al mese | $120 per month | +| Tempo di progettazione | $800 al mese | Nessuno, integrato nella rete con indicizzatori distribuiti a livello globale | +| Query al mese | Limitato alle capacità dell'infrastruttura | ~3,000,000 | +| Costo per query | $0 | $0.00004 | +| Infrastruttura | Centralizzato | Decentralizzato | +| Costi di ingegneria | $200 all'ora | Incluso | +| Ridondanza geografica | $1.200 di costi totali per nodo aggiuntivo | Incluso | +| Tempo di attività | Variabile | 99.9%+ | +| Costo totale mensile | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Confronto costi | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Costo mensile del server\* | $1100 al mese, per nodo | $0 | -| Costi di query | $4000 | $1,200 per month | -| Numero di nodi necessari 
| 10 | Non applicabile | -| Tempo di progettazione | $6.000 o più al mese | Nessuno, integrato nella rete con indicizzatori distribuiti a livello globale | -| Query al mese | Limitato alle capacità di infra | ~30,000,000 | -| Costo per query | $0 | $0.00004 | -| Infrastruttura | Centralizzato | Decentralizzato | -| Ridondanza geografica | $1.200 di costi totali per nodo aggiuntivo | Incluso | -| Tempo di attività | Variabile | 99.9%+ | -| Costo totale mensile | $11,000+ | $1,200 | +| Confronto costi | Self Hosted | The Graph Network | +|:----------------------------------:|:-------------------------------------------:|:-----------------------------------------------------------------------------:| +| Costo mensile del server\* | $1100 al mese, per nodo | $0 | +| Costi di query | $4000 | $1,200 per month | +| Numero di nodi necessari | 10 | Non applicabile | +| Tempo di progettazione | $6.000 o più al mese | Nessuno, integrato nella rete con indicizzatori distribuiti a livello globale | +| Query al mese | Limitato alle capacità di infra | ~30,000,000 | +| Costo per query | $0 | $0.00004 | +| Infrastruttura | Centralizzato | Decentralizzato | +| Ridondanza geografica | $1.200 di costi totali per nodo aggiuntivo | Incluso | +| Tempo di attività | Variabile | 99.9%+ | +| Costo totale mensile | $11,000+ | $1,200 | \*inclusi i costi per il backup: $50-$100 al mese diff --git a/website/pages/it/network/curating.mdx b/website/pages/it/network/curating.mdx index 20cb357b49b0..355595bf7745 100644 --- a/website/pages/it/network/curating.mdx +++ b/website/pages/it/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. 
If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ La segnalazione di una versione specifica è particolarmente utile quando un sub La migrazione automatica del segnale alla più recente versione di produzione può essere utile per garantire l'accumulo di tariffe di query. Ogni volta che si effettua una curation, si paga una tassa di curation del 1%. Si pagherà anche una tassa di curation del 0,5% per ogni migrazione. Gli sviluppatori di subgraph sono scoraggiati dal pubblicare frequentemente nuove versioni: devono pagare una tassa di curation del 0,5% su tutte le quote di curation auto-migrate. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Rischi 1. Il mercato delle query è intrinsecamente giovane per The Graph e c'è il rischio che la vostra %APY possa essere inferiore a quella prevista a causa delle dinamiche di mercato nascenti. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. Un subgraph può fallire a causa di un bug. Un subgraph fallito non matura commissioni della query. Di conseguenza, si dovrà attendere che lo sviluppatore risolva il bug e distribuisca una nuova versione. 
- Se siete iscritti alla versione più recente di un subgraph, le vostre quote di partecipazione migreranno automaticamente a quella nuova versione. Questo comporta una tassa di curation di 0,5%. @@ -65,7 +65,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th Trovare subgraph di alta qualità è un compito complesso, ma può essere affrontato in molti modi diversi. Come Curator, si desidera cercare subgraph affidabili che generano un volume di query. Un subgraph affidabile può essere utile se è completo, accurato e supporta le esigenze di dati di una dApp. Un subgraph mal progettato potrebbe dover essere rivisto o ripubblicato e potrebbe anche finire per fallire. È fondamentale che i Curator rivedano l'architettura o il codice di un subgraph per valutarne il valore. Di conseguenza: -- I curator possono utilizzare la loro comprensione di una rete per cercare di prevedere come un singolo subgraph possa generare un volume di query più o meno elevato in futuro +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. Qual è il costo dell'aggiornamento di un subgraph? @@ -78,50 +78,14 @@ Si suggerisce di non aggiornare i subgraph troppo frequentemente. Si veda la dom ### 5. Posso vendere le mie quote di curation? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. 
This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Bonding Curve 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Prezzo per quote di partecipazione](/img/price-per-share.png) - -Di conseguenza, il prezzo aumenta linearmente, il che significa che l'acquisto di una quota diventerà più costoso nel tempo. Ecco un esempio di ciò che intendiamo, vedi la curva di legame qui sotto: - -![Curva di legame](/img/bonding-curve.png) - -Si consideri che abbiamo due curation che coniano quote di partecipazione per un subgraph: - -- Il curator A è il primo a segnalare il subgraph. Aggiungendo 120,000 GRT alla curva, riesce a coniarne 2000 quote di partecipazione. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Dal momento che entrambi i curator detengono la metà del totale delle quote di curation, riceveranno una quantità uguale di royalties di curation. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- Il curator rimanente riceverebbe ora tutte le royalties di curation per quel subgraph. Se dovessero bruciare le loro quote per ritirare GRT, riceverebbero 120.000 GRT. -- **TLDR:** La valutazione del GRT delle quote di curation è determinata dalla curva di legame e può essere volatile. È possibile subire grosse perdite. Segnalare in anticipo significa investire meno GRT per ogni quota di partecipazione. Per estensione, ciò significa che si guadagnano più royalties di curation per GRT rispetto ai curator successivi per lo stesso subgraph. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -Nel caso di The Graph, [l'implementazione da parte di Bancor della formula della curva di legame](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) viene sfruttata. - Ancora confusi? Date un'occhiata alla nostra video-guida sulla Curation: diff --git a/website/pages/it/network/delegating.mdx b/website/pages/it/network/delegating.mdx index 11f73bec5765..17687dd480ea 100644 --- a/website/pages/it/network/delegating.mdx +++ b/website/pages/it/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegazione --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. 
Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Guida per i delegator -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,64 +34,86 @@ Di seguito sono elencati i principali rischi essere il Delegator nel protocollo. I delegator non possono essere penalizzati per un comportamento scorretto, ma c'è una tassa sui delegator per disincentivare un processo decisionale insufficiente che potrebbero danneggiare l'integrità della rete. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### Il periodo di sblocco di delegazione Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? 
+ +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
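As a rough illustration of the break-even estimate mentioned in the risk list above, here is a small sketch. The 0.5% delegation tax comes from this page; the APY figure, the function name, and the simple non-compounding math are assumptions for illustration only, not protocol values:

```typescript
// Estimate how many days of delegation rewards it takes to earn back the
// 0.5% delegation tax. The assumed APY is illustrative only.
const DELEGATION_TAX = 0.005

function daysToRecoverTax(delegatedGrt: number, assumedApy: number): number {
  const taxPaid = delegatedGrt * DELEGATION_TAX             // e.g. 1,000 GRT delegated burns 5 GRT
  const stakeAfterTax = delegatedGrt - taxPaid
  const rewardsPerDay = (stakeAfterTax * assumedApy) / 365  // simple, non-compounding estimate
  return taxPaid / rewardsPerDay
}

// With 1,000 GRT delegated and an assumed 10% APY, roughly 18-19 days.
console.log(daysToRecoverTax(1_000, 0.1).toFixed(1))
```

With these example numbers the tax is earned back in roughly two and a half weeks, which is well within the 28-day unbonding period described above.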
 
-  ![Sblocco di Delegator](/img/Delegation-Unbonding.png) _Nota la commissione del 0,5% nel UI della delegazione, così
-  come il periodo di sblocco di 28 giorni. periodo di sblocco._
+  ![Sblocco di Delegator](/img/Delegation-Unbonding.png) _Nota la commissione del 0,5% nel UI della delegazione, così come il periodo di sblocco di 28 giorni._
    ### Scegliere un Indexer affidabile con una giusta ricompensa per i Delegator -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
    - ![Taglio delle ricompense dell'indicizzazione](/img/Indexing-Reward-Cut.png) *Il top Indexer sta dando ai Delegator il - 90% delle ricompense. Il centrale dà ai Delegator il 20%. Quello in basso dà ai Delegator ~83%.* + ![Taglio delle ricompense dell'indicizzazione](/img/Indexing-Reward-Cut.png) *Il top Indexer sta dando ai Delegator il 90% delle ricompense. Il + centrale dà ai Delegator il 20%. Quello in basso dà ai Delegator ~83%.*
 
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant.
+- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects.
+
+As you can see, in order to choose the right Indexer, you must consider multiple things.
 
-As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
+- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently.
+- Many Indexers are very active in Discord and will be happy to answer your questions.
+- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
 
-### Calcolo del rendimento previsto dei Delegator
+## Calculating Delegators' Expected Return
 
-A Delegator must consider a lot of factors when determining the return. These include:
+A Delegator must consider the following factors to determine a return:
 
-- Un Delegator tecnico può anche esaminare la capacità dell' Indexer di utilizzare i token delegati a sua disposizione. Se un Indexer non sta allocando tutti i token disponibili, non sta guadagnando il massimo profitto che potrebbe ottenere per sé o per i suoi Delegator.
-- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days.
+- Consider an Indexer's ability to use the delegated tokens available to them.
+  - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators.
+- Pay attention to the first few days of delegating.
+  - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low.
 
 ### Considerando la riduzione delle tariffe di query e la riduzione delle tariffe di indicizzazione
 
-Come descritto nelle sezioni precedenti, è necessario scegliere un Indexer che sia trasparente e onesto nell'impostare il taglio delle tariffe di query e tagli delle tariffe di indicizzazione. Il Delegator dovrebbe anche controllare il tempo di Cooldown dei Parametri per vedere quanto tempo di riserva ha a disposizione.
Una volta fatto questo, è abbastanza semplice calcolare la quantità delle ricompense che i Delegator ricevono. La formula è: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Immagine delegator 3](/img/Delegation-Reward-Formula.png) ### Considerando il delegation pool del Indexer -Un altro aspetto che un Delegator deve considerare è il proporzione del Delegation Pool che possiede. Tutte le ricompense della delega sono condivise in modo uniforme, con un semplice ribilanciamento del pool determinato dall'importo che il Delegator ha depositato nel pool. In questo modo il Delegator ha una quota del pool: +Delegators should consider the proportion of the Delegation Pool they own. -![Formula di condivisione](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Formula di condivisione](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Considerando la capacità di delegazione -Un altro aspetto da considerare è la capacità di Delegator. Attualmente, il Delegation Ratio è impostato su 16. Ciò significa che se un Indexer ha fatto un stake di 1,000,000 GRT, la sua Delegation Capacity è di 16,000,000 di GRT di token delegati che può utilizzare nel protocollo. Tutti i token delegati che superano questa quantità diluiranno tutte le ricompense dei Delegator. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -89,16 +121,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### MetaMask "Transazione in sospeso" Bug -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. 
When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?
+
+At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts.
+
+#### Esempio
 
-At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts.
+Let's say you attempt to delegate with an insufficient gas fee relative to the current prices.
 
-For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.
+- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will not be processed until the initial transaction is mined, because transactions for an address must be processed in order.
+- In such cases, these transactions can be cancelled in MetaMask, but the transaction attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.
 
-A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.
+A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.
 
-## Guida video per UI della rete
+## Video Guide
 
-This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI.
+This video guide reviews this page while interacting with the UI.
 
diff --git a/website/pages/it/network/developing.mdx b/website/pages/it/network/developing.mdx
index 55dba24dc9a1..b3e7c255cb5a 100644
--- a/website/pages/it/network/developing.mdx
+++ b/website/pages/it/network/developing.mdx
@@ -2,52 +2,88 @@ title: Sviluppo
 ---
 
-Gli sviluppatori sono il lato della domanda di The Graph Ecosystem. Gli sviluppatori costruiscono subgraph e li pubblicano su The Graph Network. Quindi, fanno query sui subgraph in tempo reale con GraphQL per alimentare le loro applicazioni.
+To start coding right away, go to [Developer Quick Start](/quick-start/).
+
+## Panoramica
+
+As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue.
+
+On The Graph, you can:
+
+1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing subgraphs.
+
+### What is GraphQL?
+ +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? + +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `subgraph.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). +- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). ## Ciclo di vita dei subgraph -I subgraph distribuiti nella rete hanno un ciclo di vita definito. +Here is a general overview of a subgraph’s lifecycle: -### Costruire a livello locale +![Ciclo di vita del subgraph](/img/subgraph-lifecycle.png) -Come per lo sviluppo di tutti i subgraph, si inizia con lo sviluppo e il test in locale. Gli sviluppatori possono usare la stessa configurazione locale sia che stiano costruendo per The Graph Network, per il hosted service o per un Graph Node locale, sfruttando `graph-cli` and `graph-ts` per costruire il loro subgraph. Gli sviluppatori sono incoraggiati a usare strumenti come [Matchstick](https://github.com/LimeChain/matchstick) per i test unitari, per migliorare la solidità dei loro subgraph. +### Costruire a livello locale -> Ci sono alcuni vincoli su The Graph Network, in termini di funzionalità e supporto di rete. Solo i subgraph su [reti supportate](/developing/supported-networks) otterranno ricompense per l'indicizzazione e i subgraph che recuperano dati da IPFS non sono ammissibili. +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Deploy to Subgraph Studio -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. - -### Pubblicare nella rete +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -Quando lo sviluppatore è soddisfatto del suo subgraph, può pubblicarlo su The Graph Network. 
Si tratta di un'azione on-chain, che registra il subgraph in modo che possa essere scoperto dagli Indexer. I subgraph pubblicati hanno un NFT corrispondente, che è poi facilmente trasferibile. Il subgraph pubblicato ha metadati associati, che forniscono agli altri partecipanti alla rete un contesto e informazioni utili. +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. -### Segnale per incoraggiare l'indicizzazione +### Pubblicare nella rete -È improbabile che i subgraph pubblicati vengano raccolti dagli Indexer senza l'aggiunta di un segnale. Il segnale è un GRT bloccato associato a un determinato subgraph, che indica agli Indexer che un dato subgraph riceverà un volume di query, inoltre contribuisce anche ai premi di indicizzazione disponibili per la sua elaborazione. Gli sviluppatori di subgraph aggiungono generalmente un segnale al loro subgraph, per incoraggiarne l'indicizzazione. Anche i Curator di terze parti possono aggiungere un segnale a un determinato subgraph, se ritengono che il subgraph possa generare un volume di query. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Query e sviluppo di applicazioni +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -Una volta che un subgraph è stato elaborato dagli Indexer ed è disponibile per fare query, gli sviluppatori possono iniziare a utilizzare il subgraph nelle loro applicazioni. Gli sviluppatori fanno query di subgraph tramite un gateway, che inoltra le loro queries a un Indexer che ha elaborato il subgraph, pagando le tariffe di query in GRT. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Aggiornare i subgraph +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. 
The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### Query e sviluppo di applicazioni -Una volta che lo sviluppatore del sottografo è pronto per l'aggiornamento, può avviare una transazione per puntare il suo subgraph alla nuova versione. L'aggiornamento del subgraph migra qualsiasi segnale alla nuova versione (supponendo che l'utente che ha applicato il segnale abbia selezionato "auto-migrate"), il che comporta anche una tassa di migrazione. La migrazione del segnale dovrebbe indurre gli Indexer a iniziare l'indicizzazione della nuova versione del subgraph, che dovrebbe quindi diventare presto disponibile per le query. +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Deprecazione dei Subgraph +Learn more about [querying subgraphs](/querying/querying-the-graph/). -A un certo punto uno sviluppatore può decidere di non aver più bisogno di un subgraph pubblicato. A quel punto può deprecare il subgraph, restituendo ai Curator ogni GRT segnalato. +### Aggiornare i subgraph -### Diversi Ruoli dello Sviluppatore +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Alcuni sviluppatori si occuperanno dell'intero ciclo di vita dei subgraph sulla rete, pubblicando, facendo query e iterando i propri subgraph. Alcuni si concentreranno sullo sviluppo di subgraph, costruendo API aperte su cui altri potranno basarsi. Alcuni possono concentrarsi sulle applicazioni, interrogando i subgraph distribuiti da altri. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Sviluppatori ed economia di rete +### Deprecating & Transferring Subgraphs -Gli sviluppatori sono un attore economico fondamentale nella rete, in quanto bloccano i GRT per incoraggiare l'indicizzazione e, soprattutto, le query dei subgraph, che rappresenta il principale scambio di valore della rete. Anche gli sviluppatori di subgraph bruciano GRT ogni volta che un subgraph viene aggiornato. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/it/network/explorer.mdx b/website/pages/it/network/explorer.mdx index f1d1a7a9f431..662aa790d17a 100644 --- a/website/pages/it/network/explorer.mdx +++ b/website/pages/it/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. 
+
+Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators.
+
+## Video Guide
+
+For a general overview of Graph Explorer, check out the video below:
 
 ## Subgraph
 
-First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name.
+After you finish deploying and publishing your subgraph in Subgraph Studio, click on the “Subgraphs” tab at the top of the navigation bar to access the following:
+
+- Your own finished subgraphs
+- Subgraphs published by others
+- The exact subgraph you want (based on the date created, signal amount, or name).
 
 ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png)
 
-Quando si fa clic su un subgraph, è possibile testare le query nel playground e sfruttare i dettagli della rete per prendere decisioni informate. Sarà inoltre possibile segnalare il GRT sul proprio subgraph o su quello di altri per far capire agli Indexer la sua importanza e qualità. Questo è fondamentale perché la segnalazione di un subgraph ne incentiva l'indicizzazione, il che significa che verrà fuori dalla rete per servire le query.
+When you click into a subgraph, you will be able to do the following:
+
+- Test queries in the playground and be able to leverage network details to make informed decisions.
+- Signal GRT on your own subgraph or the subgraphs of others to make Indexers aware of its importance and quality.
+- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
 
 ![Explorer Image 2](/img/Subgraph-Details.png)
 
-Nella pagina dedicata a ciascun subgraph vengono visualizzati diversi dettagli. Questi includono:
+On each subgraph’s dedicated page, you can do the following:
 
 - Segnala/non segnala i subgraph
 - Visualizza ulteriori dettagli, come grafici, ID di distribuzione corrente e altri metadati
@@ -31,26 +45,32 @@ Nella pagina dedicata a ciascun subgraph vengono visualizzati diversi dettagli.
 
 ## Partecipanti
 
-In questo tab è possibile avere una vista dall'alto di tutte le persone che partecipano alle attività della rete, come Indexer, Delegator e Curator. Di seguito, esamineremo in modo approfondito il significato di ogni tab per voi.
+This section provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.
 
 ### 1. Indexer
 
 ![Explorer Image 4](/img/Indexer-Pane.png)
 
-Cominciamo con gli Indexer. Gli Indexer sono la spina dorsale del protocollo, in quanto sono quelli che puntano sui subgraph, li indicizzano e servono le query a chiunque consumi i subgraph. Nella tabella degli Indexer, è possibile vedere i parametri di delega di un Indexer, la sua stake, quanto ha fatto il stake su ogni subgraph e quanto ha guadagnato con le tariffe di query e le ricompense per l'indicizzazione. Approfondimenti di seguito:
+Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+ +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. -- Query Fee Cut - la percentuale dei rimborsi delle tariffe di query che l'Indexer trattiene quando divide con i Delegator -- Effective Reward Cut - il taglio della ricompensa di indicizzazione applicato al pool di delegator. Se è negativo, significa che l'Indexer sta cedendo parte delle sue ricompense. Se è positivo, significa che l'Indexer sta conservando una parte delle sue ricompense -- Cooldown Remaining - il tempo rimanente prima che l'Indexer possa modificare i parametri di delega di cui sopra. I periodi di Cooldown sono impostati dagli Indexer quando aggiornano i loro parametri di delegazione -- Owned - Si tratta del stake depositato dall'Indexer, che può essere ridotto in caso di comportamento dannoso o scorretto -- Delegated - Stake del Delegator che può essere allocato dall'Indexer, ma non può essere tagliato -- Allocated - Lo Stake che gli Indexer stanno attivamente allocando verso i subgraph che stanno indicizzando -- Available Delegation Capacity - la quantità di stake delegato che gli Indexer possono ancora ricevere prima di diventare sovra-delegati +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - l'importo massimo di stake delegato che l'Indexer può accettare in modo produttivo. Uno stake delegato in eccesso non può essere utilizzato per l'allocazione o per il calcolo dei premi. -- Query Fees - è il totale delle tariffe che gli utenti finali hanno pagato per le query da un Indexer in tutto il tempo +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Indexer Rewards - è il totale delle ricompense dell'Indexer guadagnate dall'Indexer e dai suoi Delegator in tutto il tempo. Le ricompense degli Indexer vengono pagate tramite l'emissione di GRT. -Gli Indexer possono guadagnare sia tariffe di query che ricompense per l'indicizzazione. Funzionalmente, ciò avviene quando i partecipanti alla rete delegano il GRT a un Indexer. Ciò consente agli Indexer di ricevere tariffe di query e ricompense in base ai loro parametri di indicizzazione. I parametri di indicizzazione si impostano facendo clic sul lato destro della tabella o accedendo al profilo dell'Indexer e facendo clic sul pulsante "Delegate". +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. 
This enables Indexers to receive query fees and rewards depending on their Indexer parameters.
+
+- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button.
 
 Per saperne di più su come diventare un Indexer, è possibile consultare la [documentazione ufficiale](/network/indexing) oppure [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/)
 
@@ -58,9 +78,13 @@ Per saperne di più su come diventare un Indexer, è possibile consultare la [do
 
 ### 2. Curator
 
-I Curator analizzano i subgraph per identificare quelli di maggiore qualità. Una volta che un Curator ha trovato un subgraph potenzialmente interessante, può curarlo segnalando la sua bonding curve. In questo modo, i Curator fanno sapere agli Indexer quali subgraph sono di alta qualità e dovrebbero essere indicizzati.
+Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+
+- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
+  - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+  - The bonding curve incentivizes Curators to curate the highest quality data sources.
 
-I Curator possono essere membri della comunità, consumatori di dati o anche sviluppatori di subgraph che segnalano i propri subgraph depositando token GRT in una bonding curve. Depositando GRT, i Curator coniano quote di curation di un subgraph. Di conseguenza, i Curator hanno diritto a guadagnare una parte delle tariffe di query generate dal subgraph che hanno segnalato. La bonding curve incentiva i Curator a curare le fonti di dati di maggiore qualità. La tabella dei Curator in questa sezione consente di vedere:
+In the Curator table listed below, you can see:
 
 - La data in cui il Curator ha iniziato a curare
 - Il numero di GRT depositato
@@ -68,34 +92,36 @@ I Curator possono essere membri della comunità, consumatori di dati o anche svi
 
 ![Explorer Image 6](/img/Curation-Overview.png)
 
-Per saperne di più sul ruolo del Curator, è possibile visitare i seguenti link [The Graph Academy](https://thegraph.academy/curators/) oppure la [documentazione ufficiale.](/network/curating)
+If you want to learn more about the Curator role, you can do so by visiting the [official documentation](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/).
 
 ### 3. Delegator
 
-I Delegator svolgono un ruolo chiave nel mantenere la sicurezza e la decentralizzazione di The Graph Network. Partecipano alla rete delegando (cioè "staking") i token GRT a uno o più Indexer. Senza Delegator, gli Indexer hanno meno probabilità di guadagnare ricompense e commissioni significative. Pertanto, gli Indexer cercano di attrarre i Delegator offrendo loro una parte delle ricompense per l'indicizzazione e delle tariffe di query che guadagnano.
+Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple Indexers.
-I Delegator, a loro volta, selezionano gli Indexer in base a una serie di variabili diverse, come le prestazioni passate, i tassi di ricompensa per l'indicizzazione e le tariffe di query. Anche la reputazione all'interno della comunità può giocare un ruolo importante! Si consiglia di entrare in contatto con gli Indexer selezionati tramite [The Graph’s Discord](https://discord.gg/graphprotocol) oppure [The Graph Forum](https://forum.thegraph.com/)!
+- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees.
+- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
+- Reputation within the community can also be a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)!
 
 ![Explorer Image 7](/img/Delegation-Overview.png)
 
-La tabella dei Delegator consente di visualizzare i Delegator attivi nella comunità, oltre a metriche quali:
+In the Delegators table, you can see the active Delegators in the community and important metrics:
 
 - Il numero di Indexer verso cui un Delegator sta delegando
 - La delega originale di un Delegator
 - Le ricompense che hanno accumulato ma non ritirato dal protocollo
 - Le ricompense realizzate ritirate dal protocollo
 - Quantità totale di GRT che hanno attualmente nel protocollo
-- La data dell'ultima delegazione
+- The date they last delegated
 
-Se volete saperne di più su come diventare Delegator, non cercate oltre! Tutto ciò che dovete fare è andare alla [documentazione ufficiale](/network/delegating) oppure su [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
+If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).
 
 ## La rete
 
-Nella sezione Rete, oltre a trovare i KPI globali, è possibile passare a una base per epoche e analizzare le metriche di rete in modo più dettagliato. Questi dettagli danno un'idea dell'andamento della rete nel tempo.
+In this section, you can see global KPIs, switch to a per-epoch view, and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.
 
 ### Panoramica
 
-The overview section has all the current network metrics as well as some cumulative metrics over time.
Here you can see things like:
+The overview section has all the current network metrics, as well as some cumulative metrics over time:
 
 - L'attuale stake totale della rete
 - La ripartizione dello stake tra gli Indexer e i loro Delegator
@@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat
 - Parametri di protocollo come la ricompensa per la curation, il tasso di inflazione e altro ancora
 - Premi e commissioni dell'epoca attuale
 
-Alcuni dettagli chiave che meritano di essere notati:
+A few key details to note:
 
-- **Le tariffe di query rappresentano le commissioni generate dai consumatori**, e possono essere reclamati (oppure no) dagli Indexer dopo un periodo di almeno 7 epoche (vedi sotto) dopo che le loro allocation verso i subgraph sono state chiuse e i dati che hanno servito sono stati convalidati dai consumatori.
-- **Le ricompense dell'indicizzazione rappresentano la quantità di ricompense che gli Indexer hanno richiesto all'emissione della rete durante l'epoca.** Sebbene l'emissione del protocollo sia fissa, le ricompense vengono coniate solo quando gli Indexer chiudono le loro allocation verso i subgraph che hanno indicizzato. Pertanto, il numero di ricompense per ogni epoca varia (ad esempio, durante alcune epoche, gli Indexer potrebbero aver chiuso collettivamente allocation aperte da molti giorni).
+- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
+- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e., during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
 
 ![Explorer Image 8](/img/Network-Stats.png)
 
@@ -121,29 +147,34 @@ Nella sezione Epoche è possibile analizzare su base epocale metriche come:
 
 - L'epoca attiva è quella in cui gli Indexer stanno allocando le stake e riscuotendo le tariffe di query
 - Le epoche di assestamento sono quelle in cui i canali di stato sono in fase di definizione. Ciò significa che gli Indexer sono soggetti a taglio se i consumatori aprono controversie contro di loro.
 - Le epoche di distribuzione sono le epoche in cui i canali di stato per le epoche vengono regolati e gli Indexer possono richiedere gli sconti sulle tariffe di query.
-  - Le epoche finalizzate sono le epoche in cui gli Indexer non hanno più sconti sulle tariffe di query da richiedere e sono quindi finalizzate.
+  - The finalized epochs are the epochs with no query fee rebates left for the Indexers to claim.
 
 ![Explorer Image 9](/img/Epoch-Stats.png)
 
 ## Il profilo utente
 
-Dopo aver parlato delle statistiche di rete, passiamo al profilo personale. Il vostro profilo personale è il luogo in cui potete vedere la vostra attività, indipendentemente da come state partecipando alla rete. Il vostro wallet fungerà da profilo utente e, grazie alla User Dashboard, sarete in grado di vedere:
+Your personal profile is the place where you can see your network activity, regardless of your role on the network.
Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Panoramica del profilo -Qui si possono vedere le azioni in corso. Qui si trovano anche le informazioni sul profilo, la descrizione e il sito web (se ne avete aggiunto uno). +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![Explorer Image 10](/img/Profile-Overview.png) ### Scheda di subgraph -Se si fa clic sulla scheda Subgraph, si vedranno i subgraph pubblicati. Questi non includono i subgraph distribuiti con la CLI a scopo di test: i subgraph vengono visualizzati solo quando sono pubblicati sulla rete decentralizzata. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Scheda di indicizzazione -Se si fa clic sulla scheda Indicizzazione, si troverà una tabella con tutte le allocation attive e storiche verso i subgraph, oltre a grafici che consentono di analizzare le performance passate come Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Questa sezione include anche i dettagli sui compensi netti degli Indexer e sulle tariffe nette di query. Verranno visualizzate le seguenti metriche: @@ -158,7 +189,9 @@ Questa sezione include anche i dettagli sui compensi netti degli Indexer e sulle ### Scheda di delege -I delegator sono importanti per The Graph Network. Un Delegator deve utilizzare le proprie conoscenze per scegliere un Indexer che fornisca un buon ritorno sulle ricompense. Qui potete trovare i dettagli delle vostre delegazioni attive e storiche, insieme alle metriche degli Indexer verso cui avete delegato. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. Nella prima metà della pagina è possibile vedere il grafico delle deleghe e quello dei sole ricompense. A sinistra, si possono vedere i KPI che riflettono le metriche delle delega attuali. diff --git a/website/pages/it/network/indexing.mdx b/website/pages/it/network/indexing.mdx index 8f09a9794a7e..ce773b182d09 100644 --- a/website/pages/it/network/indexing.mdx +++ b/website/pages/it/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Molte delle dashboard create dalla comunità includono i valori delle ricompense in sospeso, che possono essere facilmente controllate manualmente seguendo questi passaggi: -1. Fare query su [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) per ottenere gli ID di tutte le allocazioni attive: +1. 
Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -112,12 +112,12 @@ Gli Indexer possono differenziarsi applicando tecniche avanzate per prendere dec - **Medio** - Indexer di produzione che supporta 100 subgraph e 200-500 richieste al secondo. - **Grande** - È pronto a indicizzare tutti i subgraph attualmente utilizzati e a servire le richieste per il relativo traffico. -| Setup | Postgres
    (CPUs) | Postgres
    (memoria in GBs) | Postgres
    (disco in TBs) | VMs
    (CPUs) | VMs
    (memoria in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Piccolo | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medio | 16 | 64 | 2 | 32 | 64 | -| Grande | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
    (CPUs) | Postgres
    (memoria in GBs) | Postgres
    (disco in TBs) | VMs
    (CPUs) | VMs
    (memoria in GBs) | +| -------- |:--------------------------:|:------------------------------------:|:----------------------------------:|:---------------------:|:-------------------------------:| +| Piccolo | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medio | 16 | 64 | 2 | 32 | 64 | +| Grande | 72 | 468 | 3.5 | 48 | 184 | ### Quali sono le precauzioni di base per la sicurezza che un Indexer dovrebbe adottare? @@ -149,20 +149,20 @@ Nota: Per supportare una scalabilità agile, si consiglia di separare le attivit #### Graph Node -| Porta | Obiettivo | Routes | Argomento CLI | Variabile d'ambiente | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (per le query di subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (per le sottoscrizioni ai subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (per la gestione dei deployment) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Metriche di Prometheus | /metrics | --metrics-port | - | +| Porta | Obiettivo | Routes | Argomento CLI | Variabile d'ambiente | +| ----- | --------------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (per le query di subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (per le sottoscrizioni ai subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (per la gestione dei deployment) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Metriche di Prometheus | /metrics | --metrics-port | - | #### Servizio Indexer -| Porta | Obiettivo | Routes | Argomento CLI | Variabile d'ambiente | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
    (per le query di subgraph a pagamento) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Metriche di Prometheus | /metrics | --metrics-port | - | +| Porta | Obiettivo | Routes | Argomento CLI | Variabile d'ambiente | +| ----- | --------------------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
    (per le query di subgraph a pagamento) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Metriche di Prometheus | /metrics | --metrics-port | - | #### Indexer Agent @@ -545,7 +545,7 @@ Il **Indexer CLI** si connette all'Indexer Agent, in genere tramite port-forward - `graph indexer rules maybe [options] ` — Impostare il `decisionBasis` per una distribuzione a `rules`, in modo che l' Indexer agent utilizzi le regole di indicizzazione per decidere se indicizzare questa distribuzione. -- `graph indexer actions get [options] ` - Recuperare una o più azioni utilizzando `all` oppure lasciare `action-id` vuoto per ottenere tutte le azioni. Un'argomento aggiuntivo `--status` può essere usato per stampare tutte le azioni di un certo stato. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Azione di allocation della coda diff --git a/website/pages/it/network/overview.mdx b/website/pages/it/network/overview.mdx index 05337a3f3eca..298945cefa41 100644 --- a/website/pages/it/network/overview.mdx +++ b/website/pages/it/network/overview.mdx @@ -2,14 +2,20 @@ title: Panoramica della rete --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Panoramica +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Economia del token](/img/Network-roles@2x.png) -Per garantire la sicurezza economica di The Graph Network e l'integrità dei dati interrogati, i partecipanti puntano e utilizzano i Graph Token ([GRT](/tokenomics)). Il GRT è un work utility token, è ERC-20 utilizzato per fare allocation di risorse nella rete. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/it/new-chain-integration.mdx b/website/pages/it/new-chain-integration.mdx index 35b2bc7c8b4a..534d6701efdb 100644 --- a/website/pages/it/new-chain-integration.mdx +++ b/website/pages/it/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integrating New Networks +title: New Chain Integration --- -Graph Node can currently index data from the following chain types: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via a [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via a [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via an [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -If you are interested in any of those chains, integration is a matter of Graph Node configuration and testing. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. For more information, refer to [Testing an EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Testing an EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Difference between EVM JSON-RPC & Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -While the two are suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](substreams/), like building [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). 
In addition, Firehose allows for improved indexing speeds when compared to JSON-RPC. +### 2. Firehose Integration -New EVM chain integrators may also consider the Firehose-based approach, given the benefits of substreams and its massive parallelized indexing capabilities. Supporting both allows developers to choose between building substreams or subgraphs for the new chain. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **NOTE**: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that eth_calls are [not a good practice for developers](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Testing an EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON RPC methods: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. 
-- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(optionally required for Graph Node to support call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graph Node Configuration +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Start by preparing your local environment** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graph Node Configuration + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON RPC compliant URL - > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ -**Test the integration by locally deploying a subgraph** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Create a simple example subgraph. Some options are below: - 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point - 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. 
It must remain `ethereum` even if the network name is different. -Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integrating a new Firehose-enabled chain +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Create a simple example subgraph. Some options are below: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. -Integrating a new chain is also possible using the Firehose approach. This is currently the best option for non-EVM chains and a requirement for substreams support. Additional documentation focuses on how Firehose works, adding Firehose support for a new chain and integrating it with Graph Node. Recommended docs for integrators: +## Substreams-powered Subgraphs -1. [General docs on Firehose](firehose/) -2. [Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrating Graph Node with a new chain via Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/it/operating-graph-node.mdx b/website/pages/it/operating-graph-node.mdx index b3b3ca24e204..276c3022ec6c 100644 --- a/website/pages/it/operating-graph-node.mdx +++ b/website/pages/it/operating-graph-node.mdx @@ -77,13 +77,13 @@ Un esempio completo di configurazione Kubernetes si trova nel [repository indexe Quando è in funzione, Graph Node espone le seguenti porte: -| Porta | Obiettivo | Routes | Argomento CLI | Variabile d'ambiente | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (per le query di subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (per le sottoscrizioni ai subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (per la gestione dei deployment) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Metriche di Prometheus | /metrics | --metrics-port | - | +| Porta | Obiettivo | Routes | Argomento CLI | Variabile d'ambiente | +| ----- | --------------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (per le query di subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (per le sottoscrizioni ai subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (per la gestione dei deployment) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Metriche di Prometheus | /metrics | --metrics-port | - | > **Importante**: fare attenzione a esporre le porte pubblicamente - le porte di **amministrazione** devono essere tenute sotto chiave. Questo include l'endpoint JSON-RPC del Graph Node. diff --git a/website/pages/it/querying/graphql-api.mdx b/website/pages/it/querying/graphql-api.mdx index 27fb075488ec..4c89c5b31f13 100644 --- a/website/pages/it/querying/graphql-api.mdx +++ b/website/pages/it/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: API GraphQL --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Query +## What is GraphQL? -Nello schema di subgraph si definiscono tipi chiamati `Entities`. Per ogni tipo di `Entity`, un'`entity` e un campo `entities` saranno generati sul tipo `Query` di livello superiore. Si noti che `query` non deve essere inclusa all'inizio della query `graphql` quando si usa The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Esempi @@ -21,7 +29,7 @@ Eseguire query di una singola entità `Token` definita nello schema: } ``` -> **Nota:** Quando si esegue una query per una singola entità, il campo `id` è obbligatorio e deve essere una stringa. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Eseguire query di tutte le entità `Token`: @@ -36,7 +44,10 @@ Eseguire query di tutte le entità `Token`: ### Ordinamento -Quando si esegue query di una collezione, il parametro `orderBy` può essere usato per ordinare in base a un attributo specifico. Inoltre, l'opzione `orderDirection` può essere usata per specificare la direzione dell'ordinamento, `asc` per l'ascendente oppure `desc` per il discendente. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### Esempio @@ -53,7 +64,7 @@ Quando si esegue query di una collezione, il parametro `orderBy` può essere usa A partire da Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) le entità possono essere ordinate sulla base delle entità annidate. -Nell'esempio seguente, ordiniamo i token in base al nome del loro proprietario: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ Nell'esempio seguente, ordiniamo i token in base al nome del loro proprietario: ### Impaginazione -Quando si esegue una query di una collezione, il parametro `first` può essere usato per impaginare dall'inizio della raccolta. Va notato che l'ordinamento predefinito è per ID in ordine alfanumerico crescente, non per ora di creazione. 
- -Inoltre, il parametro `skip` può essere usato per saltare le entità ed impaginare. Ad esempio, `first:100` mostra le prime 100 entità e `first:100, skip:100` mostra le 100 entità successive. +When querying a collection, it's best to: -Le query dovrebbero evitare di usare valori `skip` molto grandi, perché in genere hanno un rendimento scarso. Per recuperare un gran numero di elementi, è molto meglio sfogliare le entità in base a un attributo, come mostrato nell'ultimo esempio. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### Esempio di utilizzo di `first` @@ -106,7 +118,7 @@ Eseguire query di 10 entità `Token`, sfalsate di 10 posizioni rispetto all'iniz #### Esempio di utilizzo di `first` e `id_ge` -Se un client deve recuperare un gran numero di entità, è molto più performante basare le query su un attributo e filtrare in base a tale attributo. Ad esempio, un client potrebbe recuperare un gran numero di token utilizzando questa query: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -La prima volta, si invierebbe la query con `lastID = ""` e per le richieste successive si imposterebbe `lastID` sull'attributo `id` dell'ultima entità della richiesta precedente. Questo approccio è nettamente migliore rispetto all'utilizzo di valori di `skip` crescenti. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtraggio -È possibile utilizzare il parametro `where` nelle query per filtrare diverse proprietà. È possibile filtrare su più valori all'interno del parametro `where`. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Esempio di utilizzo di `where` @@ -155,7 +168,7 @@ Query con esito `failed`: #### Esempio di filtraggio dei blocchi -È anche possibile filtrare le entità in base al metodo `_change_block(number_gte: Int)` - questo filtra le entità che sono state aggiornate nel o dopo il blocco specificato. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. Questo può essere utile se si vuole recuperare solo le entità che sono cambiate, ad esempio dall'ultima volta che è stato effettuato il polling. In alternativa, può essere utile per indagare o fare il debug di come le entità stanno cambiando nel subgraph (se combinato con un filtro di blocco, è possibile isolare solo le entità che sono cambiate in un blocco specifico). 
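+
+As a minimal sketch (reusing the `challenges` entity from the filtering examples purely for illustration, with an arbitrary block number), such a filter could look like this:
+
+```graphql
+{
+  # returns only challenges created or updated in block 100 or later
+  challenges(where: { _change_block: { number_gte: 100 } }) {
+    id
+    outcome
+  }
+}
+```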
@@ -193,7 +206,7 @@ A partire dalla versione Graph Node [`v0.30.0`](https://github.com/graphprotocol ##### Operatore `AND` -Nell'esempio seguente, si filtrano le sfide con `outcome` `succeeded` e `number` maggiore o uguale a `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. ```graphql { @@ -208,7 +221,7 @@ Nell'esempio seguente, si filtrano le sfide con `outcome` `succeeded` e `number` ``` > **Syntactic sugar:** Si può semplificare la query precedente eliminando l'operatore `and` passando una sottoespressione separata da virgole. -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ Nell'esempio seguente, si filtrano le sfide con `outcome` `succeeded` e `number` ##### Operatore `OR` -Nell'esempio seguente, si filtrano le sfide con `outcome` `succeeded` oppure `number` maggiore o uguale a `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) È possibile effetuare query dello stato delle entità non solo per l'ultimo blocco, che è quello predefinito, ma anche per un blocco nel passato. Il blocco in cui deve avvenire una query può essere specificato dal suo numero di blocco o dal suo hash, includendo un argomento `block` nei campi di livello superiore delle query. -Il risultato di una query di questo tipo non cambia nel tempo, cioè la query di un determinato blocco passato restituirà lo stesso risultato indipendentemente dal momento in cui viene eseguita, con l'eccezione che se si fa query di un blocco molto vicino alla testa della catena, il risultato potrebbe cambiare se quel blocco risulta non essere nella catena principale e la catena viene riorganizzata. Una volta che un blocco può essere considerato definitivo, il risultato della query non cambierà. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Si noti che l'attuale implementazione è ancora soggetta ad alcune limitazioni che potrebbero violare queste garanzie. L'implementazione non è sempre in grado di dire che un determinato block hash non è affatto presente nella chain principale, o che il risultato di una query per il block hash per un blocco che non può ancora essere considerato definitivo potrebbe essere influenzato da una riorganizzazione di blocco in corso contemporaneamente alla query. Non influiscono sui risultati delle query in base all'block hash quando il blocco è definitivo e si sa che si trova nella chain principale. [Qui](https://github.com/graphprotocol/graph-node/issues/1405) è spiegato in dettaglio quali sono queste limitazioni. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. 
They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. #### Esempio @@ -322,12 +335,12 @@ Le query di ricerca fulltext hanno un campo obbligatorio, `text`, per fornire i Operatori di ricerca fulltext: -| Simbolo | Operatore | Descrizione | -| --- | --- | --- | -| `&` | `And` | Per combinare più termini di ricerca in un filtro per le entità che includono tutti i termini forniti | -| | | `Or` | Le query con più termini di ricerca separati dall'operatore Or restituiranno tutte le entità con una corrispondenza tra i termini forniti | -| `<->` | `Follow by` | Specifica la distanza tra due parole. | -| `:*` | `Prefisso` | Utilizzare il termine di ricerca del prefisso per trovare le parole il cui prefisso corrisponde (sono richiesti 2 caratteri.) | +| Simbolo | Operatore | Descrizione | +| ----------- | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------- | +| `&` | `And` | Per combinare più termini di ricerca in un filtro per le entità che includono tutti i termini forniti | +| | | `Or` | Le query con più termini di ricerca separati dall'operatore Or restituiranno tutte le entità con una corrispondenza tra i termini forniti | +| `<->` | `Follow by` | Specifica la distanza tra due parole. | +| `:*` | `Prefisso` | Utilizzare il termine di ricerca del prefisso per trovare le parole il cui prefisso corrisponde (sono richiesti 2 caratteri.) | #### Esempi @@ -376,11 +389,11 @@ Graph Node implementa la validazione [basata sulle specifiche](https://spec.grap ## Schema -Lo schema dell'origine di dati-- cioè i tipi di entità, i valori e le relazioni disponibili per le query -- sono definiti attraverso [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -Gli schemi GraphQL in genere definiscono i tipi di radice per le `query`, le `sottoscrizioni` e le `mutazioni`. The Graph supporta solo le `query`. Il tipo di `Query` principale per il subgraph viene generato automaticamente dallo schema GraphQL incluso nel manifest del subgraph. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Nota:** La nostra API non espone mutazioni perché gli sviluppatori devono emettere transazioni direttamente contro la blockchain sottostante dalle loro applicazioni. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. 
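+
+As a rough, illustrative sketch (the `Token` entity and its fields below are hypothetical, not taken from any particular subgraph), an entity type defined in the schema like this:
+
+```graphql
+type Token @entity {
+  id: ID!
+  owner: Bytes!
+}
+
+# conceptually results in auto-generated fields on the root Query type, roughly:
+# token(id: ID!): Token
+# tokens(first: Int, skip: Int, where: Token_filter): [Token!]!
+```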
### Entità diff --git a/website/pages/it/querying/querying-best-practices.mdx b/website/pages/it/querying/querying-best-practices.mdx index 169d9258b397..9029797adfac 100644 --- a/website/pages/it/querying/querying-best-practices.mdx +++ b/website/pages/it/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: Querying Best Practices --- -The Graph fornisce un modo decentralizzato per effettuare query dei dati delle blockchain. +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. -I dati di The Graph network sono esposti attraverso un API GraphQL, che facilita query dei dati con il linguaggio GraphQL. - -Questa pagina vi guiderà attraverso le regole essenziali del linguaggio GraphQL e le best practice delle query in GraphQL. +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQL è un linguaggio e una serie di convenzioni che si trasportano su HTTP. Significa che è possibile effettuare query di un API GraphQL utilizzando lo standard `fetch` (in modo nativo o tramite `@whatwg-node/fetch` or `isomorphic-fetch`). -Tuttavia, come indicato in ["Eseguire una query da un'applicazione"](/querying/querying-from-an-application), si consiglia di utilizzare il nostro `graph-client` che supporta caratteristiche uniche come: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - Gestione dei subgraph a cross-chain: effettuare query di più subgraph in un'unica query - [Tracciamento automatico dei blocchi](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() Altre alternative di client GraphQL sono trattate in ["Eseguire una query da un'applicazione"](/querying/querying-from-an-application). -Ora che abbiamo trattato le regole di base della sintassi delle query GraphQL, esaminiamo le best practices di scrittura delle query GraphQL. - --- ## Best Practices @@ -164,11 +160,11 @@ Questo comporta **molti vantaggi**: - **Le variabili possono essere messe in cache** a livello di server - **Le query possono essere analizzate staticamente dagli strumenti** (maggiori informazioni nelle sezioni successive) -**Nota: come includere i campi in modo condizionato nelle query statiche** +### How to include fields conditionally in static queries -Si potrebbe voler includere il campo `owner` solo in una condizione particolare. +You might want to include the `owner` field only on a particular condition. -Per questo possiamo sfruttare la direttiva `@include(if:...)` come segue: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Nota: La direttiva opposta è `@skip(if: ...)`. +> Nota: La direttiva opposta è `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL è diventato famoso per il suo slogan "Chiedi quello che vuoi". Per questo motivo, non c'è modo, in GraphQL, di ottenere tutti i campi disponibili senza doverli elencare singolarmente. -Quando si interrogano le GraphQL API, si deve sempre pensare di effettuare query di solo i campi che verranno effettivamente utilizzati. - -Una causa comune di over-fetching sono le collezioni di entità. 
Per impostazione predefinita, le query recuperano 100 entità in un collezione, che di solito sono molte di più di quelle effettivamente utilizzate, ad esempio per la visualizzazione all'utente. Le query dovrebbero quindi essere impostate quasi sempre in modo esplicito e assicurarsi di recuperare solo il numero di entità di cui hanno effettivamente bisogno. Questo vale non solo per le collezioni di primo livello in una query, ma ancora di più per le collezioni di entità annidate. +- Quando si interrogano le GraphQL API, si deve sempre pensare di effettuare query di solo i campi che verranno effettivamente utilizzati. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. Ad esempio, nella seguente query: @@ -337,8 +332,8 @@ query { Tali campi ripetuti (`id`, `active`, `status`) comportano molti problemi: -- più difficile da leggere per le query più estese -- quando si usano strumenti che generano tipi TypeScript basati su query (_per saperne di più nell'ultima sezione_), `newDelegate` e `oldDelegate` risulteranno in due interfacce inline distinte. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. Una versione riadattata della query sarebbe la seguente: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -L'uso di GraphQL `fragment` migliorerà la leggibilità (soprattutto in scala), ma anche la generazione di tipi TypeScript. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. Quando si usa lo strumento di generazione dei tipi, la query di cui sopra genererà un tipo `DelegateItemFragment` corretto (_vedi l'ultima sezione "Strumenti"_). ### I frammenti GraphQL da fare e da non fare -**La base del frammento deve essere un tipo** +### La base del frammento deve essere un tipo Un frammento non può essere basato su un tipo non applicabile, in breve, **su un tipo che non ha campi**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` è un **scalare** (tipo nativo "semplice") che non può essere usato come base di un frammento. -**Come diffondere un frammento** +#### Come diffondere un frammento I frammenti sono definiti su tipi specifici e devono essere usati di conseguenza nelle query. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { Non è possibile diffondere un frammento di tipo `Vote` qui. -**Definire il frammento come unità aziendale atomica di dati** +#### Definire il frammento come unità aziendale atomica di dati -I Fragment GraphQL devono essere definiti in base al loro utilizzo. +GraphQL `Fragment`s must be defined based on their usage. Per la maggior parte dei casi d'uso, è sufficiente definire un fragment per tipo (nel caso di utilizzo di campi ripetuti o di generazione di tipi). -Ecco una regola empirica per l'utilizzo di Fragment: +Here is a rule of thumb for using fragments: -- quando i campi dello stesso tipo si ripetono in una query, raggrupparli in un Fragment -- quando si ripetono campi simili ma non uguali, creare più fragment, ad esempio: +- When fields of the same type are repeated in a query, group them in a `Fragment`. 
+- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## Gli strumenti essenziali +## The Essential Tools ### Esploratori web GraphQL @@ -473,11 +468,11 @@ Questo vi permetterà di **cogliere gli errori senza nemmeno testare le query** [L'estensione GraphQL VSCode](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) è un'eccellente aggiunta al vostro flusso di lavoro di sviluppo: -- evidenziazione della sintassi -- suggerimenti per il completamento automatico -- validazione rispetto allo schema -- frammenti -- vai alla definizione dei frammenti e dei tipi dell'input +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types Se si utilizza `graphql-eslint`, [l'estensione ESLint VSCode](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) è indispensabile per visualizzare correttamente gli errori e gli avvertimenti inseriti nel codice. @@ -485,9 +480,9 @@ Se si utilizza `graphql-eslint`, [l'estensione ESLint VSCode](https://marketplac [Il plugin JS GraphQL](https://plugins.jetbrains.com/plugin/8097-graphql/) migliorerà significativamente l'esperienza di lavoro con GraphQL fornendo: -- evidenziazione della sintassi -- suggerimenti per il completamento automatico -- validazione rispetto allo schema -- frammenti +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -Maggiori informazioni in questo [articolo di WebStorm](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) che illustra tutte le caratteristiche principali del plugin. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/it/quick-start.mdx b/website/pages/it/quick-start.mdx index cba2247457b8..9560a1389911 100644 --- a/website/pages/it/quick-start.mdx +++ b/website/pages/it/quick-start.mdx @@ -2,24 +2,18 @@ title: Quick Start --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Ensure that your subgraph will be indexing data from a [supported network](/developing/supported-networks). - -This guide is written assuming that you have: +## Prerequisites for this guide - A crypto wallet -- A smart contract address on the network of your choice - -## 1. Create a subgraph on Subgraph Studio - -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Install the Graph CLI +### 1. Install the Graph CLI -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. 
Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. On your local machine, run one of the following commands: @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -When you initialize your subgraph, the CLI tool will ask you for the following information: +When you initialize your subgraph, the CLI will ask you for the following information: -- Protocol: choose the protocol your subgraph will be indexing data from -- Subgraph slug: create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- Directory to create the subgraph in: choose your local directory -- Ethereum network(optional): you may need to specify which EVM-compatible network your subgraph will be indexing data from -- Contract address: Locate the smart contract address you’d like to query data from -- ABI: If the ABI is not autopopulated, you will need to input it manually as a JSON file -- Start Block: it is suggested that you input the start block to save time while your subgraph indexes blockchain data. You can locate the start block by finding the block where your contract was deployed. -- Contract Name: input the name of your contract -- Index contract events as entities: it is suggested that you set this to true as it will automatically add mappings to your subgraph for every emitted event -- Add another contract(optional): you can add another contract +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. 
+- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. See the following screenshot for an example for what to expect when initializing your subgraph: ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -The previous commands create a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -Once your subgraph is written, run the following commands: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. Once your subgraph is written, run the following commands: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. Test your subgraph - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -The logs will tell you if there are any errors with your subgraph. The logs of an operational subgraph will look like this: - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. 
Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -To save on gas costs, you can curate your subgraph in the same transaction that you published it by selecting this button when you publish your subgraph to The Graph’s decentralized network: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. 
A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -Now, you can query your subgraph by sending GraphQL queries to your subgraph’s Query URL, which you can find by clicking on the query button. +### 8. Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/it/release-notes/assemblyscript-migration-guide.mdx b/website/pages/it/release-notes/assemblyscript-migration-guide.mdx index b6bd7ecc38d2..64cbf23decf7 100644 --- a/website/pages/it/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/it/release-notes/assemblyscript-migration-guide.mdx @@ -59,7 +59,7 @@ dataSources: 2. Aggiornare il `graph-cli` in uso alla versione `ultima` eseguendo: ```bash -# se è installato globalmente +# se è installato globalmente npm install --global @graphprotocol/graph-cli@latest # o nel proprio subgraph, se è una dipendenza di dev @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - È necessario rinominare le variabili duplicate se si dispone di un'ombreggiatura delle variabili. - ### Confronti nulli - Eseguendo l'aggiornamento sul subgraph, a volte si possono ottenere errori come questi: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - Per risolvere il problema è sufficiente modificare l'istruzione `if` in qualcosa di simile: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? 
container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - Per risolvere questo problema, si può creare una variabile per l'accesso alla proprietà, in modo che il compilatore possa fare la magia del controllo di annullabilità: ```typescript diff --git a/website/pages/it/sps/introduction.mdx b/website/pages/it/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/it/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. + +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/it/sps/triggers-example.mdx b/website/pages/it/sps/triggers-example.mdx new file mode 100644 index 000000000000..8e4f96eba14a --- /dev/null +++ b/website/pages/it/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## Prerequisites + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. 
Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into subgraph entities:

+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+  const input: protoEvents = Protobuf.decode<protoEvents>(bytes, protoEvents.decode)
+
+  for (let i = 0; i < input.data.length; i++) {
+    const event = input.data[i]
+
+    if (event.transfer != null) {
+      let entity_id: string = `${event.txnId}-${i}`
+      const entity = new MyTransfer(entity_id)
+      entity.amount = event.transfer!.instruction!.amount.toString()
+      entity.source = event.transfer!.accounts!.source
+      entity.designation = event.transfer!.accounts!.destination
+
+      if (event.transfer!.accounts!.signer!.single != null) {
+        entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+      } else if (event.transfer!.accounts!.signer!.multisig != null) {
+        entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+      }
+      entity.save()
+    }
+  }
+}
+```
+
+## Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+
+## Conclusion
+
+You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/pages/it/sps/triggers.mdx b/website/pages/it/sps/triggers.mdx
new file mode 100644
index 000000000000..ed19635d4768
--- /dev/null
+++ b/website/pages/it/sps/triggers.mdx
@@ -0,0 +1,37 @@
+---
+title: Substreams Triggers
+---
+
+Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework.
+
+> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container.
+
+The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+  let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+  if (transactions.length == 0) {
+    log.info('No transactions found', [])
+    return
+  }
+
+  for (let i = 0; i < transactions.length; i++) {
+    // 2.
+    let transaction = transactions[i]
+
+    let entity = new Transaction(transaction.hash) // 3.
+    entity.from = transaction.from
+    entity.to = transaction.to
+    entity.save()
+  }
+}
+```
+
+Here's what you’re seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object 2.
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/it/substreams.mdx b/website/pages/it/substreams.mdx index 710e110012cc..a838a6924e2f 100644 --- a/website/pages/it/substreams.mdx +++ b/website/pages/it/substreams.mdx @@ -4,9 +4,11 @@ title: Substreams ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## How Substreams Works in 4 Steps @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/it/sunrise.mdx b/website/pages/it/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/it/sunrise.mdx +++ b/website/pages/it/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. 
Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? 
-Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? 
- -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. 
The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/it/supported-network-requirements.mdx b/website/pages/it/supported-network-requirements.mdx index 7eed955d1013..f2861cb89a4c 100644 --- a/website/pages/it/supported-network-requirements.mdx +++ b/website/pages/it/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| La rete | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| La rete | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVMe preferred)<br />
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/it/tap.mdx b/website/pages/it/tap.mdx new file mode 100644 index 000000000000..891f55e17dbf --- /dev/null +++ b/website/pages/it/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## Panoramica + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
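
To make the flow above more concrete, here is a small, purely illustrative TypeScript sketch of the bookkeeping that `tap-agent` is described as doing: signed receipts accumulate per allocation, and an aggregation into a RAV is requested before the value of unaggregated receipts exceeds the configured amount you are willing to lose. The `Receipt`, `Rav`, and `ReceiptTracker` names are invented for this example and are not part of the actual `tap-agent` codebase.

```typescript
// Purely illustrative sketch, not the actual tap-agent implementation.
// It models the bookkeeping described above: signed receipts accumulate per
// allocation, and an aggregation into a RAV is requested before the value of
// unaggregated receipts exceeds the configured "amount willing to lose".

interface Receipt {
  allocationId: string
  valueGrt: number // value of one signed query receipt, in GRT
}

interface Rav {
  allocationId: string
  totalValueGrt: number // aggregated value; grows every time a new RAV is issued
}

class ReceiptTracker {
  private pendingReceipts = new Map<string, Receipt[]>()
  private latestRavs = new Map<string, Rav>()

  constructor(private maxAmountWillingToLoseGrt: number) {}

  // Store a receipt and request aggregation once the unaggregated value
  // reaches the configured limit.
  addReceipt(receipt: Receipt): void {
    const pending = this.pendingReceipts.get(receipt.allocationId) ?? []
    pending.push(receipt)
    this.pendingReceipts.set(receipt.allocationId, pending)

    const unaggregated = pending.reduce((sum, r) => sum + r.valueGrt, 0)
    if (unaggregated >= this.maxAmountWillingToLoseGrt) {
      this.requestAggregation(receipt.allocationId)
    }
  }

  // Fold all pending receipts into a new RAV with an increased total value,
  // then clear the pending receipts for that allocation.
  private requestAggregation(allocationId: string): void {
    const pending = this.pendingReceipts.get(allocationId) ?? []
    const previousTotal = this.latestRavs.get(allocationId)?.totalValueGrt ?? 0
    const addedValue = pending.reduce((sum, r) => sum + r.valueGrt, 0)

    this.latestRavs.set(allocationId, {
      allocationId,
      totalValueGrt: previousTotal + addedValue,
    })
    this.pendingReceipts.set(allocationId, [])
  }
}
```

In the real system, receipts and RAVs are signed messages that can be verified on-chain; the sketch only illustrates the thresholding idea.
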
+ +## Blockchain Addresses + +### Contracts + +| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| ------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | +| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | +| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | + +### Gateway + +| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| ---------- | --------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | + +### Requirements + +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. + +- [Graph TAP Aribtrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) + +> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployement. As a result, you have to index it manually. + +## Migration Guide + +### Software versions + +| Component | Version | Image Link | +| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- | +| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) | +| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) | +| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) | + +### Steps + +1. **Indexer Agent** + + - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). + - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + +2. **Indexer Service** + + - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless. + +3. **TAP Agent** + + - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + +4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Notes: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/ja/about.mdx b/website/pages/ja/about.mdx index 564163774630..db8864a69b0f 100644 --- a/website/pages/ja/about.mdx +++ b/website/pages/ja/about.mdx @@ -2,46 +2,66 @@ title: The Graphについて --- -このページでは、「The Graph」とは何か、どのようにして始めるのかを説明します。 - ## とは「ザ・グラフ」 -グラフは、ブロックチェーン データのインデックス作成とクエリを行うための分散型プロトコルです。 +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -[Uniswap](https://uniswap.org/)のような複雑なスマートコントラクトを持つプロジェクトや、[Bored Ape Yacht Club](https://boredapeyachtclub.com/) のような NFT の取り組みでは、Ethereum のブロックチェーンにデータを保存しているため、基本的なデータ以外をブロックチェーンから直接読み取ることは実に困難です。 +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -独自のサーバーを構築し、そこでトランザクションを処理してデータベースに保存し、その上に API エンドポイントを構築してデータをクエリすることもできます。ただし、このオプションは[リソース集約的](/network/benefits/)であり、メンテナンスが必要であり、単一障害点が存在し、分散化に必要な重要なセキュリティ プロパティが壊れます。 +### How The Graph Functions -**ブロックチェーンデータのインデックス作成は非常に困難です。** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. 
-## The Graph の仕組み +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graph は、サブグラフマニフェストと呼ばれるサブグラフ記述に基づいて、Ethereum のデータに何をどのようにインデックスするかを学習します。 サブグラフマニフェストは、そのサブグラフで注目すべきスマートコントラクト、注目すべきコントラクト内のイベント、イベントデータと The Graph がデータベースに格納するデータとのマッピング方法などを定義します。 +- When creating a subgraph, you need to write a subgraph manifest. -`サブグラフのマニフェスト`を書いたら、グラフの CLI を使ってその定義を IPFS に保存し、インデクサーにそのサブグラフのデータのインデックス作成を開始するように指示します。 +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -この図では、サブグラフ・マニフェストがデプロイされた後のデータの流れについて、Ethereum のトランザクションを扱って詳しく説明しています。 +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![グラフがグラフ ノードを使用してデータ コンシューマーにクエリを提供する方法を説明する図](/img/graph-dataflow.png) フローは以下のステップに従います。 -1. Dapp は、スマート コントラクトのトランザクションを通じて Ethereum にデータを追加します。 -2. スマートコントラクトは、トランザクションの処理中に 1 つまたは複数のイベントを発行します。 -3. Graph Node は、Ethereum の新しいブロックと、それに含まれる自分のサブグラフのデータを継続的にスキャンします。 -4. Graph Node は、これらのブロックの中からあなたのサブグラフの Ethereum イベントを見つけ出し、あなたが提供したマッピングハンドラーを実行します。 マッピングとは、イーサリアムのイベントに対応して Graph Node が保存するデータエンティティを作成または更新する WASM モジュールのことです。 -5. Dapp は、ノードの [GraphQL エンドポイント](https://graphql.org/learn/) を使用して、ブロックチェーンからインデックス付けされたデータをグラフ ノードに照会します。グラフ ノードは、ストアのインデックス作成機能を利用して、このデータを取得するために、GraphQL クエリを基盤となるデータ ストアのクエリに変換します。 dapp は、このデータをエンドユーザー向けの豊富な UI に表示し、エンドユーザーはそれを使用して Ethereum で新しいトランザクションを発行します。サイクルが繰り返されます。 +1. Dapp は、スマート コントラクトのトランザクションを通じて Ethereum にデータを追加します。 +2. スマートコントラクトは、トランザクションの処理中に 1 つまたは複数のイベントを発行します。 +3. Graph Node は、Ethereum の新しいブロックと、それに含まれる自分のサブグラフのデータを継続的にスキャンします。 +4. Graph Node は、これらのブロックの中からあなたのサブグラフの Ethereum イベントを見つけ出し、あなたが提供したマッピングハンドラーを実行します。 マッピングとは、イーサリアムのイベントに対応して Graph Node が保存するデータエンティティを作成または更新する WASM モジュールのことです。 +5. Dapp は、ノードの [GraphQL エンドポイント](https://graphql.org/learn/) を使用して、ブロックチェーンからインデックス付けされたデータをグラフ ノードに照会します。グラフ ノードは、ストアのインデックス作成機能を利用して、このデータを取得するために、GraphQL クエリを基盤となるデータ ストアのクエリに変換します。 dapp は、このデータをエンドユーザー向けの豊富な UI に表示し、エンドユーザーはそれを使用して Ethereum で新しいトランザクションを発行します。サイクルが繰り返されます。 ## 次のステップ -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/ja/arbitrum/arbitrum-faq.mdx b/website/pages/ja/arbitrum/arbitrum-faq.mdx index 0cc4f7829ce1..8b416bd7cfe2 100644 --- a/website/pages/ja/arbitrum/arbitrum-faq.mdx +++ b/website/pages/ja/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum FAQ Arbitrum Billing FAQ にスキップしたい場合は[here](#billing-on-arbitrum-faqs) をクリックしてください。 -## The GraphがL2ソリューションを導入する理由は? +## Why did The Graph implement an L2 Solution? 
-The GraphをL2でスケールさせることで、ネットワーク参加者は以下を期待できます: +By scaling The Graph on L2, network participants can now benefit from: - ガス料金を 26 倍以上節約 @@ -14,7 +14,7 @@ The GraphをL2でスケールさせることで、ネットワーク参加者は - イーサリアムから継承したセキュリティ -プロトコル スマート コントラクトを L2 に拡張すると、ネットワーク参加者はガス料金を削減しながら、より頻繁に対話できるようになります。たとえば、インデクサーは割り当てを開いたり閉じたりして、より多くのサブグラフにインデックスをより頻繁に付けることができ、開発者はサブグラフのデプロイと更新をより簡単に行うことができ、委任者はより高い頻度で GRT を委任でき、キュレーターはより多くのサブグラフにシグナルを追加または削除できます。サブグラフ – 以前は、ガスのために頻繁に実行するにはコストが高すぎると考えられていたアクション。 +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. Graph コミュニティは、[GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) の議論の結果を受けて、昨年 Arbitrum を進めることを決定しました。 @@ -41,27 +41,21 @@ L2でのThe Graphの活用には、このドロップダウンスイッチャー ## サブグラフ開発者、データ消費者、インデクサー、キュレーター、デリゲーターは何をする必要がありますか? -直ちに対応する必要はありませんが、ネットワーク参加者は L2 の利点を活用するために Arbitrum への移行を開始することをお勧めします。 +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -コア開発者チームは、委任、キュレーション、およびサブグラフを Arbitrum に移行するのを大幅に容易にする L2 転送ツールの作成に取り組んでいます。ネットワーク参加者は、2023 年の夏までに L2 転送ツールが利用可能になることを期待できます。 +All indexing rewards are now entirely on Arbitrum. -2023 年 4 月 10 日の時点で、すべてのインデックス作成報酬の 5% が Arbitrum で鋳造されています。ネットワークへの参加が増加し、評議会がそれを承認すると、インデックス作成の報酬はイーサリアムからアービトラムに徐々に移行し、最終的には完全にアービトラムに移行します。 - -## L2でのネットワークに参加したい場合は、どうすればいいのでしょうか? - -L2 の [test the network](https://testnet.thegraph.com/explorer) にご協力いただき、[Discord](https://discord.gg/graphprotocol) でのエクスペリエンスに関するフィードバックを報告してください。 - -## L2へのネットワーク拡張に伴うリスクはありますか? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). すべてが徹底的にテストされており、安全かつシームレスな移行を保証するための緊急時対応計画が整備されています。詳細は[here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20)をご覧ください。 -## イーサリアムの既存のサブグラフは引き続き使えるのでしょうか? +## Are existing subgraphs on Ethereum working? -はい、グラフネットワークのコントラクトは、後日Arbitrumに完全に移行するまでは、EthereumとArbitrumの両方で並行して運用される予定です。 +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## GRTはArbitrumに新しいスマートコントラクトをデプロイするのでしょうか? +## Does GRT have a new smart contract deployed on Arbitrum? はい、GRT には追加の [Arbitrum 上のスマート コントラクト](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) があります。ただし、イーサリアムのメインネット [GRT 契約](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) は引き続き運用されます。 diff --git a/website/pages/ja/billing.mdx b/website/pages/ja/billing.mdx index c41bdbafe2e5..a8c32eefdd56 100644 --- a/website/pages/ja/billing.mdx +++ b/website/pages/ja/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. 
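
On either plan, queries are made the same way: through the gateway, with an API key created in Subgraph Studio, and it is this metered API-key usage that draws down your billing balance. As a rough, non-authoritative sketch (copy the exact query URL and API key for your subgraph from Subgraph Studio; the endpoint below is only a placeholder), a query might look like this:

```typescript
// Minimal sketch of a query against a published subgraph.
// The URL is a placeholder; use the query URL shown for your subgraph in
// Subgraph Studio. Queries made with your API key are what consume the
// billing balance described on this page.

const QUERY_URL =
  'https://gateway.thegraph.com/api/<YOUR_API_KEY>/subgraphs/id/<SUBGRAPH_ID>'

async function querySubgraph(): Promise<void> {
  const response = await fetch(QUERY_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // Any valid GraphQL query for the target subgraph works here; _meta is
    // exposed on every subgraph and returns the latest indexed block.
    body: JSON.stringify({ query: '{ _meta { block { number } } }' }),
  })

  const { data, errors } = await response.json()
  if (errors) {
    console.error('Query failed:', errors)
    return
  }
  console.log('Latest indexed block:', data._meta.block.number)
}

querySubgraph()
```
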
## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. ページ右上の「Connect Wallet」をクリックします。ウォレット選択ページに遷移します。ウォレットを選択し、「Connect」をクリックします。 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. 
- Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ BinanceでETHを入手する詳細については、[こちら](https://www.bina ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/ja/chain-integration-overview.mdx b/website/pages/ja/chain-integration-overview.mdx index 7d37bf9c9393..2a72e99283a4 100644 --- a/website/pages/ja/chain-integration-overview.mdx +++ b/website/pages/ja/chain-integration-overview.mdx @@ -6,12 +6,12 @@ title: チェーン統合プロセスの概要 ## ステージ 1. 技術的統合 -- チームは、非 EVM ベースのチェーン用の Graph Node 統合と Firehose に取り組んでいます。 [Here's how](/new-chain-integration/)。 +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - チームは、プロトコルの統合プロセスを開始するために、[here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71)のフォーラムスレッドを作成します(Governance & GIPsの下にあるNew Data Sourcesのサブカテゴリ内)。デフォルトのフォーラムテンプレートの使用が必須です。 ## ステージ 2. 統合の検証 -- チームは、コア開発者、Graph Foundation、および [Subgraph Studio](https://thegraph.com/studio/) のようなGUIやネットワークゲートウェイのオペレーターと協力して、スムーズな統合プロセスを確保しています。これには、統合するチェーンのJSON RPCやFirehoseエンドポイントなどの必要なバックエンドインフラストラクチャを提供することが含まれます。このようなインフラストラクチャをセルフホスティングしたくないチームは、The Graphのノードオペレーター(インデクサー)のコミュニティを活用して、それを行うことができます。これに関しては、Foundationがサポートを提供できます。 +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph Indexer は、The Graph のテストネットで統合をテストします。 - コア開発者とインデクサーは、安定性、パフォーマンス、およびデータの決定性を監視します。 @@ -38,7 +38,7 @@ The Graph Network の未来を形作る準備はできていますか? 
[Start yo これは、サブストリームで動作するサブグラフに対するインデックスリワードのプロトコルサポートに影響を与えるものです。新しいFirehoseの実装は、このGIPのステージ2に概説されている方法論に従って、テストネットでテストされる必要があります。同様に、実装がパフォーマンスが良く信頼性があると仮定して、[Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md)へのPR(「Substreamsデータソース」サブグラフ機能)が必要です。また、インデックスリワードのプロトコルサポートに関する新しいGIPも必要です。誰でもPRとGIPを作成できますが、Foundationは評議会の承認をサポートします。 -### 3. このプロセスにはどのくらい時間がかかりますか? +### 3. How much time will the process of reaching full protocol support take? メインネットへの移行にかかる時間は、統合開発の進捗によるもの、追加の調査が必要かどうか、テストとバグ修正、そして常にコミュニティのフィードバックを必要とするガバナンスプロセスのタイミングに応じて異なりますが、数週間を予想しています。 @@ -46,4 +46,4 @@ The Graph Network の未来を形作る準備はできていますか? [Start yo ### 4. 優先順位はどのように扱われますか? -3.と同様、全体的な準備状況や関係者の帯域幅によります。例えば、Firehoseを導入したばかりの新しいチェーンは、すでにテスト済みの統合や、ガバナンスプロセスが進んでいる統合よりも時間がかかるかもしれません。これは特に、以前[ホスティングサービス](https://thegraph.com/hosted-service)でサポートされていたチェーンや、すでにテスト済みのスタックに依存しているチェーンに当てはまります。 +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/ja/cookbook/arweave.mdx b/website/pages/ja/cookbook/arweave.mdx index 8eec2a73f453..06623a6ecbae 100644 --- a/website/pages/ja/cookbook/arweave.mdx +++ b/website/pages/ja/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Arweaveデータソースは 2 種類のハンドラーをサポートしてい イベントを処理するハンドラは、[AssemblyScript](https://www.assemblyscript.org/) で記述されています。 -Arweaveのインデックス作成は、[AssemblyScript API](/developing/assemblyscript-api/)にArweave固有のデータ型を導入しています。 +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/ja/cookbook/base-testnet.mdx b/website/pages/ja/cookbook/base-testnet.mdx index be8f071f5c51..1a1673cfde71 100644 --- a/website/pages/ja/cookbook/base-testnet.mdx +++ b/website/pages/ja/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ graph init --studio The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- スキーマ (schema.graphql) - GraphQL スキーマは、サブグラフから取得するデータを定義します. - AssemblyScript Mappings (mapping.ts) - データソースからのデータを、スキーマで定義されたエンティティに変換するコードです。 -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). 
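
As a purely illustrative sketch, extending the mappings for an additional event might look like the handler below. The `Transfer` event class, the `TransferEntity` type, the contract name, and the entity fields are hypothetical; in a real project they are generated by `graph codegen` from your own ABI and `schema.graphql`.

```typescript
// Hypothetical example of extending the mappings with an extra event handler.
// The imports assume types generated by `graph codegen`; adjust the names to
// match your contract ABI and schema.
import { Transfer } from '../generated/MyContract/MyContract'
import { TransferEntity } from '../generated/schema'

export function handleTransfer(event: Transfer): void {
  // Use the transaction hash plus log index as a unique entity ID.
  let id = event.transaction.hash.toHex() + '-' + event.logIndex.toString()
  let entity = new TransferEntity(id)
  entity.from = event.params.from
  entity.to = event.params.to
  entity.value = event.params.value
  entity.blockNumber = event.block.number
  entity.save()
}
```

You would also register a matching `eventHandlers` entry for this event in `subgraph.yaml` and add the corresponding entity to your schema.
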
diff --git a/website/pages/ja/cookbook/cosmos.mdx b/website/pages/ja/cookbook/cosmos.mdx index dbf616aa4d58..000c371e5c57 100644 --- a/website/pages/ja/cookbook/cosmos.mdx +++ b/website/pages/ja/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and イベントを処理するためのハンドラは[AssemblyScript](https://www.assemblyscript.org/)で書かれています。 -Cosmosインデックスでは、Cosmos特有のデータ型を[AssemblyScript API](/developing/assemblyscript-api/)に導入しています。 +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/ja/cookbook/grafting.mdx b/website/pages/ja/cookbook/grafting.mdx index c0febe43753c..2df8229bce59 100644 --- a/website/pages/ja/cookbook/grafting.mdx +++ b/website/pages/ja/cookbook/grafting.mdx @@ -22,7 +22,7 @@ title: グラフティングでコントラクトを取り替え、履歴を残 - [Grafting](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -このチュートリアルでは、基本的なユースケースについて説明します。既存の契約を同一の契約に置き換えます(新しい住所ですが、コードは同じです)。次に、新しいコントラクトを追跡する「ベース」サブグラフに既存のサブグラフを移植します +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## ネットワークにアップグレードする際の移植に関する重要な注意事項 @@ -30,7 +30,7 @@ title: グラフティングでコントラクトを取り替え、履歴を残 ### 何でこれが大切ですか? -グラフティングは、既存のサブグラフから新しいバージョンに歴史的なデータを効果的に転送することを可能にする、強力な機能です。これはデータを保存し、インデックス作業に時間を節約する効果的な方法ですが、ホスト環境から分散ネットワークに移行する際に複雑さや潜在的な問題を導入する可能性があります。The Graph NetworkからホストされたサービスやSubgraph Studioにサブグラフを戻すことはできません。 +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### ベストプラクティス @@ -80,7 +80,7 @@ dataSources: ``` - `Lock`データソースは、コンパイルとデプロイ時に取得するアビとコントラクトのアドレスです。 -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - `mapping`セクションでは、関心のあるトリガーと、それらのトリガーに応答して実行されるべき関数を定義しています。この場合、`Withdrawal`イベントをリスニングし、それが発信されたときに`handleWithdrawal`関数を呼び出すことにしています。 ## グラフティングマニフェストの定義 @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. 
## その他のリソース -もっとグラフティングを体験したい方に、人気のあるコントラクトの例をご紹介します: +If you want more experience with grafting, here are a few examples for popular contracts: - [曲線](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/ja/cookbook/near.mdx b/website/pages/ja/cookbook/near.mdx index 91dc4d55e87d..e8847ef47f93 100644 --- a/website/pages/ja/cookbook/near.mdx +++ b/website/pages/ja/cookbook/near.mdx @@ -37,7 +37,7 @@ NEAR サブグラフの開発には、バージョン`0.23.0`以上の`graph-cli **schema.graphql:**: サブグラフのためにどのようなデータが保存されているか、そして GraphQL を介してどのようにクエリを行うかを定義するスキーマファイル。NEAR サブグラフの要件は、[既存のドキュメント](/developing/creating-a-subgraph#the-graphql-schema)でカバーされています。 -**AssemblyScript Mappings:**: [AssemblyScript code](/developing/assemblyscript-api)は、イベントデータから、スキーマで定義されたエンティティに変換するコードです。NEAR サポートでは、NEAR 固有のデータタイプと、新しい JSON パース機能が導入されています。 +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. サブグラフの開発には 2 つの重要なコマンドがあります: @@ -98,7 +98,7 @@ NEAR データソースは 2 種類のハンドラーをサポートしていま イベントを処理するためのハンドラは[AssemblyScript](https://www.assemblyscript.org/)で書かれています。 -NEAR インデックスは、[AssemblyScript API](/developing/assemblyscript-api)に NEAR 固有のデータタイプを導入します。 +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ class ReceiptWithOutcome { - ブロックハンドラーは、`Block`を受け取ります - レシートハンドラーは`ReceiptWithOutcome`を受け取ります -その他、マッピング実行中の NEAR サブグラフ開発者は、 [AssemblyScript API](/developing/assemblyscript-api)の残りの部分を利用できます。 +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -これには、新しい JSON parsing function が含まれています。NEAR のログは、頻繁に文字列化された JSON として出力されます。新しい`json.fromString(...)`関数は、開発者がこれらのログを簡単に処理できるように、[JSON API](/developing/assemblyscript-api#json-api)の一部として利用できます。 +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## NEAR サブグラフの展開 diff --git a/website/pages/ja/cookbook/subgraph-debug-forking.mdx b/website/pages/ja/cookbook/subgraph-debug-forking.mdx index a18cf73d17b7..6fbb5070e2ec 100644 --- a/website/pages/ja/cookbook/subgraph-debug-forking.mdx +++ b/website/pages/ja/cookbook/subgraph-debug-forking.mdx @@ -6,9 +6,9 @@ As with many systems processing large amounts of data, The Graph's Indexers (Gra ## さて、それは何でしょうか? -**サブグラフのフォーク**とは、*他*のサブグラフのストア(通常はリモート) からエンティティをフェッチするプロセスです。 +**サブグラフのフォーク**とは、_他_のサブグラフのストア(通常はリモート) からエンティティをフェッチするプロセスです。 -デバッグの文脈では、**サブグラフのフォーク**により、ブロック*X*への同期を待つことなく、ブロック*X*で失敗したサブグラフのデバッグを行うことができます。 +デバッグの文脈では、**サブグラフのフォーク**により、ブロック_X_への同期を待つことなく、ブロック_X_で失敗したサブグラフのデバッグを行うことができます。 ## その方法は? @@ -69,7 +69,7 @@ Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph St 回答: -1. `fork-base`は「ベース」URLで、*subgraph id*が追加されたときのURL (`/`) はサブグラフのストアに対する有効な GraphQL endpoint であることを示します。 +1. `fork-base`は「ベース」URLで、_subgraph id_が追加されたときのURL (`/`) はサブグラフのストアに対する有効な GraphQL endpoint であることを示します。 2. 
フォーキングは簡単であり煩雑な手間はありません ```bash diff --git a/website/pages/ja/cookbook/subgraph-uncrashable.mdx b/website/pages/ja/cookbook/subgraph-uncrashable.mdx index f50944b02a9c..97d6d7fb8fe4 100644 --- a/website/pages/ja/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/ja/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: 安全なサブグラフのコード生成 - また、このフレームワークには、エンティティ変数のグループに対して、カスタムだが安全なセッター関数を作成する方法が(設定ファイルを通じて)含まれています。この方法では、ユーザーが古いグラフ・エンティティをロード/使用することは不可能であり、また、関数が必要とする変数の保存や設定を忘れることも不可能です。 -- 警告ログは、サブグラフのロジックに違反がある場所を示すログとして記録され、データの正確性を確保するための問題の修正に役立ちます。これらのログは、The Graphのホスティングサービスの「Logs」セクションで確認することができます。 +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashableは、Graph CLI codegenコマンドでオプションのフラグとして実行することができます。 diff --git a/website/pages/ja/cookbook/upgrading-a-subgraph.mdx b/website/pages/ja/cookbook/upgrading-a-subgraph.mdx index fc1b71f8c0a2..b80172a3554c 100644 --- a/website/pages/ja/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/ja/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ You can update the metadata of your subgraphs without having to publish a new ve ## The Graph Network のサブグラフを廃止する -[here](/managing/deprecating-a-subgraph) の手順に従って、サブグラフを非推奨にし、グラフ ネットワークから削除します。 +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## The Graph Network でのサブグラフのクエリと課金について diff --git a/website/pages/ja/deploying/multiple-networks.mdx b/website/pages/ja/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..d4f2935cb302 --- /dev/null +++ b/website/pages/ja/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## サブグラフを複数のネットワークにデプロイする + +場合によっては、すべてのコードを複製せずに、同じサブグラフを複数のネットワークに展開する必要があります。これに伴う主な課題は、これらのネットワークのコントラクト アドレスが異なることです。 + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... 
+} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +ネットワーク設定ファイルはこのようになっているはずです: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +これで、次のいずれかのコマンドを実行できるようになりました: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### subgraph.yamlテンプレートの使用 + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +and/と + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... 
+ "mustache": "^3.1.0" + } +} +``` + +To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Sepolia: +yarn prepare:sepolia && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. + +## Subgraph Studio・サブグラフ・アーカイブポリシー + +A subgraph version in Studio is archived if and only if it meets the following criteria: + +- The version is not published to the network (or pending publish) +- The version was created 45 or more days ago +- The subgraph hasn't been queried in 30 days + +In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. + +このポリシーで影響を受けるすべてのサブグラフには、問題のバージョンを戻すオプションがあります。 + +## サブグラフのヘルスチェック + +サブグラフが正常に同期された場合、それはそれが永久に正常に動作し続けることを示す良い兆候です。ただし、ネットワーク上の新しいトリガーにより、サブグラフがテストされていないエラー状態に陥ったり、パフォーマンスの問題やノード オペレーターの問題により遅れが生じたりする可能性があります。 + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/ja/developing/creating-a-subgraph.mdx b/website/pages/ja/developing/creating-a-subgraph.mdx index 4c8e884b4c50..cf4df4899a5f 100644 --- a/website/pages/ja/developing/creating-a-subgraph.mdx +++ b/website/pages/ja/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: サブグラフの作成 --- -サブグラフは、ブロックチェーンからデータを抽出し、加工して保存し、GraphQLで簡単にクエリできるようにします。 +This detailed guide provides instructions to successfully create a subgraph. -![サブグラフの定義](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. 
-サブグラフの定義は、いくつかのファイルで構成されています。 +![サブグラフの定義](/img/defining-a-subgraph.png) -- `subgraph.yaml`:サブグラフのマニフェストを含む YAML ファイル +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: サブグラフにどのようなデータが保存されているか、また GraphQL を使ってどのようにクエリを行うかを定義する GraphQL スキーマ +## はじめに -- `AssemblyScript Mappings`: イベントデータをスキーマで定義されたエンティティに変換する[AssemblyScript](https://github.com/AssemblyScript/assemblyscript)コード (例: このチュートリアルでは`mapping.ts`) +### Graph CLI のインストール -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## Graph CLI のインストール +ローカル マシンで、次のいずれかのコマンドを実行します。 -Graph CLI は JavaScript で書かれており、使用するには`yarn`または `npm`のいずれかをインストールする必要があります。 +#### Using [npm](https://www.npmjs.com/) -`yarn`をインストールしたら、次のコマンドを実行して Graph CLI をインストールする。 +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Install with yarn:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## 既存のコントラクトから +### From an existing contract -次のコマンドは、既存のコントラクトのすべてのイベントにインデックスを付けるサブグラフを作成します。Etherscan からコントラクト ABI をフェッチしようとしますが、ローカルファイルパスの要求にフォールバックします。オプションの引数のいずれかが欠けている場合は、対話形式で行われます。 +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -``は、Subgraph Studio でのサブグラフの ID で、サブグラフの詳細ページに記載されています。 +- The command tries to retrieve the contract ABI from Etherscan. 
+ + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## サブグラフの例から +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -`graph init`がサポートする 2 つ目のモードは、例となるサブグラフから新しいプロジェクトを作成することです。以下のコマンドがこれを行います: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## 既存のサブグラフに新しいデータソースを追加する +## Add new `dataSources` to an existing subgraph -`v0.31.0` 以降、`graph-cli`は、`graph add` コマンドにより既存のサブグラフに新しいデータソースを追加することをサポートしました。 +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
    [] @@ -78,22 +88,45 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -`add` コマンドは Etherscan から ABI を取得し (`--abi` オプションで ABI パスが指定されていない限り)、 `graph init` コマンドが `dataSource` `--from-contract` を作成したのと同じ方法で新しい `dataSource` を作成してスキーマとマッピングをそれに従って更新します。 +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- `--merge-entities` オプションは、開発者が `entity` と `event` の名前の衝突をどのように処理したいかを指定します。 + + - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- 契約書の`address`は、該当するネットワークの`networks.json`に書き込まれることになります + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -`--merge-entities` オプションは、開発者が `entity` と `event` の名前の衝突をどのように処理したいかを指定します。 +## Components of a subgraph -- If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. -- If `false`: a new entity & event handler should be created with `${dataSourceName}{EventName}`. +### サブグラフ・マニフェスト -契約書の`address`は、該当するネットワークの`networks.json`に書き込まれることになります +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **Note:** 対話型CLIを使用している場合、`graph init`を正常に実行した後、新しい`dataSource`を追加するよう促されます。 +The **subgraph definition** consists of the following files: -## サブグラフ・マニフェスト +- `subgraph.yaml`: Contains the subgraph manifest -サブグラフ・マニフェスト`subgraph.yaml`は、サブグラフがインデックスするスマート・コントラクト、これらのコントラクトからのどのイベントに注目するか、そしてイベント・データをグラフ・ノードが保存するエンティティにどのようにマッピングするかを定義し、クエリを可能にします。サブグラフ・マニフェストの完全な仕様は、[こちら](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md)をご覧ください。 +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -例のサブグラフの場合、`subgraph.yaml`は次のようになっています: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). + +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ dataSources: ブロック内のデータソースのトリガーは、以下のプロセスを使用して順序付けられます: -1. イベントとコールのトリガーは、ブロック内のトランザクションインデックスで最初に並べられます。 -2. 同じトランザクション内のイベントトリガーとコールトリガーは、マニフェストで定義されている順序にしたがって、イベントトリガーが先、コールトリガーが後という規則で並べられます。 -3. ブロックトリガーは、イベントトリガーとコールトリガーの後に、マニフェストで定義されている順番で実行されます。 +1. イベントとコールのトリガーは、ブロック内のトランザクションインデックスで最初に並べられます。 +2. 同じトランザクション内のイベントトリガーとコールトリガーは、マニフェストで定義されている順序にしたがって、イベントトリガーが先、コールトリガーが後という規則で並べられます。 +3. 
ブロックトリガーは、イベントトリガーとコールトリガーの後に、マニフェストで定義されている順番で実行されます。 これらの順序規則は変更されることがあります。 @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| バージョン | リリースノート | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | +| バージョン | リリースノート | +|:-----:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. 
| +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | | 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### ABI を取得する @@ -442,16 +475,16 @@ Null 以外のフィールド 'name' の null 値が解決されました GraphQL API では、以下の Scalar をサポートしています: -| タイプ | 説明書き | -| --- | --- | -| `Bytes` | Byte 配列で、16 進数の文字列で表されます。Ethereum のハッシュやアドレスによく使われます。 | -| `String` | `string`値の Scalar であり、Null 文字はサポートされておらず、自動的に削除されます。 | -| `Boolean` | `boolean`値を表す Scalar。 | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | 大きな整数。Ethereum の`uint32`, `int64`, `uint64`, ..., `uint256` タイプに使用されます。注: `int32`, `uint24` `int8`など`uint32`以下のものは`i32`として表現されます。 | -| `BigDecimal` | `BigDecimal`は、高精度の 10 進数を記号と指数で表します。指数の範囲は -6143 ~ +6144 です。有効数字 34 桁にまとめられます。 | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| タイプ | 説明書き | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte 配列で、16 進数の文字列で表されます。Ethereum のハッシュやアドレスによく使われます。 | +| `String` | `string`値の Scalar であり、Null 文字はサポートされておらず、自動的に削除されます。 | +| `Boolean` | `boolean`値を表す Scalar。 | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | 大きな整数。Ethereum の`uint32`, `int64`, `uint64`, ..., `uint256` タイプに使用されます。注: `int32`, `uint24` `int8`など`uint32`以下のものは`i32`として表現されます。 | +| `BigDecimal` | `BigDecimal`は、高精度の 10 進数を記号と指数で表します。指数の範囲は -6143 ~ +6144 です。有効数字 34 桁にまとめられます。 | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enums @@ -593,7 +626,7 @@ query usersWithOrganizations { #### スキーマへのコメントの追加 -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. 
This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -653,33 +686,33 @@ query { サポートされている言語の辞書: -| コード | 辞書 | -| ------ | ------------ | -| simple | General | -| da | Danish | -| nl | Dutch | -| en | English | -| fi | Finnish | -| fr | French | -| de | German | -| hu | Hungarian | -| it | Italian | -| no | Norwegian | -| pt | ポルトガル語 | -| ro | Romanian | -| ru | Russian | -| es | Spanish | -| sv | Swedish | -| tr | Turkish | +| コード | 辞書 | +| ------ | --------- | +| simple | General | +| da | Danish | +| nl | Dutch | +| en | English | +| fi | Finnish | +| fr | French | +| de | German | +| hu | Hungarian | +| it | Italian | +| no | Norwegian | +| pt | ポルトガル語 | +| ro | Romanian | +| ru | Russian | +| es | Spanish | +| sv | Swedish | +| tr | Turkish | ### ランキングアルゴリズム サポートされている結果の順序付けのアルゴリズム: -| アルゴリズム | 説明書き | -| ------------- | ------------------------------------------------------------------- | -| rank | フルテキストクエリのマッチ品質 (0-1) を使用して結果を並べ替えます。 | -| proximityRank | ProximityRank rank に似ていますが、マッチの近接性も含みます。 | +| アルゴリズム | 説明書き | +| ------------- | ---------------------------------------- | +| rank | フルテキストクエリのマッチ品質 (0-1) を使用して結果を並べ替えます。 | +| proximityRank | ProximityRank rank に似ていますが、マッチの近接性も含みます。 | ## マッピングの記述 @@ -794,7 +827,7 @@ Code generation does not check your mapping code in `src/mapping.ts`. If you wan EVM 互換のスマート コントラクトの一般的なパターンは、レジストリ コントラクトまたはファクトリ コントラクトの使用です。1 つのコントラクトが、それぞれ独自の状態とイベントを持つ任意の数の他のコントラクトを作成、管理、または参照します。 -これらのサブコントラクトのアドレスは、事前にわかっている場合とわかっていない場合があり、これらのコントラクトの多くは、時間の経過とともに作成および/または追加される可能性があります。このような場合、単一のデータ ソースまたは固定数のデータ ソースを定義することは不可能であり、より動的なアプローチ、つまり *データ ソース テンプレート*が必要とされるのはこのためです。 +これらのサブコントラクトのアドレスは、事前にわかっている場合とわかっていない場合があり、これらのコントラクトの多くは、時間の経過とともに作成および/または追加される可能性があります。このような場合、単一のデータ ソースまたは固定数のデータ ソースを定義することは不可能であり、より動的なアプローチ、つまり _データ ソース テンプレート_が必要とされるのはこのためです。 ### メインコントラクトのデータソース @@ -874,7 +907,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **注:** 新しいデータ ソースは、それが作成されたブロックとそれに続くすべてのブロックの呼び出しとイベントのみを処理しますが、履歴データ (データなど) は処理しません。それは前のブロックに含まれています。 -> +> > 以前のブロックに新しいデータソースに関連するデータが含まれている場合は、コントラクトの現在の状態を読み取り、新しいデータソースが作成された時点でその状態を表すエンティティを作成することで、そのデータにインデックスを付けることが最善です。 ### データソースコンテクスト @@ -931,7 +964,7 @@ dataSources: ``` > **注:** コントラクト作成ブロックは、Etherscan ですばやく検索できます。 -> +> > 1. 検索バーにアドレスを入力してコントラクトを検索します。 > 2. `Contract Creator` セクションの作成トランザクションハッシュをクリックします。 > 3. トランザクションの詳細ページを読み込んで、そのコントラクトの開始ブロックを見つけます。 @@ -946,9 +979,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. 
``` indexerHints: @@ -983,29 +1016,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1225,9 +1235,9 @@ eventHandlers: `specVersion` `0.0.4`以降、サブグラフ機能はマニフェストファイルのトップレベルにある`features`セクションで、以下の表のように`camelCase` の名前を使って明示的に宣言する必要があります: -| 特徴 | 名前 | +| 特徴 | 名前 | | ---------------------------------------------------- | ---------------- | -| [致命的でないエラー](#non-fatal-errors) | `nonFatalErrors` | +| [致命的でないエラー](#non-fatal-errors) | `nonFatalErrors` | | [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` | | [Grafting](#grafting-onto-existing-subgraphs) | `grafting` | @@ -1478,7 +1488,7 @@ The file data source must specifically mention all the entity types which it wil #### ファイルを処理するハンドラーを新規に作成 -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). 読みやすい文字列としてのファイルのCIDは、`dataSource`を介して次のようにアクセスできます: diff --git a/website/pages/ja/developing/developer-faqs.mdx b/website/pages/ja/developing/developer-faqs.mdx index 81b14f3a8117..b1469a04ea96 100644 --- a/website/pages/ja/developing/developer-faqs.mdx +++ b/website/pages/ja/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: 開発者 FAQ --- -## 1. サブグラフとは +This page summarizes some of the most common questions for developers building on The Graph. -サブグラフは、ブロックチェーンデータを基に構築されたカスタムAPIです。サブグラフはGraphQLクエリ言語を使ってクエリされ、Graph CLIを使ってGraph Nodeにデプロイされます。デプロイされ、The Graphの分散型ネットワークに公開されると、インデクサーはサブグラフを処理し、サブグラフの消費者がクエリできるようにします。 +## Subgraph Related -## 2. サブグラフを削除できますか? +### 1. サブグラフとは -一度作成したサブグラフの削除はできません。 +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. サブグラフ名を変更できますか? +### 2. What is the first step to create a subgraph? -一度作成したサブグラフの名前を変更することはできません。サブグラフを作成する際には、他の dapps から検索しやすく、識別しやすい名前になるよう、よく考えてから作成してください。 +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. サブグラフに関連付けられている GitHub アカウントを変更できますか? +### 3. 
Can I still create a subgraph if my smart contracts don't have events? -一度作成したサブグラフに関連する GitHub のアカウントは変更できません。サブグラフを作成する前に、この点をよく考えてください。 +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. スマート コントラクトにイベントがない場合でもサブグラフを作成できますか? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -スマートコントラクトを構成して、クエリしたいデータに関連するイベントを持つことを強くお勧めします。サブグラフ内のイベントハンドラは、コントラクトのイベントによってトリガされ、有用なデータを取得するための圧倒的に速い方法です。 +### 4. サブグラフに関連付けられている GitHub アカウントを変更できますか? -使用しているコントラクトにイベントが含まれていない場合、サブグラフはコールハンドラとブロックハンドラを使用してインデックス作成をトリガすることができます。しかし、パフォーマンスが大幅に低下するため、これは推奨されません。 +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. 複数のネットワークに同じ名前の 1 つのサブグラフを展開することは可能ですか? +### 5. How do I update a subgraph on mainnet? -複数のネットワークには別々の名前が必要です。同じ名前で異なるサブグラフを持つことはできませんが、単一のコードベースで複数のネットワークに対応する便利な方法があります。詳しくはドキュメントをご覧ください: [Redeploying a Subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. テンプレートとデータ ソースの違いは何ですか? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -テンプレートは、サブグラフがインデックスを作成している間に、その場でデータソースを作成することができます。また、コントラクトの形状(ABI、イベントなど)を前もって知っているので、テンプレートでどのようにインデックスを作成するかを定義することができ、コントラクトが作成されると、サブグラフはコントラクトのアドレスを供給することで動的なデータソースを作成します。 +サブグラフを再デプロイする必要がありますが、サブグラフの ID(IPFS ハッシュ)が変わらなければ、最初から同期する必要はありません。 + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +サブグラフ内では、複数のコントラクトにまたがっているかどうかにかかわらず、イベントは常にブロックに表示される順序で処理されます。 + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. データソース・テンプレートのインスタンス化」のセクションをご覧ください: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates) -## 8. ローカル展開に最新バージョンのグラフノードを使用していることを確認するにはどうすればよいですか? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? 
-以下のコマンドを実行してください: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**注:** docker / docker-compose は、最初に実行したときにプルされた graph-node のバージョンを常に使用しますので、最新版の graph-node を使用していることを確認するために、このコマンドを実行することが重要です。 +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. コントラクト関数を呼び出したり、サブグラフ マッピングから公開状態変数にアクセスするにはどうすればよいですか? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. 2 つのコントラクトを持つ `graph-cli` から `graph init` を使用してサブグラフをセットアップすることは可能ですか?または、`graph init` を実行した後、`subgraph.yaml` に別のデータソースを手動で追加する必要がありますか? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +以下のコマンドを実行してください: -## 11. GitHub の問題に貢献または追加したい。オープンソースのリポジトリはどこにありますか? +```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. イベントを処理するときに、エンティティの「自動生成」Id を作成するための推奨される方法は何ですか? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? もし、イベント中に 1 つのエンティティしか作成されず、他に利用できるものがなければ、トランザクションハッシュ+ログインデックスがユニークになります。Bytes に変換して`crypto.keccak256`に通すことで難読化することができますが、これでは一意性は高まりません。 -## 13. 複数の契約を聞く場合、契約順を選択してイベントを聞くことはできますか? +### 15. Can I delete my subgraph? -サブグラフ内では、複数のコントラクトにまたがっているかどうかにかかわらず、イベントは常にブロックに表示される順序で処理されます。 +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +対応ネットワークの一覧は[こちら](/developing/supported-networks)で確認できます。 + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? はい、以下の例のように`graph-ts`をインポートすることで可能です。 @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. 
Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. サブグラフ マッピングに ethers.js または他の JS ライブラリをインポートできますか? - -マッピングは AssemblyScript で書かれているため、現在はできません。代替案としては、生データをエンティティに格納し、JS ライブラリを必要とするロジックをクライアントで実行することが考えられます。 +## Indexing & Querying Related -## 17. インデックス作成を開始するブロックを指定することはできますか? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. インデックス作成のパフォーマンスを向上させるためのヒントはありますか? サブグラフの同期に非常に時間がかかる +### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync -はい、コントラクトがデプロイされたブロックからインデックス作成を開始するオプションのスタートブロック機能をご利用ください: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. サブグラフに直接クエリを実行して、インデックスが作成された最新のブロック番号を特定する方法はありますか? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? はい、あります。organization/subgraphName」を公開先の組織とサブグラフの名前に置き換えて、以下のコマンドを実行してみてください: @@ -102,44 +121,27 @@ Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the n curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. The Graph はどのネットワークをサポートしていますか? - -対応ネットワークの一覧は[こちら](/developing/supported-networks)で確認できます。 - -## 21. 再デプロイせずにサブグラフを別のアカウントまたはエンドポイントに複製することは可能ですか? - -サブグラフを再デプロイする必要がありますが、サブグラフの ID(IPFS ハッシュ)が変わらなければ、最初から同期する必要はありません。 - -## 22. グラフノード上で Apollo Federation を使用することは可能ですか? +### 22. Is there a limit to how many objects The Graph can return per query? -将来的にはサポートしたいと考えていますが、フェデレーションはまだサポートされていません。現時点でできることは、クライアント上またはプロキシサービス経由でスキーマステッチを使用することです。 - -## 23. グラフがクエリごとに返すことができるオブジェクトの数に制限はありますか? - -デフォルトでは、クエリの応答は 1 つのコレクションにつき 100 アイテムに制限されています。それ以上の数を受け取りたい場合は、1 コレクションあたり 1000 アイテムまで、それ以上は以下のようにページネーションすることができます: +By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -## 24. dapp フロントエンドがクエリに The Graph を使用する場合、クエリ キーをフロントエンドに直接書き込む必要がありますか? ユーザーにクエリ料金を支払う場合はどうなりますか? 悪意のあるユーザーによってクエリ料金が非常に高くなることはありますか? - -現在、dapp の推奨されるアプローチは、キーをフロントエンドに追加し、それをエンド ユーザーに公開することです。とはいえ、そのキーを _yourdapp.io_ や subgraph.ゲートウェイは現在 Edge & によって実行されています。ノード。ゲートウェイの責任の一部は、不正行為を監視し、悪意のあるクライアントからのトラフィックをブロックすることです。 - -## 25. ホスティングサービス上の現在のサブグラフはどこで見ることができますか? - -自分または他の人がホストされたサービスにデプロイしたサブグラフを見つけるには、ホストされたサービスに移動します。 [こちら](https://thegraph.com/hosted-service)でご覧いただけます。 - -## 26. ホスティングサービスはクエリ料金を請求するようになりますか? +### 23. 
If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Graph は、ホストされるサービスに対して料金を請求することはありません。 Graph は分散型プロトコルであり、集中型サービスに対する課金は The Graph の価値観と一致していません。ホスト型サービスは常に、分散型ネットワークにアクセスするための一時的なステップでした。開発者には、快適に分散ネットワークにアップグレードするのに十分な時間があります。 +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 27. メインネットのサブグラフを更新するには? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/ja/developing/graph-ts/api.mdx b/website/pages/ja/developing/graph-ts/api.mdx index a3ac1026258b..cea9a55eb75c 100644 --- a/website/pages/ja/developing/graph-ts/api.mdx +++ b/website/pages/ja/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> 注意: `graph-cli`/`graph-ts` のバージョン `0.22.0` より前にサブグラフを作成した場合、古いバージョンのAssemblyScriptを使用しているので、[`マイグレーションガイド`](/release-notes/assemblyscript-migration-guide) を参照することをお勧めします。 +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -このページでは、サブグラフのマッピングを記述する際に、どのような組み込み API を使用できるかを説明します。 すぐに使える API は 2 種類あります: +Learn what built-in APIs can be used when writing subgraph mappings. 
There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- graph codegenによってサブグラフファイルから生成されたコードです。 +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -また、AssemblyScriptとの互換性があれば、他のライブラリを依存関係に追加することも可能です。 マッピングはこの言語で書かれているので、言語や標準ライブラリの機能については、 AssemblyScript wikiが参考になります。 +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## API リファレンス @@ -27,16 +29,16 @@ title: AssemblyScript API サブグラフマニフェストapiVersionは、特定のサブグラフのマッピングAPIバージョンを指定します。このバージョンは、Graph Nodeによって実行されます。 -| バージョン | リリースノート | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Ethereum タイプに `TransactionReceipt` と `Log` クラスを追加
    Ethereum Event オブジェクトに `receipt` フィールドを追加。 | -| 0.0.6 | Ethereum Transactionオブジェクトに`nonce`フィールドを追加
    Ethereum Blockオブジェクトに`baseFeePerGas`を追加。 | +| バージョン | リリースノート | +| :---: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Ethereum タイプに `TransactionReceipt` と `Log` クラスを追加
    Ethereum Event オブジェクトに `receipt` フィールドを追加。 | +| 0.0.6 | Ethereum Transactionオブジェクトに`nonce`フィールドを追加
    Ethereum Blockオブジェクトに`baseFeePerGas`を追加。 | | 0.0.5 | AssemblyScriptはバージョン0.19.10にアップグレードされました(このバージョンアップには変更点が含まれていますので Migration Guide) をご覧ください)。
    ethereum.transaction.gasUsedの名前がethereum.transaction.gasLimitに変更 | -| 0.0.4 | Ethereum SmartContractCall オブジェクトにfunctionSignatureフィールドを追加 | -| 0.0.3 | イーサリアムコールオブジェクトに`from`フィールドを追加
    `etherem.call.address`を`ethereum.call.to`に変更。 | -| 0.0.2 | Ethereum Transaction オブジェクトに inputフィールドを追加 | +| 0.0.4 | Ethereum SmartContractCall オブジェクトにfunctionSignatureフィールドを追加 | +| 0.0.3 | イーサリアムコールオブジェクトに`from`フィールドを追加
    `etherem.call.address`を`ethereum.call.to`に変更。 | +| 0.0.2 | Ethereum Transaction オブジェクトに inputフィールドを追加 | ### 組み込み型 @@ -98,7 +100,7 @@ _Math_ - div(y: BigDecimal): BigDecimal`-`x / y\` と書くことができます。 - equals(y: BigDecimal): bool`-`x == y\` と書くことができます。 - notEqual(y: BigDecimal): bool`-`x != y\` と書くことができます。 -- lt(y: BigDecimal): bool`-`x \< y\` と書くことができます。 +- lt(y: BigDecimal): bool`-`x < y\` と書くことができます。 - `le(y: BigDecimal): bool` - `x <= y` と書くことができます。 - `gt(y: BigDecimal): bool` と書くことができます。 - `ge(y: BigDecimal): bool`と書くことができます。 @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { チェーンの処理中に `Transfer` イベントが発生すると、生成された `Transfer` 型(エンティティ型との名前の衝突を避けるために、ここでは `TransferEvent` のエイリアス)を使用して `handleTransfer` イベントハンドラに渡されます。この型はイベントの親トランザクションやそのパラメータなどのデータにアクセスすることを可能にします。 -各エンティティは、他のエンティティとの衝突を避けるために、ユニークな ID を持たなければなりません。 イベントのパラメータには、使用可能な一意の識別子が含まれているのが一般的です。 注:トランザクションのハッシュを ID として使用することは、同じトランザクション内の他のイベントがこのハッシュを ID としてエンティティを作成しないことを前提としています。 +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### ストアからのエンティティの読み込み @@ -268,18 +272,21 @@ if (transfer == null) { // Use the Transfer entity as before ``` -エンティティはまだストアに存在していない可能性があるため、loadメソッドはTransfer | null型の値を返します。 そのため、値を使用する前に、nullのケースをチェックする必要があるかもしれません。 +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> Note: エンティティのロードは、マッピングでの変更がエンティティの以前のデータに依存する場合にのみ必要です。 既存のエンティティを更新する 2 つの方法については、次のセクションを参照してください。 +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### ブロック内で作成されたエンティティの検索 graph-node v0.31.0、@graphprotocol/graph-ts v0.30.0、および @graphprotocol/graph-cli v0.49.0 以降、 loadInBlock メソッドはすべてのエンティティ タイプで使用できます。 -ストア API を使用すると、現在のブロックで作成または更新されたエンティティの取得が容易になります。この一般的な状況は、あるハンドラーがオンチェーン イベントからトランザクションを作成し、後のハンドラーがこのトランザクションが存在する場合にアクセスしようとすることです。トランザクションが存在しない場合、サブグラフはエンティティが存在しないことを確認するためだけにデータベースにアクセスする必要があります。エンティティが同じブロック内に作成されている必要があることをサブグラフの作成者がすでに知っている場合は、loadInBlock を使用すると、このデータベースのラウンドトリップが回避されます。一部のサブグラフでは、これらのルックアップの欠落がインデックス作成時間に大きく影響する可能性があります。 +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some subgraphs, these missed lookups can contribute significantly to the indexing time. 
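A slightly fuller sketch of this pattern, assuming a `Transfer` entity generated from the schema and the usual codegen import paths (both are placeholders, adjust them to your project), might look like this:

```typescript
import { Transfer as TransferEvent } from "../generated/Contract/Contract"
import { Transfer } from "../generated/schema"

export function handleTransfer(event: TransferEvent): void {
  let id = event.transaction.hash // or however the ID is constructed

  // loadInBlock only checks entities created or updated in the current block,
  // so it avoids a database roundtrip when the entity could not have been
  // written in an earlier block.
  let transfer = Transfer.loadInBlock(id)
  if (transfer == null) {
    transfer = new Transfer(id)
  }

  // ...update fields as usual, then persist the entity.
  transfer.save()
}
```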
```typescript -let id = event.transaction.hash // または ID が構築される方法 +let id =event.transaction.hash // または ID が構築される方法 let transfer = Transfer.loadInBlock(id) if (transfer == null) { transfer = 新しい転送(id) @@ -503,7 +510,9 @@ Ethereum の ERC20Contractにsymbolというパブリックな読み取り専用 #### リバートされた呼び出しの処理 -コントラクトの読み取り専用メソッドが復帰する可能性がある場合は、try\_を前置して生成されたコントラクトメソッドを呼び出すことで対処しなければなりません。 例えば、Gravity コントラクトではgravatarToOwnerメソッドを公開しています。 このコードでは、そのメソッドの復帰を処理することができます。 +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -ただし、Geth や Infura のクライアントに接続された Graph ノードでは、すべてのリバートを検出できない場合があるので、これに依存する場合は Parity のクライアントに接続された Graph ノードを使用することをお勧めします。 +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### 符号化/復号化 ABI @@ -761,44 +770,44 @@ if (value.kind == JSONValueKind.BOOL) { ### タイプ 変換参照 -| Source(s) | Destination | Conversion function | -| -------------------- | -------------------- | ---------------------------- | -| Address | Bytes | none | -| Address | String | s.toHexString() | -| BigDecimal | String | s.toString() | -| BigInt | BigDecimal | s.toBigDecimal() | -| BigInt | String (hexadecimal) | s.toHexString() or s.toHex() | -| BigInt | String (unicode) | s.toString() | -| BigInt | i32 | s.toI32() | -| Boolean | Boolean | none | -| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | -| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | -| Bytes | String (hexadecimal) | s.toHexString() or s.toHex() | -| Bytes | String (unicode) | s.toString() | -| Bytes | String (base58) | s.toBase58() | -| Bytes | i32 | s.toI32() | -| Bytes | u32 | s.toU32() | -| Bytes | JSON | json.fromBytes(s) | -| int8 | i32 | none | -| int32 | i32 | none | -| int32 | BigInt | Bigint.fromI32(s) | -| uint24 | i32 | none | -| int64 - int256 | BigInt | none | -| uint32 - uint256 | BigInt | none | -| JSON | boolean | s.toBool() | -| JSON | i64 | s.toI64() | -| JSON | u64 | s.toU64() | -| JSON | f64 | s.toF64() | -| JSON | BigInt | s.toBigInt() | -| JSON | string | s.toString() | -| JSON | Array | s.toArray() | -| JSON | Object | s.toObject() | -| String | Address | Address.fromString(s) | -| Bytes | Address | Address.fromString(s) | -| String | BigInt | BigDecimal.fromString(s) | -| String | BigDecimal | BigDecimal.fromString(s) | -| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | -| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | +| Source(s) | Destination | Conversion function | +| -------------------- | -------------------- | -------------------------------- | +| Address | Bytes | none | +| Address | String | s.toHexString() | +| BigDecimal | String | s.toString() | +| BigInt | BigDecimal | s.toBigDecimal() | +| BigInt | String (hexadecimal) | s.toHexString() or s.toHex() | +| BigInt | String (unicode) | s.toString() | +| BigInt | i32 | s.toI32() | +| Boolean | Boolean | none | +| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | +| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | +| Bytes | String (hexadecimal) | s.toHexString() or s.toHex() | +| Bytes | String (unicode) | s.toString() | +| Bytes | String (base58) | s.toBase58() | +| Bytes | i32 | s.toI32() | 
+| Bytes | u32 | s.toU32() | +| Bytes | JSON | json.fromBytes(s) | +| int8 | i32 | none | +| int32 | i32 | none | +| int32 | BigInt | Bigint.fromI32(s) | +| uint24 | i32 | none | +| int64 - int256 | BigInt | none | +| uint32 - uint256 | BigInt | none | +| JSON | boolean | s.toBool() | +| JSON | i64 | s.toI64() | +| JSON | u64 | s.toU64() | +| JSON | f64 | s.toF64() | +| JSON | BigInt | s.toBigInt() | +| JSON | string | s.toString() | +| JSON | Array | s.toArray() | +| JSON | Object | s.toObject() | +| String | Address | Address.fromString(s) | +| Bytes | Address | Address.fromString(s) | +| String | BigInt | BigDecimal.fromString(s) | +| String | BigDecimal | BigDecimal.fromString(s) | +| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | +| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | ### データソースのメタデータ diff --git a/website/pages/ja/developing/supported-networks.mdx b/website/pages/ja/developing/supported-networks.mdx index d0d2749a2fb5..75feee5305b5 100644 --- a/website/pages/ja/developing/supported-networks.mdx +++ b/website/pages/ja/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - 分散型ネットワークでサポートされている機能の完全なリストについては、[このページ](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md)を参照してください。 diff --git a/website/pages/ja/developing/unit-testing-framework.mdx b/website/pages/ja/developing/unit-testing-framework.mdx index 37aaaa651ca5..38766ae2c413 100644 --- a/website/pages/ja/developing/unit-testing-framework.mdx +++ b/website/pages/ja/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ Global test coverage: 22.2% (2/9 handlers). > Critical: 有効なモジュールから WasmInstance を作成できない。コンテキストが不明 インポート: wasi_snapshot_preview1::fd_write が定義されていない -これは、コード内で`console.log`を使用していることを意味し、AssemblyScriptではサポートされていません。[Logging API](/developing/assemblyscript-api/#logging-api) の利用をご検討ください。 +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: 期待された引数は? -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: 期待された引数は? 
-> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) 引数の不一致は、`graph-ts`と`matchstick-as`の不一致によって起こります。このような問題を解決する最善の方法は、すべてを最新のリリース版にアップデートすることです。 diff --git a/website/pages/ja/glossary.mdx b/website/pages/ja/glossary.mdx index 8b41d244d3e7..c53bbba6825e 100644 --- a/website/pages/ja/glossary.mdx +++ b/website/pages/ja/glossary.mdx @@ -10,11 +10,9 @@ title: 用語集 - **エンドポイント**: サブグラフのクエリに使用できる URL。 Subgraph Studio のテスト エンドポイントは `https://api.studio.thegraph.com/query///` であり、Graph Explorer エンドポイントは `https: //gateway.thegraph.com/api//subgraphs/id/`. Graph Explorer エンドポイントは、The Graph の分散型ネットワーク上のサブグラフをクエリするために使用されます。 -- **サブグラフ**: ブロックチェーンからデータを抽出し、処理し、GraphQLで簡単にクエリできるように保存するオープンAPIです。開発者はサブグラフを構築し、デプロイし、グラフネットワークに公開することができます。その後、インデクサーはサブグラフのインデックス作成を開始して、誰でもクエリできるようにします。 +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **ホスティングサービス**: The Graphの分散型ネットワークが、サービスコスト、サービス品質、開発者エクスペリエンスを成熟させつつある中、サブグラフの構築とクエリのための一時的な足場となるサービスです。 - -- **インデクサー**:ブロックチェーンからデータをインデックスし、GraphQLクエリを提供するためにインデックスノードを実行するネットワーク参加者です。 +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **インデクサー報酬**:GRTでは、インデクサー報酬は、クエリ料金のリベートとインデックスの報酬の2つの要素で成り立っています。 @@ -24,17 +22,17 @@ title: 用語集 - **インデクサーのセルフステーク**:インデクサーが分散型ネットワークに参加するためにステークするGRTの金額です。最低額は100,000GRTで、上限はありません。 -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **デリゲーター**:GRTを所有し、そのGRTをインデクサーに委任するネットワーク参加者です。これにより、インデクサーはネットワーク上のサブグラフへの出資比率を高めることができます。デリゲーターは、インデクサーがサブグラフを処理する際に受け取るインデクサー報酬の一部を受け取ります。 +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **デリゲーション・タックス**。デリゲーターがインデクサーにGRTを委任する際に支払う0.5%の手数料です。手数料の支払いに使われたGRTはバーンされます。 -- **キュレーター**:質の高いサブグラフを特定し、キュレーションシェアと引き換えにそれらを「キュレーション」する(つまり、その上でGRTをシグナルする)ネットワーク参加者。インデクサーがサブグラフのクエリ料を請求すると、10%がそのサブグラフのCuratorに分配されます。インデクサーは、サブグラフ上のシグナルに比例してインデックス作成報酬を得ます。GRTのシグナル量と、サブグラフのインデックスを作成するインデクサーの数には相関関係があります。 +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. 
- **キュレーション税**。キュレーターがサブグラフにGRTのシグナルを送る際に支払う1%の手数料。手数料を支払うために使用されるGRTはバーンされます。 -- **サブグラフ・コンシューマー**。消費者。サブグラフにクエリをするアプリケーションやユーザーが主となります。 +- **Data Consumer**: Any application or user that queries a subgraph. - **サブグラフ・デベロッパー**:The Graphの分散型ネットワークにサブグラフを構築し、デプロイする開発者のことです。 @@ -46,11 +44,11 @@ title: 用語集 1. **アクティブ**:アロケーションは、オンチェーンで作成されたときにアクティブとみなされます。これはアロケーションを開くと呼ばれ、インデクサーが特定のサブグラフのために積極的にインデックスを作成し、クエリを提供していることをネットワークに示しています。アクティブなアロケーションは、サブグラフ上のシグナルと割り当てられたGRTの量に比例してインデックス作成報酬を発生させます。 - 2. **クローズ**: インデックス作成者は、最近の有効なインデックス証明(POI) を提出することで、 与えられたサブグラフに発生したインデックス報酬を要求することができる。これは割り当てを終了することとして知られている。割り当てを閉じるには、最低1エポック開いていなければなりません。最大割当期間は28エポックです。もしインデクサが28エポックを超えて割り当てを開いたままにしておくと、その割り当ては古くなった割り当てとして知られています。割り当てが**クローズ**状態にあるときでも、 フィッシャーマンは、虚偽のデータを提供したとして、 インデクサーに異議を申し立てるために紛争を開くことができます。 + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **Subgraph Studio**:サブグラフの構築、デプロイ、公開のための強力なDAPです。 -- **フィッシャーマン**: The Graph Networkの中で、Indexersが提供するデータの正確性と完全性を監視する参加者が持つ役割。フィッシャーマンは、不正確であると思われるクエリ応答やPOIを特定した場合、インデクサに対して論争を開始することができる。もし紛争がフィッシャーマンに有利な裁定を下した場合、 インデックサーは切り捨てられます。具体的には、インデクサーのGRTの2.5%を失う。この額のうち、50%は警戒に対する報奨金としてフィッシャーマンに与えられ、残りの50%は流通から外される(燃やされる)。この仕組みは、インデクサーの提供するデータに対する責任を保証することで、ネットワークの信頼性維持に貢献するようフィッシャーマンを奨励するためのものです。 +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **アービトレーター(仲裁人)**: 仲裁人は、ガバナンスプロセスを通じて任命されるネットワーク参加者です。仲裁人の役割は、インデックス作成とクエリの論争の結果を決定することです。その目的は、グラフネットワークの実用性と信頼性を最大化することです。 @@ -62,11 +60,11 @@ title: 用語集 - **GRT**: Graphのワークユーティリティトークン。GRTは、ネットワーク参加者にネットワークへの貢献に対する経済的インセンティブを提供します。 -- **POIまたはインデックス証明**: インデクサーが割り当てを終了し、与えられたサブグラフについて発生したインデックス報酬を要求する場合、有効で最近のインデックスの証明(POI)を提出しなければなりません。フィッシャーマンは、インデクサーの提示した POI に異議を唱えることができます。フィッシャーマンに有利に解決された論争は、そのインデクサーのスラッシュに繋がります。 +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **グラフノード**:Graph Nodeは、サブグラフにインデックスを付け、その結果得られたデータをGraphQL APIを介してクエリに利用できるようにするコンポーネントです。そのため、インデクサースタックの中心であり、グラフノードの正しい動作は、成功するインデクサを実行するために重要です。 +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. 
As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **インデクサエージェント**:インデクサエージェントは、インデクサスタックの一部です。ネットワークへの登録、グラフノードへのサブグラフの展開の管理、割り当ての管理など、チェーン上でのインデクサーのインタラクションを促進します。 +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **グラフクライアント**:GraphQLベースのDappsを分散的に構築するためのライブラリです。 @@ -78,10 +76,6 @@ title: 用語集 - **L2転送ツール**:ネットワーク参加者がイーサリアムメインネットからArbitrum Oneにネットワーク関連資産を転送できるようにするスマートコントラクトとUIです。ネットワーク参加者は、委任されたGRT、サブグラフ、キュレーションシェア、およびインデクサーのセルフステークを転送できます。 -- サブグラフを Graph Network に***アップグレード*する**: サブグラフをホストされたサービスから Graph Network に移動するプロセス。 - -- サブグラフ**の*更新***: サブグラフのマニフェスト、スキーマ、または更新を含む新しいサブグラフ バージョンをリリースするプロセス。マッピング。 +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **移行**:サブグラフの古いバージョンから新しいバージョンに移行するキュレーション共有のプロセスです(例えば、v0.0.1がv0.0.2に更新される場合)。 - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. diff --git a/website/pages/ja/index.json b/website/pages/ja/index.json index 7e3937043c91..55ca30b1bf39 100644 --- a/website/pages/ja/index.json +++ b/website/pages/ja/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "サブグラフの作成", "description": "スタジオを使ってサブグラフを作成" - }, - "migrateFromHostedService": { - "title": "ホスティングサービスからのアップグレード", - "description": "The Graph Networkへのサブグラフのアップグレード" } }, "networkRoles": { diff --git a/website/pages/ja/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/ja/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..744e8a66fa69 --- /dev/null +++ b/website/pages/ja/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## サブグラフの所有権の譲渡 + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. + +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-addres +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2. Choose the address that you would like to transfer the subgraph to: + + ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) + +Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: + +![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) + +## Deprecating a subgraph + +Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. 
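The Step-by-Step below uses the Arbiscan contract UI. For teams that prefer to script the same call, a rough sketch with ethers v6 could look like the following — the ABI fragment and the numeric subgraph ID are assumptions, so verify both against the contract linked in the steps.

```typescript
import { ethers } from "ethers"

// Assumed values – verify the GNS contract address and the exact function
// signature on the Arbiscan page linked in the Step-by-Step below.
const GNS_ADDRESS = "0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec"
const GNS_ABI = ["function deprecateSubgraph(uint256 _subgraphID)"]

async function main(): Promise<void> {
  const provider = new ethers.JsonRpcProvider("https://arb1.arbitrum.io/rpc")
  // Must be the wallet that owns the subgraph NFT.
  const owner = new ethers.Wallet(process.env.PRIVATE_KEY!, provider)
  const gns = new ethers.Contract(GNS_ADDRESS, GNS_ABI, owner)

  const tx = await gns.deprecateSubgraph(12345n) // placeholder SubgraphID
  await tx.wait()
}

main().catch(console.error)
```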
+ +### Step-by-Step + +To deprecate your subgraph, do the following: + +1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). +2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. +3. Your subgraph will no longer appear in searches on Graph Explorer. + +**Please note the following:** + +- The owner's wallet should call the `deprecateSubgraph` function. +- キュレーターは、サブグラフにシグナルを送ることができなくなります。 +- Curators that already signaled on the subgraph can withdraw their signal at an average share price. +- Deprecated subgraphs will show an error message. + +> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/ja/mips-faqs.mdx b/website/pages/ja/mips-faqs.mdx index b9d0538f7fa5..6f0cdbfc3ab6 100644 --- a/website/pages/ja/mips-faqs.mdx +++ b/website/pages/ja/mips-faqs.mdx @@ -6,10 +6,6 @@ title: MIPs FAQs > 注意:2023年5月をもって、MIPsプログラムは終了しました。参加してくれたすべてのインデクサーに感謝します! -The Graph エコシステムに参加できるのは今がエキサイティングな時期です。 [Graph Day 2022](https://thegraph.com/graph-day/2022/) 中に、Yaniv Tal は [sunsetting of the hosted service](https://thegraph.com/blog/sunsetting-hosted-service/) を発表しました。 )、グラフ エコシステムが長年にわたって取り組んできた瞬間です。 - -To support the sunsetting of the hosted service and the migration of all of it's activity to the decentralized network, The Graph Foundation has announced the [Migration Infrastructure Providers (MIPs) program](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program). - MIPsプログラムは、Indexersをサポートするためのインセンティブプログラムで、Ethereumメインネット以外のチェーンをインデックスするためのリソースを提供し、The Graphプロトコルを分散型ネットワークを多チェーンのインフラストラクチャレイヤーに拡張するのを支援します。 MIPsプログラムは、GRT供給量の0.75%(75M GRT)を割り当てており、ネットワークをブートストラップするのに貢献するIndexersに0.5%が割り当てられ、マルチチェーンサブグラフを使用するサブグラフ開発者向けのネットワークグラントに0.25%が割り当てられています。 diff --git a/website/pages/ja/network/benefits.mdx b/website/pages/ja/network/benefits.mdx index 29663f61afb4..5328cfd5a9bf 100644 --- a/website/pages/ja/network/benefits.mdx +++ b/website/pages/ja/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| コスト比較 | セルフホスト | グラフネットワーク | -| :-: | :-: | :-: | -| 月額サーバー代 | $350/月 | $0 | -| クエリコスト | $0+ | $0 per month | -| エンジニアリングタイム | $400/月 | なし/ グローバルに分散されたインデクサーでネットワークに組み込まれる | -| 月ごとのクエリ | インフラ機能に限定 | 100,000 (Free Plan) | -| クエリごとのコスト | $0 | $0 | -| インフラストラクチャ | 集中管理型 | 分散型 | -| 地理的な冗長性 | 追加1ノードにつき$750+ | 含まれる | -| アップタイム | バリエーション | 99.9%+ | -| 月額費用合計 | $750+ | $0 | +| コスト比較 | セルフホスト | グラフネットワーク | +|:-----------------------:|:---------------------------------------:|:-----------------------------------:| +| 月額サーバー代 | $350/月 | $0 | +| クエリコスト | $0+ | $0 per month | +| エンジニアリングタイム | $400/月 | なし/ グローバルに分散されたインデクサーでネットワークに組み込まれる | +| 月ごとのクエリ | インフラ機能に限定 | 100,000 (Free Plan) | +| クエリごとのコスト | $0 | $0 | +| インフラストラクチャ | 集中管理型 | 分散型 | +| 地理的な冗長性 | 追加1ノードにつき$750+ | 含まれる | +| アップタイム | バリエーション | 99.9%+ | +| 月額費用合計 | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| コスト比較 | セルフホスト | グラフネットワーク | -| :-: | :-: | :-: | -| 月額サーバー代 | $350/月 | $0 | -| クエリコスト | $500/月 | $120 per month | -| エンジニアリングタイム | $800/月 | なし/ グローバルに分散されたインデクサーでネットワークに組み込まれる | -| 月ごとのクエリ | インフラ機能に限定 | ~3,000,000 | -| クエリごとのコスト | $0 | $0.00004 | -| インフラストラクチャ | 中央管理型 | 分散型 | -| エンジニアリングコスト | $200/時 | 含まれる | -| 地理的な冗長性 | ノード追加1台につき合計1,200ドル | 含まれる | -| アップタイム | 変動 | 
99.9%+ | -| 月額費用合計 | $1,650+ | $120 | +| コスト比較 | セルフホスト | グラフネットワーク | +|:-----------------------:|:------------------------------------------:|:-----------------------------------:| +| 月額サーバー代 | $350/月 | $0 | +| クエリコスト | $500/月 | $120 per month | +| エンジニアリングタイム | $800/月 | なし/ グローバルに分散されたインデクサーでネットワークに組み込まれる | +| 月ごとのクエリ | インフラ機能に限定 | ~3,000,000 | +| クエリごとのコスト | $0 | $0.00004 | +| インフラストラクチャ | 中央管理型 | 分散型 | +| エンジニアリングコスト | $200/時 | 含まれる | +| 地理的な冗長性 | ノード追加1台につき合計1,200ドル | 含まれる | +| アップタイム | 変動 | 99.9%+ | +| 月額費用合計 | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| コスト比較 | セルフホスト | グラフネットワーク | -| :-: | :-: | :-: | -| 月額サーバー代 | $1100/月(ノードごと) | $0 | -| クエリコスト | $4000 | $1,200 per month | -| 必要ノード数 | 10 | 該当なし | -| エンジニアリングタイム | $6,000/月 | なし/ グローバルに分散されたインデクサーでネットワークに組み込まれる | -| 月ごとのクエリ | インフラ機能に限定 | ~30,000,000 | -| クエリごとのコスト | $0 | $0.00004 | -| インフラストラクチャ | 集中管理型 | 分散型 | -| 地理的な冗長性 | ノード追加1台につき合計1,200ドル | 含まれる | -| アップタイム | 変動 | 99.9%+ | -| 月額費用合計 | $11,000+ | $1,200 | +| コスト比較 | セルフホスト | グラフネットワーク | +|:-----------------------:|:-------------------------------------------:|:-----------------------------------:| +| 月額サーバー代 | $1100/月(ノードごと) | $0 | +| クエリコスト | $4000 | $1,200 per month | +| 必要ノード数 | 10 | 該当なし | +| エンジニアリングタイム | $6,000/月 | なし/ グローバルに分散されたインデクサーでネットワークに組み込まれる | +| 月ごとのクエリ | インフラ機能に限定 | ~30,000,000 | +| クエリごとのコスト | $0 | $0.00004 | +| インフラストラクチャ | 集中管理型 | 分散型 | +| 地理的な冗長性 | ノード追加1台につき合計1,200ドル | 含まれる | +| アップタイム | 変動 | 99.9%+ | +| 月額費用合計 | $11,000+ | $1,200 | \*バックアップ費用含む:月額$50〜$100 diff --git a/website/pages/ja/network/curating.mdx b/website/pages/ja/network/curating.mdx index e961e914c063..d64e5c9f115c 100644 --- a/website/pages/ja/network/curating.mdx +++ b/website/pages/ja/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. 
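Before adding signal of your own, it can help to check how much GRT is already signaled on a subgraph and how many curation shares exist. The sketch below is illustrative only, not an official query: the entity and field names (`subgraph`, `currentSignalledTokens`, `signalAmount`) are assumptions about The Graph Network subgraph's schema, so verify them against the live schema in Graph Explorer before relying on them.

```graphql
# Hypothetical query against The Graph Network subgraph; field names are assumptions.
{
  subgraph(id: "<SUBGRAPH_ID>") {
    currentSignalledTokens # GRT currently signaled on this subgraph (assumed field)
    signalAmount           # outstanding curation shares, i.e. GCS (assumed field)
  }
}
```

A subgraph that already carries meaningful signal and steady query volume is generally a lower-risk curation target than one with little or none.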
@@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Within the Curator tab in Graph Explorer, curators will be able to signal and un シグナルを最新のプロダクションビルドに自動的に移行させることは、クエリー料金の発生を確実にするために有効です。 キュレーションを行うたびに、1%のキュレーション税が発生します。 また、移行ごとに 0.5%のキュレーション税を支払うことになります。 つまり、サブグラフの開発者が、頻繁に新バージョンを公開することは推奨されません。 自動移行された全てのキュレーションシェアに対して、0.5%のキュレーション税を支払わなければならないからです。 -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## リスク 1. The Graph では、クエリ市場は本質的に歴史が浅く、初期の市場ダイナミクスのために、あなたの%APY が予想より低くなるリスクがあります。 -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. 
サブグラフはバグで失敗することがあります。 失敗したサブグラフは、クエリフィーが発生しません。 結果的に、開発者がバグを修正して新しいバージョンを展開するまで待たなければならなくなります。 - サブグラフの最新バージョンに加入している場合、シェアはその新バージョンに自動移行します。 これには 0.5%のキュレーション税がかかります。 @@ -65,7 +65,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th 高品質のサブグラフを見つけるのは複雑な作業ですが、さまざまな方法でアプローチできます。 キュレーターとしては、クエリボリュームを牽引している信頼できるサブグラフを探したいと考えます。 信頼できるサブグラフは、それが完全で正確であり、Dap のデータニーズをサポートしていれば価値があるかもしれません。 アーキテクチャが不十分なサブグラフは、修正や再公開が必要になるかもしれませんし、失敗に終わることもあります。 キュレーターにとって、サブグラフが価値あるものかどうかを評価するために、サブグラフのアーキテクチャやコードをレビューすることは非常に重要です。 その結果として: -- キュレーターはネットワークの理解を利用して、個々のサブグラフが将来的にどのように高いまたは低いクエリボリュームを生成するかを予測することができます。 +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. サブグラフの更新にかかるコストはいくらですか? @@ -78,50 +78,14 @@ Migrating your curation shares to a new subgraph version incurs a curation tax o ### 5. キュレーションのシェアを売却することはできますか? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. 
For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## ボンディングカーブ 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![シェアあたりの価格](/img/price-per-share.png) - -その結果、価格は直線的に上昇し、時間の経過とともにシェアの購入価格が高くなることを意味しています。 下のボンディングカーブを見て、その例を示します: - -![ボンディングカーブ](/img/bonding-curve.png) - -あるサブグラフのシェアを作成する 2 人のキュレーターがいるとします。 - -- キュレーター A は、サブグラフに最初にシグナルを送ります。 120,000GRT をボンディングカーブに加えることで、2000 もシェアをミントすることができます。 -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- 両方のキュレーターがキュレーションシェアの合計の半分を保有しているので、彼らは同額のキュレーターロイヤルティを受け取ることになります。 -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- 残りのキュレーターは、そのサブグラフのキュレーター・ロイヤリティーをすべて受け取ることになります。 もし彼らが自分のシェアをバーンして GRT を引き出す場合、彼らは 120,000GRT を受け取ることになります。 -- **TLDR:** キュレーションシェアの GRT 評価はボンディングカーブによって決まるため、変動しやすいという傾向があります。 また、大きな損失を被る可能性があります。 早期にシグナリングするということは、1 つのシェアに対してより少ない GRT を投入することを意味します。 ひいては、同じサブグラフの後続のキュレーターよりも、GRT あたりのキュレーター・ロイヤリティーを多く得られることになります。 - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -The Graph の場合は、 [Bancor が実装しているボンディングカーブ式](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) を活用しています。 - まだ不明点がありますか? その他の不明点に関しては、 以下のキュレーションビデオガイドをご覧ください: diff --git a/website/pages/ja/network/delegating.mdx b/website/pages/ja/network/delegating.mdx index 80b87a30a8e2..3a973b24cca9 100644 --- a/website/pages/ja/network/delegating.mdx +++ b/website/pages/ja/network/delegating.mdx @@ -2,13 +2,23 @@ title: 委任 --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## デリゲーターガイド -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. 
A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,15 +34,19 @@ There are three sections in this guide: デリゲーターは悪意の行動をしてもスラッシュされないが、デリゲーターにはデポジット税が課せられ、ネットワークの整合性を損なう可能性のある悪い意思決定を抑止します。 -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### 委任期間無制限 Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
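As a rough worked example of the break-even calculation mentioned above (the numbers are purely illustrative, not protocol guarantees): delegating 1,000 GRT burns 5 GRT to the 0.5% delegation tax, leaving 995 GRT at work. If the effective return after the Indexer's reward cut were about 10% per year, that stake would earn roughly 995 × 0.10 ÷ 365 ≈ 0.27 GRT per day, so recovering the 5 GRT tax would take about 18 days, and that is before the 28-day unbonding period is considered. Short-term delegation to an untested Indexer therefore rarely pays off.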
    デリゲーション UIの0.5%の手数料と、28日間のアンボンディング期間に注目してください。 @@ -40,47 +54,65 @@ Another thing to consider is how to choose an Indexer wisely. If you choose an I ### デリゲーターに公平な報酬を支払う信頼できるインデクサーの選択 -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
    ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) ※上位インデクサーは委任者に90%の報酬を与えています。中央のものは委任者に20%を与えています。一番下のものは委任者に~83%を与えています。
    -- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant. +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. + +As you can see, in order to choose the right Indexer, you must consider multiple things. -As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. +- It is highly recommend that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently. +- Many Indexers are very active in Discord and will be happy to answer your questions. +- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. -### デリゲーターの期待リターンを計算 +## Calculating Delegators Expected Return -A Delegator must consider a lot of factors when determining the return. These include: +A Delegator must consider the following factors to determine a return: -- デリゲーターは、インデクサーが利用可能なデリゲートトークンを使用する能力にも目を向けることができます。 もしインデクサーが利用可能なトークンをすべて割り当てていなければ、彼らは自分自身やデリゲーターのために得られる最大の利益を得られないことになります。 -- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Consider an Indexer's ability to use the Delegated tokens available to them. + - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Pay attention to the first few days of delegating. + - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low. ### クエリフィーのカットとインデックスフィーのカットの検討 -上記のセクションで説明したように、クエリ料金カットとインデックス作成料金カットの設定について透明性があり誠実なインデクサーを選択する必要があります。デリゲーターは、パラメーターのクールダウン時間も調べて、どれだけの時間バッファーがあるかを確認する必要があります。その後、委任者が受け取る報酬の額を計算するのは非常に簡単です。式は次のとおりです。 +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. 
+ +The formula is: ![デリゲーション イメージ 3](/img/Delegation-Reward-Formula.png) ### インデクサーのデリゲーションプールを考慮する -委任者が考慮しなければならないもう 1 つのことは、自分が所有する委任プールの割合です。すべての委任報酬は均等に共有され、委任者がプールに入金した金額によって決定されるプールの単純なリバランスが行われます。これにより、委任者にプールのシェアが与えられます。 +Delegators should consider the proportion of the Delegation Pool they own. -![シェアの計算式](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![シェアの計算式](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### デリゲーションの容量を考慮する -もうひとつ考慮しなければならないのが、デリゲーション能力です。 現在、デリゲーションレシオは 16 に設定されています。 これは、インデクサーが 1,000,000GRT をステークしている場合、そのデリゲーション容量はプロトコルで使用できる 16,000,000GRT のデリゲーショントークンであることを意味します。 この量を超えるデリゲートされたトークンは、全てのデリゲーター報酬を薄めてしまいます。 +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -88,16 +120,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### MetaMask "Pending Transaction "バグ -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. + +#### 例 -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. -For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. 
In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will only be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## ネットワーク UI のビデオガイド +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/ja/network/developing.mdx b/website/pages/ja/network/developing.mdx index beca8747424f..a8c89dc2e4d4 100644 --- a/website/pages/ja/network/developing.mdx +++ b/website/pages/ja/network/developing.mdx @@ -2,52 +2,88 @@ title: 現像 --- -開発者は、The Graphのエコシステムの需要側である。開発者はサブグラフを構築し、それをThe Graph Networkに公開する。そして、アプリケーションを動かすために、GraphQLでサブグラフをクエリします。 +To start coding right away, go to [Developer Quick Start](/quick-start/). + +## 概要 + +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. + +On The Graph, you can: + +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. + +### What is GraphQL? + +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? + +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `subgraph.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). 
+- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). ## サブグラフのライフサイクル -ネットワークに配置されたサブグラフは、ライフサイクルが定義されています。 +Here is a general overview of a subgraph’s lifecycle: -### ローカルでビルド +![サブグラフのライフサイクル](/img/subgraph-lifecycle.png) -すべてのサブグラフ開発と同様に、ローカルでの開発とテストから始まります。開発者は、`graph-cli` と `graph-ts` を利用して、The Graph Network、ホステッド サービス、またはローカル グラフ ノードのいずれを構築する場合でも、同じローカル セットアップを使用して構築できます。サブグラフ。開発者は、[Matchstick](https://github.com/LimeChain/matchstick) などのツールを単体テストに使用して、サブグラフの堅牢性を向上させることをお勧めします。 +### ローカルでビルド -> The Graph Network には、機能とネットワーク サポートに関して一定の制約があります。 [サポートされているネットワーク](/developing/supported-networks)のサブグラフのみがインデックス作成の報酬を獲得できます。また、IPFS からデータを取得するサブグラフも資格がありません。 +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Deploy to Subgraph Studio -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. - -### ネットワークに公開 +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -開発者がサブグラフに満足したら、それをグラフネットワークに公開することができます。これはオンチェーンアクションであり、インデックス作成者が発見できるようにサブグラフを登録します。公開されたサブグラフは対応するNFTを持ち、これは簡単に転送できます。公開されたサブグラフには関連するメタデータがあり、他のネットワーク参加者に有用なコンテキストと情報を提供します。 +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. -### 索引作成を促すシグナル +### ネットワークに公開 -公開されたサブグラフは、シグナルを追加しないとインデックス作成者に拾われにくいです。シグナルは、与えられたサブグラフに関連するロックされたGRTで、与えられたサブグラフがクエリー量を受け取ることをインデックス作成者に示し、またその処理に利用できるインデックス作成報酬に寄与します。サブグラフの開発者は、インデックス作成を促進するために、一般的にそのサブグラフにシグナルを追加する。サードパーティのキュレーターも、そのサブグラフがクエリ量を増加させると判断した場合、そのサブグラフにシグナルを追加することができます。 +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### クエリ& アプリケーション開発 +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -サブグラフがインデクサーによって処理され、クエリに使用できるようになると、開発者はアプリケーションでサブグラフの使用を開始できます。開発者は、サブグラフを処理したインデクサーにクエリを転送するゲートウェイを介してサブグラフにクエリを実行し、GRT でクエリ料金を支払います。 +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. 
Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### サブグラフの更新 +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### クエリ& アプリケーション開発 -サブグラフ開発者は、更新の準備が完了すると、トランザクションを開始してサブグラフを新しいバージョンに向けることができます。サブグラフを更新すると、すべてのシグナルが新しいバージョンに移行されます (シグナルを適用したユーザーが「自動移行」を選択したと仮定します)。これには移行税もかかります。このシグナルの移行により、インデクサーは新しいバージョンのサブグラフのインデックス作成を開始するよう促されるため、すぐにクエリに使用できるようになるはずです。 +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### サブグラフの廃止 +Learn more about [querying subgraphs](/querying/querying-the-graph/). -ある時点で、開発者は公開されたサブグラフが不要になったと判断することがあります。そのとき、開発者はサブグラフを非推奨とし、キュレータにシグナライズされたGRTを返します +### サブグラフの更新 -### 多様な開発者の役割 +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -開発者の中には、ネットワーク上のサブグラフのライフサイクルに関与し、自分のサブグラフを公開し、クエリし、反復する者もいる。サブグラフの開発に重点を置き、他の人が構築できるオープンなAPIを構築する人もいます。また、アプリケーションに焦点を当て、他の人が配置したサブグラフをクエリすることもあります。 +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### 開発者とネットワーク経済学 +### Deprecating & Transferring Subgraphs -開発者はネットワークにおける主要な経済的主体であり、インデックス作成を促進するために GRT をロックアップし、ネットワークの主要な価値交換であるサブグラフのクエリを非常に重要にしています。サブグラフ開発者は、サブグラフが更新されるたびに GRT も書き込みます。 +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/ja/network/explorer.mdx b/website/pages/ja/network/explorer.mdx index 4d62708f191d..1fd4f1692eda 100644 --- a/website/pages/ja/network/explorer.mdx +++ b/website/pages/ja/network/explorer.mdx @@ -2,21 +2,35 @@ title: グラフエクスプローラ --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. 
For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## サブグラフ -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. +After you just finish deploying and publishing your subgraph in Subgraph Studio, click on the "subgraphs tab” at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![エクスプローラー画像1](/img/Subgraphs-Explorer-Landing.png) -サブグラフをクリックすると、プレイグラウンドでクエリをテストすることができ、ネットワークの詳細を活用して情報に基づいた意思決定を行うことができます。 また、自分のサブグラフや他の人のサブグラフで GRT をシグナリングして、その重要性や品質をインデクサに認識させることができます。 これは、サブグラフにシグナルを送ることで、そのサブグラフがインデックス化され、最終的にクエリに対応するためにネットワーク上に現れてくることを意味するため、非常に重要です。 +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![エクスプローラーイメージ 2](/img/Subgraph-Details.png) -各サブグラフの専用ページでは、いくつかの詳細が表示されます。 その内容は以下の通りです: +On each subgraph’s dedicated page, you can do the following: - サブグラフのシグナル/アンシグナル - チャート、現在のデプロイメント ID、その他のメタデータなどの詳細情報の表示 @@ -31,26 +45,32 @@ First things first, if you just finished deploying and publishing your subgraph ## 参加者 -このタブでは、Indexer、Delegator、Curators など、ネットワークアクティビティに参加している全ての人を俯瞰できます。 以下では、各タブの意味を詳しく説明します。 +This section provides a bird' s-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. インデクサー(Indexers) ![エクスプローラーイメージ 4](/img/Indexer-Pane.png) -まず、インデクサーから説明します。 インデクサーはプロトコルのバックボーンであり、サブグラフに利害関係を持ち、インデックスを作成し、サブグラフを消費する人にクエリを提供します。 インデクサーテーブルでは、インデクサーのデリゲーションパラメータ、ステーク、各サブグラフへのステーク量、クエリフィーとインデクシング報酬による収益を確認することができます。 詳細は以下のとおりです: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
-- Query Fee Cut - デリゲーターとの分配時にインデクサーが保持するクエリーフィーリベートの割合 -- Effective Reward Cut - デリゲーションプールに適用されるインデックス報酬のカット。 これがマイナスの場合、インデクサーが報酬の一部を手放していることを意味します。 プラスの場合は、インデクサーが報酬の一部を保持していることを意味します -- Cooldown Remaining - インデクサーが上記のデリゲーションパラメータを変更できるようになるまでの残り時間です。 クールダウン期間は、インデクサーがデリゲーションパラメータを更新する際に設定します -- Owned - インデクサーが預けているステークで、悪意のある行為や不正な行為があった場合にスラッシュされる可能性があります -- Delegated- インデクサーによって割り当てることはできますが、スラッシュすることはできませんデリゲーターからのステーク -- Allocated - インデックスを作成中のサブグラフに対してインデクサーが割り当てているステーク額 -- Available Delegation Capacity - インデクサーが過度に委任される前にまだ受け取ることができる委任されたステークの量 +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - インデクサーが生産的に受け入れることができる委任されたステークの最大量。超過した委任されたステークは、割り当てや報酬の計算には使用できません。 -- Query Fees - あるインデクサーのクエリに対してエンドユーザーが支払った手数料の合計額です +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Indexer Rewards - インデクサーとそのデリゲーターが過去に獲得したインデクサー報酬の総額。 インデクサー報酬は GRT の発行によって支払われます -インデクサーはクエリ報酬とインデックス報酬の両方を得ることができます。 機能的には、ネットワーク参加者が GRT をインデクサーにデリゲーションしたときに発生します。 これにより、インデクサーはそのインデクサーパラメータに応じてクエリフィーや報酬を受け取ることができます。 インデックスパラメータの設定は、表の右側をクリックするか、インデクサーのプロフィールにアクセスして「Delegate」ボタンをクリックすることで行うことができます。 +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. インデクサーになるには、[公式ドキュメント](/network/indexing)や[The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/)を見てみてください。 @@ -58,9 +78,13 @@ First things first, if you just finished deploying and publishing your subgraph ### 2. キュレーター -キュレーターはサブグラフを分析し、どのサブグラフが最高品質であるかを特定します。 キュレーターが魅力的なサブグラフを見つけたら、そのボンディングカーブにシグナルを送ることでキュレーションすることができます。 そうすることで、キュレーターはインデクサーにどのサブグラフが高品質であり、インデックスを作成すべきかを知らせることができます。 +Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. 
As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -キュレーターはコミュニティのメンバー、データ消費者、あるいはサブグラフの開発者でもあり、GRT トークンをボンディングカーブに預けることで自分のサブグラフにシグナルを送ります。 GRT を預け入れることで、キュレーターはサブグラフのキュレーションシェアを獲得します。 その結果、キュレーターは、自分がシグナルを送ったサブグラフが生成したクエリフィーの一部を得ることができます。 ボンディングカーブは、キュレーターが最高品質のデータソースをキュレーションする動機付けとして機能します。 このセクションの「Curator」テーブルでは、以下を確認することができます: +In the The Curator table listed below you can see: - キュレーターがキュレーションを開始した日付 - デポジットされた GRT の数 @@ -68,34 +92,36 @@ First things first, if you just finished deploying and publishing your subgraph ![エクスプローラーイメージ 6](/img/Curation-Overview.png) -キュレーターの役割についてさらに詳しく知りたい場合は、[The Graph Academy](https://thegraph.academy/curators/) か [official documentation.](/network/curating)を参照してください。 +If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. デリゲーター -デリゲーターは、グラフネットワークの安全性と分散性を維持するための重要な役割を担っています。 デリゲーターは、GRT トークンを 1 人または複数のインデクサーにデリゲート(=「ステーク」)することでネットワークに参加します。 デリゲーターがいなければ、インデクサーは大きな報酬や手数料を得ることができません。 そのため、インデクサーは獲得したインデクシング報酬やクエリフィーの一部をデリゲーターに提供することで、デリゲーターの獲得を目指します。 +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. -デリゲーターは、過去のパフォーマンス、インデクシング報酬率、クエリ手数料の割引率など、さまざまな要因に基づいてインデクサーを選択します。コミュニティ内での評判もこれに影響を与える可能性があります!選ばれたインデクサーと連携することをお勧めします。それには[The Graph's Discord](https://discord.gg/graphprotocol)or[The Graph Forum](https://forum.thegraph.com/)を利用することができます! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! ![エクスプローラーイメージ 7](/img/Delegation-Overview.png) -「Delegators」テーブルでは、コミュニティ内のアクティブなデリゲーターを確認できるほか、以下のような指標も確認できます: +In the Delegators table you can see the active Delegators in the community and important metrics: - デリゲーターがデリゲーションしているインデクサー数 - デリゲーターの最初のデリゲーション内容 - デリゲーターが蓄積したがプロトコルから引き出していない報酬 - プロトコルから撤回済みの報酬 - 現在プロトコルに保持している GRT 総量 -- 最後にデリゲートした日 +- The date they last delegated -委任者になる方法について詳しく知りたい場合は、もう探す必要はありません。 [公式ドキュメント](/network/delegating)または[The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers)にアクセスするだけです。 +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## ネットワーク -「Network」セクションでは、グローバルな KPI に加えて、エポック単位に切り替えてネットワークメトリクスをより詳細に分析する機能があります。 これらの詳細を見ることで、ネットワークが時系列でどのようなパフォーマンスをしているかを知ることができます。 +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. 
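Beyond browsing the UI, the same global figures can be pulled programmatically from The Graph Network subgraph. The sketch below is illustrative only: the entity and field names (`graphNetwork`, `totalTokensStaked`, and so on) are assumptions about that subgraph's schema, so confirm them against the live schema in Graph Explorer before relying on them.

```graphql
# Hypothetical query against The Graph Network subgraph; field names are assumptions.
{
  graphNetwork(id: "1") {
    totalTokensStaked    # GRT staked by Indexers (assumed field)
    totalDelegatedTokens # GRT delegated to Indexers (assumed field)
    totalTokensSignalled # GRT signaled by Curators (assumed field)
    totalQueryFees       # cumulative query fees paid (assumed field)
  }
}
```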
### 概要 -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - 現在のネットワーク全体のステーク額 - インデクサーとデリゲーター間のステーク配分 @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - キュレーション報酬、インフレーション・レートなどのプロトコルパラメータ - 現在のエポックの報酬と料金 -特筆すべき重要な詳細をいくつか挙げます: +A few key details to note: -- **クエリフィーは消費者によって生成された報酬を表し**、サブグラフへの割り当てが終了し、提供したデータが消費者によって検証された後、少なくとも 7 エポック(下記参照)の期間後にインデクサが請求することができます(または請求しないこともできます)。 -- **I インデックス報酬は、エポック期間中にインデクサーがネットワーク発行から請求した報酬の量を表しています。**プロトコルの発行は固定されていますが、報酬はインデクサーがインデックスを作成したサブグラフへの割り当てを終了して初めてミントされます。 そのため、エポックごとの報酬数は変動します(例えば、あるエポックでは、インデクサーが何日も前から開いていた割り当てをまとめて閉じたかもしれません)。 +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![エクスプローラーイメージ 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ The overview section has all the current network metrics as well as some cumulat - アクティブエポックとは、インデクサーが現在ステークを割り当て、クエリフィーを収集しているエポックのこと - 決済エポックとは、状態のチャンネルを決済しているエポックのこと。 つまり、消費者がインデクサーに対して異議を唱えた場合、インデクサーはスラッシュされる可能性があるということ - 分配エポックとは、そのエポックの状態チャンネルが確定し、インデクサーがクエリフィーのリベートを請求できるようになるエポックのこと - - 確定したエポックとは、インデクサーが請求できるクエリフィーのリベートが残っていないエポックのことで、確定している + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![エクスプローラーイメージ 9](/img/Epoch-Stats.png) ## ユーザープロファイル -ネットワーク統計について説明したので、個人プロファイルに移りましょう。個人プロファイルは、ネットワークへの参加方法に関係なく、ネットワーク アクティビティを確認できる場所です。クリプト ウォレットはユーザー プロファイルとして機能し、ユーザー ダッシュボードでは次の情報を確認できます。 +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### プロフィールの概要 -ここでは、あなたが現在行ったアクションを確認できます。 また、自分のプロフィール情報、説明、ウェブサイト(追加した場合)もここに表示されます。 +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![エクスプローラーイメージ 10](/img/Profile-Overview.png) ### サブグラフタブ -「Subgraphs」タブをクリックすると、公開されているサブグラフが表示されます。 サブグラフは分散型ネットワークに公開されたときにのみ表示されます。 +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![エクスプローラーイメージ 11](/img/Subgraphs-Overview.png) ### インデックスタブ -[インデックス作成] タブをクリックすると、サブグラフに対するすべてのアクティブな割り当てと履歴割り当てを含むテーブルと、インデクサーとしての過去のパフォーマンスを分析して確認できるグラフが表示されます。 +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. 
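The allocation data surfaced in this tab can also be fetched directly from The Graph Network subgraph. This is only a sketch under assumed names: the entity and field names (`indexer`, `allocations`, `allocatedTokens`, `createdAtEpoch`) may differ from the live schema, so check them in Graph Explorer first.

```graphql
# Hypothetical query for an Indexer's allocations; field names are assumptions.
{
  indexer(id: "<INDEXER_ADDRESS>") {
    allocations(first: 5) {
      id
      allocatedTokens # GRT allocated to the subgraph deployment (assumed field)
      createdAtEpoch  # epoch in which the allocation was opened (assumed field)
    }
  }
}
```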
このセクションには、インデクサー報酬とクエリフィーの詳細も含まれます。 以下のような指標が表示されます: @@ -158,7 +189,9 @@ The overview section has all the current network metrics as well as some cumulat ### デリゲーションタブ -デリゲーターは、グラフネットワークにとって重要な存在です。 デリゲーターは知見を駆使して、健全な報酬を提供するインデクサーを選ばなければなりません。 このタブでは、アクティブなデリゲーションの詳細と過去の履歴、そしてデリゲートしたインデクサーの各指標を確認することができます。 +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. ページの前半には、自分のデリゲーションチャートと報酬のみのチャートが表示されています。 左側には、現在のデリゲーションメトリクスを反映した KPI が表示されています。 diff --git a/website/pages/ja/network/indexing.mdx b/website/pages/ja/network/indexing.mdx index f7047e258c61..b2eca1c09fe0 100644 --- a/website/pages/ja/network/indexing.mdx +++ b/website/pages/ja/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap コミュニティが作成したダッシュボードの多くには保留中の報酬値が含まれており、次の手順に従って手動で簡単に確認できます。 -1. [メインネット・サブグラフ](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet)にクエリして、全てのアクティブなアロケーションの ID を取得します。 +1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -112,12 +112,12 @@ Use Etherscan to call `getRewards()`: - **Medium** - 100 個のサブグラフと 1 秒あたり 200 ~ 500 のリクエストをサポートするプロダクションインデクサー - **Large** - 現在使用されているすべてのサブグラフのインデックスを作成し、関連するトラフィックのリクエストに対応します -| Setup | Postgres
    (CPUs) | Postgres
    (memory in GBs) | Postgres
    (disk in TBs) | VMs
    (CPUs) | VMs
    (memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
    (CPUs) | Postgres
    (memory in GBs) | Postgres
    (disk in TBs) | VMs
    (CPUs) | VMs
    (memory in GBs) | +| -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | ### インデクサーが取るべきセキュリティ対策は? @@ -149,20 +149,20 @@ Use Etherscan to call `getRewards()`: #### グラフノード -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------- | ------------------------------------------------------ | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
    (for paid subgraph queries) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------------ | --------------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
    (for paid subgraph queries) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -544,7 +544,7 @@ graph indexer status - `graph indexer rules maybe [options] ` - 配置の`thedecisionBasis` を`rules`に設定し、インデクサーエージェントがインデキシングルールを使用して、この配置にインデックスを作成するかどうかを決定するようにします。 -- `graph indexer actions get [options] ` - `all` を使って一つ以上のアクションを取得するか、 `action-id` を空にすると全てのアクションを取得します。追加引数 `--status` は、特定のステータスのアクションをすべて出力するために使用されます。 +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - キューの割り当てアクション。 @@ -730,10 +730,10 @@ default => 0.1 * $SYSTEM_LOAD; 上記のモデルを使用したクエリのコスト計算の例: -| クエリ | 価格 | +| クエリ | 価格 | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | -| { トークン { シンボル } } | 0.1 GRT | +| { トークン { シンボル } } | 0.1 GRT | | { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT | #### コストモデルの適用 diff --git a/website/pages/ja/network/overview.mdx b/website/pages/ja/network/overview.mdx index de1a9994f545..d46303b4c294 100644 --- a/website/pages/ja/network/overview.mdx +++ b/website/pages/ja/network/overview.mdx @@ -2,14 +2,20 @@ title: ネットワークの概要 --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## 概要 +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![トークンエコノミクス](/img/Network-roles@2x.png) -The Graph Network の経済的安全性と照会されるデータの完全性を確保するために、参加者は Graph トークン ([GRT](/tokenomics)) をステークして使用します。 GRT は、ネットワークでリソースを割り当てるために使用される ERC-20 であるワーク ユーティリティ トークンです。 +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/ja/new-chain-integration.mdx b/website/pages/ja/new-chain-integration.mdx index d4d71e358463..76d085dc6ea8 100644 --- a/website/pages/ja/new-chain-integration.mdx +++ b/website/pages/ja/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: 新しいネットワークの統合 +title: New Chain Integration --- -Graph Nodeは現在、以下のチェーンタイプからデータをインデックス化できます: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via a [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via a [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via an [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -もしご興味があるチェーンがあれば、統合はGraph Nodeの設定とテストの問題です。 +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -異なるチェーンタイプに興味がある場合、Graph Nodeとの新しい統合を構築する必要があります。私たちの推奨するアプローチは、問題のチェーン用に新しい Firehose を開発し、その Firehose を Graph Node と統合することです。詳細は下記をご覧ください。 +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -ブロックチェーンが EVM と同等であり、クライアント/ノードが標準の EVM JSON-RPC API を公開している場合、グラフ ノードは新しいチェーンのインデックスを作成できるはずです。 詳細については、「EVM JSON-RPC のテスト」(new-chain-integration#testing-an-evm-json-rpc) を参照してください。 +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### EVM JSON-RPC のテスト -EVMベースでないチェーンの場合、Graph NodeはgRPCと既知の型定義を介してブロックチェーンデータを取り込む必要があります。これは[StreamingFast](https://www.streamingfast.io/)によって開発された新技術である[Firehose](firehose/)を介して行うことができ、ファイルベースとストリーミングファーストアプローチを使用して高度にスケーラブルなインデックス化ブロックチェーンソリューションを提供します。Firehoseの開発でサポートが必要な場合は、[StreamingFastチーム](mailto:integrations@streamingfast.io/)までご連絡ください。 +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## EVM JSON-RPC と Firehose の違い +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -これらの2つの方法は、サブグラフに適していますが、[Substreams](substreams/), を使用して開発者がビルドする場合、常にFirehoseが必要です。これには、[Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/) のようなサブストリームを活用したサブグラフの構築が含まれます。さらに、FirehoseはJSON-RPCと比較して、改善されたインデックス化速度を提供します。 +### 2. Firehose Integration -新しいEVMチェーンの統合者は、サブストリームの利点とその大規模な並列化されたインデックス化能力を考慮して、Firehoseベースのアプローチも検討することができます。両方をサポートすることで、開発者は新しいチェーンに対してサブストリームまたはサブグラフのどちらを構築するかを選択できるようになります。 +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. 
This helps increase the speed of syncing and indexing. -> **注意**: EVM チェーンの Firehose ベースの統合では、インデクサーがチェーンのアーカイブ RPC ノードを実行してサブグラフに適切にインデックスを付ける必要があります。 これは、通常「eth_call」RPC メソッドによってアクセスできるスマート コントラクト状態を Firehose が提供できないためです。 (eth_calls は [開発者にとって良い習慣ではない](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/) であることを思い出してください) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## EVM JSON-RPC のテスト +#### Specific Firehose Instrumentation for EVM (`geth`) chains -Graph NodeがEVMチェーンからデータを取り込むためには、RPCノードは以下のEVM JSON RPCメソッドを公開する必要があります。 +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. -- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(optionally required for Graph Node to support call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graph Node の設定 +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**ローカル環境を準備することから始めます** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. 
+ +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graph Node の設定 + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. [この行](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) を変更して、新しいネットワーク名と EVM JSON RPC 準拠の URL を含めます。 - > 環境変数名自体は変更しないでください。ネットワーク名が異なる場合でも、「ethereum」という名前のままである必要があります。 -3. IPFSノードを実行するか、The Graphが使用するものを使用してください: https://api.thegraph.com/ipfs/ -**サブグラフをローカルにデプロイして統合をテストします。** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. 簡単なサブグラフの例を作成します。 いくつかのオプションを以下に示します。 - 1. 事前にパックされた [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) スマート コントラクトとサブグラフは良い出発点です。 - 2. 既存のスマート コントラクトまたは Solidity 開発環境からローカル サブグラフをブートストラップする [グラフ プラグインで Hardhat を使用](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. グラフ ノードでサブグラフを作成します: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. サブグラフをGraph Nodeに公開するには、次のコマンドを使用します:graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -Graph Nodeはエラーがない場合、デプロイされたサブグラフを同期するはずです。同期が完了するのを待ってから、ログに表示されたAPIエンドポイントに対していくつかのGraphQLクエリを送信してください。 +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## 新しい Firehose 対応チェーンの統合 +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. 簡単なサブグラフの例を作成します。 いくつかのオプションを以下に示します。 + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. 
Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Nodeはエラーがない場合、デプロイされたサブグラフを同期するはずです。同期が完了するのを待ってから、ログに表示されたAPIエンドポイントに対していくつかのGraphQLクエリを送信してください。 -新しいチェーンを統合することは、Firehoseアプローチを使用しても可能です。これは、非EVMチェーン向けの現在の最良のオプションであり、サブストリームサポートの要件でもあります。追加のドキュメントでは、Firehoseの動作方法、新しいチェーンへのFirehoseサポートの追加、およびGraph Nodeとの統合に焦点を当てています。統合者に推奨されるドキュメント: +## Substreams-powered Subgraphs -1. [General docs on Firehose](firehose/) -2. [新チェーンのFirehoseサポート追加](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrating Graph Node with a new chain via Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/ja/operating-graph-node.mdx b/website/pages/ja/operating-graph-node.mdx index cb9e4f14e8f3..71da764773e6 100644 --- a/website/pages/ja/operating-graph-node.mdx +++ b/website/pages/ja/operating-graph-node.mdx @@ -26,7 +26,7 @@ title: オペレーティンググラフノード ### IPFSノード -IPFS ノード(バージョン 未満) - サブグラフのデプロイメタデータは IPFS ネットワーク上に保存されます。 グラフノードは、サブグラフのデプロイ時に主に IPFS ノードにアクセスし、サブグラフマニフェストと全てのリンクファイルを取得します。 ネットワーク・インデクサーは独自の IPFS ノードをホストする必要はありません。 ネットワーク用の IPFS ノードは、https://ipfs.network.thegraph.com でホストされています。 +IPFS ノード(バージョン 未満) - サブグラフのデプロイメタデータは IPFS ネットワーク上に保存されます。 グラフノードは、サブグラフのデプロイ時に主に IPFS ノードにアクセスし、サブグラフマニフェストと全てのリンクファイルを取得します。 ネットワーク・インデクサーは独自の IPFS ノードをホストする必要はありません。 ネットワーク用の IPFS ノードは、https://ipfs.network.thegraph.com でホストされています。 ### Prometheus メトリクスサーバー @@ -77,13 +77,13 @@ Kubernetesの完全な設定例は、[indexerリポジトリ](https://github.com グラフノードは起動時に以下のポートを公開します。 -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ------------------------------------------------------- | ------------------------------------------------------ | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | > **重要**: ポートを公に公開する場合は注意してください。**管理ポート**はロックしておく必要があります。ノードの JSON-RPC エンドポイント diff --git a/website/pages/ja/querying/graphql-api.mdx b/website/pages/ja/querying/graphql-api.mdx index 204a2fd52e89..839a9bff35f4 100644 --- a/website/pages/ja/querying/graphql-api.mdx +++ b/website/pages/ja/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## クエリ +## What is GraphQL? -サブグラフのスキーマには、`Entities`と呼ばれるタイプが定義されています。各`Entity`タイプには、トップレベルの`Query`タイプに`entity`と`entities`フィールドが生成されます。なお、The Graph を使用する際には、`graphql`の`query` の先頭にクエリを含める必要はありません。 +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### 例 @@ -21,7 +29,7 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. } ``` -> **注:** 単一のエンティティを照会する場合、`id` フィールドは必須であり、文字列でなければなりません。 +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. すべての `Token` エンティティをクエリします。 @@ -36,7 +44,10 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. ### 並べ替え -コレクションをクエリする場合、`orderBy` パラメータを使用して特定の属性で並べ替えることができます。さらに、`orderDirection` を使用してソート方向を指定できます。昇順の場合は `asc`、降順の場合は `desc` です。 +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### 例 @@ -53,7 +64,7 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. グラフ ノード [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) の時点で、エンティティを並べ替えることができますネストされたエンティティに基づいています。 -次の例では、所有者の名前でトークンを並べ替えます。 +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. ### ページネーション -コレクションをクエリする場合、`first` パラメータを使用して、コレクションの先頭から改ページすることができます。デフォルトのソート順は、作成時間順ではなく、英数字の昇順の ID 順であることに注意してください。 - -さらに、 `skip` パラメーターを使用してエンティティをスキップし、ページ分割することができます。例えば`first:100` は最初の 100 個のエンティティを示し、`first:100, skip:100` は次の 100 個のエンティティを示します。 +When querying a collection, it's best to: -クエリは一般にパフォーマンスが低いため、非常に大きな `skip` 値を使用しないでください。多数のアイテムを取得するには、最後の例で示したように、属性に基づいてエンティティをページングする方がはるかに優れています。 +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. 
To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### `first` を使用した例 @@ -106,7 +118,7 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. #### `first` と `id_ge` を使用した例 -クライアントが多数のエンティティを取得する必要がある場合は、属性に基づいてクエリを実行し、その属性でフィルター処理する方がはるかに効率的です。たとえば、クライアントは次のクエリを使用して多数のトークンを取得します。 +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -初めて、`lastID = ""` でクエリを送信し、後続のリクエストでは `lastID` を最後の `id` 属性に設定します。前のリクエストのエンティティ。このアプローチは、`skip` 値を増やして使用するよりもはるかに優れたパフォーマンスを発揮します。 +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### フィルタリング -クエリで `where` パラメータを使用して、さまざまなプロパティをフィルタリングできます。 `where` パラメータ内で複数の値をフィルタリングできます。 +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### `where` を使用した例 @@ -155,7 +168,7 @@ query manyTokens($lastID: String) { #### ブロックフィルタリングの例 -`_change_block(number_gte: Int)` でエンティティをフィルタリングすることもできます - これは、指定されたブロック内またはそれ以降に更新されたエンティティをフィルタリングします。 +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. これは、前回のポーリング以降など、変更されたエンティティのみをフェッチする場合に役立ちます。または、サブグラフでエンティティがどのように変化しているかを調査またはデバッグするのに役立ちます (ブロック フィルターと組み合わせると、特定のブロックで変更されたエンティティのみを分離できます)。 @@ -193,7 +206,7 @@ Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/ ##### `AND` 演算子 -次の例では、`outcome` `succeeded` および `number` が `100` 以上のチャレンジをフィルタリングしています。 +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. ```graphql { @@ -208,7 +221,7 @@ Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/ ``` > **シンタックス シュガー:** コンマで区切られた部分式を渡すことで `and` 演算子を削除することで、上記のクエリを簡素化できます。 -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/ ##### `OR` 演算子 -次の例では、`outcome` `succeeded` または `number` が `100` 以上のチャレンジをフィルタリングしています。 +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) デフォルトである最新のブロックだけでなく、過去の任意のブロックについてもエンティティの状態を照会できます。クエリが発生するブロックは、クエリのトップレベル フィールドに `block` 引数を含めることで、ブロック番号またはブロック ハッシュのいずれかで指定できます。 -そのようなクエリの結果は時間の経過とともに変化しません。つまり、特定の過去のブロックでクエリを実行しても、いつ実行されたとしても同じ結果が返されます。ただし、チェーンの先頭に非常に近いブロックでクエリを実行する場合を除いては、そのブロックがメインチェーン上にないことが判明し、チェーンが再構築される場合に結果が変わる可能性があります。ブロックが最終的とみなせるようになると、クエリの結果は変わらなくなります。 +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. 
Once a block can be considered final, the result of the query will not change. -現在の実装には、これらの保証を破る可能性がある特定の制限がまだ存在することに注意してください。実装は常に特定のブロックハッシュがメインチェーン上に存在しないことを判断できるわけではなく、また、まだ最終的とみなせないブロックのブロックハッシュによるクエリの結果が、同時に実行されるブロックの再構築によって影響を受ける可能性があります。これらの制限は、ブロックが最終的であり、メインチェーン上に存在することが確認されている場合には、ブロックハッシュによるクエリの結果に影響を与えません。詳細は[この問題](https://github.com/graphprotocol/graph-node/issues/1405)で説明されています。 +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. #### 例 @@ -322,12 +335,12 @@ _change_block(number_gte: Int) 全文検索演算子: -| シンボル | オペレーター | 説明書き | -| --- | --- | --- | -| `&` | `と` | 複数の検索語を組み合わせて、指定したすべての検索語を含むエンティティをフィルタリングします。 | -| | | `Or` | 複数の検索語をオペレーターで区切って検索すると、指定した語のいずれかにマッチするすべてのエンティティが返されます。 | -| `<->` | `Follow by` | 2 つの単語の間の距離を指定します。 | -| `:*` | `プレフィックス` | プレフィックス検索語を使って、プレフィックスが一致する単語を検索します(2 文字必要) | +| シンボル | オペレーター | 説明書き | +| ----------- | ----------- | --------------------------------------------------------- | +| `&` | `と` | 複数の検索語を組み合わせて、指定したすべての検索語を含むエンティティをフィルタリングします。 | +| | | `Or` | 複数の検索語をオペレーターで区切って検索すると、指定した語のいずれかにマッチするすべてのエンティティが返されます。 | +| `<->` | `Follow by` | 2 つの単語の間の距離を指定します。 | +| `:*` | `プレフィックス` | プレフィックス検索語を使って、プレフィックスが一致する単語を検索します(2 文字必要) | #### 例 @@ -376,11 +389,11 @@ _change_block(number_gte: Int) ## スキーマ -データ ソースのスキーマ、つまりクエリに使用できるエンティティ タイプ、値、および関係は、[GraphQL インターフェイス定義言語 (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System)。 +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL スキーマは通常、`クエリ`、`サブスクリプション`、および `ミューテーション` のルート タイプを定義します。グラフは `クエリ` のみをサポートします。サブグラフのルート `Query` タイプは、サブグラフ マニフェストに含まれる GraphQL スキーマから自動的に生成されます。 +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **注:** 開発者はアプリケーションから基盤となるブロックチェーンに対して直接トランザクションを発行することが期待されるため、API はミューテーションを公開しません。 +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### エンティティ diff --git a/website/pages/ja/querying/querying-best-practices.mdx b/website/pages/ja/querying/querying-best-practices.mdx index dc58ec63e7bb..f4a26c19d903 100644 --- a/website/pages/ja/querying/querying-best-practices.mdx +++ b/website/pages/ja/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: クエリのベストプラクティス --- -The Graphは、ブロックチェーンのデータをクエリするための分散化された方法を提供します。 +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. 
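For a concrete sense of what this looks like, below is a minimal sketch of sending a GraphQL query to a subgraph's query URL with plain `fetch`. The endpoint URL and the `tokens` entity are placeholders for your own subgraph, and `graph-client` — discussed later on this page — remains the recommended client for application code.

```ts
// Minimal sketch: query a subgraph over HTTP with fetch (Node 18+ or any modern browser).
// The URL and the `tokens` entity below are placeholders for your own subgraph.
const QUERY_URL = 'https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>'

const query = /* GraphQL */ `
  {
    tokens(first: 5) {
      id
      owner
    }
  }
`

async function main(): Promise<void> {
  const response = await fetch(QUERY_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  })

  // GraphQL responses contain `data` on success and `errors` on failure.
  const { data, errors } = await response.json()
  if (errors) {
    console.error(errors)
    return
  }
  console.log(data.tokens)
}

main()
```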
-The GraphのネットワークのデータはGraphQL APIで公開され、GraphQL言語によるデータクエリーが容易になります。 - -このページでは、GraphQLの言語ルールとGraphQLクエリのベストプラクティスに必要不可欠な情報をご案内しています。 +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQLは、HTTPを介して転送される言語と一連の規約です。 これは、標準の`fetch`(ネイティブであれば、`@whatwg-node/fetch`や`isomorphic-fetch`を介しても)を使用して、GraphQL APIにクエリを送信できることを意味します。 -ただし、「[アプリケーションからのクエリ](/querying/querying-from-an-application)」で述べたように、以下のような固有の機能をサポートする`graph-client`を使用することをおすすめします。 +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - クロスチェーンのサブグラフ処理:1回のクエリで複数のサブグラフからクエリを実行可能 - [自動ブロック追跡](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() その他の GraphQL クライアントの代替手段については、[「アプリケーションからのクエリ」](/querying/querying-from-an-application) で説明します。 -GraphQL クエリ構文の基本ルールを説明したので、今度は GraphQL クエリ記述のベスト プラクティスを見てみましょう。 - --- ## ベストプラクティス @@ -164,11 +160,11 @@ const result = await execute(query, { - サーバーレベルで**変数がキャッシュできます**。 - **ツールでクエリを静的に分析できる**(これについては、次のセクションで詳しく説明します。) -**注: 静的クエリに条件付きでフィールドを含める方法** +### How to include fields conditionally in static queries -特定の条件でのみ `owner` フィールドを含めることができます。 +You might want to include the `owner` field only on a particular condition. -このために、次のように `@include(if:...)` ディレクティブを利用できます: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -注: 反対のディレクティブは `@skip(if: ...)` です。 +> 注: 反対のディレクティブは `@skip(if: ...)` です。 ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL は、「欲しいものを聞いてください」というキャッチ このため、GraphQLでは、個々にリストすることなくすべての利用可能なフィールドを取得する方法はありません。 -GraphQL APIをクエリする際には、実際に使用するフィールドのみをクエリするように常に考えてください。 - -過剰なデータ取得の一般的な原因は、エンティティのコレクションです。デフォルトでは、クエリはコレクション内のエンティティを100個取得しますが、通常、実際に使用される量(たとえば、ユーザーに表示される量)よりもはるかに多いです。そのため、クエリはほぼ常に`first`を明示的に設定し、実際に必要なだけのエンティティを取得するようにする必要があります。これは、クエリ内のトップレベルのコレクションだけでなく、さらにエンティティのネストされたコレクションにも当てはまります。 +- GraphQL APIをクエリする際には、実際に使用するフィールドのみをクエリするように常に考えてください。 +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. たとえば、次のクエリでは: @@ -335,8 +330,8 @@ query { このような繰り返しフィールド (`id`、`active`、`status`) は、多くの問題を引き起こします。 -- より広範囲なクエリに対応するために読みにくくなります -- クエリに基づいて TypeScript 型を生成するツールを使用する場合 (_前のセクションで詳しく説明します_)、`newDelegate` および `oldDelegate` は、2 つの異なるインライン インターフェイスになります。 +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. クエリのリファクタリングされたバージョンは次のようになります: @@ -362,13 +357,13 @@ fragment DelegateItem on Transcoder { } ``` -GraphQLの`fragment`を使用すると、可読性が向上します(特に大規模な場合)し、さらにはより良いTypeScriptの型生成にも結びつきます。 +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. 
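As an illustration of the type-generation benefit, the `DelegateItem` fragment above could translate into a single reusable type roughly like the sketch below. The exact names and shape depend on your code-generation tooling and configuration (see the Tools section at the end of this page); `BondEventFields` is only an illustrative name.

```ts
// Rough sketch of code-generated output — the exact shape depends on your codegen setup.
export type DelegateItemFragment = {
  __typename?: 'Transcoder'
  id: string
  active: boolean
  status: string
}

// Both delegate fields can now share the same fragment type
// instead of producing two separate inline interfaces.
export type BondEventFields = {
  id: string
  newDelegate: DelegateItemFragment
  oldDelegate: DelegateItemFragment
}
```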
型生成ツールを使用すると、上記のクエリは適切な`DelegateItemFragment`型を生成します(_最後の「ツール」セクションを参照_)。 ### GraphQLフラグメントの注意点 -**フラグメントベースは型である必要があります** +### フラグメントベースは型である必要があります フラグメントは、適用できない型、つまり**フィールドを持たない型**に基づくことはできません。 @@ -380,7 +375,7 @@ fragment MyFragment on BigInt { `BigInt` は**スカラー** (ネイティブの「プレーン」タイプ) であり、フラグメントのベースとして使用できません。 -**フラグメントを拡散する方法** +#### フラグメントを拡散する方法 フラグメントは特定のタイプに定義されているため、クエリではそれに応じて使用する必要があります。 @@ -409,16 +404,16 @@ fragment VoteItem on Vote { ここでタイプ `Vote` のフラグメントを拡散することはできません。 -**フラグメントをデータのアトミックなビジネス単位として定義する** +#### フラグメントをデータのアトミックなビジネス単位として定義する -GraphQL フラグメントは、その使用法に基づいて定義する必要があります。 +GraphQL `Fragment`s must be defined based on their usage. ほとんどのユースケースでは、1つのタイプに対して1つのフラグメントを定義すること(繰り返しフィールドの使用または型生成の場合)で十分です。 -Fragment を使用する場合の経験則は次のとおりです: +Here is a rule of thumb for using fragments: -- 同じ型のフィールドがクエリ内で繰り返される場合、それらをFragmentでグループ化します。 -- 同じフィールドが繰り返される場合、複数のフラグメントを作成します。 +- When fields of the same type are repeated in a query, group them in a `Fragment`. +- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -441,7 +436,7 @@ fragment VoteWithPoll on Vote { --- -## 必須ツール +## The Essential Tools ### GraphQL ウェブベースのエクスプローラ @@ -471,11 +466,11 @@ If you are looking for a more flexible way to debug/test your queries, other sim The [GraphQL VSCode Extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is a great addition to your development workflow, allowing you to: -- 構文の強調表示 -- オートコンプリートの提案 -- スキーマに対する検証 -- snippets -- フラグメントと入力タイプの定義に移動 +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types `graphql-eslint`を使用している場合、[ESLint VSCode拡張機能](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint)はエラーや警告を正しくコード内に表示するために必須です。 @@ -483,9 +478,9 @@ The [GraphQL VSCode Extension](https://marketplace.visualstudio.com/items?itemNa [JS GraphQLプラグイン](https://plugins.jetbrains.com/plugin/8097-graphql/)は、以下を提供することで、GraphQLを使用する際のエクスペリエンスを大幅に向上させます。 -- 構文の強調表示 -- オートコンプリートの提案 -- スキーマに対する検証 -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -詳細は、この[WebStorm の記事](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/)で、プラグインの主な機能をすべて紹介しています。 +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/ja/quick-start.mdx b/website/pages/ja/quick-start.mdx index 987d081a1d94..bc5b2bde3643 100644 --- a/website/pages/ja/quick-start.mdx +++ b/website/pages/ja/quick-start.mdx @@ -2,24 +2,18 @@ title: クイックスタート --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -サブグラフが [supported network](/developing/supported-networks) からのデータにインデックスを付けることを確認してください。 - -このガイドは、次のことを前提として書かれています。 +## Prerequisites for this guide - クリプトウォレット -- 選択したネットワーク上のスマート コントラクト アドレス - -## 1. Subgraph Studio でサブグラフを作成する - -[Subgraph Studio](https://thegraph.com/studio/)にアクセスし、ウォレットを接続する。 +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." 
It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Graph CLI をインストールする +### 1. Graph CLI のインストール -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. ローカル マシンで、次のいずれかのコマンドを実行します。 @@ -35,132 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> 特定のサブグラフのコマンドは、[Subgraph Studio](https://thegraph.com/studio/) のサブグラフ ページで見つけることができます。 +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +特定のサブグラフのコマンドは、[Subgraph Studio](https://thegraph.com/studio/) のサブグラフ ページで見つけることができます。 -サブグラフを初期化すると、CLI ツールは次の情報を要求します。 +When you initialize your subgraph, the CLI will ask you for the following information: -- プロトコル: サブグラフがデータのインデックスを作成するプロトコルを選択します -- サブグラフ スラッグ: サブグラフの名前を作成します。サブグラフ スラッグは、サブグラフの識別子です。 -- サブグラフを作成するディレクトリ: ローカル ディレクトリを選択します -- Ethereum ネットワーク (オプション): サブグラフがデータのインデックスを作成する EVM 互換ネットワークを指定する必要がある場合があります。 -- コントラクト アドレス: データを照会するスマート コントラクト アドレスを見つけます。 -- ABI: ABI が自動入力されない場合は、JSON ファイルとして手動で入力する必要があります -- 開始ブロック: サブグラフがブロックチェーン データをインデックス化する間、時間を節約するために開始ブロックを入力することをお勧めします。コントラクトが展開されたブロックを見つけることで、開始ブロックを見つけることができます。 -- 契約名: 契約の名前を入力します -- コントラクト イベントをエンティティとしてインデックス付けする: これを true に設定することをお勧めします。発行されたすべてのイベントのサブグラフにマッピングが自動的に追加されるためです。 -- 別の契約を追加 (オプション): 別の契約を追加できます +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. 
Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. サブグラフを初期化する際に予想されることの例については、次のスクリーンショットを参照してください。 ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -前述のコマンドでは、サブグラフを作成するための出発点として使用できる scaffold サブグラフを作成します。 サブグラフに変更を加える際には、主に 3 つのファイルを使用します: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -サブグラフが作成されたら、次のコマンドを実行します。 +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. サブグラフが作成されたら、次のコマンドを実行します。 + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- サブグラフの認証とデプロイを行います。 デプロイキーは、Subgraph Studio の Subgraph ページにあります。 +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -バージョンラベルの入力を求められます。 「0.0.1」のようなバージョン管理には [semver](https://semver.org/) を使用することを強くお勧めします。 つまり、「v1」、「version1」、「asdf」などの任意の文字列をバージョンとして自由に選択できます。 - -## 6. サブグラフをテストする - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -ログは、サブグラフにエラーがあるかどうかを示します。運用サブグラフのログは次のようになります。 - -![Subgraph logs](/img/subgraph-logs-image.png) - -サブグラフに障害が発生した場合は、GraphiQL Playground を使用してサブグラフの健全性をクエリできます。 以下のクエリを利用して、サブグラフのデプロイメント ID を入力できることに注意してください。 この場合、`Qm...` はデプロイメント ID です (これは、サブグラフ ページの **詳細** にあります)。 以下のクエリはサブグラフがいつ失敗したかを通知するため、それに応じてデバッグできます。 - -```graphql -{ - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. 
Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -サブグラフを公開したいネットワークを選択します。 [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq) を利用するために、サブグラフを Arbitrum One に公開することをお勧めします。 +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -ガスのコストを節約するために、サブグラフを The Graph の分散型ネットワークに公開するときにこのボタンを選択すると、公開したのと同じトランザクションでサブグラフをキュレートできます。 +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). 
+ +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -これで、GraphQL クエリをサブグラフのクエリ URL に送信することで、サブグラフにクエリを実行できます。これは、クエリ ボタンをクリックして見つけることができます。 +### 8. Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -サブグラフからデータをクエリする方法については、[こちら](/querying/querying-the-graph/)を参照してください。 +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/ja/release-notes/assemblyscript-migration-guide.mdx b/website/pages/ja/release-notes/assemblyscript-migration-guide.mdx index 766fbb6c80a3..82a7d23de3f8 100644 --- a/website/pages/ja/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/ja/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - 変数シャドウイングを行っていた場合は、重複する変数の名前を変更する必要があります。 - ### Null 比較 - サブグラフのアップグレードを行うと、時々以下のようなエラーが発生することがあります。 ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - 解決するには、 `if` 文を以下のように変更するだけです。 ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - この問題を解決するには、そのプロパティアクセスのための変数を作成して、コンパイラが nullability check のマジックを行うようにします。 ```typescript diff --git a/website/pages/ja/sps/introduction.mdx b/website/pages/ja/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/ja/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. 
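To give a feel for the triggers approach, the subgraph-side handler is an AssemblyScript function that receives the raw bytes emitted by the Substreams module, decodes them with the generated Protobuf bindings, and writes entities. The sketch below is illustrative only: the `Transactions` output type, the import path of the generated bindings, and the `Transaction` entity are placeholders that depend on your module and schema — full, runnable walkthroughs are linked below.

```ts
// Illustrative sketch of a Substreams trigger handler (AssemblyScript).
// `Transactions` stands in for the Protobuf type generated from your module's output,
// and `Transaction` for an entity defined in your schema.graphql.
import { Protobuf } from 'as-proto/assembly'
import { Transactions } from './pb/transactions' // placeholder path to the generated bindings
import { Transaction } from '../generated/schema'

export function handleTransactions(bytes: Uint8Array): void {
  // Decode the raw Substreams payload into the generated Protobuf object
  const decoded = Protobuf.decode<Transactions>(bytes, Transactions.decode)

  for (let i = 0; i < decoded.transactions.length; i++) {
    const tx = decoded.transactions[i]

    // Create one subgraph entity per decoded transaction
    const entity = new Transaction(tx.hash)
    entity.from = tx.from
    entity.to = tx.to
    entity.save()
  }
}
```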
+ +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/ja/sps/triggers-example.mdx b/website/pages/ja/sps/triggers-example.mdx new file mode 100644 index 000000000000..665af83b548f --- /dev/null +++ b/website/pages/ja/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## 前提条件 + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: + +```ts +import { Protobuf } from 'as-proto/assembly' +import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events' +import { MyTransfer } from '../generated/schema' + +export function handleTriggers(bytes: Uint8Array): void { + const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode) + + for (let i = 0; i < input.data.length; i++) { + const event = input.data[i] + + if (event.transfer != null) { + let entity_id: string = `${event.txnId}-${i}` + const entity = new MyTransfer(entity_id) + entity.amount = event.transfer!.instruction!.amount.toString() + entity.source = event.transfer!.accounts!.source + entity.designation = event.transfer!.accounts!.destination + + if (event.transfer!.accounts!.signer!.single != null) { + entity.signers = [event.transfer!.accounts!.signer!.single!.signer] + } else if (event.transfer!.accounts!.signer!.multisig != null) { + entity.signers = event.transfer!.accounts!.signer!.multisig!.signers + } + entity.save() + } + } +} +``` + +## Step 5: Generate Protobuf Files + +To generate Protobuf objects in AssemblyScript, run the following command: + +```bash +npm run protogen +``` + +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. + +## Conclusion + +You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case. + +For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana). diff --git a/website/pages/ja/sps/triggers.mdx b/website/pages/ja/sps/triggers.mdx new file mode 100644 index 000000000000..ed19635d4768 --- /dev/null +++ b/website/pages/ja/sps/triggers.mdx @@ -0,0 +1,37 @@ +--- +title: Substreams Triggers +--- + +Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework. + +> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container. + +The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. + +```tsx +export function handleTransactions(bytes: Uint8Array): void { + let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).trasanctions // 1. + if (transactions.length == 0) { + log.info('No transactions found', []) + return + } + + for (let i = 0; i < transactions.length; i++) { + // 2. + let transaction = transactions[i] + + let entity = new Transaction(transaction.hash) // 3. + entity.from = transaction.from + entity.to = transaction.to + entity.save() + } +} +``` + +Here's what you’re seeing in the `mappings.ts` file: + +1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object +2. 
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/ja/substreams.mdx b/website/pages/ja/substreams.mdx index 4983c635cf7d..394edd54fe84 100644 --- a/website/pages/ja/substreams.mdx +++ b/website/pages/ja/substreams.mdx @@ -4,9 +4,11 @@ title: サブストリーム ![Substreams ロゴ](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## サブストリームの4つのステップ @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### 知識を広げよう - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/ja/sunrise.mdx b/website/pages/ja/sunrise.mdx index cbc91c2445a7..5955cec07911 100644 --- a/website/pages/ja/sunrise.mdx +++ b/website/pages/ja/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. 
-この計画は、新しく公開されたサブグラフに対するクエリを提供するためのアップグレードされたIndexerや、新しいブロックチェーンネットワークをThe Graphに統合する機能など、The Graphエコシステムのこれまでの多くの開発を利用しています。 +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## The Graph Networkへのサブグラフのアップグレード +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? 
- -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? 
- -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? - -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. 
As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -サポートされているチェーンの包括的なリストは[こちら](/developing/supported-networks/)。 +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### なぜEdge & Nodeはアップグレード・インデクサーを実行しているのか? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? -Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. 
This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -アップグレード・インデクサーはまた、グラフ・ネットワーク上のサブグラフや新しいチェーンの潜在的な需要に関する情報を、インデクサー・コミュニティに提供します。 +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### これはデリゲーターにとって何を意味するのか? -アップグレード・インデクサーは、デリゲータに強力な機会を提供します。より多くのサブグラフがホスティングされたサービスからグラフネットワークにアップグレードされると、デリゲータはネットワーク活動の増加から利益を得ることができます。 +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### アップグレード・インデクサーは、既存のインデクサーと報酬を奪い合うのでしょうか? +### Did the upgrade Indexer compete with existing Indexers for rewards? -いいえ、アップグレード・インデクサーは、サブグラフごとに最小量しか割り当てず、インデックス作成報酬は受け取りません。 +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### これはサブグラフ開発者にどのような影響を与えるのでしょうか? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### これはデータ消費者にとってどのようなメリットがあるのでしょうか? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### アップグレード・インデクサーは、クエリにどのような価格をつけるのでしょうか? - -アップグレード・インデクサーは、クエリ料金市場に影響を与えないよう、クエリ料金を市場価格で設定すします。 - -### アップグレード・インデクサーがサブグラフのサポートを停止する基準は何ですか? - -アップグレード・インデクサーは、少なくとも3つの他のインデクサーによって提供された一貫性のあるクエリで、サブグラフが十分かつ正常に提供されるまで、サブグラフを提供します。 - -さらに、アップグレード・インデクサーは、そのサブグラフが過去30日間にクエリされなかった場合、そのサブグラフのサポートを停止します。 - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### インフラは自分で用意する必要がありますか? 
- -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -サブグラフが十分なキュレーションシグナルに達し、他のIndexerがそれをサポートし始めると、アップグレード・インデクサーは徐々に先細りになり、他のインデクサーがインデックス作成報酬とクエリ手数料を徴収できるようになります。 - -### 独自のインデックス作成インフラをホストすべきか? - -自身のプロジェクトのためにインフラストラクチャを実行することは、グラフネットワークを使用する場合と比較して、[著しくリソースを消費します](/network/benefits/)。 - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -とはいえ、もしまだ[グラフ・ノード](https://github.com/graphprotocol/graph-node)を運営することに興味があるのであれば、グラフ・ネットワーク[インデクサーとして](https://thegraph.com/blog/how-to-become-indexer/)に参加し、自分のサブグラフや他のサブグラフのデータを提供することで、インデクシング報酬やクエリ報酬を得ることを検討しましょう。 - -### 集中型インデクシング・プロバイダを使うべきでしょうか? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -集中型ホスティングに対するThe Graphの利点について詳しく説明します: +### How does the upgrade Indexer price queries? -- **レジリエンスと冗長性**: 分散型システムは、その分散された性質により、本質的により堅牢で回復力があります。データは単一のサーバーや場所に保存されるわけではありません。その代わり、世界中にある何百もの独立したIndexerがデータを提供します。これにより、1つのノードに障害が発生した場合のデータ損失やサービス中断のリスクを低減し、卓越したアップタイム(99.99%)を実現します。 +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **サービスの質**: 印象的なアップタイムに加え、The Graph Networkはクエリ速度(レイテンシー)の中央値が106msであり、他のホスティングサービスと比較して高いクエリー成功率を誇ります。詳しくは[このブログ](https://thegraph.com/blog/qos-the-graph-network/)をご覧ください。 +### When will the upgrade Indexer stop supporting a subgraph? 
-- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -非中央集権的な性質、セキュリティ、透明性のためにブロックチェーン・ネットワークを選択したように、The Graph Networkを選択することも同じ原則の延長線上にあります。データインフラストラクチャをこれらの価値観に合わせることで、結束力があり、弾力性があり、信頼に基づいた開発環境を確保することができます。 +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/ja/supported-network-requirements.mdx b/website/pages/ja/supported-network-requirements.mdx index 6aa0c0caa16f..1aae63b06cc2 100644 --- a/website/pages/ja/supported-network-requirements.mdx +++ b/website/pages/ja/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| ネットワーク | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| ネットワーク | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVMe preferred)
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/ja/tap.mdx b/website/pages/ja/tap.mdx new file mode 100644 index 000000000000..09b348ad3897 --- /dev/null +++ b/website/pages/ja/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## 概要 + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
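To make the receipt-to-RAV flow above more concrete, here is a minimal TypeScript sketch of the aggregation step. It is illustrative only: the `Receipt` and `Rav` types and the `aggregate` helper are hypothetical names, not the actual `tap-agent` implementation (which is written in Rust and persists receipts in Postgres).

```typescript
// Hypothetical, simplified model of TAP receipts being rolled up into a RAV.
interface Receipt {
  allocationId: string;
  value: bigint; // fee for a single query
  signature: string; // signed by the gateway (sender)
}

interface Rav {
  allocationId: string;
  valueAggregate: bigint; // running total of all aggregated receipts
  last: boolean; // marked true once the allocation is closed
}

// Updating a RAV with newer receipts always produces a RAV of equal or greater value.
function aggregate(previous: Rav | null, receipts: Receipt[]): Rav {
  const base = previous?.valueAggregate ?? 0n;
  const added = receipts.reduce((sum, r) => sum + r.value, 0n);
  return { allocationId: receipts[0].allocationId, valueAggregate: base + added, last: false };
}

// Two aggregation rounds against the same allocation:
const round1 = aggregate(null, [
  { allocationId: "0xalloc", value: 100n, signature: "0xsig1" },
  { allocationId: "0xalloc", value: 250n, signature: "0xsig2" },
]);
const round2 = aggregate(round1, [{ allocationId: "0xalloc", value: 400n, signature: "0xsig3" }]);
console.log(round2.valueAggregate); // 750n: the aggregate only ever grows
```

Only the RAV marked `last` for a closed allocation is redeemed on-chain, which is what collapses many per-query micropayments into a single transaction.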
+ +## Blockchain Addresses + +### Contracts + +| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| ------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | +| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | +| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | + +### Gateway + +| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| ---------- | --------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | + +### 要件 + +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. + +- [Graph TAP Aribtrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) + +> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployement. As a result, you have to index it manually. + +## Migration Guide + +### Software versions + +| Component | バージョン | Image Link | +| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- | +| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) | +| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) | +| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) | + +### Steps + +1. **Indexer Agent** + + - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). + - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + +2. **Indexer Service** + + - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless. + +3. **TAP Agent** + + - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + +4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +注: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/ko/about.mdx b/website/pages/ko/about.mdx index 36c6a49f8fbc..9c21bf00d08f 100644 --- a/website/pages/ko/about.mdx +++ b/website/pages/ko/about.mdx @@ -2,46 +2,66 @@ title: About The Graph --- -This page will explain what The Graph is and how you can get started. - ## What is The Graph? -The Graph is a decentralized protocol for indexing and querying blockchain data. The Graph makes it possible to query data that is difficult to query directly. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
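To make the scale of that do-it-yourself approach concrete, the sketch below shows roughly what "process every `transfer` event ever emitted" looks like with a generic Ethereum client library. This is a rough sketch under stated assumptions: it assumes an ethers.js v6-style API and a placeholder RPC URL, and it is not part of The Graph's tooling.

```typescript
import { ethers } from "ethers";

// Placeholder RPC URL; replaying the full event history like this is exactly
// the slow, resource-intensive work described above.
const provider = new ethers.JsonRpcProvider("https://eth-rpc.example.com");

const BAYC = "0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d";
const abi = [
  "event Transfer(address indexed from, address indexed to, uint256 indexed tokenId)",
  "function tokenURI(uint256 tokenId) view returns (string)",
];
const contract = new ethers.Contract(BAYC, abi, provider);

async function currentOwners(): Promise<Map<string, string>> {
  const owners = new Map<string, string>(); // tokenId -> current owner
  // Every Transfer event ever emitted has to be replayed just to rebuild ownership...
  const events = await contract.queryFilter("Transfer");
  for (const ev of events) {
    const [, to, tokenId] = (ev as ethers.EventLog).args;
    owners.set(tokenId.toString(), to);
  }
  // ...and metadata still has to be fetched from IPFS via tokenURI, one token at a time.
  return owners;
}

currentOwners().then((owners) => console.log(owners.size, "tokens indexed"));
```

A subgraph performs this extraction and aggregation ahead of time, so the dapp only queries the finished result.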
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +### How The Graph Functions -**Indexing blockchain data is really, really hard.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). 
+- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## How The Graph Works +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. +- When creating a subgraph, you need to write a subgraph manifest. -Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) The flow follows these steps: -1. A dapp adds data to Ethereum through a transaction on a smart contract. -2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. -5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +1. A dapp adds data to Ethereum through a transaction on a smart contract. +2. The smart contract emits one or more events while processing the transaction. +3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. +4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. 
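As a concrete illustration of step 5, a dapp usually sends a plain GraphQL query over HTTP to a Graph Node endpoint. The sketch below is a minimal example, assuming a placeholder endpoint URL and a hypothetical `tokens` entity; the real entity and field names come from the schema of whichever subgraph is being queried.

```typescript
// Minimal sketch of a dapp querying a Graph Node GraphQL endpoint.
// The endpoint URL and the `tokens` entity are placeholders; the fields
// available depend on the subgraph's schema.graphql.
const endpoint = "https://example.com/subgraphs/name/example/my-subgraph";

const query = `
  {
    tokens(first: 5) {
      id
      owner
    }
  }
`;

async function fetchTokens() {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data, errors } = await response.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data.tokens;
}

fetchTokens().then((tokens) => console.log(tokens));
```

Graph Node resolves the query against the entities its mappings have already stored, so the dapp never has to scan the chain itself.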
## Next Steps -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/ko/arbitrum/arbitrum-faq.mdx b/website/pages/ko/arbitrum/arbitrum-faq.mdx index a36b0103772f..9c12c8816259 100644 --- a/website/pages/ko/arbitrum/arbitrum-faq.mdx +++ b/website/pages/ko/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum FAQ Click [here](#billing-on-arbitrum-faqs) if you would like to skip to the Arbitrum Billing FAQs. -## Why is The Graph implementing an L2 Solution? +## Why did The Graph implement an L2 Solution? -By scaling The Graph on L2, network participants can expect: +By scaling The Graph on L2, network participants can now benefit from: - Upwards of 26x savings on gas fees @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can expect: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers could open and close allocations to index a greater number of subgraphs with greater frequency, developers could deploy and update subgraphs with greater ease, Delegators could delegate GRT with increased frequency, and Curators could add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -41,27 +41,21 @@ To take advantage of using The Graph on L2, use this dropdown switcher to toggle ## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? -There is no immediate action required, however, network participants are encouraged to begin moving to Arbitrum to take advantage of the benefits of L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -Core developer teams are working to create L2 transfer tools that will make it significantly easier to move delegation, curation, and subgraphs to Arbitrum. 
Network participants can expect L2 transfer tools to be available by summer of 2023. +All indexing rewards are now entirely on Arbitrum. -As of April 10th, 2023, 5% of all indexing rewards are being minted on Arbitrum. As network participation increases, and as the Council approves it, indexing rewards will gradually shift from Ethereum to Arbitrum, eventually moving entirely to Arbitrum. - -## If I would like to participate in the network on L2, what should I do? - -Please help [test the network](https://testnet.thegraph.com/explorer) on L2 and report feedback about your experience in [Discord](https://discord.gg/graphprotocol). - -## Are there any risks associated with scaling the network to L2? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Will existing subgraphs on Ethereum continue to work? +## Are existing subgraphs on Ethereum working? -Yes, The Graph Network contracts will operate in parallel on both Ethereum and Arbitrum until moving fully to Arbitrum at a later date. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## Will GRT have a new smart contract deployed on Arbitrum? +## Does GRT have a new smart contract deployed on Arbitrum? Yes, GRT has an additional [smart contract on Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). However, the Ethereum mainnet [GRT contract](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) will remain operational. diff --git a/website/pages/ko/arbitrum/l2-transfer-tools-faq.mdx b/website/pages/ko/arbitrum/l2-transfer-tools-faq.mdx index de12152a1f00..602b2a2c3aa2 100644 --- a/website/pages/ko/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/pages/ko/arbitrum/l2-transfer-tools-faq.mdx @@ -22,7 +22,8 @@ The exception is with smart contract wallets like multisigs: these are smart con ### 만약 7일 안에 이체를 완료하지 못하면 어떻게 되나요? -L2 전송 도구는 Arbitrum의 기본 메커니즘을 사용하여 L1에서 L2로 메시지를 보냅니다. 이 메커니즘은 "재시도 가능한 티켓"이라고 하며 Arbitrum GRT 브리지를 포함한 모든 네이티브 토큰 브리지를 사용하여 사용됩니다. 재시도 가능한 티켓에 대해 자세히 읽을 수 있습니다 [Arbitrum 문서] (https://docs.arbitrum.io/arbos/l1-to-l2-messaging). +L2 전송 도구는 Arbitrum의 기본 메커니즘을 사용하여 L1에서 L2로 메시지를 보냅니다. 이 메커니즘은 "재시도 가능한 티켓"이라고 하며 Arbitrum GRT 브리지를 포함한 모든 네이티브 토큰 브리지를 사용하여 사용됩니다. 재시도 가능한 티켓에 대해 자세히 읽을 수 있습니다 [Arbitrum 문서] +(https://docs.arbitrum.io/arbos/l1-to-l2-messaging). 자산(하위 그래프, 스테이크, 위임 또는 큐레이션) 을 L2로 이전하면 L2에서 재시도 가능한 티켓을 생성하는 Arbitrum GRT 브리지를 통해 메시지가 전송됩니다. 전송 도구에는 거래에 일부 ETH 값이 포함되어 있으며, 이는 1) 티켓 생성 비용을 지불하고 2) L2에서 티켓을 실행하기 위해 가스 비용을 지불하는 데 사용됩니다. 그러나 티켓이 L2에서 실행될 준비가 될 때까지 가스 가격이 시간에 따라 달라질 수 있으므로 이 자동 실행 시도가 실패할 수 있습니다. 그런 일이 발생하면 Arbitrum 브릿지는 재시도 가능한 티켓을 최대 7일 동안 유지하며 누구나 티켓 "사용"을 재시도할 수 있습니다(Arbitrum에 브릿지된 일부 ETH가 있는 지갑이 필요함). @@ -40,6 +41,8 @@ If you have the L1 transaction hash (which you can find by looking at the recent + + 1. 이더리움 메인넷에서 전송 시작 2. 
확인을 위해 20분 정도 기다리세요 diff --git a/website/pages/ko/billing.mdx b/website/pages/ko/billing.mdx index 37f9c840d00b..dec5cfdadc12 100644 --- a/website/pages/ko/billing.mdx +++ b/website/pages/ko/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). 
- - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/ko/chain-integration-overview.mdx b/website/pages/ko/chain-integration-overview.mdx index 2fe6c2580909..a142f3f817f9 100644 --- a/website/pages/ko/chain-integration-overview.mdx +++ b/website/pages/ko/chain-integration-overview.mdx @@ -6,12 +6,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 1. Technical Integration -- Teams work on a Graph Node integration and Firehose for non-EVM based chains. [Here's how](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. ## Stage 2. Integration Validation -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON RPC or Firehose endpoints. 
Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph Indexers test the integration on The Graph's testnet. - Core developers and Indexers monitor stability, performance, and data determinism. @@ -38,7 +38,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. -### 3. How much time will this process take? +### 3. How much time will the process of reaching full protocol support take? The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. @@ -46,4 +46,4 @@ Protocol support for indexing rewards depends on the stakeholders' bandwidth to ### 4. How will priorities be handled? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/ko/cookbook/arweave.mdx b/website/pages/ko/cookbook/arweave.mdx index 15538454e3ff..b079da30a013 100644 --- a/website/pages/ko/cookbook/arweave.mdx +++ b/website/pages/ko/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). 
```tsx class Block { diff --git a/website/pages/ko/cookbook/base-testnet.mdx b/website/pages/ko/cookbook/base-testnet.mdx index 3a1d98a44103..0cc5ad365dfd 100644 --- a/website/pages/ko/cookbook/base-testnet.mdx +++ b/website/pages/ko/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ Your subgraph slug is an identifier for your subgraph. The CLI tool will walk yo The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph. - AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). diff --git a/website/pages/ko/cookbook/cosmos.mdx b/website/pages/ko/cookbook/cosmos.mdx index 5e9edfd82931..a8c359b3098c 100644 --- a/website/pages/ko/cookbook/cosmos.mdx +++ b/website/pages/ko/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/ko/cookbook/grafting.mdx b/website/pages/ko/cookbook/grafting.mdx index 6b4f419390d5..6c3b85419af9 100644 --- a/website/pages/ko/cookbook/grafting.mdx +++ b/website/pages/ko/cookbook/grafting.mdx @@ -22,7 +22,7 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic usecase. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ In this tutorial, we will be covering a basic usecase. We will replace an existi ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. 
It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. ## Grafting Manifest Definition @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. ## Additional Resources -If you want more experience with grafting, here's a few examples for popular contracts: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/ko/cookbook/near.mdx b/website/pages/ko/cookbook/near.mdx index 28486f8bb0be..a4f27caf6f3c 100644 --- a/website/pages/ko/cookbook/near.mdx +++ b/website/pages/ko/cookbook/near.mdx @@ -37,7 +37,7 @@ There are three aspects of subgraph definition: **schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph#the-graphql-schema). -**AssemblyScript Mappings:** [AssemblyScript code](/developing/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. During subgraph development there are two key commands: @@ -98,7 +98,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/assemblyscript-api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/developing/assemblyscript-api) is available to NEAR subgraph developers during mapping execution. 
+Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution.
 
-This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/assemblyscript-api#json-api) to allow developers to easily process these logs.
+This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs.
 
 ## Deploying a NEAR Subgraph
 
diff --git a/website/pages/ko/cookbook/subgraph-uncrashable.mdx b/website/pages/ko/cookbook/subgraph-uncrashable.mdx
index 989310a3f9a0..0cc91a0fa2c3 100644
--- a/website/pages/ko/cookbook/subgraph-uncrashable.mdx
+++ b/website/pages/ko/cookbook/subgraph-uncrashable.mdx
@@ -18,7 +18,7 @@ title: Safe Subgraph Code Generator
 
 - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.
 
-- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. These logs can be viewed in the The Graph's hosted service under the 'Logs' section.
+- Warning logs are recorded to indicate where there is a breach of subgraph logic, helping you patch the issue and ensure data accuracy.
 
 Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
 
diff --git a/website/pages/ko/cookbook/upgrading-a-subgraph.mdx b/website/pages/ko/cookbook/upgrading-a-subgraph.mdx
index 5502b16d9288..a546f02c0800 100644
--- a/website/pages/ko/cookbook/upgrading-a-subgraph.mdx
+++ b/website/pages/ko/cookbook/upgrading-a-subgraph.mdx
@@ -136,7 +136,7 @@ Make sure **Update Subgraph Details in Explorer** is checked and click on **Save
 
 ## Deprecating a Subgraph on The Graph Network
 
-Follow the steps [here](/managing/deprecating-a-subgraph) to deprecate your subgraph and remove it from The Graph Network.
+Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network.
 
 ## Querying a Subgraph + Billing on The Graph Network
 
diff --git a/website/pages/ko/deploying/multiple-networks.mdx b/website/pages/ko/deploying/multiple-networks.mdx
new file mode 100644
index 000000000000..dc2b8e533430
--- /dev/null
+++ b/website/pages/ko/deploying/multiple-networks.mdx
@@ -0,0 +1,241 @@
+---
+title: Deploying a Subgraph to Multiple Networks
+---
+
+This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph).
+
+## Deploying the subgraph to multiple networks
+
+In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different.
+ +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +This is what your networks config file should look like: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Now we can run one of the following commands: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Using subgraph.yaml template + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." 
+}
+```
+
+and
+
+```json
+{
+  "network": "sepolia",
+  "address": "0xabc..."
+}
+```
+
+Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`:
+
+```yaml
+# ...
+dataSources:
+  - kind: ethereum/contract
+    name: Gravity
+    network: mainnet
+    network: {{network}}
+    source:
+      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC'
+      address: '{{address}}'
+      abi: Gravity
+    mapping:
+      kind: ethereum/events
+```
+
+In order to generate a manifest for either network, you could add two additional commands to `package.json` along with a dependency on `mustache`:
+
+```json
+{
+  ...
+  "scripts": {
+    ...
+    "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml",
+    "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml"
+  },
+  "devDependencies": {
+    ...
+    "mustache": "^3.1.0"
+  }
+}
+```
+
+To deploy this subgraph for mainnet or Sepolia, you would now simply run one of the two following commands:
+
+```sh
+# Mainnet:
+yarn prepare:mainnet && yarn deploy
+
+# Sepolia:
+yarn prepare:sepolia && yarn deploy
+```
+
+A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759).
+
+**Note:** This approach can also be applied to more complex situations where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs also need to be generated from templates.
+
+## Subgraph Studio subgraph archive policy
+
+A subgraph version in Studio is archived if and only if it meets the following criteria:
+
+- The version is not published to the network (or pending publish)
+- The version was created 45 or more days ago
+- The subgraph hasn't been queried in 30 days
+
+In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
+
+Every subgraph affected by this policy has an option to bring the version in question back.
+
+## Checking subgraph health
+
+If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
+
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). 
Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/ko/developing/creating-a-subgraph.mdx b/website/pages/ko/developing/creating-a-subgraph.mdx index b4a2f306d8ed..2a97c2f051a0 100644 --- a/website/pages/ko/developing/creating-a-subgraph.mdx +++ b/website/pages/ko/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: Creating a Subgraph --- -A subgraph extracts data from a blockchain, processing it and storing it so that it can be easily queried via GraphQL. +This detailed guide provides instructions to successfully create a subgraph. -![Defining a Subgraph](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -The subgraph definition consists of a few files: +![Defining a Subgraph](/img/defining-a-subgraph.png) -- `subgraph.yaml`: a YAML file containing the subgraph manifest +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: a GraphQL schema that defines what data is stored for your subgraph, and how to query it via GraphQL +## Getting Started -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. `mapping.ts` in this tutorial) +### Install the Graph CLI -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. 
-## Install the Graph CLI +On your local machine, run one of the following commands: -The Graph CLI is written in JavaScript, and you will need to install either `yarn` or `npm` to use it; it is assumed that you have yarn in what follows. +#### Using [npm](https://www.npmjs.com/) -Once you have `yarn`, install the Graph CLI by running +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Install with yarn:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## From An Existing Contract +### From an existing contract -The following command creates a subgraph that indexes all events of an existing contract. It attempts to fetch the contract ABI from Etherscan and falls back to requesting a local file path. If any of the optional arguments are missing, it takes you through an interactive form. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -The `` is the ID of your subgraph in Subgraph Studio, it can be found on your subgraph details page. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## From An Example Subgraph +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. 
+- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Add New dataSources To An Existing Subgraph +## Add new `dataSources` to an existing subgraph -Since `v0.31.0` the `graph-cli` supports adding new dataSources to an existing subgraph through the `graph add` command. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
    []
@@ -78,22 +88,45 @@ Options:
 --network-file Networks config file path (default: "./networks.json")
 ```
 
-The `add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option), and will create a new `dataSource` in the same way that `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly.
+### Specifics
+
+The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and create a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts.
+
+- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts:
+
+  - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`.
+
+  - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`.
+
+- The contract `address` will be written to the `networks.json` for the relevant network.
+
+> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`.
 
-The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts:
+## Components of a subgraph
 
-- If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`.
-- If `false`: a new entity & event handler should be created with `${dataSourceName}{EventName}`.
+### The Subgraph Manifest
 
-The contract `address` will be written to the `networks.json` for the relevant network.
+The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query.
 
-> **Note:** When using the interactive cli, after successfully running `graph init`, you'll be prompted to add a new `dataSource`.
+The **subgraph definition** consists of the following files:
 
-## The Subgraph Manifest
+- `subgraph.yaml`: Contains the subgraph manifest
 
-The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
+- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
 
-For the example subgraph, `subgraph.yaml` is:
+- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide)
+
+A single subgraph can:
+
+- Index data from multiple smart contracts (but not multiple networks).
+
+- Index data from IPFS files using File Data Sources.
+
+- Add an entry for each contract that requires indexing to the `dataSources` array.
+
+The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ A single subgraph can index data from multiple smart contracts. Add an entry for The triggers for a data source within a block are ordered using the following process: -1. Event and call triggers are first ordered by transaction index within the block. -2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. -3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. +1. Event and call triggers are first ordered by transaction index within the block. +2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. +3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. These ordering rules are subject to change. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. 
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| Version | Release notes | +|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### Getting The ABIs @@ -442,16 +475,16 @@ For some entity types the `id` is constructed from the id's of two other entitie We support the following scalars in our GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. 
| -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| Type | Description | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enums @@ -593,7 +626,7 @@ This more elaborate way of storing many-to-many relationships will result in les #### Adding comments to the schema -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. -> +> > If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. ### Data Source Context @@ -930,7 +963,7 @@ dataSources: ``` > **Note:** The contract creation block can be quickly looked up on Etherscan: -> +> > 1. Search for the contract by entering its address in the search bar. > 2. Click on the creation transaction hash in the `Contract Creator` section. > 3. Load the transaction details page where you'll find the start block for that contract. @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. 
`"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. ``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### Create a new handler to process files -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). The CID of the file as a readable string can be accessed via the `dataSource` as follows: diff --git a/website/pages/ko/developing/developer-faqs.mdx b/website/pages/ko/developing/developer-faqs.mdx index b4af2c711bc8..c8906615c081 100644 --- a/website/pages/ko/developing/developer-faqs.mdx +++ b/website/pages/ko/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: Developer FAQs --- -## 1. What is a subgraph? +This page summarizes some of the most common questions for developers building on The Graph. -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using the Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available to be queried by subgraph consumers. +## Subgraph Related -## 2. Can I delete my subgraph? +### 1. What is a subgraph? -It is not possible to delete subgraphs once they are created. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Can I change my subgraph name? +### 2. What is the first step to create a subgraph? -No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. +To successfully create a subgraph, you will need to install The Graph CLI. 
Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/).
 
-## 4. Can I change the GitHub account associated with my subgraph?
+### 3. Can I still create a subgraph if my smart contracts don't have events?
 
-No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph.
+It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data.
 
-## 5. Am I still able to create a subgraph if my smart contracts don't have events?
+If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.
 
-It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are by far the fastest way to retrieve useful data.
+### 4. Can I change the GitHub account associated with my subgraph?
 
-If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended, as performance will be significantly slower.
+No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph.
 
-## 6. Is it possible to deploy one subgraph with the same name for multiple networks?
+### 5. How do I update a subgraph on mainnet?
 
-You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph)
+You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This keeps your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on.
 
-## 7. How are templates different from data sources?
+### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying?
 
-Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) upfront you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address.
+You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning.
+
+### 7. How do I call a contract function or access a public state variable from my subgraph mappings?
+
+Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state).
+
+### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings?
+
+Not currently, as mappings are written in AssemblyScript. 
+
+One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client.
+
+### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events?
+
+Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.
+
+### 10. How are templates different from data sources?
+
+Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address.
 
 Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates).
 
-## 8. How do I make sure I'm using the latest version of graph-node for my local deployments?
+### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
 
-You can run the following command:
+Yes. In the `graph init` command itself, you can add multiple dataSources by entering contracts one after the other.
 
-```sh
-docker pull graphprotocol/graph-node:latest
-```
+You can also use the `graph add` command to add a new dataSource.
 
-**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node.
+### 12. In what order are the event, block, and call handlers triggered for a data source?
 
-## 9. How do I call a contract function or access a public state variable from my subgraph mappings?
+Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also, these ordering rules are subject to change.
 
-Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state).
+When new dynamic data sources are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered.
 
-## 10. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`?
+### 13. How do I make sure I'm using the latest version of graph-node for my local deployments?
 
-Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource.
+You can run the following command:
 
-## 11. I want to contribute or add a GitHub issue. Where can I find the open source repositories? 
+```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -## 13. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 15. Can I delete my subgraph? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +You can find the list of the supported networks [here](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Yes. You can do this by importing `graph-ts` as per the example below: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Can I import ethers.js or other JS libraries into my subgraph mappings? - -Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +## Indexing & Querying Related -## 17. Is it possible to specify what block to start indexing on? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? 
My subgraph is taking a very long time to sync -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -102,19 +121,7 @@ Yes! Try the following command, substituting "organization/subgraphName" with th curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. What networks are supported by The Graph? - -You can find the list of the supported networks [here](/developing/supported-networks). - -## 21. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? - -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. - -## 22. Is this possible to use Apollo Federation on top of graph-node? - -Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. - -## 23. Is there a limit to how many objects The Graph can return per query? +### 22. Is there a limit to how many objects The Graph can return per query? By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: @@ -122,24 +129,19 @@ By default, query responses are limited to 100 items per collection. If you want someCollection(first: 1000, skip: ) { ... } ``` -## 24. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 25. Where do I go to find my current subgraph on the hosted service? - -Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? - -The Graph will never charge for the hosted service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. 
The hosted service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. - -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/ko/developing/graph-ts/api.mdx b/website/pages/ko/developing/graph-ts/api.mdx index 46442dfa941e..8fc1f4b48b61 100644 --- a/website/pages/ko/developing/graph-ts/api.mdx +++ b/website/pages/ko/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). 
Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## API Reference @@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. -| Version | Release notes | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Release notes | +| :-----: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Loading entities from the store @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Looking up entities created withing a block As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a Transaction from some on-chain event, and a later handler wants to access this transaction if it exists. In the case where the transaction does not exist, the subgraph will have to go to the database just to find out that the entity does not exist; if the subgraph author already knows that the entity must have been created in the same block, using loadInBlock avoids this database roundtrip. For some subgraphs, these missed lookups can contribute significantly to the indexing time. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. 
+- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Any other contract that is part of the subgraph can be imported from the generat #### Handling Reverted Calls -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Encoding/Decoding ABI diff --git a/website/pages/ko/developing/supported-networks.mdx b/website/pages/ko/developing/supported-networks.mdx index 7c2d8d858261..797202065e99 100644 --- a/website/pages/ko/developing/supported-networks.mdx +++ b/website/pages/ko/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/ko/developing/unit-testing-framework.mdx b/website/pages/ko/developing/unit-testing-framework.mdx index f826a5ccb209..308135181ccb 100644 --- a/website/pages/ko/developing/unit-testing-framework.mdx +++ b/website/pages/ko/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ The log output includes the test run duration. Here's an example: > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -This means you have used `console.log` in your code, which is not supported by AssemblyScript. 
Please consider using the [Logging API](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version. diff --git a/website/pages/ko/glossary.mdx b/website/pages/ko/glossary.mdx index cd24a22fd4d5..2978ecce3561 100644 --- a/website/pages/ko/glossary.mdx +++ b/website/pages/ko/glossary.mdx @@ -10,11 +10,9 @@ title: Glossary - **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **Indexers**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. @@ -24,17 +22,17 @@ title: Glossary - **Indexer's Self Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. 
+- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegators**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curators**: Network participants that identify high-quality subgraphs, and “curate” them (i.e., signal GRT on them) in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. Indexers earn indexing rewards proportional to the signal on a subgraph. We see a correlation between the amount of GRT signalled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. -- **Subgraph Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a subgraph. - **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. @@ -46,11 +44,11 @@ title: Glossary 1. **Active**: An allocation is considered active when it is created on-chain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. 
- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: Glossary - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. 
It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. @@ -78,10 +76,6 @@ title: Glossary - **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. -- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network. - -- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. diff --git a/website/pages/ko/index.json b/website/pages/ko/index.json index 988a55bb63e2..cf306c8bfd31 100644 --- a/website/pages/ko/index.json +++ b/website/pages/ko/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "Create a Subgraph", "description": "Use Studio to create subgraphs" - }, - "migrateFromHostedService": { - "title": "Upgrade from the hosted service", - "description": "Upgrading subgraphs to The Graph Network" } }, "networkRoles": { diff --git a/website/pages/ko/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/ko/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..6bdd183f72d5 --- /dev/null +++ b/website/pages/ko/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## Transferring ownership of a subgraph + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. + +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-addres +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2. 
Choose the address that you would like to transfer the subgraph to: + + ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) + +Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: + +![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) + +## Deprecating a subgraph + +Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. + +### Step-by-Step + +To deprecate your subgraph, do the following: + +1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). +2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. +3. Your subgraph will no longer appear in searches on Graph Explorer. + +**Please note the following:** + +- The owner's wallet should call the `deprecateSubgraph` function. +- Curators will not be able to signal on the subgraph anymore. +- Curators that already signaled on the subgraph can withdraw their signal at an average share price. +- Deprecated subgraphs will show an error message. + +> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/ko/mips-faqs.mdx b/website/pages/ko/mips-faqs.mdx index ae460989f96e..1f7553923765 100644 --- a/website/pages/ko/mips-faqs.mdx +++ b/website/pages/ko/mips-faqs.mdx @@ -6,10 +6,6 @@ title: MIPs FAQs > Note: the MIPs program is closed as of May 2023. Thank you to all the Indexers who participated! -It's an exciting time to be participating in The Graph ecosystem! During [Graph Day 2022](https://thegraph.com/graph-day/2022/) Yaniv Tal announced the [sunsetting of the hosted service](https://thegraph.com/blog/sunsetting-hosted-service/), a moment The Graph ecosystem has been working towards for many years. - -To support the sunsetting of the hosted service and the migration of all of it's activity to the decentralized network, The Graph Foundation has announced the [Migration Infrastructure Providers (MIPs) program](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program). - The MIPs program is an incentivization program for Indexers to support them with resources to index chains beyond Ethereum mainnet and help The Graph protocol expand the decentralized network into a multi-chain infrastructure layer. The MIPs program has allocated 0.75% of the GRT supply (75M GRT), with 0.5% to reward Indexers who contribute to bootstrapping the network and 0.25% allocated to Network Grants for subgraph developers using multi-chain subgraphs. 
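Put in absolute terms (a quick derivation from the figures above, which imply a total supply of 75M ÷ 0.75% = 10B GRT): the 0.5% reserved for Indexer rewards corresponds to roughly 50M GRT, and the 0.25% reserved for Network Grants to roughly 25M GRT, together making up the 75M GRT program total.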
diff --git a/website/pages/ko/network/benefits.mdx b/website/pages/ko/network/benefits.mdx index 6be4e830e565..63217b6729b9 100644 --- a/website/pages/ko/network/benefits.mdx +++ b/website/pages/ko/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Cost Comparison | Self Hosted | The Graph 네트워크 | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $0+ | $0 per month | -| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | -| Cost per query | $0 | $0 | -| Infrastructure | Centralized | Decentralized | -| Geographic redundancy | $750+ per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $750+ | $0 | +| Cost Comparison | Self Hosted | The Graph 네트워크 | +|:----------------------------:|:---------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $0+ | $0 per month | +| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | +| Cost per query | $0 | $0 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $750+ per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Cost Comparison | Self Hosted | The Graph 네트워크 | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $500 per month | $120 per month | -| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~3,000,000 | -| Cost per query | $0 | $0.00004 | -| Infrastructure | Centralized | Decentralized | -| Engineering expense | $200 per hour | Included | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $1,650+ | $120 | +| Cost Comparison | Self Hosted | The Graph 네트워크 | +|:----------------------------:|:------------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $500 per month | $120 per month | +| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~3,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Engineering expense | $200 per hour | Included | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Cost Comparison | Self Hosted | The Graph 네트워크 | -| :-: | :-: | :-: | -| Monthly server cost\* | $1100 per month, per node | $0 | -| Query costs | $4000 | $1,200 per month | -| Number of nodes needed | 10 | Not applicable | -| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~30,000,000 | -| Cost per query | $0 | $0.00004 | -| 
Infrastructure | Centralized | Decentralized | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $11,000+ | $1,200 | +| Cost Comparison | Self Hosted | The Graph 네트워크 | +|:----------------------------:|:-------------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $1100 per month, per node | $0 | +| Query costs | $4000 | $1,200 per month | +| Number of nodes needed | 10 | Not applicable | +| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~30,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $11,000+ | $1,200 | \*including costs for backup: $50-$100 per month diff --git a/website/pages/ko/network/curating.mdx b/website/pages/ko/network/curating.mdx index fb2107c53884..b2864660fe8c 100644 --- a/website/pages/ko/network/curating.mdx +++ b/website/pages/ko/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. 
If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Signaling on a specific version is especially useful when one subgraph is used b Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Risks 1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. @@ -78,50 +78,14 @@ It’s suggested that you don’t update your subgraphs too frequently. See the ### 5. Can I sell my curation shares? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. 
The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Bonding Curve 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Price per shares](/img/price-per-share.png) - -As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. Here’s an example of what we mean, see the bonding curve below: - -![Bonding curve](/img/bonding-curve.png) - -Consider we have two curators that mint shares for a subgraph: - -- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. 
-- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. -- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signaling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. - Still confused? Check out our Curation video guide below: diff --git a/website/pages/ko/network/delegating.mdx b/website/pages/ko/network/delegating.mdx index 81824234e072..f7430c5746ae 100644 --- a/website/pages/ko/network/delegating.mdx +++ b/website/pages/ko/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegating --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Delegator Guide -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,15 +34,19 @@ Listed below are the main risks of being a Delegator in the protocol. 
Delegators cannot be slashed for bad behavior, but there is a tax on Delegators to disincentivize poor decision-making that could harm the integrity of the network. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### The delegation unbonding period Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
    ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day @@ -41,47 +55,65 @@ Another thing to consider is how to choose an Indexer wisely. If you choose an I ### Choosing a trustworthy Indexer with a fair reward payout for Delegators -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
    ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top Indexer is giving Delegators 90% of the rewards. The middle one is giving Delegators 20%. The bottom one is giving Delegators ~83%.*
    -- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant. +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. + +As you can see, in order to choose the right Indexer, you must consider multiple things. -As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. +- It is highly recommend that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently. +- Many Indexers are very active in Discord and will be happy to answer your questions. +- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. -### Calculating Delegators expected return +## Calculating Delegators Expected Return -A Delegator must consider a lot of factors when determining the return. These include: +A Delegator must consider the following factors to determine a return: -- A technical Delegator can also look at the Indexer's ability to use the Delegated tokens available to them. If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. -- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Consider an Indexer's ability to use the Delegated tokens available to them. + - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Pay attention to the first few days of delegating. + - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low. ### Considering the query fee cut and indexing fee cut -As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the Delegators are getting. 
The formula is: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Delegation Image 3](/img/Delegation-Reward-Formula.png) ### Considering the Indexer's delegation pool -Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the Delegator a share of the pool: +Delegators should consider the proportion of the Delegation Pool they own. -![Share formula](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Share formula](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Considering the delegation capacity -Another thing to consider is the delegation capacity. Currently, the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -89,16 +121,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### MetaMask "Pending Transaction" Bug -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. 
+ +#### Example -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. -For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will only be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## Video guide for the network UI +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/ko/network/developing.mdx b/website/pages/ko/network/developing.mdx index 1b76eb94ccca..81231c36ad59 100644 --- a/website/pages/ko/network/developing.mdx +++ b/website/pages/ko/network/developing.mdx @@ -2,52 +2,88 @@ title: Developing --- -Developers are the demand side of The Graph ecosystem. Developers build subgraphs and publish them to The Graph Network. Then, they query live subgraphs with GraphQL in order to power their applications. +To start coding right away, go to [Developer Quick Start](/quick-start/). + +## Overview + +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. + +On The Graph, you can: + +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. + +### What is GraphQL? + +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. 
+- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? + +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `subgraph.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). +- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). ## Subgraph Lifecycle -Subgraphs deployed to the network have a defined lifecycle. +Here is a general overview of a subgraph’s lifecycle: -### Build locally +![Subgraph Lifecycle](/img/subgraph-lifecycle.png) -As with all subgraph development, it starts with local development and testing. Developers can use the same local setup whether they are building for The Graph Network, the hosted service or a local Graph Node, leveraging `graph-cli` and `graph-ts` to build their subgraph. Developers are encouraged to use tools such as [Matchstick](https://github.com/LimeChain/matchstick) for unit testing to improve the robustness of their subgraphs. +### Build locally -> There are certain constraints on The Graph Network, in terms of feature and network support. Only subgraphs on [supported networks](/developing/supported-networks) will earn indexing rewards, and subgraphs which fetch data from IPFS are also not eligible. +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Deploy to Subgraph Studio -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. - -### Publish to the Network +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -When the developer is happy with their subgraph, they can publish it to The Graph Network. This is an on-chain action, which registers the subgraph so that it is discoverable by Indexers. Published subgraphs have a corresponding NFT, which is then easily transferable. The published subgraph has associated metadata, which provides other network participants with useful context and information. +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. 
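+
+If you also run a Graph Node locally during development, one way to verify this is the indexing status API that Graph Node exposes (a sketch only — the endpoint and exact fields depend on your Graph Node version and setup, and the subgraph name below is hypothetical):
+
+```graphql
+# Sent to the index-node status endpoint (port 8030, route /graphql on a default Graph Node)
+# to check whether a deployment is synced and healthy.
+{
+  indexingStatusForCurrentVersion(subgraphName: "example/my-subgraph") {
+    synced
+    health
+    fatalError {
+      message
+    }
+  }
+}
+```
+
+A `healthy` status with no `fatalError` suggests the deployment is indexing as expected; Subgraph Studio surfaces similar information in its UI.
+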
-### Signal to Encourage Indexing +### Publish to the Network -Published subgraphs are unlikely to be picked up by Indexers without the addition of signal. Signal is locked GRT associated with a given subgraph, which indicates to Indexers that a given subgraph will receive query volume, and also contributes to the indexing rewards available for processing it. Subgraph developers will generally add signal to their subgraph, in order to encourage indexing. Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Querying & Application Development +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -Once a subgraph has been processed by Indexers and is available for querying, developers can start to use the subgraph in their applications. Developers query subgraphs via a gateway, which forwards their queries to an Indexer who has processed the subgraph, paying query fees in GRT. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Updating Subgraphs +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### Querying & Application Development -Once the Subgraph Developer is ready to update, they can initiate a transaction to point their subgraph at the new version. Updating the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. 
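+
+Once Indexers have picked up a published subgraph, applications can query it over GraphQL. As an illustration only — the entity and field names below are hypothetical and depend entirely on your subgraph's schema — a query might look like this:
+
+```graphql
+# Fetch the first five Token entities, ordered by id.
+{
+  tokens(first: 5, orderBy: id) {
+    id
+    owner
+  }
+}
+```
+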
+Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Deprecating Subgraphs +Learn more about [querying subgraphs](/querying/querying-the-graph/). -At some point a developer may decide that they no longer need a published subgraph. At that point they may deprecate the subgraph, which returns any signalled GRT to the Curators. +### Updating Subgraphs -### Diverse Developer Roles +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Some developers will engage with the full subgraph lifecycle on the network, publishing, querying and iterating on their own subgraphs. Some may be focused on subgraph development, building open APIs which others can build on. Some may be application focused, querying subgraphs deployed by others. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Developers and Network Economics +### Deprecating & Transferring Subgraphs -Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is updated. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/ko/network/explorer.mdx b/website/pages/ko/network/explorer.mdx index bca2993eb0b3..02dca6ed2f9f 100644 --- a/website/pages/ko/network/explorer.mdx +++ b/website/pages/ko/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgraphs -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. 
+After you just finish deploying and publishing your subgraph in Subgraph Studio, click on the "subgraphs tab” at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, several details are surfaced. These include: +On each subgraph’s dedicated page, you can do the following: - Signal/Un-signal on subgraphs - View more details such as charts, current deployment ID, and other metadata @@ -31,26 +45,32 @@ On each subgraph’s dedicated page, several details are surfaced. These include ## Participants -Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in-depth review of what each tab means for you. +This section provides a bird' s-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. Indexers ![Explorer Image 4](/img/Indexer-Pane.png) -Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators -- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards -- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. 
Cooldown periods are set up by Indexers when they update their delegation parameters -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking on the right-hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. To learn more about how to become an Indexer, you can take a look at the [official documentation](/network/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ To learn more about how to become an Indexer, you can take a look at the [offici ### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. 
Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: +In the The Curator table listed below you can see: - The date the Curator started curating - The number of GRT that was deposited @@ -68,34 +92,36 @@ Curators can be community members, data consumers, or even subgraph developers w ![Explorer Image 6](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/network/curating) +If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. Delegators -Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. -Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! 
![Explorer Image 7](/img/Delegation-Overview.png) -The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: +In the Delegators table you can see the active Delegators in the community and important metrics: - The number of Indexers a Delegator is delegating towards - A Delegator’s original delegation - The rewards they have accumulated but have not withdrawn from the protocol - The realized rewards they withdrew from the protocol - Total amount of GRT they have currently in the protocol -- The date they last delegated at +- The date they last delegated -If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## Network -In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. ### Overview -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - The current total network stake - The stake split between the Indexers and their Delegators @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - Protocol parameters such as curation reward, inflation rate, and more - Current epoch rewards and fees -A few key details that are worth mentioning: +A few key details to note: -- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. 
during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ In the Epochs section, you can analyze on a per-epoch basis, metrics such as: - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. - - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![Explorer Image 9](/img/Epoch-Stats.png) ## Your User Profile -Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Profile Overview -This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![Explorer Image 10](/img/Profile-Overview.png) ### Subgraphs Tab -If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: @@ -158,7 +189,9 @@ This section will also include details about your net Indexer rewards and net qu ### Delegating Tab -Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. +Delegators are important to the Graph Network. 
They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. diff --git a/website/pages/ko/network/indexing.mdx b/website/pages/ko/network/indexing.mdx index 77013e86a790..ea382714aeff 100644 --- a/website/pages/ko/network/indexing.mdx +++ b/website/pages/ko/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: +1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -112,12 +112,12 @@ Indexers may differentiate themselves by applying advanced techniques for making - **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. - **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. -| Setup | Postgres
    (CPUs) | Postgres
    (memory in GBs) | Postgres
    (disk in TBs) | VMs
    (CPUs) | VMs
    (memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
    (CPUs) | Postgres
    (memory in GBs) | Postgres
    (disk in TBs) | VMs
    (CPUs) | VMs
    (memory in GBs) | +| -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | ### What are some basic security precautions an Indexer should take? @@ -149,20 +149,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer #### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
    (for paid subgraph queries) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
    (for paid subgraph queries) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -545,7 +545,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additonal argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Queue allocation action diff --git a/website/pages/ko/network/overview.mdx b/website/pages/ko/network/overview.mdx index 16214028dbc9..0779d9a6cb00 100644 --- a/website/pages/ko/network/overview.mdx +++ b/website/pages/ko/network/overview.mdx @@ -2,14 +2,20 @@ title: Network Overview --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Overview +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Token Economics](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20 used to allocate resources in the network. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
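+
+As a rough illustration of how this participation shows up on-chain, the network subgraph itself can be queried for participant stake — a sketch only; the entity and field names here are assumptions and may differ from the published network subgraph schema:
+
+```graphql
+# Hypothetical query against The Graph's network subgraph:
+# list the five largest Indexers by self-stake, with their delegated stake and collected fees.
+{
+  indexers(first: 5, orderBy: stakedTokens, orderDirection: desc) {
+    id
+    stakedTokens
+    delegatedTokens
+    queryFeesCollected
+  }
+}
+```
+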
diff --git a/website/pages/ko/new-chain-integration.mdx b/website/pages/ko/new-chain-integration.mdx index 35b2bc7c8b4a..534d6701efdb 100644 --- a/website/pages/ko/new-chain-integration.mdx +++ b/website/pages/ko/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integrating New Networks +title: New Chain Integration --- -Graph Node can currently index data from the following chain types: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via a [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via a [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via an [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -If you are interested in any of those chains, integration is a matter of Graph Node configuration and testing. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. For more information, refer to [Testing an EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Testing an EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Difference between EVM JSON-RPC & Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -While the two are suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](substreams/), like building [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). 
In addition, Firehose allows for improved indexing speeds when compared to JSON-RPC. +### 2. Firehose Integration -New EVM chain integrators may also consider the Firehose-based approach, given the benefits of substreams and its massive parallelized indexing capabilities. Supporting both allows developers to choose between building substreams or subgraphs for the new chain. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **NOTE**: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that eth_calls are [not a good practice for developers](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Testing an EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON RPC methods: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. 
-- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(optionally required for Graph Node to support call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graph Node Configuration +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Start by preparing your local environment** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graph Node Configuration + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON RPC compliant URL - > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ -**Test the integration by locally deploying a subgraph** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Create a simple example subgraph. Some options are below: - 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point - 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. 
It must remain `ethereum` even if the network name is different. -Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integrating a new Firehose-enabled chain +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Create a simple example subgraph. Some options are below: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. -Integrating a new chain is also possible using the Firehose approach. This is currently the best option for non-EVM chains and a requirement for substreams support. Additional documentation focuses on how Firehose works, adding Firehose support for a new chain and integrating it with Graph Node. Recommended docs for integrators: +## Substreams-powered Subgraphs -1. [General docs on Firehose](firehose/) -2. [Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrating Graph Node with a new chain via Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/ko/operating-graph-node.mdx b/website/pages/ko/operating-graph-node.mdx index dbbfcd5fc545..fb3d538f952a 100644 --- a/website/pages/ko/operating-graph-node.mdx +++ b/website/pages/ko/operating-graph-node.mdx @@ -77,13 +77,13 @@ A complete Kubernetes example configuration can be found in the [indexer reposit When it is running Graph Node exposes the following ports: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | > **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. diff --git a/website/pages/ko/querying/graphql-api.mdx b/website/pages/ko/querying/graphql-api.mdx index 2bbc71b5bb9c..d8671e53a77c 100644 --- a/website/pages/ko/querying/graphql-api.mdx +++ b/website/pages/ko/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Queries +## What is GraphQL? -In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Examples @@ -21,7 +29,7 @@ Query for a single `Token` entity defined in your schema: } ``` -> **Note:** When querying for a single entity, the `id` field is required, and it must be a string. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Query all `Token` entities: @@ -36,7 +44,10 @@ Query all `Token` entities: ### Sorting -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### Example @@ -53,7 +64,7 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. -In the following example, we sort the tokens by the name of their owner: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ In the following example, we sort the tokens by the name of their owner: ### Pagination -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. - -Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. 
+When querying a collection, it's best to: -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### Example using `first` @@ -106,7 +118,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect #### Example using `first` and `id_ge` -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtering -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Example using `where` @@ -155,7 +168,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### Example for block filtering -You can also filter entities by the `_change_block(number_gte: Int)` - this filters entities which were updated in or after the specified block. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). @@ -193,7 +206,7 @@ As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/release ##### `AND` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. 
```graphql { @@ -208,7 +221,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ``` > **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ##### `OR` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. 
#### Example
@@ -322,12 +335,12 @@ Fulltext search queries have one required field, `text`, for supplying search te
Fulltext search operators:
-| Symbol | Operator | Description |
-| --- | --- | --- |
-| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms |
-| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
-| `<->` | `Follow by` | Specify the distance between two words. |
-| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) |
+| Symbol      | Operator    | Description                                                                                                                           |
+| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ |
+| `&`         | `And`       | For combining multiple search terms into a filter for entities that include all of the provided terms |
+| &#124;      | `Or`        | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
+| `<->`       | `Follow by` | Specify the distance between two words. |
+| `:*`        | `Prefix`    | Use the prefix search term to find words whose prefix match (2 characters required.) |
#### Examples
@@ -376,11 +389,11 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021
## Schema
-The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
+The schema of your data sources, i.e. the entity types, values, and relationships that are available to query, is defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest.
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
-> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
+> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
### Entities
diff --git a/website/pages/ko/querying/querying-best-practices.mdx b/website/pages/ko/querying/querying-best-practices.mdx
index 32d1415b20fa..5654cf9e23a5 100644
--- a/website/pages/ko/querying/querying-best-practices.mdx
+++ b/website/pages/ko/querying/querying-best-practices.mdx
@@ -2,11 +2,9 @@
title: Querying Best Practices
---
-The Graph provides a decentralized way to query data from blockchains.
+The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language.
-The Graph network's data is exposed through a GraphQL API, making it easier to query data with the GraphQL language.
- -This page will guide you through the essential GraphQL language rules and GraphQL queries best practices. +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQL is a language and set of conventions that transport over HTTP. It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). -However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), we recommend you to use our `graph-client` that supports unique features such as: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() More GraphQL client alternatives are covered in ["Querying from an Application"](/querying/querying-from-an-application). -Now that we covered the basic rules of GraphQL queries syntax, let's now look at the best practices of GraphQL query writing. - --- ## Best Practices @@ -164,11 +160,11 @@ Doing so brings **many advantages**: - **Variables can be cached** at server-level - **Queries can be statically analyzed by tools** (more on this in the following sections) -**Note: How to include fields conditionally in static queries** +### How to include fields conditionally in static queries -We might want to include the `owner` field only on a particular condition. +You might want to include the `owner` field only on a particular condition. -For this, we can leverage the `@include(if:...)` directive as follows: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Note: The opposite directive is `@skip(if: ...)`. +> Note: The opposite directive is `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL became famous for its "Ask for what you want" tagline. For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. -When querying GraphQL APIs, always think of querying only the fields that will be actually used. - -A common cause of over-fetching is collections of entities. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. Queries should therefore almost always set first explicitly, and make sure they only fetch as many entities as they actually need. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- When querying GraphQL APIs, always think of querying only the fields that will be actually used. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. 
For example, in the following query: @@ -337,8 +332,8 @@ query { Such repeated fields (`id`, `active`, `status`) bring many issues: -- harder to read for more extensive queries -- when using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. A refactored version of the query would be the following: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) but also will result in better TypeScript types generation. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). ### GraphQL Fragment do's and don'ts -**Fragment base must be a type** +### Fragment base must be a type A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. -**How to spread a Fragment** +#### How to spread a Fragment Fragments are defined on specific types and should be used accordingly in queries. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { It is not possible to spread a fragment of type `Vote` here. -**Define Fragment as an atomic business unit of data** +#### Define Fragment as an atomic business unit of data -GraphQL Fragment must be defined based on their usage. +GraphQL `Fragment`s must be defined based on their usage. For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. -Here is a rule of thumb for using Fragment: +Here is a rule of thumb for using fragments: -- when fields of the same type are repeated in a query, group them in a Fragment -- when similar but not the same fields are repeated, create multiple fragments, ex: +- When fields of the same type are repeated in a query, group them in a `Fragment`. +- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## The essential tools +## The Essential Tools ### GraphQL web-based explorers @@ -473,11 +468,11 @@ This will allow you to **catch errors without even testing queries** on the play The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets -- go to definition for fragments and input types +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. 
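+
+For these tools to validate the operations you keep in application code, it helps to define them as static, tagged strings, as recommended earlier on this page. The sketch below reuses the page's placeholder client import and the `Voter` fragment from the fragment examples above, and assumes (as in those examples) a subgraph that exposes `Vote` entities through a `votes` collection; the `/* GraphQL */` comment tag is a common convention that many editor plugins and linters can pick up, though exact support depends on your configuration:
+
+```tsx
+import { execute } from 'your-favorite-graphql-client'
+
+// Reusable fragment, defined once and spread wherever votes are listed.
+const VOTER_FRAGMENT = /* GraphQL */ `
+  fragment Voter on Vote {
+    id
+    voter
+  }
+`
+
+// Static, named query: arguments are passed as variables, not interpolated into the string.
+const GET_VOTES_QUERY = /* GraphQL */ `
+  query GetVotes($first: Int!) {
+    votes(first: $first) {
+      ...Voter
+    }
+  }
+  ${VOTER_FRAGMENT}
+`
+
+const result = await execute(GET_VOTES_QUERY, { first: 10 })
+```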
@@ -485,9 +480,9 @@ If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketp The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -More information on this [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) that showcases all the plugin's main features. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/ko/quick-start.mdx b/website/pages/ko/quick-start.mdx index cba2247457b8..9560a1389911 100644 --- a/website/pages/ko/quick-start.mdx +++ b/website/pages/ko/quick-start.mdx @@ -2,24 +2,18 @@ title: Quick Start --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Ensure that your subgraph will be indexing data from a [supported network](/developing/supported-networks). - -This guide is written assuming that you have: +## Prerequisites for this guide - A crypto wallet -- A smart contract address on the network of your choice - -## 1. Create a subgraph on Subgraph Studio - -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Install the Graph CLI +### 1. Install the Graph CLI -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. On your local machine, run one of the following commands: @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. 
Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -When you initialize your subgraph, the CLI tool will ask you for the following information: +When you initialize your subgraph, the CLI will ask you for the following information: -- Protocol: choose the protocol your subgraph will be indexing data from -- Subgraph slug: create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- Directory to create the subgraph in: choose your local directory -- Ethereum network(optional): you may need to specify which EVM-compatible network your subgraph will be indexing data from -- Contract address: Locate the smart contract address you’d like to query data from -- ABI: If the ABI is not autopopulated, you will need to input it manually as a JSON file -- Start Block: it is suggested that you input the start block to save time while your subgraph indexes blockchain data. You can locate the start block by finding the block where your contract was deployed. -- Contract Name: input the name of your contract -- Index contract events as entities: it is suggested that you set this to true as it will automatically add mappings to your subgraph for every emitted event -- Add another contract(optional): you can add another contract +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. See the following screenshot for an example for what to expect when initializing your subgraph: ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -The previous commands create a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. 
-- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -Once your subgraph is written, run the following commands: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. Once your subgraph is written, run the following commands: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. Test your subgraph - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -The logs will tell you if there are any errors with your subgraph. The logs of an operational subgraph will look like this: - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. 
Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -To save on gas costs, you can curate your subgraph in the same transaction that you published it by selecting this button when you publish your subgraph to The Graph’s decentralized network: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -Now, you can query your subgraph by sending GraphQL queries to your subgraph’s Query URL, which you can find by clicking on the query button. +### 8. 
Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/ko/release-notes/assemblyscript-migration-guide.mdx b/website/pages/ko/release-notes/assemblyscript-migration-guide.mdx index 85f6903a6c69..17224699570d 100644 --- a/website/pages/ko/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/ko/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - You'll need to rename your duplicate variables if you had variable shadowing. - ### Null Comparisons - By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - To solve you can simply change the `if` statement to something like this: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript diff --git a/website/pages/ko/sps/introduction.mdx b/website/pages/ko/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/ko/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. 
+ +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/ko/sps/triggers-example.mdx b/website/pages/ko/sps/triggers-example.mdx new file mode 100644 index 000000000000..8e4f96eba14a --- /dev/null +++ b/website/pages/ko/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## Prerequisites + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+  const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)
+
+  for (let i = 0; i < input.data.length; i++) {
+    const event = input.data[i]
+
+    if (event.transfer != null) {
+      let entity_id: string = `${event.txnId}-${i}`
+      const entity = new MyTransfer(entity_id)
+      entity.amount = event.transfer!.instruction!.amount.toString()
+      entity.source = event.transfer!.accounts!.source
+      entity.designation = event.transfer!.accounts!.destination
+
+      if (event.transfer!.accounts!.signer!.single != null) {
+        entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+      } else if (event.transfer!.accounts!.signer!.multisig != null) {
+        entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+      }
+      entity.save()
+    }
+  }
+}
+```
+
+## Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+
+## Conclusion
+
+You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/pages/ko/sps/triggers.mdx b/website/pages/ko/sps/triggers.mdx
new file mode 100644
index 000000000000..ed19635d4768
--- /dev/null
+++ b/website/pages/ko/sps/triggers.mdx
@@ -0,0 +1,37 @@
+---
+title: Substreams Triggers
+---
+
+Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework.
+
+> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container.
+
+The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+  let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+  if (transactions.length == 0) {
+    log.info('No transactions found', [])
+    return
+  }
+
+  for (let i = 0; i < transactions.length; i++) {
+    // 2.
+    let transaction = transactions[i]
+
+    let entity = new Transaction(transaction.hash) // 3.
+    entity.from = transaction.from
+    entity.to = transaction.to
+    entity.save()
+  }
+}
+```
+
+Here's what you’re seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object can be used like any other AssemblyScript object
+2.
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/ko/substreams.mdx b/website/pages/ko/substreams.mdx index 710e110012cc..a838a6924e2f 100644 --- a/website/pages/ko/substreams.mdx +++ b/website/pages/ko/substreams.mdx @@ -4,9 +4,11 @@ title: Substreams ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## How Substreams Works in 4 Steps @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/ko/sunrise.mdx b/website/pages/ko/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/ko/sunrise.mdx +++ b/website/pages/ko/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. 
Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? 
-Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? 
- -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. 
The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/ko/supported-network-requirements.mdx b/website/pages/ko/supported-network-requirements.mdx index df15ef48d762..9662552e4e6a 100644 --- a/website/pages/ko/supported-network-requirements.mdx +++ b/website/pages/ko/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Network | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| Network | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVMe preferred)
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ |
diff --git a/website/pages/ko/tap.mdx b/website/pages/ko/tap.mdx
new file mode 100644
index 000000000000..872ad6231e9c
--- /dev/null
+++ b/website/pages/ko/tap.mdx
@@ -0,0 +1,197 @@
+---
+title: TAP Migration Guide
+---
+
+Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust.
+
+## Overview
+
+[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features:
+
+- Efficiently handles micropayments.
+- Adds a layer of consolidations to on-chain transactions and costs.
+- Allows Indexers control of receipts and payments, guaranteeing payment for queries.
+- Enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders.
+
+## Specifics
+
+TAP allows a sender to make multiple payments to a receiver as **TAP Receipts**, which are aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
+
+For each query, the gateway will send you a `signed receipt` that is stored in your database. Then, these receipts will be aggregated by `tap-agent` through an aggregation request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts, and this will generate a new RAV with an increased value.
+
+### RAV Details
+
+- A RAV is money that is waiting to be sent to the blockchain.
+
+- `tap-agent` will continue to send aggregation requests to ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`.
+
+- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed.
+
+### Redeeming RAV
+
+As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process (a conceptual sketch follows the list):
+
+1. An Indexer closes an allocation.
+
+2. During the `` period, `tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`.
+
+3. `indexer-agent` takes all the last RAVs and sends redeem requests to the blockchain, which will update the value of `redeem_at`.
+
+4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction.
+
+   - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`.
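+
+To make the flow above more concrete, here is a small, purely illustrative TypeScript sketch of the receipt-to-RAV lifecycle. This is **not** the `tap-agent` implementation; the type shapes, field names, and values are assumptions made only for this example.
+
+```typescript
+// Illustrative model of TAP receipts being folded into a RAV (not actual tap-agent code).
+interface Receipt {
+  allocationId: string
+  value: bigint // query fee for a single query
+}
+
+interface Rav {
+  allocationId: string
+  valueAggregate: bigint // total value of all receipts aggregated so far
+  last: boolean // set when the allocation is closed
+  final: boolean // set once the redeem transaction is safe from reorgs
+}
+
+// Fold pending receipts for one allocation into the previous RAV; the new RAV
+// always carries an equal or increased aggregate value.
+function aggregate(previous: Rav | null, pending: Receipt[], allocationId: string): Rav {
+  const added = pending
+    .filter((r) => r.allocationId === allocationId)
+    .reduce((sum, r) => sum + r.value, 0n)
+  return {
+    allocationId,
+    valueAggregate: (previous?.valueAggregate ?? 0n) + added,
+    last: false,
+    final: false,
+  }
+}
+
+// Two receipts are aggregated; the RAV is then marked `last` when the allocation
+// closes and `final` after the redeem transaction survives any reorg window.
+let rav = aggregate(null, [
+  { allocationId: "0xalloc", value: 100n },
+  { allocationId: "0xalloc", value: 250n },
+], "0xalloc")
+rav = { ...rav, last: true }
+rav = { ...rav, final: true }
+console.log(rav.valueAggregate) // 350n
+```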
+
+## Blockchain Addresses
+
+### Contracts
+
+| Contract            | Arbitrum Sepolia (421614)                    | Arbitrum Mainnet (42161)                     |
+| ------------------- | -------------------------------------------- | -------------------------------------------- |
+| TAP Verifier        | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` |
+| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` |
+| Escrow              | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` |
+
+### Gateway
+
+| Component  | Edge and Node Mainnet (Arbitrum Sepolia)      | Edge and Node Testnet (Arbitrum Mainnet)      |
+| ---------- | --------------------------------------------- | --------------------------------------------- |
+| Sender     | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467`  | `0xC3dDf37906724732FfD748057FEBe23379b0710D`  |
+| Signers    | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211`  | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE`  |
+| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
+
+### Requirements
+
+In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query it, or host it yourself on your own `graph-node`.
+
+- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+
+> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+
+## Migration Guide
+
+### Software versions
+
+| Component       | Version     | Image Link                                                                                                                 |
+| --------------- | ----------- | -------------------------------------------------------------------------------------------------------------------------- |
+| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6)   |
+| indexer-agent   | PR #995     | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80)            |
+| tap-agent       | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6)          |
+
+### Steps
+
+1. **Indexer Agent**
+
+   - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components).
+   - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs.
+
+2. **Indexer Service**
+
+   - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+   - Like the older version, you can easily scale Indexer Service horizontally. It is still stateless.
+
+3. **TAP Agent**
+
+   - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+
+4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Notes: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/mr/about.mdx b/website/pages/mr/about.mdx index 221d6e16692d..d560ebcd179d 100644 --- a/website/pages/mr/about.mdx +++ b/website/pages/mr/about.mdx @@ -2,46 +2,66 @@ title: ग्राफ बद्दल --- -The Graph म्हणजे काय आणि तुम्ही सुरुवात कशी करू शकता हे हे पान स्पष्ट करेल. - ## द ग्राफ म्हणजे काय? -आलेख हे ब्लॉकचेन डेटाचे अनुक्रमणिका आणि क्वेरी करण्यासाठी विकेंद्रित प्रोटोकॉल आहे. ग्राफ थेट क्वेरी करणे कठीण असलेल्या डेटाची क्वेरी करणे शक्य करते. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -[Uniswap](https://uniswap.org/) सारखे जटिल स्मार्ट करार आणि [Bored Ape Yacht Club< सारखे NFTs उपक्रम असलेले प्रकल्प ](https://boredapeyachtclub.com/) इथरियम ब्लॉकचेनवर डेटा संग्रहित करा, ज्यामुळे ब्लॉकचेनमधून थेट मूलभूत डेटाशिवाय इतर काहीही वाचणे खरोखर कठीण होते. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -तुम्ही तुमचा स्वतःचा सर्व्हर तयार करू शकता, तिथल्या व्यवहारांवर प्रक्रिया करू शकता, त्यांना डेटाबेसमध्ये सेव्ह करू शकता आणि डेटाची क्वेरी करण्यासाठी या सर्वांच्या वर API एंडपॉइंट तयार करू शकता. तथापि, हा पर्याय [संसाधन गहन](/network/benefits/) आहे, देखभाल आवश्यक आहे, अपयशाचा एकच बिंदू सादर करतो आणि विकेंद्रीकरणासाठी आवश्यक असलेले महत्त्वाचे सुरक्षा गुणधर्म खंडित करतो. +### How The Graph Functions -**ब्लॉकचेन डेटा अनुक्रमित करणे खरोखर, खरोखर कठीण आहे.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). 
+- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## आलेख कसे कार्य करते +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -सबग्राफ मॅनिफेस्ट म्हणून ओळखल्या जाणार्‍या सबग्राफ वर्णनावर आधारित इथरियम डेटा काय आणि कसा अनुक्रमित करायचा हे आलेख शिकतो. सबग्राफ वर्णन हे सबग्राफसाठी स्वारस्य असलेले स्मार्ट कॉन्ट्रॅक्ट्स, त्या कॉन्ट्रॅक्टमधील इव्हेंट्सकडे लक्ष देण्यासारखे आहे आणि ग्राफ त्याच्या डेटाबेसमध्ये संग्रहित केलेल्या डेटावर इव्हेंट डेटा कसा मॅप करायचा हे परिभाषित करते. +- When creating a subgraph, you need to write a subgraph manifest. -एकदा तुम्ही `सबग्राफ मॅनिफेस्ट` लिहिल्यानंतर, तुम्ही आयपीएफएसमध्ये व्याख्या संचयित करण्यासाठी ग्राफ सीएलआय वापरता आणि इंडेक्सरला त्या सबग्राफसाठी डेटा अनुक्रमणिका सुरू करण्यास सांगा. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![ग्राफिक डेटा ग्राहकांना प्रश्न देण्यासाठी ग्राफ नोड कसा वापरतो हे स्पष्ट करणारे ग्राफिक](/img/graph-dataflow.png) प्रवाह या चरणांचे अनुसरण करतो: -1. A dapp स्मार्ट करारावरील व्यवहाराद्वारे इथरियममध्ये डेटा जोडते. -2. व्यवहारावर प्रक्रिया करताना स्मार्ट करार एक किंवा अधिक इव्हेंट सोडतो. -3. ग्राफ नोड सतत नवीन ब्लॉक्ससाठी इथरियम स्कॅन करतो आणि तुमच्या सबग्राफचा डेटा त्यात असू शकतो. -4. ग्राफ नोड या ब्लॉक्समध्ये तुमच्या सबग्राफसाठी इथरियम इव्हेंट शोधतो आणि तुम्ही प्रदान केलेले मॅपिंग हँडलर चालवतो. मॅपिंग हे WASM मॉड्यूल आहे जे इथरियम इव्हेंट्सच्या प्रतिसादात ग्राफ नोड संचयित केलेल्या डेटा घटक तयार करते किंवा अद्यतनित करते. -5. नोडचा [GraphQL एंडपॉइंट](https://graphql.org/learn/) वापरून ब्लॉकचेन वरून अनुक्रमित केलेल्या डेटासाठी dapp ग्राफ नोडची क्वेरी करते. ग्राफ नोड यामधून, स्टोअरच्या इंडेक्सिंग क्षमतांचा वापर करून, हा डेटा मिळविण्यासाठी त्याच्या अंतर्निहित डेटा स्टोअरच्या क्वेरींमध्ये GraphQL क्वेरीचे भाषांतर करतो. dapp हा डेटा अंतिम वापरकर्त्यांसाठी समृद्ध UI मध्ये प्रदर्शित करते, जो ते Ethereum वर नवीन व्यवहार जारी करण्यासाठी वापरतात. चक्राची पुनरावृत्ती होते. +1. A dapp स्मार्ट करारावरील व्यवहाराद्वारे इथरियममध्ये डेटा जोडते. +2. व्यवहारावर प्रक्रिया करताना स्मार्ट करार एक किंवा अधिक इव्हेंट सोडतो. +3. ग्राफ नोड सतत नवीन ब्लॉक्ससाठी इथरियम स्कॅन करतो आणि तुमच्या सबग्राफचा डेटा त्यात असू शकतो. +4. ग्राफ नोड या ब्लॉक्समध्ये तुमच्या सबग्राफसाठी इथरियम इव्हेंट शोधतो आणि तुम्ही प्रदान केलेले मॅपिंग हँडलर चालवतो. मॅपिंग हे WASM मॉड्यूल आहे जे इथरियम इव्हेंट्सच्या प्रतिसादात ग्राफ नोड संचयित केलेल्या डेटा घटक तयार करते किंवा अद्यतनित करते. +5. नोडचा [GraphQL एंडपॉइंट](https://graphql.org/learn/) वापरून ब्लॉकचेन वरून अनुक्रमित केलेल्या डेटासाठी dapp ग्राफ नोडची क्वेरी करते. ग्राफ नोड यामधून, स्टोअरच्या इंडेक्सिंग क्षमतांचा वापर करून, हा डेटा मिळविण्यासाठी त्याच्या अंतर्निहित डेटा स्टोअरच्या क्वेरींमध्ये GraphQL क्वेरीचे भाषांतर करतो. dapp हा डेटा अंतिम वापरकर्त्यांसाठी समृद्ध UI मध्ये प्रदर्शित करते, जो ते Ethereum वर नवीन व्यवहार जारी करण्यासाठी वापरतात. चक्राची पुनरावृत्ती होते. 
## पुढील पायऱ्या -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/mr/arbitrum/arbitrum-faq.mdx b/website/pages/mr/arbitrum/arbitrum-faq.mdx index b72ce63a04ea..6a6ac6739ee8 100644 --- a/website/pages/mr/arbitrum/arbitrum-faq.mdx +++ b/website/pages/mr/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum FAQ Click [here](#billing-on-arbitrum-faqs) if you would like to skip to the Arbitrum Billing FAQs. -## Why is The Graph implementing an L2 Solution? +## Why did The Graph implement an L2 Solution? -By scaling The Graph on L2, network participants can expect: +By scaling The Graph on L2, network participants can now benefit from: - Upwards of 26x savings on gas fees @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can expect: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers could open and close allocations to index a greater number of subgraphs with greater frequency, developers could deploy and update subgraphs with greater ease, Delegators could delegate GRT with increased frequency, and Curators could add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -41,27 +41,21 @@ To take advantage of using The Graph on L2, use this dropdown switcher to toggle ## सबग्राफ डेव्हलपर, डेटा कंझ्युमर, इंडेक्सर, क्युरेटर किंवा डेलिगेटर म्हणून, मला आता काय करावे लागेल? -There is no immediate action required, however, network participants are encouraged to begin moving to Arbitrum to take advantage of the benefits of L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -Core developer teams are working to create L2 transfer tools that will make it significantly easier to move delegation, curation, and subgraphs to Arbitrum. 
Network participants can expect L2 transfer tools to be available by summer of 2023. +All indexing rewards are now entirely on Arbitrum. -10 एप्रिल 2023 पर्यंत, सर्व इंडेक्सिंग रिवॉर्ड्सपैकी 5% आर्बिट्रमवर टाकले जात आहेत. जसजसा नेटवर्क सहभाग वाढतो, आणि काउन्सिलने त्याला मान्यता दिली तसतसे, अनुक्रमणिका बक्षिसे हळूहळू इथरियममधून आर्बिट्रममध्ये बदलली जातील, अखेरीस संपूर्णपणे आर्बिट्रमकडे जातील. - -## If I would like to participate in the network on L2, what should I do? - -Please help [test the network](https://testnet.thegraph.com/explorer) on L2 and report feedback about your experience in [Discord](https://discord.gg/graphprotocol). - -## Are there any risks associated with scaling the network to L2? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Will existing subgraphs on Ethereum continue to work? +## Are existing subgraphs on Ethereum working? -होय, ग्राफ नेटवर्क कॉन्ट्रॅक्ट्स नंतरच्या तारखेला पूर्णपणे आर्बिट्रममध्ये जाईपर्यंत इथरियम आणि आर्बिट्रम दोन्हीवर समांतरपणे कार्य करतील. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## Will GRT have a new smart contract deployed on Arbitrum? +## Does GRT have a new smart contract deployed on Arbitrum? Yes, GRT has an additional [smart contract on Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). However, the Ethereum mainnet [GRT contract](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) will remain operational. diff --git a/website/pages/mr/billing.mdx b/website/pages/mr/billing.mdx index 26457c03358f..63430377cc61 100644 --- a/website/pages/mr/billing.mdx +++ b/website/pages/mr/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. पृष्ठाच्या वरच्या उजव्या कोपर्यात "कनेक्ट वॉलेट" बटणावर क्लिक करा. तुम्हाला वॉलेट निवड पृष्ठावर पुनर्निर्देशित केले जाईल. तुमचे वॉलेट निवडा आणि "कनेक्ट" वर क्लिक करा. 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. 
Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. 
@@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/mr/chain-integration-overview.mdx b/website/pages/mr/chain-integration-overview.mdx index 2fe6c2580909..a142f3f817f9 100644 --- a/website/pages/mr/chain-integration-overview.mdx +++ b/website/pages/mr/chain-integration-overview.mdx @@ -6,12 +6,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 1. Technical Integration -- Teams work on a Graph Node integration and Firehose for non-EVM based chains. [Here's how](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. ## Stage 2. Integration Validation -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON RPC or Firehose endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph Indexers test the integration on The Graph's testnet. - Core developers and Indexers monitor stability, performance, and data determinism. @@ -38,7 +38,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. -### 3. How much time will this process take? +### 3. 
How much time will the process of reaching full protocol support take? The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. @@ -46,4 +46,4 @@ Protocol support for indexing rewards depends on the stakeholders' bandwidth to ### 4. How will priorities be handled? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/mr/cookbook/arweave.mdx b/website/pages/mr/cookbook/arweave.mdx index b267a6d46869..f8fab0bd478b 100644 --- a/website/pages/mr/cookbook/arweave.mdx +++ b/website/pages/mr/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Arweave डेटा स्रोत दोन प्रकारच्या इव्हेंटवर प्रक्रिया करण्यासाठी हँडलर [AssemblyScript](https://www.assemblyscript.org/) मध्ये लिहिलेले आहेत. -Arweave अनुक्रमणिका [AssemblyScript API](/developing/assemblyscript-api/) मध्ये Arweave-विशिष्ट डेटा प्रकारांचा परिचय देते. +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/mr/cookbook/base-testnet.mdx b/website/pages/mr/cookbook/base-testnet.mdx index d62bf749c571..7c046613e1a1 100644 --- a/website/pages/mr/cookbook/base-testnet.mdx +++ b/website/pages/mr/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ graph init --studio The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- स्कीमा (schema.graphql) - ग्राफक्यूएल स्कीमा तुम्हाला सबग्राफमधून कोणता डेटा मिळवायचा आहे ते परिभाषित करते. - असेंबलीस्क्रिप्ट मॅपिंग (mapping.ts) - हा असा कोड आहे जो तुमच्या डेटास्रोतमधील डेटाचे स्कीमामध्ये परिभाषित केलेल्या घटकांमध्ये भाषांतर करतो. -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). 
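+
+As an illustration, a minimal mapping handler could look like the sketch below. The contract name, event, and generated import paths are assumptions for a hypothetical scaffold (they come from `graph codegen` and your own schema), so substitute the names your scaffold actually generates.
+
+```typescript
+// src/mapping.ts — hypothetical scaffold for a contract "MyContract" with a
+// Transfer(address,address,uint256) event and a matching `Transfer` entity
+// whose schema uses `id: Bytes!`.
+import { Transfer as TransferEvent } from '../generated/MyContract/MyContract'
+import { Transfer } from '../generated/schema'
+
+export function handleTransfer(event: TransferEvent): void {
+  // Derive a unique entity ID from the transaction hash and log index.
+  let entity = new Transfer(event.transaction.hash.concatI32(event.logIndex.toI32()))
+  entity.from = event.params.from
+  entity.to = event.params.to
+  entity.value = event.params.value
+  entity.save()
+}
+```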
diff --git a/website/pages/mr/cookbook/cosmos.mdx b/website/pages/mr/cookbook/cosmos.mdx index 56feb3bba773..3460369da07f 100644 --- a/website/pages/mr/cookbook/cosmos.mdx +++ b/website/pages/mr/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and इव्हेंटवर प्रक्रिया करण्यासाठी हँडलर [AssemblyScript](https://www.assemblyscript.org/) मध्ये लिहिलेले आहेत. -कॉसमॉस अनुक्रमणिका [AssemblyScript API](/developing/assemblyscript-api/) मध्ये कॉसमॉस-विशिष्ट डेटा प्रकारांचा परिचय देते. +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/mr/cookbook/grafting.mdx b/website/pages/mr/cookbook/grafting.mdx index 9c46a16ab79d..77d4b0d04899 100644 --- a/website/pages/mr/cookbook/grafting.mdx +++ b/website/pages/mr/cookbook/grafting.mdx @@ -22,7 +22,7 @@ title: करार बदला आणि त्याचा इतिहास - [कलम करणे](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -या ट्युटोरियलमध्ये, आपण मूलभूत वापराचे केस कव्हर करणार आहोत. आम्‍ही सध्‍याच्‍या कराराची जागा एकसमान कराराने (नवीन पत्‍त्‍यासह, परंतु समान कोडसह) बदलू. त्यानंतर, नवीन कराराचा मागोवा घेणाऱ्या "बेस" सबग्राफवर विद्यमान सबग्राफ कलम करा. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ title: करार बदला आणि त्याचा इतिहास ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - `मॅपिंग` विभाग स्वारस्यांचे ट्रिगर आणि त्या ट्रिगरला प्रतिसाद म्हणून चालवल्या जाणार्‍या कार्ये परिभाषित करतो. या प्रकरणात, आम्ही `Withdrawal` इव्हेंट ऐकत आहोत आणि जेव्हा ते उत्सर्जित होते तेव्हा `handleWithdrawal` फंक्शनला कॉल करत आहोत. ## Grafting मॅनिफेस्ट व्याख्या @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. 
## अतिरिक्त संसाधने -तुम्हाला ग्राफ्टिंगचा अधिक अनुभव हवा असल्यास, लोकप्रिय करारांसाठी येथे काही उदाहरणे आहेत: +If you want more experience with grafting, here are a few examples for popular contracts: - [वक्र](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/mr/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx b/website/pages/mr/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx index 9df73635859c..d4e672df4fdb 100644 --- a/website/pages/mr/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx +++ b/website/pages/mr/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx @@ -2,7 +2,7 @@ title: How to Secure API Keys Using Next.js Server Components --- -## सारांश +## सविश्लेषण We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). diff --git a/website/pages/mr/cookbook/near.mdx b/website/pages/mr/cookbook/near.mdx index 9280cf7c0ce5..c5cbc606d535 100644 --- a/website/pages/mr/cookbook/near.mdx +++ b/website/pages/mr/cookbook/near.mdx @@ -37,7 +37,7 @@ This guide is an introduction to building subgraphs indexing smart contracts on **schema.graphql:** एक स्कीमा फाइल जी तुमच्या सबग्राफसाठी कोणता डेटा संग्रहित केला जातो आणि GraphQL द्वारे त्याची क्वेरी कशी करावी हे परिभाषित करते. जवळच्या सबग्राफसाठी आवश्यकता [विद्यमान दस्तऐवज](/developing/creating-a-subgraph#the-graphql-schema) द्वारे कव्हर केल्या जातात. -**AssemblyScript मॅपिंग:** [AssemblyScript कोड](/developing/assemblyscript-api) जो इव्हेंट डेटामधून तुमच्या स्कीमामध्ये परिभाषित केलेल्या घटकांमध्ये अनुवादित करतो. NEAR समर्थन NEAR-विशिष्ट डेटा प्रकार आणि नवीन JSON पार्सिंग कार्यक्षमता सादर करते. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. सबग्राफ विकासादरम्यान दोन प्रमुख आज्ञा आहेत: @@ -98,7 +98,7 @@ accounts: इव्हेंटवर प्रक्रिया करण्यासाठी हँडलर [AssemblyScript](https://www.assemblyscript.org/) मध्ये लिहिलेले आहेत. -NEAR अनुक्रमणिका [AssemblyScript API](/developing/assemblyscript-api) मध्ये NEAR-विशिष्ट डेटा प्रकारांचा परिचय देते. +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ class ReceiptWithOutcome { - ब्लॉक हँडलर्सना एक `ब्लॉक` मिळेल - पावती हाताळणाऱ्यांना `ReceiptWithOutcome` मिळेल -अन्यथा, उर्वरित [AssemblyScript API](/developing/assemblyscript-api) मॅपिंग अंमलबजावणी दरम्यान जवळच्या सबग्राफ विकसकांसाठी उपलब्ध आहे. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -यामध्ये नवीन JSON पार्सिंग फंक्शन समाविष्ट आहे - NEAR वरील लॉग वारंवार स्ट्रिंगिफाइड JSON म्हणून उत्सर्जित केले जातात. एक नवीन `json.fromString(...)`विकासकांना या लॉगवर सहज प्रक्रिया करण्याची अनुमती देण्यासाठी unction [JSON API](/developing/assemblyscript-api#json-api) चा भाग म्हणून उपलब्ध आहे. 
+This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## NEAR सबग्राफ डिप्लॉय करण्यासाठी diff --git a/website/pages/mr/cookbook/subgraph-uncrashable.mdx b/website/pages/mr/cookbook/subgraph-uncrashable.mdx index ed44a80b316f..1b8e106d4b03 100644 --- a/website/pages/mr/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/mr/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: सुरक्षित सबग्राफ कोड जनरेट - फ्रेमवर्कमध्ये एंटिटी व्हेरिएबल्सच्या गटांसाठी सानुकूल, परंतु सुरक्षित, सेटर फंक्शन्स तयार करण्याचा मार्ग (कॉन्फिग फाइलद्वारे) देखील समाविष्ट आहे. अशा प्रकारे वापरकर्त्याला जुना आलेख घटक लोड करणे/वापरणे अशक्य आहे आणि फंक्शनसाठी आवश्यक असलेले व्हेरिएबल सेव्ह करणे किंवा सेट करणे विसरणे देखील अशक्य आहे. -- डेटा अचूकता सुनिश्चित करण्यासाठी समस्या पॅच करण्यात मदत करण्यासाठी सबग्राफ लॉजिकचे उल्लंघन कोठे आहे हे दर्शविणारे लॉग म्हणून चेतावणी लॉग रेकॉर्ड केले जातात. हे लॉग 'लॉग' विभागांतर्गत ग्राफच्या होस्ट केलेल्या सेवेमध्ये पाहिले जाऊ शकतात. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. ग्राफ CLI codegen कमांड वापरून Subgraph Uncrashable हा पर्यायी ध्वज म्हणून चालवला जाऊ शकतो. diff --git a/website/pages/mr/cookbook/upgrading-a-subgraph.mdx b/website/pages/mr/cookbook/upgrading-a-subgraph.mdx index 9351c87da358..c6877638a7ab 100644 --- a/website/pages/mr/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/mr/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ Make sure **Update Subgraph Details in Explorer** is checked and click on **Save ## Deprecating a Subgraph on The Graph Network -Follow the steps [here](/managing/deprecating-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## Querying a Subgraph + Billing on The Graph Network diff --git a/website/pages/mr/deploying/multiple-networks.mdx b/website/pages/mr/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..ab644e836b37 --- /dev/null +++ b/website/pages/mr/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## एकाधिक नेटवर्कवर सबग्राफ तैनात करणे + +काही प्रकरणांमध्ये, तुम्हाला समान सबग्राफ एकाधिक नेटवर्कवर त्याच्या कोडची नक्कल न करता उपयोजित करायचा असेल. यासह येणारे मुख्य आव्हान हे आहे की या नेटवर्कवरील कराराचे पत्ते वेगळे आहेत. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. 
+ +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +तुमची नेटवर्क कॉन्फिगरेशन फाइल अशी दिसली पाहिजे: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +आता आपण खालीलपैकी एक कमांड रन करू शकतो: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### वापरत आहे subgraph.yaml टेम्पलेट + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +आणि + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... 
+dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... + "mustache": "^3.1.0" + } +} +``` + +To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Sepolia: +yarn prepare:sepolia && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. + +## सबग्राफ स्टुडिओ सबग्राफ संग्रहण धोरण + +A subgraph version in Studio is archived if and only if it meets the following criteria: + +- The version is not published to the network (or pending publish) +- The version was created 45 or more days ago +- The subgraph hasn't been queried in 30 days + +In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. + +या धोरणामुळे प्रभावित झालेल्या प्रत्येक सबग्राफला प्रश्नातील आवृत्ती परत आणण्याचा पर्याय आहे. + +## सबग्राफ आरोग्य तपासत आहे + +जर सबग्राफ यशस्वीरित्या समक्रमित झाला, तर ते कायमचे चांगले चालत राहण्याचे चांगले चिन्ह आहे. तथापि, नेटवर्कवरील नवीन ट्रिगर्समुळे तुमच्या सबग्राफची चाचणी न केलेली त्रुटी स्थिती येऊ शकते किंवा कार्यप्रदर्शन समस्यांमुळे किंवा नोड ऑपरेटरमधील समस्यांमुळे ते मागे पडू शकते. + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. 
`health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/mr/developing/creating-a-subgraph.mdx b/website/pages/mr/developing/creating-a-subgraph.mdx index 7eb73bc56aab..518b4fc1e73f 100644 --- a/website/pages/mr/developing/creating-a-subgraph.mdx +++ b/website/pages/mr/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: सबग्राफ तयार करणे --- -A subgraph extracts data from a blockchain, processing it and storing it so that it can be easily queried via GraphQL. +This detailed guide provides instructions to successfully create a subgraph. -![सबग्राफ परिभाषित करणे](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -The subgraph definition consists of a few files: +![सबग्राफ परिभाषित करणे](/img/defining-a-subgraph.png) -- `subgraph.yaml`: a YAML file containing the subgraph manifest +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: एक GraphQL स्कीमा जो तुमच्या सबग्राफसाठी कोणता डेटा संग्रहित केला जातो आणि GraphQL द्वारे त्याची क्वेरी कशी करावी हे परिभाषित करते +## प्रारंभ करणे -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. `mapping.ts` in this tutorial) +### Install the Graph CLI -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## Install the Graph CLI +तुमच्या स्थानिक मशीनवर, खालीलपैकी एक कमांड चालवा: -आलेख CLI JavaScript मध्ये लिहिलेले आहे, आणि ते वापरण्यासाठी तुम्हाला `यार्न` किंवा `npm` स्थापित करावे लागेल; असे गृहीत धरले जाते की तुमच्याकडे पुढील गोष्टींमध्ये सूत आहे. +#### Using [npm](https://www.npmjs.com/) -तुमच्याकडे `यार्न` आल्यावर, चालवून आलेख CLI स्थापित करा +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**यार्नसह स्थापित करा:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**सह स्थापित करा npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. 
-```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## From An Existing Contract +### From an existing contract -खालील कमांड एक सबग्राफ तयार करते जे विद्यमान कराराच्या सर्व घटनांना अनुक्रमित करते. ते इथरस्कॅन वरून ABI करार मिळवण्याचा प्रयत्न करते आणि स्थानिक फाइल मार्गाची विनंती करण्यासाठी परत येते. पर्यायी युक्तिवादांपैकी कोणतेही गहाळ असल्यास, ते तुम्हाला परस्परसंवादी फॉर्ममधून घेऊन जाते. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -The `` is the ID of your subgraph in Subgraph Studio, it can be found on your subgraph details page. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## From An Example Subgraph +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -दुसरा मोड `graph init` सपोर्ट करतो तो उदाहरण सबग्राफमधून नवीन प्रोजेक्ट तयार करतो. खालील कमांड हे करते: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -आलेख init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Add New dataSources To An Existing Subgraph +## Add new `dataSources` to an existing subgraph -Since `v0.31.0` the `graph-cli` supports adding new dataSources to an existing subgraph through the `graph add` command. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
    [] @@ -78,22 +88,45 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -`add` कमांड इथरस्कॅनमधून ABI आणेल (जोपर्यंत ABI पथ `--abi` पर्यायाने निर्दिष्ट केला जात नाही), आणि नवीन `डेटास्रोत` तयार करेल. > त्याच प्रकारे `graph init` कमांड `डेटास्रोत` `---करारातून` तयार करते, त्यानुसार स्कीमा आणि मॅपिंग अद्यतनित करते. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: + + - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- The contract `address` will be written to the `networks.json` for the relevant network. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: +## Components of a subgraph -- If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. -- If `false`: a new entity & event handler should be created with `${dataSourceName}{EventName}`. +### सबग्राफ मॅनिफेस्ट -The contract `address` will be written to the `networks.json` for the relevant network. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **टीप:** परस्परसंवादी क्ली वापरताना, यशस्वीरित्या `ग्राफ इनिट` चालवल्यानंतर, तुम्हाला एक नवीन `डेटास्रोत` जोडण्यासाठी सूचित केले जाईल. +The **subgraph definition** consists of the following files: -## सबग्राफ मॅनिफेस्ट +- `subgraph.yaml`: Contains the subgraph manifest -सबग्राफ मॅनिफेस्ट `subgraph.yaml` स्मार्ट कॉन्ट्रॅक्ट्स तुमच्या सबग्राफ इंडेक्सेस परिभाषित करतो, या कॉन्ट्रॅक्टमधील कोणत्या इव्हेंट्सकडे लक्ष द्यायचे आणि ग्राफ नोड स्टोअर करत असलेल्या आणि क्वेरी करण्याची परवानगी देणार्‍या घटकांसाठी इव्हेंट डेटा कसा मॅप करायचा. सबग्राफ मॅनिफेस्टसाठी संपूर्ण तपशील [येथे](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md) आढळू शकतात. +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -For the example subgraph, `subgraph.yaml` is: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
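For orientation before the manifest example that follows, here is a minimal sketch of the kind of handler that lives in `mapping.ts`. It is based on the Gravatar example subgraph mentioned above; the import paths and the exact field names (`owner`, `displayName`, `imageUrl`) are assumptions for illustration, and the real types are produced by `graph codegen`:

```typescript
// Event class generated from the Gravity ABI and entity class generated
// from schema.graphql (paths assumed for this sketch)
import { NewGravatar } from '../generated/Gravity/Gravity'
import { Gravatar } from '../generated/schema'

export function handleNewGravatar(event: NewGravatar): void {
  // Use the event parameter that uniquely identifies the avatar as the entity ID
  let gravatar = new Gravatar(event.params.id.toHex())
  gravatar.owner = event.params.owner
  gravatar.displayName = event.params.displayName
  gravatar.imageUrl = event.params.imageUrl
  // Persist the entity so it can be served to GraphQL queries
  gravatar.save()
}
```

Every handler named under `eventHandlers` in the manifest maps to an exported AssemblyScript function like this one.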
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ dataSources: The triggers for a data source within a block are ordered using the following process: -1. Event and call triggers are first ordered by transaction index within the block. -2. समान व्यवहारामधील इव्हेंट आणि कॉल ट्रिगर्स एक नियम वापरून ऑर्डर केले जातात: प्रथम इव्हेंट ट्रिगर नंतर कॉल ट्रिगर, प्रत्येक प्रकार मॅनिफेस्टमध्ये परिभाषित केलेल्या क्रमाचा आदर करतो. -3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. +1. Event and call triggers are first ordered by transaction index within the block. +2. समान व्यवहारामधील इव्हेंट आणि कॉल ट्रिगर्स एक नियम वापरून ऑर्डर केले जातात: प्रथम इव्हेंट ट्रिगर नंतर कॉल ट्रिगर, प्रत्येक प्रकार मॅनिफेस्टमध्ये परिभाषित केलेल्या क्रमाचा आदर करतो. +3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. These ordering rules are subject to change. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
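To make the declaration above concrete, a handler might read the declared value like this. This is only a sketch under assumptions: the `Pool` binding and `Swap` event come from code generated for the contract named in the `calls` block, and the `PoolSnapshot` entity with its `feeGrowthGlobal0X128` field is a hypothetical schema definition. The key point is that the mapping reads the value through the usual generated binding, while the manifest declaration lets Graph Node execute the `eth_call` ahead of time rather than blocking inside the handler:

```typescript
import { BigInt } from '@graphprotocol/graph-ts'
// Generated contract binding and event class (paths/names assumed)
import { Pool, Swap as SwapEvent } from '../generated/Pool/Pool'
// Hypothetical entity generated from schema.graphql
import { PoolSnapshot } from '../generated/schema'

export function handleSwap(event: SwapEvent): void {
  // Because `global0X128` is declared in the manifest, Graph Node can have
  // this call's result ready before the handler runs.
  let pool = Pool.bind(event.address)
  let result = pool.try_feeGrowthGlobal0X128()

  let snapshot = new PoolSnapshot(event.transaction.hash.toHexString())
  snapshot.feeGrowthGlobal0X128 = result.reverted ? BigInt.fromI32(0) : result.value
  snapshot.save()
}
```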
@@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| आवृत्ती | रिलीझ नोट्स | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| आवृत्ती | रिलीझ नोट्स | +|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### ABIs मिळवणे @@ -442,16 +475,16 @@ type GravatarDeclined @entity { We support the following scalars in our GraphQL API: -| प्रकार | वर्णन | -| --- | --- | -| `बाइट्स` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `स्ट्रिंग` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `बुलियन` | `बूलियन` मूल्यांसाठी स्केलर. | -| `इंट` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | मोठे पूर्णांक. इथरियमच्या `uint32`, `int64`, `uint64`, ..., `uint256` प्रकारांसाठी वापरले जाते. टीप: `uint32` खाली सर्व काही, जसे की `int32`, `uint24` किंवा `int8` `i32` म्हणून प्रस्तुत केले जाते 0>. | -| `बिग डेसिमल` | `BigDecimal` उच्च सुस्पष्टता दशांश एक महत्त्वपूर्ण आणि घातांक म्हणून प्रस्तुत केले जाते. घातांक श्रेणी −6143 ते +6144 पर्यंत आहे. 34 लक्षणीय अंकांपर्यंत पूर्णांक. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| प्रकार | वर्णन | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `बाइट्स` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `स्ट्रिंग` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `बुलियन` | `बूलियन` मूल्यांसाठी स्केलर. | +| `इंट` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | मोठे पूर्णांक. इथरियमच्या `uint32`, `int64`, `uint64`, ..., `uint256` प्रकारांसाठी वापरले जाते. टीप: `uint32` खाली सर्व काही, जसे की `int32`, `uint24` किंवा `int8` `i32` म्हणून प्रस्तुत केले जाते 0>. | +| `बिग डेसिमल` | `BigDecimal` उच्च सुस्पष्टता दशांश एक महत्त्वपूर्ण आणि घातांक म्हणून प्रस्तुत केले जाते. घातांक श्रेणी −6143 ते +6144 पर्यंत आहे. 34 लक्षणीय अंकांपर्यंत पूर्णांक. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### एनम्स @@ -593,7 +626,7 @@ query usersWithOrganizations { #### स्कीमामध्ये टिप्पण्या जोडत आहे -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **टीप:** नवीन डेटा स्रोत केवळ तो ज्या ब्लॉकमध्ये तयार केला गेला होता आणि पुढील सर्व ब्लॉकसाठी कॉल्स आणि इव्हेंटवर प्रक्रिया करेल, परंतु ऐतिहासिक डेटावर प्रक्रिया करणार नाही, म्हणजे, डेटावर प्रक्रिया करणार नाही. जे आधीच्या ब्लॉक्समध्ये समाविष्ट आहे. -> +> > पूर्वीच्या ब्लॉक्समध्ये नवीन डेटा स्रोताशी संबंधित डेटा असल्यास, कराराची वर्तमान स्थिती वाचून आणि नवीन डेटा स्रोत तयार करताना त्या स्थितीचे प्रतिनिधित्व करणारी संस्था तयार करून तो डेटा अनुक्रमित करणे सर्वोत्तम आहे. ### डेटा स्रोत संदर्भ @@ -930,7 +963,7 @@ dataSources: ``` > **Note:** The contract creation block can be quickly looked up on Etherscan: -> +> > 1. Search for the contract by entering its address in the search bar. > 2. Click on the creation transaction hash in the `Contract Creator` section. > 3. Load the transaction details page where you'll find the start block for that contract. @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. 
``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### फाइल्सवर प्रक्रिया करण्यासाठी नवीन हँडलर तयार करा -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). The CID of the file as a readable string can be accessed via the `dataSource` as follows: diff --git a/website/pages/mr/developing/developer-faqs.mdx b/website/pages/mr/developing/developer-faqs.mdx index 82680a88f462..e6d975026a06 100644 --- a/website/pages/mr/developing/developer-faqs.mdx +++ b/website/pages/mr/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: विकसक वारंवार विचारले जाणारे प्रश्न --- -## 1. सबग्राफ म्हणजे काय? +This page summarizes some of the most common questions for developers building on The Graph. -सबग्राफ हा ब्लॉकचेन डेटावर तयार केलेला कस्टम API आहे. ग्राफक्यूएल क्वेरी भाषेचा वापर करून सबग्राफ्स विचारले जातात आणि ग्राफ सीएलआय वापरून ग्राफ नोडमध्ये तैनात केले जातात. एकदा द ग्राफच्या विकेंद्रीकृत नेटवर्कवर तैनात आणि प्रकाशित झाल्यानंतर, इंडेक्सर्स सबग्राफवर प्रक्रिया करतात आणि सबग्राफ ग्राहकांद्वारे विचारण्यासाठी त्यांना उपलब्ध करून देतात. +## Subgraph Related -## 2. मी माझा सबग्राफ हटवू शकतो? +### 1. सबग्राफ म्हणजे काय? -एकदा सबग्राफ तयार केल्यावर ते हटवणे शक्य नाही. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. मी माझे सबग्राफ नाव बदलू शकतो का? +### 2. What is the first step to create a subgraph? -नाही. एकदा सबग्राफ तयार केल्यावर नाव बदलता येत नाही. तुम्ही तुमचा सबग्राफ तयार करण्यापूर्वी याचा काळजीपूर्वक विचार केल्याचे सुनिश्चित करा जेणेकरून ते इतर डॅप्सद्वारे सहजपणे शोधता येईल आणि ओळखता येईल. +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. मी माझ्या सबग्राफशी संबंधित गिटहब खाते बदलू शकतो का? +### 3. 
Can I still create a subgraph if my smart contracts don't have events? -नाही. एकदा सबग्राफ तयार केल्यावर, संबंधित GitHub खाते बदलता येत नाही. तुमचा सबग्राफ तयार करण्यापूर्वी याचा काळजीपूर्वक विचार करा. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. माझ्या स्मार्ट कॉन्ट्रॅक्टमध्ये इव्हेंट नसल्यास मी सबग्राफ तयार करू शकतो का? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -तुम्‍हाला क्‍वेरी करण्‍यात स्वारस्य असलेल्‍या डेटाशी संबंधित इव्‍हेंट असण्‍यासाठी तुम्‍ही तुमच्‍या स्‍मार्ट कॉन्ट्रॅक्टची रचना करण्‍याची शिफारस केली जाते. सबग्राफमधील इव्हेंट हँडलर कॉन्ट्रॅक्ट इव्हेंटद्वारे ट्रिगर केले जातात आणि उपयुक्त डेटा पुनर्प्राप्त करण्याचा सर्वात जलद मार्ग आहे. +### 4. मी माझ्या सबग्राफशी संबंधित गिटहब खाते बदलू शकतो का? -जर तुम्ही काम करत असलेल्या करारांमध्ये इव्हेंट्स नसतील, तर तुमचा सबग्राफ इंडेक्सिंग ट्रिगर करण्यासाठी कॉल आणि ब्लॉक हँडलर वापरू शकतो. जरी याची शिफारस केलेली नाही, कारण कार्यप्रदर्शन लक्षणीयरीत्या हळू होईल. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. एकाधिक नेटवर्कसाठी समान नावाचा एक सबग्राफ तैनात करणे शक्य आहे का? +### 5. How do I update a subgraph on mainnet? -तुम्हाला एकाधिक नेटवर्कसाठी स्वतंत्र नावांची आवश्यकता असेल. तुमच्याकडे एकाच नावाखाली वेगवेगळे सबग्राफ असू शकत नसले तरी, एकाधिक नेटवर्कसाठी एकच कोडबेस ठेवण्याचे सोयीचे मार्ग आहेत. आमच्या दस्तऐवजात याबद्दल अधिक शोधा: [सबग्राफ पुन्हा तैनात करणे](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. डेटा स्रोतांपेक्षा टेम्पलेट वेगळे कसे आहेत? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -तुमचा सबग्राफ इंडेक्स करत असताना टेम्प्लेट्स तुम्हाला फ्लायवर डेटा स्रोत तयार करण्याची परवानगी देतात. असे असू शकते की तुमचा करार नवीन करार तयार करेल कारण लोक त्याच्याशी संवाद साधतील आणि तुम्हाला त्या करारांचे स्वरूप (एबीआय, इव्हेंट इ.) आधीच माहित असल्याने तुम्ही ते टेम्पलेटमध्ये कसे अनुक्रमित करायचे आणि ते केव्हा ते परिभाषित करू शकता. तुमचा सबग्राफ कराराचा पत्ता पुरवून डायनॅमिक डेटा स्रोत तयार करेल. +तुम्हाला सबग्राफ पुन्हा तैनात करावा लागेल, परंतु सबग्राफ आयडी (IPFS हॅश) बदलत नसल्यास, त्याला सुरुवातीपासून सिंक करण्याची गरज नाही. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? 
+ +सबग्राफमध्‍ये, इव्‍हेंट नेहमी ब्लॉकमध्‍ये दिसण्‍याच्‍या क्रमाने संसाधित केले जातात, ते एकाधिक कॉन्ट्रॅक्टमध्‍ये असले किंवा नसले तरीही. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. [डेटा स्रोत टेम्पलेट्स](/developing/creating-a-subgraph#data-source-templates) यावर "डेटा स्त्रोत टेम्पलेट इन्स्टंटिएटिंग करणे" विभाग पहा. -## 8. मी माझ्या स्थानिक उपयोजनांसाठी ग्राफ-नोडची नवीनतम आवृत्ती वापरत असल्याची खात्री कशी करावी? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -तुम्ही खालील आदेश चालवू शकता: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -डॉकर पुल ग्राफप्रोटोकॉल/ग्राफ-नोड:नवीनतम -``` +You can also use `graph add` command to add a new dataSource. -**सूचना:** डॉकर/डॉकर-कंपोज तुम्ही पहिल्यांदा चालवताना जी ग्राफ-नोड आवृत्ती काढली होती ती नेहमी वापरेल, त्यामुळे तुम्ही नवीनतम आवृत्तीसह अद्ययावत आहात याची खात्री करण्यासाठी हे करणे महत्त्वाचे आहे. ग्राफ-नोडचे. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. मी माझ्या सबग्राफ मॅपिंगमधून कॉन्ट्रॅक्ट फंक्शन कसे कॉल करू किंवा सार्वजनिक स्टेट व्हेरिएबलमध्ये प्रवेश कसा करू? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. दोन करारांसह `graph-cli` वरून `graph init` वापरून सबग्राफ सेट करणे शक्य आहे का? किंवा `graph init` चालवल्यानंतर `subgraph.yaml` मध्ये मी व्यक्तिचलितपणे दुसरा डेटासोर्स जोडायचा? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +तुम्ही खालील आदेश चालवू शकता: -## 11. मला GitHub समस्या योगदान किंवा जोडायचे आहे. मी ओपन सोर्स रिपॉजिटरीज कुठे शोधू शकतो? 
+```sh +डॉकर पुल ग्राफप्रोटोकॉल/ग्राफ-नोड:नवीनतम +``` -- [आलेख नोड](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. इव्हेंट हाताळताना एखाद्या घटकासाठी "स्वयंजनित" आयडी तयार करण्याचा शिफारस केलेला मार्ग कोणता आहे? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? कार्यक्रमादरम्यान फक्त एकच अस्तित्व तयार केले असल्यास आणि त्यापेक्षा चांगले काही उपलब्ध नसल्यास, व्यवहार हॅश + लॉग इंडेक्स अद्वितीय असेल. तुम्ही ते बाइट्समध्ये रूपांतरित करून आणि नंतर `crypto.keccak256` द्वारे पाइपिंग करून त्यांना अस्पष्ट करू शकता परंतु यामुळे ते अधिक अद्वितीय होणार नाही. -## 13. एकाधिक करार ऐकताना, कार्यक्रम ऐकण्यासाठी कॉन्ट्रॅक्ट ऑर्डर निवडणे शक्य आहे का? +### 15. Can I delete my subgraph? -सबग्राफमध्‍ये, इव्‍हेंट नेहमी ब्लॉकमध्‍ये दिसण्‍याच्‍या क्रमाने संसाधित केले जातात, ते एकाधिक कॉन्ट्रॅक्टमध्‍ये असले किंवा नसले तरीही. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +तुम्ही समर्थित नेटवर्कची सूची [येथे](/developing/supported-networks) शोधू शकता. + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? होय. तुम्ही खालील उदाहरणानुसार `graph-ts` इंपोर्ट करून हे करू शकता: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. मी माझ्या सबग्राफ मॅपिंगमध्ये ethers.js किंवा इतर JS लायब्ररी आयात करू शकतो का? - -असेंब्लीस्क्रिप्टमध्ये मॅपिंग लिहिल्याप्रमाणे सध्या नाही. यावर एक संभाव्य पर्यायी उपाय म्हणजे घटकांमध्ये कच्चा डेटा संग्रहित करणे आणि क्लायंटवर JS लायब्ररी आवश्यक असलेले तर्क करणे. +## Indexing & Querying Related -## 17. कोणत्या ब्लॉकवर अनुक्रमणिका सुरू करायची हे निर्दिष्ट करणे शक्य आहे का? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. इंडेक्सिंगची कार्यक्षमता वाढवण्यासाठी काही टिपा आहेत का? माझा सबग्राफ समक्रमित होण्यासाठी खूप वेळ घेत आहे +### 20. What are some tips to increase the performance of indexing? 
My subgraph is taking a very long time to sync

-होय, कॉन्ट्रॅक्ट तैनात केलेल्या ब्लॉकमधून अनुक्रमणिका सुरू करण्यासाठी तुम्ही पर्यायी स्टार्ट ब्लॉक वैशिष्ट्यावर एक नजर टाकली पाहिजे: [स्टार्ट ब्लॉक्स](/developing/creating-a-subgraph#start-blocks)
+Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks)

-## 19. सबग्राफला अनुक्रमित केलेला नवीनतम ब्लॉक क्रमांक निश्चित करण्यासाठी थेट क्वेरी करण्याचा कोणताही मार्ग आहे का?
+### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed?

होय! खालील आदेश वापरून पहा, "संस्था/सबग्राफनेम" च्या जागी त्याखालील संस्था प्रकाशित झाली आहे आणि तुमच्या सबग्राफचे नाव:

@@ -102,44 +121,27 @@ Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the n
curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}"}' https://api.thegraph.com/index-node/graphql
```

-## 20. ग्राफद्वारे कोणते नेटवर्क समर्थित आहेत?
-
-तुम्ही समर्थित नेटवर्कची सूची [येथे](/developing/supported-networks) शोधू शकता.
-
-## 21. पुनर्नियोजन न करता उपग्राफ दुसर्‍या खात्यावर किंवा एंडपॉइंटवर डुप्लिकेट करणे शक्य आहे का?
-
-तुम्हाला सबग्राफ पुन्हा तैनात करावा लागेल, परंतु सबग्राफ आयडी (IPFS हॅश) बदलत नसल्यास, त्याला सुरुवातीपासून सिंक करण्याची गरज नाही.
-
-## 22. ग्राफ-नोडच्या शीर्षस्थानी अपोलो फेडरेशन वापरणे शक्य आहे का?
+### 22. Is there a limit to how many objects The Graph can return per query?

-फेडरेशन अद्याप समर्थित नाही, जरी आम्हाला भविष्यात समर्थन करायचे आहे. याक्षणी, तुम्ही क्लायंटवर किंवा प्रॉक्सी सेवेद्वारे, स्कीमा स्टिचिंग वापरू शकता.
-
-## 23. प्रत्येक क्वेरीमध्ये आलेख किती वस्तू परत करू शकतो याची मर्यादा आहे का?
-
-डीफॉल्टनुसार, क्वेरी प्रतिसाद प्रति संग्रह 100 आयटमपर्यंत मर्यादित आहेत. तुम्हाला अधिक प्राप्त करायचे असल्यास, तुम्ही प्रति संग्रह 1000 आयटमपर्यंत जाऊ शकता आणि त्यापलीकडे तुम्ही यासह पृष्ठांकन करू शकता:
+By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with:

```graphql
someCollection(first: 1000, skip: <number>) { ... }
```

-## 24. जर माझे dapp फ्रंटएंड क्वेरींगसाठी आलेख वापरत असेल, तर मला माझी क्वेरी की थेट फ्रंटएंडमध्ये लिहावी लागेल का? आम्ही वापरकर्त्यांसाठी क्वेरी शुल्क भरल्यास काय - दुर्भावनापूर्ण वापरकर्त्यांमुळे आमच्या क्वेरी शुल्क खूप जास्त असेल?
-
-सध्या, dapp साठी शिफारस केलेला दृष्टीकोन म्हणजे फ्रंटएंडमध्ये की जोडणे आणि अंतिम वापरकर्त्यांसमोर ते उघड करणे. ते म्हणाले, तुम्ही ती की होस्टनावावर मर्यादित करू शकता, जसे की _yourdapp.io_ आणि सबग्राफ. गेटवे सध्या एज द्वारे चालवले जात आहे & नोड. गेटवेच्या जबाबदारीचा एक भाग म्हणजे अपमानास्पद वर्तनासाठी निरीक्षण करणे आणि दुर्भावनापूर्ण क्लायंटकडून रहदारी अवरोधित करणे.
-
-## 25. Where do I go to find my current subgraph on the hosted service?
-
-Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service).
-
-## 26. Will the hosted service start charging query fees?
+### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?

-The Graph will never charge for the hosted service. 
The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The hosted service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [आलेख नोड](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/mr/developing/graph-ts/api.mdx b/website/pages/mr/developing/graph-ts/api.mdx index 56a87f6b95a3..36c6c0e9db2d 100644 --- a/website/pages/mr/developing/graph-ts/api.mdx +++ b/website/pages/mr/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: असेंबलीस्क्रिप्ट API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -सबग्राफ मॅपिंग लिहिताना कोणते अंगभूत API वापरले जाऊ शकतात हे हे पृष्ठ दस्तऐवजीकरण करते. बॉक्सच्या बाहेर दोन प्रकारचे API उपलब्ध आहेत: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. 
+- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## API संदर्भ @@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. -| आवृत्ती | रिलीझ नोट्स | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| आवृत्ती | रिलीझ नोट्स | +| :-----: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### अंगभूत प्रकार @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -इतर घटकांशी टक्कर टाळण्यासाठी प्रत्येक घटकाकडे एक अद्वितीय आयडी असणे आवश्यक आहे. इव्हेंट पॅरामीटर्समध्ये वापरला जाऊ शकणारा एक अद्वितीय अभिज्ञापक समाविष्ट करणे सामान्य आहे. टीप: आयडी म्हणून ट्रान्झॅक्शन हॅश वापरणे हे गृहित धरते की समान व्यवहारातील इतर कोणत्याही इव्हेंटमध्ये या हॅशसह आयडी म्हणून अस्तित्व निर्माण होत नाही. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### स्टोअरमधून घटक लोड करत आहे @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### ब्लॉकसह तयार केलेल्या संस्था शोधत आहे As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a Transaction from some on-chain event, and a later handler wants to access this transaction if it exists. In the case where the transaction does not exist, the subgraph will have to go to the database just to find out that the entity does not exist; if the subgraph author already knows that the entity must have been created in the same block, using loadInBlock avoids this database roundtrip. For some subgraphs, these missed lookups can contribute significantly to the indexing time. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. 
+- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ As long as the `ERC20Contract` on Ethereum has a public read-only function calle #### रिव्हर्ट केलेले कॉल हाताळणे -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -लक्षात ठेवा की गेथ किंवा इन्फुरा क्लायंटशी कनेक्ट केलेला ग्राफ नोड सर्व रिव्हर्ट्स शोधू शकत नाही, जर तुम्ही यावर अवलंबून असाल तर आम्ही पॅरिटी क्लायंटशी कनेक्ट केलेला ग्राफ नोड वापरण्याची शिफारस करतो. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### एन्कोडिंग/डिकोडिंग ABI diff --git a/website/pages/mr/developing/supported-networks.mdx b/website/pages/mr/developing/supported-networks.mdx index 7c2d8d858261..797202065e99 100644 --- a/website/pages/mr/developing/supported-networks.mdx +++ b/website/pages/mr/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/mr/developing/unit-testing-framework.mdx b/website/pages/mr/developing/unit-testing-framework.mdx index 51465b1def7a..58083835bea1 100644 --- a/website/pages/mr/developing/unit-testing-framework.mdx +++ b/website/pages/mr/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ ___ ___ _ _ _ _ _ > गंभीर: संदर्भासह वैध मॉड्यूलमधून WasmInstance तयार करू शकलो नाही: अज्ञात आयात: wasi_snapshot_preview1::fd_write परिभाषित केले गेले नाही -याचा अर्थ तुम्ही तुमच्या कोडमध्ये `console.log` वापरले आहे, जे असेंबलीस्क्रिप्टद्वारे समर्थित नाही. 
कृपया [लॉगिंग API](/developing/assemblyscript-api/#logging-api) वापरण्याचा विचार करा +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > त्रुटी TS2554: अपेक्षित आहे? युक्तिवाद, पण मिळाले?. -> +> > नवीन ethereum.Transaction परत करा(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) `graph-ts` आणि `matchstick-as` मधील जुळत नसल्यामुळे वितर्कांमधील जुळत नाही. यासारख्या समस्यांचे निराकरण करण्याचा सर्वोत्तम मार्ग म्हणजे नवीनतम रिलीझ केलेल्या आवृत्तीवर सर्वकाही अद्यतनित करणे. diff --git a/website/pages/mr/glossary.mdx b/website/pages/mr/glossary.mdx index 3b293eb5489a..530e8d44993d 100644 --- a/website/pages/mr/glossary.mdx +++ b/website/pages/mr/glossary.mdx @@ -10,11 +10,9 @@ title: Glossary - **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **इंडेक्सर्स**: नेटवर्क सहभागी जे ब्लॉकचेनमधील डेटा इंडेक्स करण्यासाठी इंडेक्सिंग नोड्स चालवतात आणि GraphQL क्वेरी सर्व्ह करतात. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **इंडेक्सर रेव्हेन्यू स्ट्रीम्स**: GRT मध्ये इंडेक्सर्सना दोन घटकांसह पुरस्कृत केले जाते: क्वेरी फी रिबेट्स आणि इंडेक्सिंग रिवॉर्ड्स. @@ -24,17 +22,17 @@ title: Glossary - **इंडेक्सरचा सेल्फ स्टेक**: विकेंद्रीकृत नेटवर्कमध्ये भाग घेण्यासाठी इंडेक्सर्सची जीआरटीची रक्कम. किमान 100,000 GRT आहे आणि कोणतीही उच्च मर्यादा नाही. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. 
+- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **प्रतिनिधी**: नेटवर्क सहभागी जे GRT चे मालक आहेत आणि त्यांचे GRT इंडेक्सर्सना सोपवतात. हे इंडेक्सर्सना नेटवर्कवरील सबग्राफमध्ये त्यांची भागीदारी वाढविण्यास अनुमती देते. त्या बदल्यात, प्रतिनिधींना अनुक्रमणिका बक्षिसेचा एक भाग प्राप्त होतो जो इंडेक्सर्सना सबग्राफवर प्रक्रिया करण्यासाठी प्राप्त होतो. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **प्रतिनिधी कर**: प्रतिनिधींनी इंडेक्सर्सना GRT सोपवल्यावर 0.5% शुल्क. फी भरण्यासाठी वापरण्यात आलेला जीआरटी जळाला आहे. -- **क्युरेटर**: नेटवर्क सहभागी जे उच्च-गुणवत्तेचे सबग्राफ ओळखतात आणि क्युरेशन शेअर्सच्या बदल्यात त्यांना “क्युरेट” करतात (म्हणजे त्यांच्यावर GRT सिग्नल करतात). जेव्हा इंडेक्सर्स सबग्राफवर क्वेरी फीचा दावा करतात, तेव्हा 10% त्या सबग्राफच्या क्युरेटर्सना वितरित केले जातात. इंडेक्सर्स सबग्राफवरील सिग्नलच्या प्रमाणात अनुक्रमणिका बक्षिसे मिळवतात. आम्ही GRT सिग्नलची रक्कम आणि सबग्राफ इंडेक्स करणार्‍या इंडेक्सर्सची संख्या यांच्यातील परस्परसंबंध पाहतो. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **क्युरेशन टॅक्स**: क्युरेटर्सने सबग्राफवर GRT सिग्नल केल्यावर 1% फी भरली जाते. फी भरण्यासाठी वापरण्यात आलेला जीआरटी जळाला आहे. -- **सबग्राफ कंझ्युमर**: सबग्राफसाठी प्रश्न विचारणारा कोणताही अनुप्रयोग किंवा वापरकर्ता. +- **Data Consumer**: Any application or user that queries a subgraph. - **सबग्राफ डेव्हलपर**: एक विकासक जो ग्राफच्या विकेंद्रीकृत नेटवर्कवर सबग्राफ तयार करतो आणि तैनात करतो. @@ -46,11 +44,11 @@ title: Glossary 1. **सक्रिय**: ऑन-चेन तयार केल्यावर वाटप सक्रिय मानले जाते. याला वाटप उघडणे म्हणतात, आणि नेटवर्कला सूचित करते की इंडेक्सर सक्रियपणे अनुक्रमित करत आहे आणि विशिष्ट सबग्राफसाठी क्वेरी सर्व्ह करत आहे. सक्रिय वाटप सबग्राफवरील सिग्नल आणि वाटप केलेल्या GRT रकमेच्या प्रमाणात अनुक्रमणिका बक्षिसे जमा करतात. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. 
- **सबग्राफ स्टुडिओ**: सबग्राफ तयार करणे, उपयोजित करणे आणि प्रकाशित करणे यासाठी एक शक्तिशाली डॅप. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: Glossary - **GRT**: आलेखाचे कार्य उपयुक्तता टोकन. GRT नेटवर्क सहभागींना नेटवर्कमध्ये योगदान देण्यासाठी आर्थिक प्रोत्साहन देते. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **ग्राफ नोड**: ग्राफ नोड हा घटक आहे जो सबग्राफ अनुक्रमित करतो आणि परिणामी डेटा GraphQL API द्वारे क्वेरीसाठी उपलब्ध करतो. हे इंडेक्सर स्टॅकसाठी मध्यवर्ती आहे आणि यशस्वी इंडेक्सर चालवण्यासाठी ग्राफ नोडचे योग्य ऑपरेशन महत्वाचे आहे. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **इंडेक्सर एजंट**: इंडेक्सर एजंट इंडेक्सर स्टॅकचा भाग आहे. नेटवर्कवर नोंदणी करणे, त्याच्या ग्राफ नोड्सवर सबग्राफ उपयोजन व्यवस्थापित करणे आणि वाटप व्यवस्थापित करणे यासह इंडेक्सरच्या साखळीतील परस्परसंवाद सुलभ करते. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. 
- **द ग्राफ क्लायंट**: GraphQL-आधारित dapps विकेंद्रित पद्धतीने तयार करण्यासाठी लायब्ररी.

@@ -78,10 +76,6 @@ title: Glossary

- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake.

-- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network.
-
-- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings.
+- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings.

- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2).
-
-- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024.
diff --git a/website/pages/mr/index.json b/website/pages/mr/index.json
index 2e7b3ad18213..2c7aec422d6b 100644
--- a/website/pages/mr/index.json
+++ b/website/pages/mr/index.json
@@ -21,10 +21,6 @@
     "createASubgraph": {
      "title": "सबग्राफ तयार करा",
      "description": "सबग्राफ तयार करण्यासाठी स्टुडिओ वापरा"
-    },
-    "migrateFromHostedService": {
-      "title": "Upgrade from the hosted service",
-      "description": "Upgrading subgraphs to The Graph Network"
     }
   },
   "networkRoles": {
diff --git a/website/pages/mr/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/mr/managing/transfer-and-deprecate-a-subgraph.mdx
new file mode 100644
index 000000000000..54ba44934ce5
--- /dev/null
+++ b/website/pages/mr/managing/transfer-and-deprecate-a-subgraph.mdx
@@ -0,0 +1,65 @@
+---
+title: Transfer and Deprecate a Subgraph
+---
+
+## सबग्राफची मालकी हस्तांतरित करणे
+
+Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
+
+**Please note the following:**
+
+- Whoever owns the NFT controls the subgraph.
+- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network.
+- You can easily move control of a subgraph to a multi-sig.
+- A community member can create a subgraph on behalf of a DAO.
+
+### View your subgraph as an NFT
+
+To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
+
+```
+https://opensea.io/your-wallet-address
+```
+
+Or a wallet explorer like **Rainbow.me**:
+
+```
+https://rainbow.me/your-wallet-address
+```
+
+### Step-by-Step
+
+To transfer ownership of a subgraph, do the following:
+
+1. Use the UI built into Subgraph Studio:
+
+   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png)
+
+2. Choose the address that you would like to transfer the subgraph to:
+
+   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png)
+
+Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea:
+
+![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png)
+
+## Deprecating a subgraph
+
+Although you cannot delete a subgraph, you can deprecate it on Graph Explorer.
+
+### Step-by-Step
+
+To deprecate your subgraph, do the following:
+
+1. 
Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). +2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. +3. Your subgraph will no longer appear in searches on Graph Explorer. + +**Please note the following:** + +- The owner's wallet should call the `deprecateSubgraph` function. +- क्युरेटर यापुढे सबग्राफवर सिग्नल करू शकणार नाहीत. +- Curators that already signaled on the subgraph can withdraw their signal at an average share price. +- Deprecated subgraphs will show an error message. + +> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/mr/mips-faqs.mdx b/website/pages/mr/mips-faqs.mdx index ae460989f96e..1f7553923765 100644 --- a/website/pages/mr/mips-faqs.mdx +++ b/website/pages/mr/mips-faqs.mdx @@ -6,10 +6,6 @@ title: MIPs FAQs > Note: the MIPs program is closed as of May 2023. Thank you to all the Indexers who participated! -It's an exciting time to be participating in The Graph ecosystem! During [Graph Day 2022](https://thegraph.com/graph-day/2022/) Yaniv Tal announced the [sunsetting of the hosted service](https://thegraph.com/blog/sunsetting-hosted-service/), a moment The Graph ecosystem has been working towards for many years. - -To support the sunsetting of the hosted service and the migration of all of it's activity to the decentralized network, The Graph Foundation has announced the [Migration Infrastructure Providers (MIPs) program](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program). - The MIPs program is an incentivization program for Indexers to support them with resources to index chains beyond Ethereum mainnet and help The Graph protocol expand the decentralized network into a multi-chain infrastructure layer. The MIPs program has allocated 0.75% of the GRT supply (75M GRT), with 0.5% to reward Indexers who contribute to bootstrapping the network and 0.25% allocated to Network Grants for subgraph developers using multi-chain subgraphs. 
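+
+> A quick sanity check of the allocation figures above (a sketch only; the total supply is inferred from the numbers quoted in this paragraph rather than read from the protocol):
+
+```typescript
+// 0.75% of the GRT supply = 75M GRT, split into 0.5% for Indexer rewards and 0.25% for Network Grants.
+const programAllocationGRT = 75_000_000
+const BASIS_POINTS = 10_000
+const programShareBps = 75 // 0.75%
+
+// Implied total supply: 75M GRT / 0.75% = 10B GRT.
+const impliedTotalSupplyGRT = (programAllocationGRT * BASIS_POINTS) / programShareBps
+
+const indexerRewardsGRT = (impliedTotalSupplyGRT * 50) / BASIS_POINTS // 0.5% -> 50M GRT
+const networkGrantsGRT = (impliedTotalSupplyGRT * 25) / BASIS_POINTS // 0.25% -> 25M GRT
+
+console.log({ impliedTotalSupplyGRT, indexerRewardsGRT, networkGrantsGRT })
+```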
diff --git a/website/pages/mr/network/benefits.mdx b/website/pages/mr/network/benefits.mdx index b8b01f42b679..44616e68152e 100644 --- a/website/pages/mr/network/benefits.mdx +++ b/website/pages/mr/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| खर्चाची तुलना | स्वत: होस्ट केलेले | आलेख नेटवर्क | -| :-: | :-: | :-: | -| मासिक सर्व्हर खर्च\* | दरमहा $350 | $0 | -| क्वेरी खर्च | $0+ | $0 per month | -| अभियांत्रिकी वेळ | दरमहा $400 | काहीही नाही, जागतिक स्तरावर वितरित इंडेक्सर्ससह नेटवर्कमध्ये तयार केलेले | -| प्रति महिना प्रश्न | इन्फ्रा क्षमतांपुरती मर्यादित | 100,000 (Free Plan) | -| प्रति क्वेरी खर्च | $0 | $0 | -| पायाभूत सुविधा | केंद्रीकृत | विकेंद्रित | -| भौगोलिक रिडंडंसी | प्रति अतिरिक्त नोड $750+ | समाविष्ट | -| अपटाइम | बदलते | 99.9%+ | -| एकूण मासिक खर्च | $750+ | $0 | +| खर्चाची तुलना | स्वत: होस्ट केलेले | आलेख नेटवर्क | +|:----------------------------:|:---------------------------------------:|:------------------------------------------------------------------------:| +| मासिक सर्व्हर खर्च\* | दरमहा $350 | $0 | +| क्वेरी खर्च | $0+ | $0 per month | +| अभियांत्रिकी वेळ | दरमहा $400 | काहीही नाही, जागतिक स्तरावर वितरित इंडेक्सर्ससह नेटवर्कमध्ये तयार केलेले | +| प्रति महिना प्रश्न | इन्फ्रा क्षमतांपुरती मर्यादित | 100,000 (Free Plan) | +| प्रति क्वेरी खर्च | $0 | $0 | +| पायाभूत सुविधा | केंद्रीकृत | विकेंद्रित | +| भौगोलिक रिडंडंसी | प्रति अतिरिक्त नोड $750+ | समाविष्ट | +| अपटाइम | बदलते | 99.9%+ | +| एकूण मासिक खर्च | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| खर्चाची तुलना | स्वत: होस्ट केलेले | आलेख नेटवर्क | -| :-: | :-: | :-: | -| मासिक सर्व्हर खर्च\* | दरमहा $350 | $0 | -| क्वेरी खर्च | दरमहा $500 | $120 per month | -| अभियांत्रिकी वेळ | दरमहा $800 | काहीही नाही, जागतिक स्तरावर वितरित इंडेक्सर्ससह नेटवर्कमध्ये तयार केलेले | -| प्रति महिना प्रश्न | इन्फ्रा क्षमतांपुरती मर्यादित | ~3,000,000 | -| प्रति क्वेरी खर्च | $0 | $0.00004 | -| पायाभूत सुविधा | केंद्रीकृत | विकेंद्रित | -| अभियांत्रिकी खर्च | $200 प्रति तास | समाविष्ट | -| भौगोलिक रिडंडंसी | प्रति अतिरिक्त नोड एकूण खर्चात $1,200 | समाविष्ट | -| अपटाइम | बदलते | 99.9%+ | -| एकूण मासिक खर्च | $1,650+ | $120 | +| खर्चाची तुलना | स्वत: होस्ट केलेले | आलेख नेटवर्क | +|:----------------------------:|:------------------------------------------:|:------------------------------------------------------------------------:| +| मासिक सर्व्हर खर्च\* | दरमहा $350 | $0 | +| क्वेरी खर्च | दरमहा $500 | $120 per month | +| अभियांत्रिकी वेळ | दरमहा $800 | काहीही नाही, जागतिक स्तरावर वितरित इंडेक्सर्ससह नेटवर्कमध्ये तयार केलेले | +| प्रति महिना प्रश्न | इन्फ्रा क्षमतांपुरती मर्यादित | ~3,000,000 | +| प्रति क्वेरी खर्च | $0 | $0.00004 | +| पायाभूत सुविधा | केंद्रीकृत | विकेंद्रित | +| अभियांत्रिकी खर्च | $200 प्रति तास | समाविष्ट | +| भौगोलिक रिडंडंसी | प्रति अतिरिक्त नोड एकूण खर्चात $1,200 | समाविष्ट | +| अपटाइम | बदलते | 99.9%+ | +| एकूण मासिक खर्च | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| खर्चाची तुलना | स्वत: होस्ट केलेले | आलेख नेटवर्क | -| :-: | :-: | :-: | -| मासिक सर्व्हर खर्च\* | प्रति नोड, प्रति महिना $1100 | $0 | -| क्वेरी खर्च | $4000 | $1,200 per month | -| आवश्यक नोड्सची संख्या | 10 | लागू नाही | -| अभियांत्रिकी वेळ | दरमहा $6,000 किंवा अधिक | काहीही नाही, जागतिक स्तरावर वितरित इंडेक्सर्ससह नेटवर्कमध्ये तयार केलेले | -| प्रति महिना प्रश्न | इन्फ्रा क्षमतांपुरती मर्यादित | ~30,000,000 | -| प्रति क्वेरी खर्च | $0 | $0.00004 | -| पायाभूत सुविधा | 
केंद्रीकृत | विकेंद्रित | -| भौगोलिक रिडंडंसी | प्रति अतिरिक्त नोड एकूण खर्चात $1,200 | समाविष्ट | -| अपटाइम | बदलते | 99.9%+ | -| एकूण मासिक खर्च | $11,000+ | $1,200 | +| खर्चाची तुलना | स्वत: होस्ट केलेले | आलेख नेटवर्क | +|:----------------------------:|:-------------------------------------------:|:------------------------------------------------------------------------:| +| मासिक सर्व्हर खर्च\* | प्रति नोड, प्रति महिना $1100 | $0 | +| क्वेरी खर्च | $4000 | $1,200 per month | +| आवश्यक नोड्सची संख्या | 10 | लागू नाही | +| अभियांत्रिकी वेळ | दरमहा $6,000 किंवा अधिक | काहीही नाही, जागतिक स्तरावर वितरित इंडेक्सर्ससह नेटवर्कमध्ये तयार केलेले | +| प्रति महिना प्रश्न | इन्फ्रा क्षमतांपुरती मर्यादित | ~30,000,000 | +| प्रति क्वेरी खर्च | $0 | $0.00004 | +| पायाभूत सुविधा | केंद्रीकृत | विकेंद्रित | +| भौगोलिक रिडंडंसी | प्रति अतिरिक्त नोड एकूण खर्चात $1,200 | समाविष्ट | +| अपटाइम | बदलते | 99.9%+ | +| एकूण मासिक खर्च | $11,000+ | $1,200 | \*बॅकअपच्या खर्चासह: $50-$100 प्रति महिना diff --git a/website/pages/mr/network/curating.mdx b/website/pages/mr/network/curating.mdx index 2bf60e3da154..3dd47211773a 100644 --- a/website/pages/mr/network/curating.mdx +++ b/website/pages/mr/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. 
If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Within the Curator tab in Graph Explorer, curators will be able to signal and un तुमचा सिग्नल नवीनतम प्रोडक्शन बिल्डवर आपोआप स्थलांतरित होणे हे तुम्ही क्वेरी फी जमा करत असल्याचे सुनिश्चित करण्यासाठी मौल्यवान असू शकते. प्रत्येक वेळी तुम्ही क्युरेट करता तेव्हा 1% क्युरेशन कर लागतो. तुम्ही प्रत्येक स्थलांतरावर 0.5% क्युरेशन कर देखील द्याल. सबग्राफ विकसकांना वारंवार नवीन आवृत्त्या प्रकाशित करण्यापासून परावृत्त केले जाते - त्यांना सर्व स्वयं-स्थलांतरित क्युरेशन शेअर्सवर 0.5% क्युरेशन कर भरावा लागतो. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## जोखीम 1. द ग्राफमध्ये क्वेरी मार्केट मूळतः तरुण आहे आणि नवीन मार्केट डायनॅमिक्समुळे तुमचा %APY तुमच्या अपेक्षेपेक्षा कमी असण्याचा धोका आहे. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. बगमुळे सबग्राफ अयशस्वी होऊ शकतो. अयशस्वी सबग्राफ क्वेरी शुल्क जमा करत नाही. परिणामी, विकसक बगचे निराकरण करेपर्यंत आणि नवीन आवृत्ती तैनात करेपर्यंत तुम्हाला प्रतीक्षा करावी लागेल. - तुम्ही सबग्राफच्या नवीनतम आवृत्तीचे सदस्यत्व घेतले असल्यास, तुमचे शेअर्स त्या नवीन आवृत्तीमध्ये स्वयंचलितपणे स्थलांतरित होतील. यावर 0.5% क्युरेशन कर लागेल. @@ -65,7 +65,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th उच्च-गुणवत्तेचे सबग्राफ शोधणे हे एक जटिल कार्य आहे, परंतु ते वेगवेगळ्या मार्गांनी संपर्क साधले जाऊ शकते. क्युरेटर म्हणून, तुम्हाला विश्वासार्ह सबग्राफ्स शोधायचे आहेत जे क्वेरी व्हॉल्यूम वाढवत आहेत. dApp च्या डेटा गरजा पूर्ण, अचूक आणि सपोर्ट करत असल्यास विश्वासार्ह सबग्राफ मौल्यवान असू शकतो. 
खराब आर्किटेक्‍ट सबग्राफ सुधारित करणे किंवा पुन्हा प्रकाशित करणे आवश्यक असू शकते आणि ते अयशस्वी देखील होऊ शकते. सबग्राफ मौल्यवान आहे की नाही याचे मूल्यांकन करण्यासाठी सबग्राफच्या आर्किटेक्चर किंवा कोडचे पुनरावलोकन करणे क्युरेटर्ससाठी महत्त्वपूर्ण आहे. परिणामी: -- क्युरेटर्स नेटवर्कबद्दलची त्यांची समज वापरून प्रयत्न करू शकतात आणि भविष्यात वैयक्तिक सबग्राफ अधिक किंवा कमी क्वेरी व्हॉल्यूम कसा निर्माण करू शकतो याचा अंदाज लावू शकतात +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. What’s the cost of updating a subgraph? @@ -78,50 +78,14 @@ It’s suggested that you don’t update your subgraphs too frequently. See the ### 5. मी माझे क्युरेशन शेअर्स विकू शकतो का? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. 
For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## बाँडिंग वक्र 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![प्रति शेअर किंमत](/img/price-per-share.png) - -परिणामी, किंमत एकरेषेने वाढते, याचा अर्थ वेळोवेळी शेअर खरेदी करणे अधिक महाग होईल. आम्हाला काय म्हणायचे आहे याचे एक उदाहरण येथे आहे, खाली बाँडिंग वक्र पहा: - -![बाँडिंग वक्र](/img/bonding-curve.png) - -आमच्याकडे दोन क्युरेटर आहेत जे सबग्राफसाठी शेअर करतात याचा विचार करा: - -- क्युरेटर A हा सबग्राफवर सिग्नल देणारा पहिला आहे. वक्र मध्ये 120,000 GRT जोडून, ते 2000 शेअर्स मिंट करण्यास सक्षम आहेत. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- दोन्ही क्युरेटर्सचे एकूण क्युरेशन शेअर्सचे अर्धे भाग असल्याने, त्यांना समान प्रमाणात क्युरेटर रॉयल्टी मिळेल. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- उर्वरित क्युरेटर आता त्या सबग्राफसाठी सर्व क्युरेटर रॉयल्टी प्राप्त करतील. जर त्यांनी GRT काढण्यासाठी त्यांचे शेअर्स जाळले तर त्यांना 120,000 GRT मिळतील. -- **TLDR:** क्युरेशन शेअर्सचे GRT मूल्यांकन बाँडिंग वक्र द्वारे निर्धारित केले जाते आणि ते अस्थिर असू शकते. मोठे नुकसान होण्याची शक्यता आहे. लवकर सिग्नल देणे म्हणजे तुम्ही प्रत्येक शेअरसाठी कमी GRT टाकता. विस्तारानुसार, याचा अर्थ तुम्ही त्याच सबग्राफसाठी नंतरच्या क्युरेटर्सपेक्षा प्रति GRT अधिक क्युरेटर रॉयल्टी मिळवता. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -The Graph च्या बाबतीत, [बँकोरची बाँडिंग वक्र फॉर्मची अंमलबजावणी](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) फायदा घेतला जातो. - अजूनही गोंधळलेले? खाली आमचे क्युरेशन व्हिडिओ मार्गदर्शक पहा: diff --git a/website/pages/mr/network/delegating.mdx b/website/pages/mr/network/delegating.mdx index 0380a837269d..9578b645b665 100644 --- a/website/pages/mr/network/delegating.mdx +++ b/website/pages/mr/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegating --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? 
+ +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Delegator Guide -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,64 +34,86 @@ Listed below are the main risks of being a Delegator in the protocol. वाईट वर्तनासाठी प्रतिनिधींना कमी केले जाऊ शकत नाही, परंतु नेटवर्कच्या अखंडतेला हानी पोहोचवणाऱ्या खराब निर्णयक्षमतेला प्रोत्साहन देण्यासाठी प्रतिनिधींवर कर लावला जातो. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### The delegation unbonding period Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
    ![प्रतिनिधी अनबॉन्डिंग](/img/Delegation-Unbonding.png) _डेलिगेशन UI मध्ये 0.5% शुल्क तसेच २८ दिवसांची नोंद घ्या - अनबॉन्डिंग कालावधी._ + अनबॉन्डिंग कालावधी._
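+
+> As a back-of-the-envelope sketch of the return math discussed above (every number below is a hypothetical placeholder for illustration, not a protocol value; actual returns depend on the Indexer's parameters and network conditions):
+
+```typescript
+// Hypothetical inputs: substitute figures for the Indexer you are actually evaluating.
+const delegatedGRT = 10_000 // GRT you plan to delegate
+const delegationTax = 0.005 // 0.5% of the delegation, burned up front
+const indexingRewardCut = 0.8 // the Indexer keeps 80%; Delegators share the remaining 20%
+const assumedYearlyRewardRate = 0.1 // assumed rewards earned on delegated stake per year
+
+const taxPaidGRT = delegatedGRT * delegationTax // 50 GRT burned
+const delegatorShare = 1 - indexingRewardCut // 20% of rewards flow to Delegators
+const estimatedDailyRewardGRT =
+  (delegatedGRT * assumedYearlyRewardRate * delegatorShare) / 365
+
+// Rough number of days of rewards needed to earn back the 0.5% delegation tax.
+const breakEvenDays = taxPaidGRT / estimatedDailyRewardGRT
+console.log(`~${Math.ceil(breakEvenDays)} days to earn back the delegation tax`)
+```
+
+> With these placeholder numbers the tax is earned back in roughly three months; a higher reward rate or a lower reward cut shortens that period.
+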
    ### Choosing a trustworthy Indexer with a fair reward payout for Delegators -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
    - ![इंडेक्सिंग रिवॉर्ड कट](/img/Indexing-Reward-Cut.png) *टॉप इंडेक्सर प्रतिनिधींना 90% बक्षिसे देत आहे. द मधला - प्रतिनिधी 20% देत आहे. सर्वात खालचा भाग प्रतिनिधींना ~83% देत आहे.* + ![इंडेक्सिंग रिवॉर्ड कट](/img/Indexing-Reward-Cut.png) *टॉप इंडेक्सर प्रतिनिधींना 90% बक्षिसे देत आहे. द + मधला प्रतिनिधी 20% देत आहे. सर्वात खालचा भाग प्रतिनिधींना ~83% देत आहे.*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant.
+- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects.
+
+As you can see, in order to choose the right Indexer, you must consider multiple things.

-As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
+- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently.
+- Many Indexers are very active in Discord and will be happy to answer your questions.
+- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.

-### Calculating Delegators expected return
+## Calculating Delegators Expected Return

-A Delegator must consider a lot of factors when determining the return. These include:
+A Delegator must consider the following factors to determine a return:

-- तांत्रिक प्रतिनिधी त्यांच्याकडे उपलब्ध असलेले प्रतिनिधी टोकन वापरण्याची इंडेक्सरची क्षमता देखील पाहू शकतो. जर इंडेक्सर उपलब्ध असलेल्या सर्व टोकन्सचे वाटप करत नसेल, तर ते स्वतःसाठी किंवा त्यांच्या प्रतिनिधींसाठी जास्तीत जास्त नफा मिळवत नाहीत.
-- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days.
+- Consider an Indexer's ability to use the Delegated tokens available to them.
+  - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators.
+- Pay attention to the first few days of delegating.
+  - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low.

### Considering the query fee cut and indexing fee cut

-वरील विभागांमध्ये वर्णन केल्याप्रमाणे, तुम्ही एक इंडेक्सर निवडला पाहिजे जो त्यांच्या क्वेरी फी कट आणि इंडेक्सिंग फी कट सेट करण्याबद्दल पारदर्शक आणि प्रामाणिक असेल. त्यांच्याकडे किती वेळ बफर आहे हे पाहण्यासाठी प्रतिनिधीने पॅरामीटर्स कूलडाउन टाइम देखील पहावे. ते पूर्ण झाल्यानंतर, प्रतिनिधींना किती बक्षिसे मिळत आहेत याची गणना करणे अगदी सोपे आहे. सूत्र आहे:
+You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. 
You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Delegation Image 3](/img/Delegation-Reward-Formula.png) ### Considering the Indexer's delegation pool -प्रतिनिधीने विचारात घेण्याची आणखी एक गोष्ट म्हणजे त्यांच्या मालकीचे प्रतिनिधी पूल किती आहे. डेलिगेटरने पूलमध्ये जमा केलेल्या रकमेद्वारे निर्धारित पूलच्या साध्या पुनर्संतुलनासह, सर्व प्रतिनिधी पुरस्कार समान रीतीने सामायिक केले जातात. हे डेलिगेटरला पूलचा वाटा देते: +Delegators should consider the proportion of the Delegation Pool they own. -![Share formula](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Share formula](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Considering the delegation capacity -विचारात घेण्यासारखी दुसरी गोष्ट म्हणजे प्रतिनिधींची क्षमता. सध्या, डेलिगेशन रेशो 16 वर सेट केले आहे. याचा अर्थ असा की जर एखाद्या इंडेक्सरने 1,000,000 GRT स्टेक केले असेल, तर त्यांची डेलिगेशन क्षमता 16,000,000 GRT डेलिगेटेड टोकन्स आहे जी ते प्रोटोकॉलमध्ये वापरू शकतात. या रकमेवरील कोणतेही डेलिगेट केलेले टोकन सर्व डेलिगेटर रिवॉर्ड्स कमी करतील. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -89,16 +121,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### MetaMask "Pending Transaction" Bug -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. + +#### उदाहरण -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. 
+
Let's say you attempt to delegate with an insufficient gas fee relative to the current prices.

-For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.
+- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will not be processed until the initial transaction is mined, because transactions for an address must be processed in order.
+- In such cases, these transactions can be cancelled in MetaMask, but the transaction attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.

-A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.
+A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.

-## Video guide for the network UI
+## Video Guide

-This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI.
+This video guide reviews this page while interacting with the UI.

diff --git a/website/pages/mr/network/developing.mdx b/website/pages/mr/network/developing.mdx
index 9206b99c72cf..788231dedec6 100644
--- a/website/pages/mr/network/developing.mdx
+++ b/website/pages/mr/network/developing.mdx
@@ -2,52 +2,88 @@ title: विकसनशील
 ---

-डेव्हलपर्स ही ग्राफ इकोसिस्टमची मागणीची बाजू आहे. विकसक सबग्राफ तयार करतात आणि ग्राफ नेटवर्कवर प्रकाशित करतात. त्यानंतर, ते त्यांच्या अनुप्रयोगांना सामर्थ्य देण्यासाठी GraphQL सह थेट सबग्राफ्सची क्वेरी करतात.
+To start coding right away, go to [Developer Quick Start](/quick-start/).
+
+## सविश्लेषण
+
+As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue.
+
+On The Graph, you can:
+
+1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing subgraphs.
+
+### What is GraphQL?
+
+- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+
+### Developer Actions
+
+- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
+- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. 
+- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? + +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `subgraph.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). +- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). ## सबग्राफ लाइफसायकल -नेटवर्कवर उपयोजित सबग्राफ्समध्ये एक परिभाषित जीवनचक्र असते. +Here is a general overview of a subgraph’s lifecycle: -### स्थानिक पातळीवर तयार करा +![सबग्राफ लाइफसायकल](/img/subgraph-lifecycle.png) -सर्व उपग्राफ विकासाप्रमाणे, हे स्थानिक विकास आणि चाचणीसह सुरू होते. डेव्हलपर समान स्थानिक सेटअप वापरू शकतात मग ते ग्राफ नेटवर्क, होस्ट केलेल्या सेवेसाठी किंवा स्थानिक ग्राफ नोडसाठी, `ग्राफ-क्ली` आणि `ग्राफ-टीएस` चा वापर करत असतील. सबग्राफ विकसकांना त्यांच्या सबग्राफची मजबूती सुधारण्यासाठी युनिट चाचणीसाठी [मॅचस्टिक](https://github.com/LimeChain/matchstick) सारखी साधने वापरण्यास प्रोत्साहित केले जाते. +### स्थानिक पातळीवर तयार करा -> ग्राफ नेटवर्कवर वैशिष्ट्य आणि नेटवर्क समर्थनाच्या बाबतीत काही मर्यादा आहेत. केवळ [समर्थित नेटवर्क](/developing/supported-networks) वरील सबग्राफ अनुक्रमित पुरस्कार मिळवतील आणि IPFS कडून डेटा आणणारे सबग्राफ देखील पात्र नाहीत. +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Deploy to Subgraph Studio -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. - -### नेटवर्कवर प्रकाशित करा +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -जेव्हा विकसक त्यांच्या सबग्राफसह आनंदी असतो, तेव्हा ते ग्राफ नेटवर्कवर प्रकाशित करू शकतात. ही एक ऑन-चेन क्रिया आहे, जी सबग्राफची नोंदणी करते जेणेकरून ते इंडेक्सर्सद्वारे शोधता येईल. प्रकाशित सबग्राफमध्ये संबंधित NFT असतो, जो नंतर सहजपणे हस्तांतरित करता येतो. प्रकाशित सबग्राफमध्ये मेटाडेटा संबद्ध आहे, जो इतर नेटवर्क सहभागींना उपयुक्त संदर्भ आणि माहिती प्रदान करतो. +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. -### इंडेक्सिंगला प्रोत्साहन देण्यासाठी सिग्नल +### नेटवर्कवर प्रकाशित करा -प्रकाशित सबग्राफ इंडेक्सर्सद्वारे सिग्नल जोडल्याशिवाय उचलले जाण्याची शक्यता नाही. 
सिग्नल हा दिलेल्या सबग्राफशी संबंधित GRT लॉक केलेला आहे, जो निर्देशांककर्त्यांना सूचित करतो की दिलेल्या सबग्राफला क्वेरी व्हॉल्यूम प्राप्त होईल आणि त्यावर प्रक्रिया करण्यासाठी उपलब्ध अनुक्रमणिका बक्षिसेमध्ये देखील योगदान होते. इंडेक्सिंगला प्रोत्साहन देण्यासाठी सबग्राफ डेव्हलपर सामान्यतः त्यांच्या सबग्राफमध्ये सिग्नल जोडतील. तृतीय पक्ष क्युरेटर्स दिलेल्या सबग्राफवर सिग्नल देखील करू शकतात, जर त्यांना सबग्राफ क्वेरी व्हॉल्यूम वाढवण्याची शक्यता वाटत असेल. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### चौकशी & अनुप्रयोग विकास +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -इंडेक्सर्सद्वारे सबग्राफवर प्रक्रिया केल्यानंतर आणि क्वेरीसाठी उपलब्ध झाल्यानंतर, विकासक त्यांच्या अनुप्रयोगांमध्ये सबग्राफ वापरण्यास प्रारंभ करू शकतात. विकासक गेटवेद्वारे सबग्राफ्सची क्वेरी करतात, जे त्यांच्या क्वेरी GRT मध्ये क्वेरी शुल्क भरून सबग्राफवर प्रक्रिया केलेल्या इंडेक्सरकडे पाठवतात. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Updating Subgraphs +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### चौकशी & अनुप्रयोग विकास -Once the Subgraph Developer is ready to update, they can initiate a transaction to point their subgraph at the new version. Updating the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### उपग्राफ नापसंत करत आहे +Learn more about [querying subgraphs](/querying/querying-the-graph/). 
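+
+> As a purely illustrative sketch (the API key and subgraph ID in the URL are placeholders, and the `transfers` entity and its fields are only an example; use whatever entities your own subgraph's schema defines), a dapp can query a published subgraph with a standard GraphQL request over HTTP:
+
+```typescript
+// Placeholder endpoint: The Graph's gateway, your API key, and the subgraph's ID.
+const endpoint =
+  'https://gateway.thegraph.com/api/YOUR_API_KEY/subgraphs/id/YOUR_SUBGRAPH_ID'
+
+// Example query; the entity and fields must match the subgraph's schema.
+const query = `{
+  transfers(first: 5) {
+    id
+    from
+    to
+  }
+}`
+
+async function querySubgraph(): Promise<void> {
+  const response = await fetch(endpoint, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({ query }),
+  })
+  const { data, errors } = await response.json()
+  if (errors) throw new Error(JSON.stringify(errors))
+  console.log(data)
+}
+
+querySubgraph()
+```
+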
-एखाद्या वेळी विकासक ठरवू शकतो की त्यांना यापुढे प्रकाशित सबग्राफची आवश्यकता नाही. त्या वेळी ते सबग्राफचे अवमूल्यन करू शकतात, जे क्युरेटर्सना कोणतेही सिग्नल केलेले GRT परत करतात. +### Updating Subgraphs -### विविध विकासक भूमिका +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -काही डेव्हलपर नेटवर्कवरील पूर्ण सबग्राफ लाइफसायकलमध्ये व्यस्त राहतील, त्यांच्या स्वतःच्या सबग्राफवर प्रकाशित, क्वेरी आणि पुनरावृत्ती करतील. काही सबग्राफ डेव्हलपमेंटवर लक्ष केंद्रित करू शकतात, ओपन API तयार करू शकतात ज्यावर इतर तयार करू शकतात. काही अनुप्रयोग केंद्रित असू शकतात, इतरांनी उपयोजित केलेल्या सबग्राफची चौकशी करतात. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### विकसक आणि नेटवर्क इकॉनॉमिक्स +### Deprecating & Transferring Subgraphs -Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is updated. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/mr/network/explorer.mdx b/website/pages/mr/network/explorer.mdx index efbb87aa2820..1bf46de5f24a 100644 --- a/website/pages/mr/network/explorer.mdx +++ b/website/pages/mr/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgraphs -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. +After you just finish deploying and publishing your subgraph in Subgraph Studio, click on the "subgraphs tab” at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). 
![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -जेव्हा तुम्ही सबग्राफवर क्लिक करता, तेव्हा तुम्ही खेळाच्या मैदानात प्रश्नांची चाचणी घेण्यास सक्षम व्हाल आणि माहितीपूर्ण निर्णय घेण्यासाठी नेटवर्क तपशीलांचा फायदा घेण्यास सक्षम असाल. इंडेक्सर्सना त्याचे महत्त्व आणि गुणवत्तेची जाणीव करून देण्यासाठी तुम्ही तुमच्या स्वतःच्या सबग्राफवर किंवा इतरांच्या सबग्राफवर GRT सिग्नल करण्यास सक्षम असाल. हे गंभीर आहे कारण सबग्राफवर सिग्नल केल्याने ते अनुक्रमित होण्यासाठी प्रोत्साहन मिळते, याचा अर्थ असा आहे की शेवटी क्वेरी सर्व्ह करण्यासाठी ते नेटवर्कवर येईल. +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, several details are surfaced. These include: +On each subgraph’s dedicated page, you can do the following: - Signal/Un-signal on subgraphs - चार्ट, वर्तमान उपयोजन आयडी आणि इतर मेटाडेटा यासारखे अधिक तपशील पहा @@ -31,26 +45,32 @@ On each subgraph’s dedicated page, several details are surfaced. These include ## Participants -या टॅबमध्‍ये, इंडेक्सर्स, डेलिगेटर्स आणि क्युरेटर्स यांसारख्या नेटवर्क अ‍ॅक्टिव्हिटीमध्ये भाग घेणाऱ्या सर्व लोकांचे विहंगम दृश्य तुम्हाला मिळेल. खाली, आम्ही प्रत्येक टॅबचा तुमच्यासाठी काय अर्थ होतो याचे सखोल पुनरावलोकन करू. +This section provides a bird' s-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. Indexers ![Explorer Image 4](/img/Indexer-Pane.png) -चला इंडेक्सर्ससह प्रारंभ करूया. इंडेक्सर्स हे प्रोटोकॉलचा कणा आहेत, जे सबग्राफवर भाग घेतात, त्यांना अनुक्रमित करतात आणि सबग्राफ वापरणार्‍या कोणालाही प्रश्न देतात. इंडेक्सर्स टेबलमध्ये, तुम्ही इंडेक्सर्सचे डेलिगेशन पॅरामीटर्स, त्यांची हिस्सेदारी, त्यांनी प्रत्येक सबग्राफमध्ये किती भाग घेतला आहे आणि त्यांनी क्वेरी फी आणि इंडेक्सिंग रिवॉर्ड्समधून किती कमाई केली आहे हे पाहण्यास सक्षम असाल. खाली खोल गोतावळा: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators -- प्रभावी रिवॉर्ड कट - इंडेक्सिंग रिवॉर्ड कट डेलिगेशन पूलवर लागू केला जातो. जर ते नकारात्मक असेल, तर याचा अर्थ असा आहे की इंडेक्सर त्यांच्या पुरस्कारांचा काही भाग देत आहे. जर ते सकारात्मक असेल, तर याचा अर्थ असा की इंडेक्सर त्यांचे काही बक्षिसे ठेवत आहे -- Cooldown Remaining - इंडेक्सर वरील डेलिगेशन पॅरामीटर्स बदलू शकत नाही तोपर्यंत उरलेला वेळ. 
इंडेक्सर्स जेव्हा त्यांचे डेलिगेशन पॅरामीटर्स अपडेट करतात तेव्हा कूलडाउन पीरियड्स सेट केले जातात -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing -- उपलब्ध डेलिगेशन कॅपॅसिटी - इंडेक्सर्सना जास्त डेलिगेशन होण्याआधीही डेलिगेटेड स्टेकची रक्कम मिळू शकते +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - कमाल डेलिगेशन क्षमता - इंडेक्सर उत्पादकपणे स्वीकारू शकणारी जास्तीत जास्त डेलिगेटेड स्टेक. वाटप किंवा बक्षिसे गणनेसाठी जास्तीचा वाटप केला जाऊ शकत नाही. -- क्वेरी फी - हे एकूण शुल्क आहे जे शेवटच्या वापरकर्त्यांनी इंडेक्सरकडून नेहमीच्या क्वेरींसाठी दिले आहे +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - इंडेक्सर रिवॉर्ड्स - हे इंडेक्सर आणि त्यांच्या प्रतिनिधींनी सर्वकाळात मिळवलेले एकूण इंडेक्सर रिवॉर्ड्स आहेत. इंडेक्सर रिवॉर्ड्स GRT जारी करून दिले जातात. -इंडेक्सर्स क्वेरी फी आणि इंडेक्सिंग रिवॉर्ड दोन्ही मिळवू शकतात. कार्यात्मकपणे, जेव्हा नेटवर्क सहभागी इंडेक्सरला GRT सोपवतात तेव्हा असे घडते. हे इंडेक्सर्सना त्यांच्या इंडेक्सर पॅरामीटर्सवर अवलंबून क्वेरी फी आणि रिवॉर्ड प्राप्त करण्यास सक्षम करते. इंडेक्सिंग पॅरामीटर्स टेबलच्या उजव्या बाजूला क्लिक करून किंवा इंडेक्सरच्या प्रोफाइलमध्ये जाऊन "प्रतिनिधी" बटणावर क्लिक करून सेट केले जातात. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. इंडेक्सर कसे व्हावे याबद्दल अधिक जाणून घेण्यासाठी, तुम्ही [अधिकृत दस्तऐवज](/network/indexing) किंवा [द ग्राफ अकादमी इंडेक्सर मार्गदर्शक](https://thegraph.academy/delegators/ पाहू शकता choosing-indexers/) @@ -58,9 +78,13 @@ On each subgraph’s dedicated page, several details are surfaced. These include ### 2. Curators -कोणते सबग्राफ उच्च दर्जाचे आहेत हे ओळखण्यासाठी क्युरेटर सबग्राफचे विश्लेषण करतात. एकदा क्युरेटरला संभाव्य आकर्षक सबग्राफ सापडला की, ते त्याच्या बाँडिंग वक्र वर सिग्नल करून ते क्युरेट करू शकतात. असे केल्याने, क्युरेटर्स इंडेक्सर्सना कळवतात की कोणते सबग्राफ उच्च दर्जाचे आहेत आणि ते अनुक्रमित केले पाहिजेत. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. 
Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -क्युरेटर समुदाय सदस्य, डेटा ग्राहक किंवा अगदी सबग्राफ डेव्हलपर असू शकतात जे GRT टोकन बाँडिंग वक्रमध्ये जमा करून त्यांच्या स्वतःच्या सबग्राफवर सिग्नल करतात. GRT जमा करून, क्युरेटर्स सबग्राफचे क्युरेशन शेअर्स मिंट करतात. परिणामी, क्युरेटर्स क्वेरी फीचा एक भाग मिळविण्यास पात्र आहेत ज्यावर त्यांनी संकेत दिलेला सबग्राफ व्युत्पन्न करतो. बाँडिंग वक्र क्युरेटर्सना उच्च गुणवत्तेचा डेटा स्रोत तयार करण्यासाठी प्रोत्साहन देते. या विभागातील क्युरेटर टेबल तुम्हाला हे पाहण्याची परवानगी देईल: +In the The Curator table listed below you can see: - The date the Curator started curating - The number of GRT that was deposited @@ -68,34 +92,36 @@ On each subgraph’s dedicated page, several details are surfaced. These include ![Explorer Image 6](/img/Curation-Overview.png) -तुम्हाला क्युरेटरच्या भूमिकेबद्दल अधिक जाणून घ्यायचे असल्यास, तुम्ही [द ग्राफ अकादमी](https://thegraph.academy/curators/) किंवा [अधिकृत दस्तऐवज](/network/curating) च्या खालील लिंक्सला भेट देऊन तसे करू शकता +If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. Delegators -द ग्राफ नेटवर्कची सुरक्षा आणि विकेंद्रीकरण राखण्यात प्रतिनिधी महत्त्वाची भूमिका बजावतात. ते एक किंवा एकाधिक इंडेक्सर्सना GRT टोकन्स सोपवून (म्हणजे "स्टेकिंग") नेटवर्कमध्ये सहभागी होतात. प्रतिनिधींशिवाय, इंडेक्सर्सना लक्षणीय बक्षिसे आणि शुल्क मिळण्याची शक्यता कमी असते. म्हणून, इंडेक्सर्स डेलिगेटर्सना इंडेक्सिंग रिवॉर्ड्स आणि त्यांनी कमावलेल्या क्वेरी फीचा एक भाग ऑफर करून त्यांना आकर्षित करण्याचा प्रयत्न करतात. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. -Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! 
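The participant metrics shown in these tables are drawn from on-chain data. As a rough sketch, a query along the following lines could fetch similar Indexer figures from the Graph Network subgraph; the entity and field names here are assumptions about that subgraph's schema and may differ from the deployed version.

```graphql
# Sketch only: field names are assumed and should be checked against the
# Graph Network subgraph's published schema before use.
{
  indexers(first: 5, orderBy: stakedTokens, orderDirection: desc) {
    id
    stakedTokens
    delegatedTokens
    queryFeeCut
    indexingRewardCut
  }
}
```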
![Explorer Image 7](/img/Delegation-Overview.png) -The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: +In the Delegators table you can see the active Delegators in the community and important metrics: - The number of Indexers a Delegator is delegating towards - A Delegator’s original delegation - The rewards they have accumulated but have not withdrawn from the protocol - The realized rewards they withdrew from the protocol - Total amount of GRT they have currently in the protocol -- The date they last delegated at +- The date they last delegated -तुम्हाला प्रतिनिधी कसे व्हायचे याबद्दल अधिक जाणून घ्यायचे असल्यास, पुढे पाहू नका! तुम्हाला फक्त वर जावे लागेल[official documentation](/network/delegating) किंवा[आलेख अकादमी](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## Network -नेटवर्क विभागात, तुम्हाला जागतिक KPIs तसेच प्रत्येक युगाच्या आधारावर स्विच करण्याची आणि नेटवर्क मेट्रिक्सचे अधिक तपशीलवार विश्लेषण करण्याची क्षमता दिसेल. हे तपशील तुम्हाला कालांतराने नेटवर्क कसे कार्य करत आहे याची जाणीव देईल. +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. -### सारांश +### सविश्लेषण -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - The current total network stake - The stake split between the Indexers and their Delegators @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - Protocol parameters such as curation reward, inflation rate, and more - Current epoch rewards and fees -A few key details that are worth mentioning: +A few key details to note: -- **क्वेरी फी ग्राहकांद्वारे व्युत्पन्न केलेल्या फीचे प्रतिनिधित्व करतात**, आणि उपग्राफसाठी त्यांचे वाटप बंद झाल्यानंतर किमान 7 युगांच्या कालावधीनंतर (खाली पहा) इंडेक्सर्सद्वारे त्यावर दावा केला जाऊ शकतो (किंवा नाही). आणि त्यांनी दिलेला डेटा ग्राहकांनी प्रमाणित केला आहे. -- **इंडेक्सिंग रिवॉर्ड्स युगादरम्यान नेटवर्क जारी करण्यापासून निर्देशांककर्त्यांनी दावा केलेल्या पुरस्कारांच्या रकमेचे प्रतिनिधित्व करतात.** जरी प्रोटोकॉल जारी करणे निश्चित केले असले तरी, इंडेक्सर्सने त्यांचे वाटप बंद केल्यावरच बक्षिसे दिली जातात ते अनुक्रमित करत असलेल्या उपग्राफकडे. अशा प्रकारे प्रति-युगातील पुरस्कारांची संख्या बदलते (म्हणजे काही युगांदरम्यान, इंडेक्सर्सने अनेक दिवसांपासून खुले असलेले वाटप एकत्रितपणे बंद केले असावे). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. 
during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ In the Epochs section, you can analyze on a per-epoch basis, metrics such as: - सक्रिय युग असा आहे ज्यामध्ये इंडेक्सर्स सध्या स्टेक वाटप करत आहेत आणि क्वेरी फी गोळा करत आहेत - सेटलिंग युग हे असे आहेत ज्यामध्ये राज्य वाहिन्या सेटल होत आहेत. याचा अर्थ असा की जर ग्राहकांनी त्यांच्या विरुद्ध विवाद उघडले तर निर्देशांक कमी केले जातील. - वितरण युग हे असे युग आहेत ज्यामध्ये युगांसाठी राज्य चॅनेल सेटल केले जात आहेत आणि इंडेक्सर्स त्यांच्या क्वेरी फी सवलतीचा दावा करू शकतात. - - अंतिम युग हे असे युग आहेत ज्यात अनुक्रमणिकांद्वारे दावा करण्यासाठी कोणतीही क्वेरी शुल्क सवलत शिल्लक नाही, अशा प्रकारे अंतिम रूप दिले जाते. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![Explorer Image 9](/img/Epoch-Stats.png) ## Your User Profile -आता आम्ही नेटवर्क आकडेवारीबद्दल बोललो आहोत, चला तुमच्या वैयक्तिक प्रोफाइलकडे जाऊया. तुमची वैयक्तिक प्रोफाइल ही तुमची नेटवर्क गतिविधी पाहण्याचे ठिकाण आहे, तुम्ही नेटवर्कवर कसे भाग घेत आहात हे महत्त्वाचे नाही. तुमचे क्रिप्टो वॉलेट तुमचे वापरकर्ता प्रोफाइल म्हणून काम करेल आणि वापरकर्ता डॅशबोर्डसह तुम्ही हे पाहू शकाल: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Profile Overview -येथे तुम्ही केलेल्या कोणत्याही वर्तमान क्रिया तुम्ही पाहू शकता. तुम्ही तुमची प्रोफाईल माहिती, वर्णन आणि वेबसाइट (तुम्ही जोडल्यास) येथे देखील शोधू शकता. +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![Explorer Image 10](/img/Profile-Overview.png) ### Subgraphs Tab -तुम्ही सबग्राफ टॅबवर क्लिक केल्यास, तुम्हाला तुमचे प्रकाशित सबग्राफ दिसतील. यामध्ये चाचणीच्या उद्देशांसाठी CLI सोबत तैनात केलेले कोणतेही सबग्राफ समाविष्ट केले जाणार नाहीत - सबग्राफ केवळ विकेंद्रित नेटवर्कवर प्रकाशित केल्यावरच दिसून येतील. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -तुम्ही इंडेक्सिंग टॅबवर क्लिक केल्यास, तुम्हाला सबग्राफसाठी सर्व सक्रिय आणि ऐतिहासिक वाटप असलेली एक टेबल मिळेल, तसेच तुम्ही इंडेक्सर म्हणून तुमच्या मागील कामगिरीचे विश्लेषण करू शकता आणि पाहू शकता. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. या विभागात तुमच्या निव्वळ इंडेक्सर रिवॉर्ड्स आणि नेट क्वेरी फीबद्दल तपशील देखील समाविष्ट असतील. तुम्हाला खालील मेट्रिक्स दिसतील: @@ -158,7 +189,9 @@ In the Epochs section, you can analyze on a per-epoch basis, metrics such as: ### Delegating Tab -आलेख नेटवर्कसाठी प्रतिनिधी महत्वाचे आहेत. एखाद्या प्रतिनिधीने त्यांच्या ज्ञानाचा उपयोग असा इंडेक्सर निवडण्यासाठी केला पाहिजे जो पुरस्कारांवर निरोगी परतावा देईल. येथे तुम्ही तुमच्या सक्रिय आणि ऐतिहासिक प्रतिनिधी मंडळांचे तपशील, निर्देशांकांच्या मेट्रिक्ससह शोधू शकता ज्यांना तुम्ही नियुक्त केले आहे. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. 
+ +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. पृष्‍ठाच्या पूर्वार्धात, तुम्‍ही तुमच्‍या डेलिगेशन चार्ट तसेच रिवॉर्ड-ओन्ली चार्ट पाहू शकता. डावीकडे, तुम्ही KPI पाहू शकता जे तुमचे वर्तमान प्रतिनिधीत्व मेट्रिक्स दर्शवतात. diff --git a/website/pages/mr/network/indexing.mdx b/website/pages/mr/network/indexing.mdx index 93c7e0cf5b0f..cc802bf1bd7e 100644 --- a/website/pages/mr/network/indexing.mdx +++ b/website/pages/mr/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap समुदायाने बनवलेल्या अनेक डॅशबोर्डमध्ये प्रलंबित पुरस्कार मूल्यांचा समावेश आहे आणि ते या चरणांचे अनुसरण करून सहजपणे व्यक्तिचलितपणे तपासले जाऊ शकतात: -1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: +1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -112,12 +112,12 @@ Once an allocation has been closed the rebates are available to be claimed by th - **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. - **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. -| Setup | Postgres
    (CPUs) | Postgres
    (memory in GBs) | Postgres
    (disk in TBs) | VMs
    (CPUs) | VMs
    (memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| मानक | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
    (CPUs) | Postgres
    (memory in GBs) | Postgres
    (disk in TBs) | VMs
    (CPUs) | VMs
    (memory in GBs) | +| ------ |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| +| Small | 4 | 8 | 1 | 4 | 16 | +| मानक | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | ### What are some basic security precautions an Indexer should take? @@ -149,20 +149,20 @@ Once an allocation has been closed the rebates are available to be claimed by th #### आलेख नोड -| बंदर | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| बंदर | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Service -| बंदर | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
    (for paid subgraph queries) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| बंदर | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
    (for paid subgraph queries) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -545,7 +545,7 @@ graph indexer status - `ग्राफ इंडेक्सर नियम कदाचित [options] ` — उपयोजनासाठी `decisionBasis` सेट करा `नियम`, जेणेकरून इंडेक्सर एजंट हे उपयोजन अनुक्रमित करायचे की नाही हे ठरवण्यासाठी अनुक्रमणिका नियम वापरा. -- `ग्राफ इंडेक्सर क्रियांना [options] ` मिळतात - `सर्व` वापरून एक किंवा अधिक क्रिया मिळवा किंवा मिळवण्यासाठी `action-id` रिकामे ठेवा सर्व क्रिया. विशिष्ट स्थितीच्या सर्व क्रिया मुद्रित करण्यासाठी अतिरिक्त युक्तिवाद `--status` वापरला जाऊ शकतो. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `ग्राफ इंडेक्सर क्रिया रांग वाटप <अलोकेशन-रक्कम>` - रांग वाटप क्रिया @@ -623,7 +623,7 @@ graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK - Indexer can use the `indexer-cli` to view all queued actions - इंडेक्सर (किंवा इतर सॉफ्टवेअर) `indexer-cli` वापरून रांगेतील क्रिया मंजूर किंवा रद्द करू शकतात. मंजूर आणि रद्द आदेश इनपुट म्हणून अॅक्शन आयडीचा अॅरे घेतात. - अंमलबजावणी कर्मचारी नियमितपणे मंजूर कृतींसाठी रांगेत मतदान करतात. ते रांगेतील `मंजूर` क्रिया पकडेल, त्या कार्यान्वित करण्याचा प्रयत्न करेल आणि अंमलबजावणीच्या स्थितीनुसार `यशस्वी` किंवा `अयशस्वी< वर db मधील मूल्ये अपडेट करेल. /code>.
  • -
  • एखादी कृती यशस्वी झाल्यास कार्यकर्ता खात्री करेल की एक अनुक्रमणिका नियम उपस्थित आहे जो एजंटला वाटप कसे व्यवस्थापित करावे हे सांगते, एजंट ऑटो` किंवा ` मध्ये असताना मॅन्युअल क्रिया करताना उपयुक्त oversight` मोड. +
  • एखादी कृती यशस्वी झाल्यास कार्यकर्ता खात्री करेल की एक अनुक्रमणिका नियम उपस्थित आहे जो एजंटला वाटप कसे व्यवस्थापित करावे हे सांगते, एजंट ऑटो` किंवा ` मध्ये असताना मॅन्युअल क्रिया करताना उपयुक्त oversight` मोड. - इंडेक्सर कारवाईच्या अंमलबजावणीचा इतिहास पाहण्यासाठी कृती रांगेचे निरीक्षण करू शकतो आणि आवश्यक असल्यास क्रिया आयटमची अंमलबजावणी अयशस्वी झाल्यास पुन्हा मंजूर आणि अद्यतनित करू शकतो. कृती रांग रांगेत लावलेल्या आणि केलेल्या सर्व क्रियांचा इतिहास प्रदान करते. Data model: diff --git a/website/pages/mr/network/overview.mdx b/website/pages/mr/network/overview.mdx index 79ef11647660..f3726794f033 100644 --- a/website/pages/mr/network/overview.mdx +++ b/website/pages/mr/network/overview.mdx @@ -2,14 +2,20 @@ title: नेटवर्क विहंगावलोकन --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## सारांश +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![टोकन इकॉनॉमिक्स](/img/Network-roles@2x.png) -ग्राफ नेटवर्कची आर्थिक सुरक्षितता आणि विचारल्या जाणार्‍या डेटाची अखंडता सुनिश्चित करण्यासाठी, सहभागी ग्राफ टोकन ([GRT](/tokenomics)) घेतात आणि वापरतात. GRT एक कार्य उपयुक्तता टोकन आहे जो नेटवर्कमध्ये संसाधने वाटप करण्यासाठी वापरला जाणारा ERC-20 आहे. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. diff --git a/website/pages/mr/new-chain-integration.mdx b/website/pages/mr/new-chain-integration.mdx index 35b2bc7c8b4a..534d6701efdb 100644 --- a/website/pages/mr/new-chain-integration.mdx +++ b/website/pages/mr/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integrating New Networks +title: New Chain Integration --- -Graph Node can currently index data from the following chain types: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. 
If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via a [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via a [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via an [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -If you are interested in any of those chains, integration is a matter of Graph Node configuration and testing. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. For more information, refer to [Testing an EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Testing an EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Difference between EVM JSON-RPC & Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -While the two are suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](substreams/), like building [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). In addition, Firehose allows for improved indexing speeds when compared to JSON-RPC. +### 2. Firehose Integration -New EVM chain integrators may also consider the Firehose-based approach, given the benefits of substreams and its massive parallelized indexing capabilities. Supporting both allows developers to choose between building substreams or subgraphs for the new chain. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. 
This helps increase the speed of syncing and indexing. -> **NOTE**: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that eth_calls are [not a good practice for developers](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Testing an EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON RPC methods: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. -- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(optionally required for Graph Node to support call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graph Node Configuration +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Start by preparing your local environment** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). 
Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graph Node Configuration + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON RPC compliant URL - > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ -**Test the integration by locally deploying a subgraph** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Create a simple example subgraph. Some options are below: - 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point - 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integrating a new Firehose-enabled chain +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Create a simple example subgraph. Some options are below: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. 
Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. -Integrating a new chain is also possible using the Firehose approach. This is currently the best option for non-EVM chains and a requirement for substreams support. Additional documentation focuses on how Firehose works, adding Firehose support for a new chain and integrating it with Graph Node. Recommended docs for integrators: +## Substreams-powered Subgraphs -1. [General docs on Firehose](firehose/) -2. [Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrating Graph Node with a new chain via Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/mr/operating-graph-node.mdx b/website/pages/mr/operating-graph-node.mdx index 907114f75be5..84d5fb6d4d9b 100644 --- a/website/pages/mr/operating-graph-node.mdx +++ b/website/pages/mr/operating-graph-node.mdx @@ -77,13 +77,13 @@ A complete Kubernetes example configuration can be found in the [indexer reposit When it is running Graph Node exposes the following ports: -| बंदर | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| बंदर | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | > **महत्त्वाचे**: पोर्ट सार्वजनिकपणे उघड करण्याबाबत सावधगिरी बाळगा - **प्रशासन पोर्ट** लॉक डाउन ठेवले पाहिजेत. यामध्ये ग्राफ नोड JSON-RPC एंडपॉइंटचा समावेश आहे. diff --git a/website/pages/mr/querying/graphql-api.mdx b/website/pages/mr/querying/graphql-api.mdx index 1179e40706f2..7fefa001aca4 100644 --- a/website/pages/mr/querying/graphql-api.mdx +++ b/website/pages/mr/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Queries +## What is GraphQL? -तुमच्या सबग्राफ स्कीमामध्ये तुम्ही `एंटिटीज` नावाचे प्रकार परिभाषित करता. प्रत्येक `संस्था` प्रकारासाठी, उच्च-स्तरीय `क्वेरी` प्रकारावर एक `संस्था` आणि `संस्था` फील्ड व्युत्पन्न केले जाईल. लक्षात ठेवा की ग्राफ वापरताना `क्वेरी` `graphql` क्वेरीच्या शीर्षस्थानी समाविष्ट करणे आवश्यक नाही. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Examples @@ -21,7 +29,7 @@ Query for a single `Token` entity defined in your schema: } ``` -> **Note:** When querying for a single entity, the `id` field is required, and it must be a string. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Query all `Token` entities: @@ -36,7 +44,10 @@ Query all `Token` entities: ### Sorting -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### उदाहरण @@ -53,7 +64,7 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. -In the following example, we sort the tokens by the name of their owner: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ In the following example, we sort the tokens by the name of their owner: ### Pagination -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. - -Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. 
+When querying a collection, it's best to: -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### Example using `first` @@ -106,7 +118,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect #### Example using `first` and `id_ge` -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtering -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Example using `where` @@ -155,7 +168,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### Example for block filtering -You can also filter entities by the `_change_block(number_gte: Int)` - this filters entities which were updated in or after the specified block. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). @@ -193,7 +206,7 @@ This can be useful if you are looking to fetch only entities whose child-level e ##### `AND` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. 
```graphql { @@ -208,7 +221,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ``` > **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ##### `OR` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. 
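As a minimal sketch of the `block` argument described above, the query below pins a request to a past block by number; the block number is a placeholder value, and `challenges` mirrors the entity used in the other examples on this page. A block hash can be supplied instead with `block: { hash: "0x..." }`.

```graphql
# Sketch of a time-travel query pinned to a past block number (placeholder value).
# Once that block is final, the result of this query will not change.
{
  challenges(block: { number: 8000000 }) {
    challenger
    outcome
  }
}
```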
#### उदाहरण @@ -322,12 +335,12 @@ Fulltext search queries have one required field, `text`, for supplying search te Fulltext search operators: -| Symbol | Operator | वर्णन | -| --- | --- | --- | -| `&` | `आणि` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `किंवा` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `द्वारे अनुसरण करा` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | +| Symbol | Operator | वर्णन | +| ----------- | ------------------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `आणि` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| | | `किंवा` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `द्वारे अनुसरण करा` | Specify the distance between two words. | +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | #### Examples @@ -376,11 +389,11 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 ## Schema -The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### Entities diff --git a/website/pages/mr/querying/querying-best-practices.mdx b/website/pages/mr/querying/querying-best-practices.mdx index 138fd7d2aa6f..7ff86eb85075 100644 --- a/website/pages/mr/querying/querying-best-practices.mdx +++ b/website/pages/mr/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: Querying Best Practices --- -The Graph provides a decentralized way to query data from blockchains. +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. -ग्राफ नेटवर्कचा डेटा GraphQL API द्वारे उघड केला जातो, ज्यामुळे GraphQL भाषेसह डेटाची क्वेरी करणे सोपे होते. 
- -हे पृष्‍ठ तुम्हाला GraphQL भाषेचे अत्यावश्यक नियम आणि GraphQL क्वेरी सर्वोत्तम पद्धतींबद्दल मार्गदर्शन करेल. +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQL is a language and set of conventions that transport over HTTP. It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). -However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), we recommend you to use our `graph-client` that supports unique features such as: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() More GraphQL client alternatives are covered in ["Querying from an Application"](/querying/querying-from-an-application). -Now that we covered the basic rules of GraphQL queries syntax, let's now look at the best practices of GraphQL query writing. - --- ## Best Practices @@ -164,11 +160,11 @@ Doing so brings **many advantages**: - **Variables can be cached** at server-level - **Queries can be statically analyzed by tools** (more on this in the following sections) -**Note: How to include fields conditionally in static queries** +### How to include fields conditionally in static queries -We might want to include the `owner` field only on a particular condition. +You might want to include the `owner` field only on a particular condition. -For this, we can leverage the `@include(if:...)` directive as follows: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Note: The opposite directive is `@skip(if: ...)`. +> Note: The opposite directive is `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL became famous for its "Ask for what you want" tagline. For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. -When querying GraphQL APIs, always think of querying only the fields that will be actually used. - -A common cause of over-fetching is collections of entities. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. Queries should therefore almost always set first explicitly, and make sure they only fetch as many entities as they actually need. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- When querying GraphQL APIs, always think of querying only the fields that will be actually used. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. 
For example, in the following query: @@ -337,8 +332,8 @@ query { Such repeated fields (`id`, `active`, `status`) bring many issues: -- harder to read for more extensive queries -- प्रश्नांवर आधारित TypeScript प्रकार व्युत्पन्न करणारी साधने वापरताना (_त्यावर शेवटच्या विभागात अधिक_), `newDelegate` आणि `oldDelegate` या दोन वेगळ्या इनलाइनचा परिणाम होईल इंटरफेस. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. A refactored version of the query would be the following: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) but also will result in better TypeScript types generation. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). ### GraphQL Fragment do's and don'ts -**Fragment base must be a type** +### Fragment base must be a type A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. -**How to spread a Fragment** +#### How to spread a Fragment Fragments are defined on specific types and should be used accordingly in queries. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { It is not possible to spread a fragment of type `Vote` here. -**Define Fragment as an atomic business unit of data** +#### Define Fragment as an atomic business unit of data -GraphQL Fragment must be defined based on their usage. +GraphQL `Fragment`s must be defined based on their usage. For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. -Here is a rule of thumb for using Fragment: +Here is a rule of thumb for using fragments: -- when fields of the same type are repeated in a query, group them in a Fragment -- when similar but not the same fields are repeated, create multiple fragments, ex: +- When fields of the same type are repeated in a query, group them in a `Fragment`. +- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## The essential tools +## The Essential Tools ### GraphQL web-based explorers @@ -473,11 +468,11 @@ This will allow you to **catch errors without even testing queries** on the play The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets -- go to definition for fragments and input types +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. 
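For reference, below is a minimal sketch of what such a linting setup can look like. It is illustrative only: it assumes the `@graphql-eslint/eslint-plugin` package with a v3-style `.eslintrc` configuration, and the schema path and rule selection are placeholders to adapt to your own project.

```js
// .eslintrc.cjs — minimal, illustrative configuration (assumed setup; adjust to your project)
module.exports = {
  overrides: [
    {
      // Lint standalone .graphql documents (queries, mutations, fragments)
      files: ['*.graphql'],
      parser: '@graphql-eslint/eslint-plugin',
      plugins: ['@graphql-eslint'],
      parserOptions: {
        // Placeholder: point this at your subgraph schema (local SDL file or endpoint)
        schema: './schema.graphql',
      },
      rules: {
        // Flags type names that do not exist in the schema
        '@graphql-eslint/known-type-names': 'error',
      },
    },
  ],
}
```

With a setup along these lines, misspelled type names in your operations are reported directly in the editor, complementing the extensions described above.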
@@ -485,9 +480,9 @@ If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketp

 The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing:

-- syntax highlighting
-- autocomplete suggestions
-- validation against schema
-- snippets
+- Syntax highlighting
+- Autocomplete suggestions
+- Validation against schema
+- Snippets

-More information on this [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) that showcases all the plugin's main features.
+For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features.
diff --git a/website/pages/mr/quick-start.mdx b/website/pages/mr/quick-start.mdx
index 960c8e212b69..d6ef321e7508 100644
--- a/website/pages/mr/quick-start.mdx
+++ b/website/pages/mr/quick-start.mdx
@@ -2,24 +2,18 @@ title: क्विक स्टार्ट
 ---

-This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio.
+Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph.

-Ensure that your subgraph will be indexing data from a [supported network](/developing/supported-networks).
-
-हे मार्गदर्शक तुमच्याकडे आहे असे गृहीत धरून लिहिले आहे:
+## Prerequisites for this guide

 - एक क्रिप्टो वॉलेट
-- तुमच्या पसंतीच्या नेटवर्कवर एक स्मार्ट करार पत्ता
-
-## 1. सबग्राफ स्टुडिओवर सबग्राफ तयार करा
-
-Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
+- A smart contract address on one of the [supported networks](/developing/supported-networks/)

-Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name."
+## Step-by-step

-## 2. आलेख CLI स्थापित करा
+### 1. Install the Graph CLI

-The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed.
+You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version.

 तुमच्या स्थानिक मशीनवर, खालीलपैकी एक कमांड चालवा:

@@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/):
 yarn global add @graphprotocol/graph-cli
 ```

-## 3. Initialize your subgraph from existing contract
+### 2. Create your subgraph
+
+If your contract has events, the `init` command will automatically create a scaffold of a subgraph.
+
+#### Create via Graph CLI
+
+Use the following command to create a subgraph in Subgraph Studio using the CLI:
+
+```sh
+graph init --product subgraph-studio
+```
+
+#### Create via Subgraph Studio
+
+Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys.

-Initialize your subgraph from an existing contract by running the initialize command:
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
+2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name".
+
+For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph).
+
+### 3. Initialize your subgraph
+
+#### From an existing contract
+
+The following command initializes your subgraph from an existing contract:

 ```sh
 graph init --studio
 ```

-> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/).
+> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI.
+
+You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/).

-तुम्ही तुमचा सबग्राफ सुरू करता तेव्हा, CLI टूल तुम्हाला खालील माहितीसाठी विचारेल:
+When you initialize your subgraph, the CLI will ask you for the following information:

-- प्रोटोकॉल: तुमचा सबग्राफ 4 वरून डेटा अनुक्रमित करेल असा प्रोटोकॉल निवडा
-- सबग्राफ स्लग: तुमच्या सबग्राफसाठी नाव तयार करा. तुमचा सबग्राफ स्लग तुमच्या सबग्राफसाठी एक ओळखकर्ता आहे.
-- उपग्राफ तयार करण्यासाठी निर्देशिका: तुमची स्थानिक निर्देशिका निवडा
-- इथरियम नेटवर्क (पर्यायी): तुमचा सबग्राफ कोणत्या EVM-सुसंगत नेटवर्कवरून डेटा अनुक्रमित करेल ते तुम्हाला निर्दिष्ट करावे लागेल
-- कॉन्ट्रॅक्ट अॅड्रेस: तुम्ही ज्यावरून डेटा क्वेरी करू इच्छिता तो स्मार्ट कॉन्ट्रॅक्ट अॅड्रेस शोधा
-- ABI: ABI ऑटोपॉप्युलेट नसल्यास, तुम्हाला JSON फाइल म्हणून व्यक्तिचलितपणे इनपुट करावे लागेल
-- स्टार्ट ब्लॉक: तुमचा सबग्राफ ब्लॉकचेन डेटा इंडेक्स करत असताना वेळ वाचवण्यासाठी तुम्ही स्टार्ट ब्लॉक इनपुट करा असे सुचवले जाते. तुमचा करार जिथे तैनात करण्यात आला होता तो ब्लॉक शोधून तुम्ही स्टार्ट ब्लॉक शोधू शकता.
-- कराराचे नाव: तुमच्या कराराचे नाव प्रविष्ट करा
-- इंडेक्स कॉन्ट्रॅक्ट इव्हेंट्स घटक म्हणून: असे सुचवले जाते की तुम्ही हे सत्य वर सेट करा कारण ते प्रत्येक उत्सर्जित इव्हेंटसाठी तुमच्या सबग्राफमध्ये स्वयंचलितपणे मॅपिंग जोडेल
-- दुसरा करार जोडा(पर्यायी): तुम्ही दुसरा करार जोडू शकता
+- Protocol: Choose the protocol your subgraph will be indexing data from.
+- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph.
+- Directory to create the subgraph in: Choose your local directory.
+- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from.
+- Contract address: Locate the smart contract address you’d like to query data from.
+- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
+- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
+- Contract Name: Input the name of your contract.
+- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event.
+- Add another contract (optional): You can add another contract.

 तुमचा सबग्राफ सुरू करताना काय अपेक्षा करावी याच्या उदाहरणासाठी खालील स्क्रीनशॉट पहा:

 ![Subgraph command](/img/subgraph-init-example.png)

-## 4. Write your subgraph
+### 4. Write your subgraph

-मागील कमांड एक स्कॅफोल्ड सबग्राफ तयार करतात ज्याचा वापर तुम्ही तुमचा सबग्राफ तयार करण्यासाठी प्रारंभिक बिंदू म्हणून करू शकता. सबग्राफमध्ये बदल करताना, तुम्ही प्रामुख्याने तीन फाइल्ससह कार्य कराल:
+The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph.

-- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index.
-- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph.
-- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -तुमचा सबग्राफ लिहिल्यानंतर, खालील आदेश चालवा: +### 5. Deploy your subgraph -```sh -$ आलेख कोडजेन -$ आलेख बिल्ड -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. तुमचा सबग्राफ लिहिल्यानंतर, खालील आदेश चालवा: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- तुमचा सबग्राफ प्रमाणित करा आणि उपयोजित करा. उपयोजन की सबग्राफ स्टुडिओमधील सबग्राफ पृष्ठावर आढळू शकते. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. तुमच्या सबग्राफची चाचणी घ्या - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -तुमच्या सबग्राफमध्ये काही त्रुटी असल्यास नोंदी तुम्हाला सांगतील. ऑपरेशनल सबग्राफचे लॉग यासारखे दिसतील: - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. 
Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -गॅसच्या खर्चावर बचत करण्यासाठी, जेव्हा तुम्ही तुमचा सबग्राफ The Graph च्या विकेंद्रित नेटवर्कवर प्रकाशित करता तेव्हा हे बटण निवडून तुम्ही प्रकाशित केलेल्या व्यवहारात तुम्ही तुमचा सबग्राफ क्युरेट करू शकता: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -आता, तुम्ही तुमच्या सबग्राफच्या क्वेरी URL वर GraphQL क्वेरी पाठवून तुमच्या सबग्राफची क्वेरी करू शकता, जी तुम्ही क्वेरी बटणावर क्लिक करून शोधू शकता. +### 8. 
Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/mr/release-notes/assemblyscript-migration-guide.mdx b/website/pages/mr/release-notes/assemblyscript-migration-guide.mdx index a170ebec8cda..2d735415bebb 100644 --- a/website/pages/mr/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/mr/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ However now this isn't possible anymore, and the compiler returns this error: ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - You'll need to rename your duplicate variables if you had variable shadowing. - ### Null Comparisons - By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - To solve you can simply change the `if` statement to something like this: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript diff --git a/website/pages/mr/sps/introduction.mdx b/website/pages/mr/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/mr/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. 
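As a rough sketch of the trigger approach, the handler below decodes the Substreams output and builds entities inside the subgraph; a complete, annotated version of this pattern appears in the Substreams triggers pages further below. The import paths and the generated `Transactions` Protobuf model are assumptions, and `Transaction` is presumed to be an entity defined in the subgraph schema.

```tsx
// Illustrative sketch only — generated types and module paths are assumptions.
import { Transactions } from './pb/generated' // Protobuf bindings generated from the Substreams package
import { Transaction } from '../generated/schema' // entity generated from schema.graphql

export function handleTransactions(bytes: Uint8Array): void {
  // Decode the raw bytes emitted by the Substreams module.
  const transactions = Transactions.decode(bytes.buffer).transactions

  // All entity-building logic stays in the subgraph and is consumed linearly in graph-node.
  for (let i = 0; i < transactions.length; i++) {
    const entity = new Transaction(transactions[i].hash)
    entity.save()
  }
}
```

With Entity Changes, by contrast, this kind of mapping logic lives in the Substreams module itself, and graph-node only applies the entity changes that the module emits.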
+ +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/mr/sps/triggers-example.mdx b/website/pages/mr/sps/triggers-example.mdx new file mode 100644 index 000000000000..cbd9a217e1d5 --- /dev/null +++ b/website/pages/mr/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## पूर्वतयारी + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into subgraph entities:

```ts
import { Protobuf } from 'as-proto/assembly'
import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
import { MyTransfer } from '../generated/schema'

export function handleTriggers(bytes: Uint8Array): void {
  const input: protoEvents = Protobuf.decode<protoEvents>(bytes, protoEvents.decode)

  for (let i = 0; i < input.data.length; i++) {
    const event = input.data[i]

    if (event.transfer != null) {
      let entity_id: string = `${event.txnId}-${i}`
      const entity = new MyTransfer(entity_id)
      entity.amount = event.transfer!.instruction!.amount.toString()
      entity.source = event.transfer!.accounts!.source
      entity.designation = event.transfer!.accounts!.destination

      if (event.transfer!.accounts!.signer!.single != null) {
        entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
      } else if (event.transfer!.accounts!.signer!.multisig != null) {
        entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
      }
      entity.save()
    }
  }
}
```

## Step 5: Generate Protobuf Files

To generate Protobuf objects in AssemblyScript, run the following command:

```bash
npm run protogen
```

This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.

## Conclusion

You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.

For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/pages/mr/sps/triggers.mdx b/website/pages/mr/sps/triggers.mdx
new file mode 100644
index 000000000000..ed19635d4768
--- /dev/null
+++ b/website/pages/mr/sps/triggers.mdx
@@ -0,0 +1,37 @@
---
title: Substreams Triggers
---

Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework.

> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container.

The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.

```tsx
import { log } from '@graphprotocol/graph-ts'
import { Transaction } from '../generated/schema'
// Note: `assembly.eth.transaction.v1.Transactions` comes from the Protobuf bindings generated for your Substreams package.

export function handleTransactions(bytes: Uint8Array): void {
  let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
  if (transactions.length == 0) {
    log.info('No transactions found', [])
    return
  }

  for (let i = 0; i < transactions.length; i++) {
    // 2.
    let transaction = transactions[i]

    let entity = new Transaction(transaction.hash) // 3.
    entity.from = transaction.from
    entity.to = transaction.to
    entity.save()
  }
}
```

Here's what you’re seeing in the `mappings.ts` file:

1. The bytes containing Substreams data are decoded into the generated `Transactions` object; it can then be used like any other AssemblyScript object
2. 
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/mr/substreams.mdx b/website/pages/mr/substreams.mdx index a0f162be9963..04915e275dfc 100644 --- a/website/pages/mr/substreams.mdx +++ b/website/pages/mr/substreams.mdx @@ -4,9 +4,11 @@ title: उपप्रवाह ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## How Substreams Works in 4 Steps @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/mr/sunrise.mdx b/website/pages/mr/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/mr/sunrise.mdx +++ b/website/pages/mr/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. 
Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? 
-Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? 
- -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. 
The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/mr/supported-network-requirements.mdx b/website/pages/mr/supported-network-requirements.mdx index a1a9e0338649..0dcc4da67474 100644 --- a/website/pages/mr/supported-network-requirements.mdx +++ b/website/pages/mr/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Network | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| हिमस्खलन | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| इथरियम | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| फॅन्टम | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| आशावाद | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| बहुभुज | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| Network | Guides | System Requirements | Indexing Rewards | +| -------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| हिमस्खलन | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVMe preferred)<br />
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| इथरियम | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| फॅन्टम | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| आशावाद | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| बहुभुज | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/mr/tap.mdx b/website/pages/mr/tap.mdx new file mode 100644 index 000000000000..8cde16ce40a5 --- /dev/null +++ b/website/pages/mr/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## सविश्लेषण + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
+ +## Blockchain Addresses + +### Contracts + +| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| ------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | +| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | +| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | + +### Gateway + +| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| ---------- | --------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | + +### Requirements + +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. + +- [Graph TAP Aribtrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) + +> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployement. As a result, you have to index it manually. + +## Migration Guide + +### Software versions + +| Component | आवृत्ती | Image Link | +| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- | +| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) | +| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) | +| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) | + +### Steps + +1. **Indexer Agent** + + - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). + - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + +2. **Indexer Service** + + - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless. + +3. **TAP Agent** + + - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + +4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +नोट्स: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/nl/about.mdx b/website/pages/nl/about.mdx index bebff0a938a1..641ff09b8d15 100644 --- a/website/pages/nl/about.mdx +++ b/website/pages/nl/about.mdx @@ -2,46 +2,66 @@ title: Over The Graph --- -This page will explain what The Graph is and how you can get started. - ## What is The Graph? -The Graph is a decentralized protocol for indexing and querying blockchain data. The Graph makes it possible to query data that is difficult to query directly. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +### How The Graph Functions -**Indexing blockchain data is really, really hard.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). 
+- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## How The Graph Works +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. +- When creating a subgraph, you need to write a subgraph manifest. -Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) The flow follows these steps: -1. A dapp adds data to Ethereum through a transaction on a smart contract. -2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. -5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +1. A dapp adds data to Ethereum through a transaction on a smart contract. +2. The smart contract emits one or more events while processing the transaction. +3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. +4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. 
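+
+As a minimal sketch of step 5, a dapp (or simply a terminal) can send a GraphQL query to a Graph Node over HTTP. The example below assumes a locally running Graph Node on its default query port and borrows entities from the Gravatar example subgraph; the subgraph name and queried fields are illustrative placeholders, not fixed values:
+
+```bash
+# Hypothetical query against a local Graph Node (default HTTP query port 8000).
+# The subgraph name "example/gravatar" and the queried fields are illustrative only.
+curl -X POST http://localhost:8000/subgraphs/name/example/gravatar \
+  -H 'Content-Type: application/json' \
+  -d '{ "query": "{ gravatars(first: 5) { id displayName imageUrl } }" }'
+```
+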
## Next Steps -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/nl/arbitrum/arbitrum-faq.mdx b/website/pages/nl/arbitrum/arbitrum-faq.mdx index 34580766c5ef..46baaad9d1e0 100644 --- a/website/pages/nl/arbitrum/arbitrum-faq.mdx +++ b/website/pages/nl/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum FAQ Klik [hier] (#billing-on-arbitrum-faqs) als je de Arbitrum facturering FAQ wilt overslaan. -## Waarom is The Graph een L2 oplossing aan het implementeren? +## Why did The Graph implement an L2 Solution? -Door het schalen van The Graph op L2, netwerk deelnemers kunnen het volgende verwachten: +By scaling The Graph on L2, network participants can now benefit from: - Meer dan 26x besparen op gas fees @@ -14,7 +14,7 @@ Door het schalen van The Graph op L2, netwerk deelnemers kunnen het volgende ver - Veiligheid overgenomen van Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers could open and close allocations to index a greater number of subgraphs with greater frequency, developers could deploy and update subgraphs with greater ease, Delegators could delegate GRT with increased frequency, and Curators could add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph gemeenschap heeft vorig jaar besloten om door te gaan met Arbitrum na de uitkomst van [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussie. @@ -41,27 +41,21 @@ Om gebruik te maken van The Graph op L2, gebruik deze keuzeschakelaar om te wiss ## Als een subgraph ontwikkelaar, data consument, Indexer, Curator, of Delegator, wat moet ik nu doen? -Er is geen directe handeling vereisd, echter, netwerk deelnemers worden wel aangemoedigd om te beginnen met overstappen naar Arbitrum om te profiteren van de voordelen van L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. 
-Kern-ontwikkelingsteams werken eraan om L2 overdrachtstools te creëren die het aanzienlijk makkelijker zullen maken om delegatie, curatie en subgraphs naar Arbitrum te verplaatsen. Netwerkdeelnemers kunnen verwachten dat L2 overdrachtstools beschikbaar zullen zijn tegen de zomer van 2023. +All indexing rewards are now entirely on Arbitrum. -Vanaf 10 april 2023 wordt 5% van alle indexing beloningen gemint op Arbitrum. Naarmate de netwerkparticipatie toeneemt en de Raad het goedkeurt, zullen de indexing rewards geleidelijk verschuiven van Ethereum naar Arbitrum en uiteindelijk volledig overgaan naar Arbitrum. - -## If I would like to participate in the network on L2, what should I do? - -Please help [test the network](https://testnet.thegraph.com/explorer) on L2 and report feedback about your experience in [Discord](https://discord.gg/graphprotocol). - -## Zijn er risico's verbonden met het schalen van het netwerk naar L2? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Alles is grondig getest, en een eventualiteiten plan is gemaakt en klaargezet voor een veilige en naadloze transitie. Details kunnen [hier](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20) gevonden worden. -## Zullen bestaande subgraphs op Etherium blijven werken? +## Are existing subgraphs on Ethereum working? -Ja, The Graph Netwerk contracts zullen op zowel Ethereum als Arbitrum parallel opereren tot het volledig overgaan op Arbitrum op een latere datum. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## Zal GRT nieuwe smart contract implementeren op Arbitrum? +## Does GRT have a new smart contract deployed on Arbitrum? Ja, GRT heeft een extra [smart contract op Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). Echter, het Ethereum mainnet [GRT contract](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) zal operationeel blijven. diff --git a/website/pages/nl/billing.mdx b/website/pages/nl/billing.mdx index 37f9c840d00b..dec5cfdadc12 100644 --- a/website/pages/nl/billing.mdx +++ b/website/pages/nl/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. 
Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. 
For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/nl/chain-integration-overview.mdx b/website/pages/nl/chain-integration-overview.mdx index 2fe6c2580909..a142f3f817f9 100644 --- a/website/pages/nl/chain-integration-overview.mdx +++ b/website/pages/nl/chain-integration-overview.mdx @@ -6,12 +6,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 1. Technical Integration -- Teams work on a Graph Node integration and Firehose for non-EVM based chains. [Here's how](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. ## Stage 2. Integration Validation -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON RPC or Firehose endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph Indexers test the integration on The Graph's testnet. - Core developers and Indexers monitor stability, performance, and data determinism. @@ -38,7 +38,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. 
Anyone can create the PR and GIP; the Foundation would help with Council approval. -### 3. How much time will this process take? +### 3. How much time will the process of reaching full protocol support take? The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. @@ -46,4 +46,4 @@ Protocol support for indexing rewards depends on the stakeholders' bandwidth to ### 4. How will priorities be handled? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/nl/cookbook/arweave.mdx b/website/pages/nl/cookbook/arweave.mdx index ac5d84fd4ed5..959ef3a2c01a 100644 --- a/website/pages/nl/cookbook/arweave.mdx +++ b/website/pages/nl/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/nl/cookbook/base-testnet.mdx b/website/pages/nl/cookbook/base-testnet.mdx index 3516b2551106..d1aa03bd7008 100644 --- a/website/pages/nl/cookbook/base-testnet.mdx +++ b/website/pages/nl/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ Je subgraph slug is een identificator voor je subgraph. De CLI tool zal je door The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph. - AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). 
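+
+As a rough sketch of the edit-and-redeploy cycle (assuming the standard Graph CLI commands and a placeholder Subgraph Studio slug), after changing the manifest, schema, or mappings you would typically regenerate types and rebuild before deploying again:
+
+```bash
+# Hypothetical rebuild cycle after editing subgraph.yaml, schema.graphql or mapping.ts
+graph codegen                            # regenerate AssemblyScript types from the schema and ABIs
+graph build                              # compile the mappings and validate the manifest
+graph deploy --studio <SUBGRAPH_SLUG>    # redeploy to Subgraph Studio
+```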
diff --git a/website/pages/nl/cookbook/cosmos.mdx b/website/pages/nl/cookbook/cosmos.mdx index 5e9edfd82931..a8c359b3098c 100644 --- a/website/pages/nl/cookbook/cosmos.mdx +++ b/website/pages/nl/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/nl/cookbook/grafting.mdx b/website/pages/nl/cookbook/grafting.mdx index 6b4f419390d5..6c3b85419af9 100644 --- a/website/pages/nl/cookbook/grafting.mdx +++ b/website/pages/nl/cookbook/grafting.mdx @@ -22,7 +22,7 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic usecase. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ In this tutorial, we will be covering a basic usecase. We will replace an existi ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. ## Grafting Manifest Definition @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. 
## Additional Resources -If you want more experience with grafting, here's a few examples for popular contracts: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/nl/cookbook/near.mdx b/website/pages/nl/cookbook/near.mdx index 53d540caa987..c5416e3996fd 100644 --- a/website/pages/nl/cookbook/near.mdx +++ b/website/pages/nl/cookbook/near.mdx @@ -37,7 +37,7 @@ There are three aspects of subgraph definition: **schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph#the-graphql-schema). -**AssemblyScript Mappings:** [AssemblyScript code](/developing/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. Tijdens subgraph ontwikkeling zijn er twee belangrijke commando's: @@ -98,7 +98,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/assemblyscript-api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/developing/assemblyscript-api) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/assemblyscript-api#json-api) to allow developers to easily process these logs. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph diff --git a/website/pages/nl/cookbook/subgraph-uncrashable.mdx b/website/pages/nl/cookbook/subgraph-uncrashable.mdx index 989310a3f9a0..0cc91a0fa2c3 100644 --- a/website/pages/nl/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/nl/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Safe Subgraph Code Generator - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. 
This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. These logs can be viewed in the The Graph's hosted service under the 'Logs' section. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. diff --git a/website/pages/nl/cookbook/upgrading-a-subgraph.mdx b/website/pages/nl/cookbook/upgrading-a-subgraph.mdx index 5502b16d9288..a546f02c0800 100644 --- a/website/pages/nl/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/nl/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ Make sure **Update Subgraph Details in Explorer** is checked and click on **Save ## Deprecating a Subgraph on The Graph Network -Follow the steps [here](/managing/deprecating-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## Querying a Subgraph + Billing on The Graph Network diff --git a/website/pages/nl/deploying/multiple-networks.mdx b/website/pages/nl/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..dc2b8e533430 --- /dev/null +++ b/website/pages/nl/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## Deploying the subgraph to multiple networks + +In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... 
+} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +This is what your networks config file should look like: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Now we can run one of the following commands: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Using subgraph.yaml template + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +and + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... 
+ "mustache": "^3.1.0" + } +} +``` + +To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Sepolia: +yarn prepare:sepolia && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. + +## Subgraph Studio subgraph archive policy + +A subgraph version in Studio is archived if and only if it meets the following criteria: + +- The version is not published to the network (or pending publish) +- The version was created 45 or more days ago +- The subgraph hasn't been queried in 30 days + +In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. + +Every subgraph affected with this policy has an option to bring the version in question back. + +## Checking subgraph health + +If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/nl/developing/creating-a-subgraph.mdx b/website/pages/nl/developing/creating-a-subgraph.mdx index b4a2f306d8ed..2a97c2f051a0 100644 --- a/website/pages/nl/developing/creating-a-subgraph.mdx +++ b/website/pages/nl/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: Creating a Subgraph --- -A subgraph extracts data from a blockchain, processing it and storing it so that it can be easily queried via GraphQL. 
+This detailed guide provides instructions to successfully create a subgraph. -![Defining a Subgraph](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -The subgraph definition consists of a few files: +![Defining a Subgraph](/img/defining-a-subgraph.png) -- `subgraph.yaml`: a YAML file containing the subgraph manifest +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: a GraphQL schema that defines what data is stored for your subgraph, and how to query it via GraphQL +## Getting Started -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. `mapping.ts` in this tutorial) +### Install the Graph CLI -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## Install the Graph CLI +On your local machine, run one of the following commands: -The Graph CLI is written in JavaScript, and you will need to install either `yarn` or `npm` to use it; it is assumed that you have yarn in what follows. +#### Using [npm](https://www.npmjs.com/) -Once you have `yarn`, install the Graph CLI by running +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Install with yarn:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. 
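+
+As an optional sanity check, you can confirm that the CLI is available before moving on. This is a minimal sketch assuming a standard global install; the exact flags and output depend on the CLI version you installed:
+
+```bash
+# Print the installed Graph CLI version
+graph --version
+
+# List the available commands (init, codegen, build, deploy, ...)
+graph --help
+```
+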
+## Create a subgraph -## From An Existing Contract +### From an existing contract -The following command creates a subgraph that indexes all events of an existing contract. It attempts to fetch the contract ABI from Etherscan and falls back to requesting a local file path. If any of the optional arguments are missing, it takes you through an interactive form. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -The `` is the ID of your subgraph in Subgraph Studio, it can be found on your subgraph details page. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## From An Example Subgraph +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Add New dataSources To An Existing Subgraph +## Add new `dataSources` to an existing subgraph -Since `v0.31.0` the `graph-cli` supports adding new dataSources to an existing subgraph through the `graph add` command. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
    [] @@ -78,22 +88,45 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -The `add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option), and will create a new `dataSource` in the same way that `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: + + - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- The contract `address` will be written to the `networks.json` for the relevant network. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: +## Components of a subgraph -- If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. -- If `false`: a new entity & event handler should be created with `${dataSourceName}{EventName}`. +### The Subgraph Manifest -The contract `address` will be written to the `networks.json` for the relevant network. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **Note:** When using the interactive cli, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. +The **subgraph definition** consists of the following files: -## The Subgraph Manifest +- `subgraph.yaml`: Contains the subgraph manifest -The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -For the example subgraph, `subgraph.yaml` is: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ A single subgraph can index data from multiple smart contracts. Add an entry for The triggers for a data source within a block are ordered using the following process: -1. Event and call triggers are first ordered by transaction index within the block. -2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. -3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. +1. Event and call triggers are first ordered by transaction index within the block. +2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. +3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. These ordering rules are subject to change. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. 
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| Version | Release notes | +|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### Getting The ABIs @@ -442,16 +475,16 @@ For some entity types the `id` is constructed from the id's of two other entitie We support the following scalars in our GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. 
| -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| Type | Description | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enums @@ -593,7 +626,7 @@ This more elaborate way of storing many-to-many relationships will result in les #### Adding comments to the schema -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. -> +> > If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. ### Data Source Context @@ -930,7 +963,7 @@ dataSources: ``` > **Note:** The contract creation block can be quickly looked up on Etherscan: -> +> > 1. Search for the contract by entering its address in the search bar. > 2. Click on the creation transaction hash in the `Contract Creator` section. > 3. Load the transaction details page where you'll find the start block for that contract. @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. 
`"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. ``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### Create a new handler to process files -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). The CID of the file as a readable string can be accessed via the `dataSource` as follows: diff --git a/website/pages/nl/developing/developer-faqs.mdx b/website/pages/nl/developing/developer-faqs.mdx index b9cd3035c35e..0f6f35271330 100644 --- a/website/pages/nl/developing/developer-faqs.mdx +++ b/website/pages/nl/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: Ontwikkelaar FAQs --- -## 1. What is a subgraph? +This page summarizes some of the most common questions for developers building on The Graph. -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using the Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available to be queried by subgraph consumers. +## Subgraph Related -## 2. Can I delete my subgraph? +### 1. What is a subgraph? -It is not possible to delete subgraphs once they are created. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Can I change my subgraph name? +### 2. What is the first step to create a subgraph? -No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. +To successfully create a subgraph, you will need to install The Graph CLI. 
Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Can I change the GitHub account associated with my subgraph? +### 3. Can I still create a subgraph if my smart contracts don't have events? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. Am I still able to create a subgraph if my smart contracts don't have events? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are by far the fastest way to retrieve useful data. +### 4. Can I change the GitHub account associated with my subgraph? -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended, as performance will be significantly slower. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. How do I update a subgraph on mainnet? -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. How are templates different from data sources? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) upfront you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. 
+ +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates). -## 8. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -You can run the following command: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. How do I call a contract function or access a public state variable from my subgraph mappings? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +You can run the following command: -## 11. I want to contribute or add a GitHub issue. Where can I find the open source repositories? 
+```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -## 13. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 15. Can I delete my subgraph? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +You can find the list of the supported networks [here](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Yes. You can do this by importing `graph-ts` as per the example below: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Can I import ethers.js or other JS libraries into my subgraph mappings? - -Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +## Indexing & Querying Related -## 17. Is it possible to specify what block to start indexing on? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? 
My subgraph is taking a very long time to sync -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -102,19 +121,7 @@ Yes! Try the following command, substituting "organization/subgraphName" with th curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. What networks are supported by The Graph? - -You can find the list of the supported networks [here](/developing/supported-networks). - -## 21. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? - -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. - -## 22. Is this possible to use Apollo Federation on top of graph-node? - -Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. - -## 23. Is there a limit to how many objects The Graph can return per query? +### 22. Is there a limit to how many objects The Graph can return per query? By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: @@ -122,24 +129,19 @@ By default, query responses are limited to 100 items per collection. If you want someCollection(first: 1000, skip: ) { ... } ``` -## 24. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 25. Where do I go to find my current subgraph on the hosted service? - -Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? - -The Graph will never charge for the hosted service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. 
The hosted service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. - -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/nl/developing/graph-ts/api.mdx b/website/pages/nl/developing/graph-ts/api.mdx index 46442dfa941e..8fc1f4b48b61 100644 --- a/website/pages/nl/developing/graph-ts/api.mdx +++ b/website/pages/nl/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). 
Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## API Reference @@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. -| Version | Release notes | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Release notes | +| :-----: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Loading entities from the store @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Looking up entities created withing a block As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a Transaction from some on-chain event, and a later handler wants to access this transaction if it exists. In the case where the transaction does not exist, the subgraph will have to go to the database just to find out that the entity does not exist; if the subgraph author already knows that the entity must have been created in the same block, using loadInBlock avoids this database roundtrip. For some subgraphs, these missed lookups can contribute significantly to the indexing time. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. 
+- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Any other contract that is part of the subgraph can be imported from the generat #### Handling Reverted Calls -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Encoding/Decoding ABI diff --git a/website/pages/nl/developing/supported-networks.mdx b/website/pages/nl/developing/supported-networks.mdx index bed07fc2da1c..bd39f933133c 100644 --- a/website/pages/nl/developing/supported-networks.mdx +++ b/website/pages/nl/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/nl/developing/unit-testing-framework.mdx b/website/pages/nl/developing/unit-testing-framework.mdx index f826a5ccb209..308135181ccb 100644 --- a/website/pages/nl/developing/unit-testing-framework.mdx +++ b/website/pages/nl/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ The log output includes the test run duration. Here's an example: > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -This means you have used `console.log` in your code, which is not supported by AssemblyScript. 
Please consider using the [Logging API](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version. diff --git a/website/pages/nl/glossary.mdx b/website/pages/nl/glossary.mdx index cd24a22fd4d5..2978ecce3561 100644 --- a/website/pages/nl/glossary.mdx +++ b/website/pages/nl/glossary.mdx @@ -10,11 +10,9 @@ title: Glossary - **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **Indexers**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. @@ -24,17 +22,17 @@ title: Glossary - **Indexer's Self Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. 
+- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegators**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curators**: Network participants that identify high-quality subgraphs, and “curate” them (i.e., signal GRT on them) in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. Indexers earn indexing rewards proportional to the signal on a subgraph. We see a correlation between the amount of GRT signalled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. -- **Subgraph Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a subgraph. - **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. @@ -46,11 +44,11 @@ title: Glossary 1. **Active**: An allocation is considered active when it is created on-chain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. 
- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: Glossary - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. 
It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. @@ -78,10 +76,6 @@ title: Glossary - **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. -- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network. - -- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. diff --git a/website/pages/nl/index.json b/website/pages/nl/index.json index fb0e73883694..29aad7f6c047 100644 --- a/website/pages/nl/index.json +++ b/website/pages/nl/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "Creëer een Subgraph", "description": "Gebruik Studio om een subgraph te bouwen" - }, - "migrateFromHostedService": { - "title": "Upgrade from the hosted service", - "description": "Upgrading subgraphs to The Graph Network" } }, "networkRoles": { diff --git a/website/pages/nl/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/nl/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..6bdd183f72d5 --- /dev/null +++ b/website/pages/nl/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## Transferring ownership of a subgraph + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. + +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-addres +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2. 
Choose the address that you would like to transfer the subgraph to: + + ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) + +Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: + +![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) + +## Deprecating a subgraph + +Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. + +### Step-by-Step + +To deprecate your subgraph, do the following: + +1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). +2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. +3. Your subgraph will no longer appear in searches on Graph Explorer. + +**Please note the following:** + +- The owner's wallet should call the `deprecateSubgraph` function. +- Curators will not be able to signal on the subgraph anymore. +- Curators that already signaled on the subgraph can withdraw their signal at an average share price. +- Deprecated subgraphs will show an error message. + +> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/nl/mips-faqs.mdx b/website/pages/nl/mips-faqs.mdx index ae460989f96e..1f7553923765 100644 --- a/website/pages/nl/mips-faqs.mdx +++ b/website/pages/nl/mips-faqs.mdx @@ -6,10 +6,6 @@ title: MIPs FAQs > Note: the MIPs program is closed as of May 2023. Thank you to all the Indexers who participated! -It's an exciting time to be participating in The Graph ecosystem! During [Graph Day 2022](https://thegraph.com/graph-day/2022/) Yaniv Tal announced the [sunsetting of the hosted service](https://thegraph.com/blog/sunsetting-hosted-service/), a moment The Graph ecosystem has been working towards for many years. - -To support the sunsetting of the hosted service and the migration of all of it's activity to the decentralized network, The Graph Foundation has announced the [Migration Infrastructure Providers (MIPs) program](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program). - The MIPs program is an incentivization program for Indexers to support them with resources to index chains beyond Ethereum mainnet and help The Graph protocol expand the decentralized network into a multi-chain infrastructure layer. The MIPs program has allocated 0.75% of the GRT supply (75M GRT), with 0.5% to reward Indexers who contribute to bootstrapping the network and 0.25% allocated to Network Grants for subgraph developers using multi-chain subgraphs. diff --git a/website/pages/nl/network/benefits.mdx b/website/pages/nl/network/benefits.mdx index b41bc69c223e..fe5b0c8749cc 100644 --- a/website/pages/nl/network/benefits.mdx +++ b/website/pages/nl/network/benefits.mdx @@ -5,7 +5,7 @@ socialImage: https://thegraph.com/docs/img/seo/benefits.jpg Het gedecentraliseerde netwerk van The Graph is ontworpen en verfijnd om een robuuste ervaring te creëren bij het indexeren en opvragen van data. Het netwerk wordt iedere dag sterker door de duizenden bijdragers wereldwijd. -De voordelen van dit gedecentraliseerde protocol is dat het niet gerepliceerd kan worden door een `graph-node` lokaal te laten werken. Het Graph Netwerk is betrouwbaarder, efficiënter en goedkoper. +De voordelen van dit gedecentraliseerde protocol is dat het niet gerepliceerd kan worden door een `graph-node` lokaal te laten werken. 
Het Graph Netwerk is betrouwbaarder, efficiënter en goedkoper. Hier is een analyse: @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Kostenvergelijking | Zelf hosten | De Graph Netwerk | -| :-: | :-: | :-: | -| Maandelijkse serverkosten | $350 per maand | $0 | -| Querykosten | $0+ | $0 per month | -| Onderhoud tijd | $400 per maand | Geen, deze kosten worden opgevangen door het wereldwijd gedistribueerde netwerk van indexeerders | -| Aantal queries per maand | Beperkt tot infrastructuurcapaciteiten | 100,000 (Free Plan) | -| Kosten per query | $0 | $0 | -| Infrastructuur | Gecentraliseerd | Gedecentraliseerd | -| Geografische redundantie | $750+ per extra node | Inbegrepen | -| Uptime | Wisselend | 99,9%+ | -| Totale maandelijkse kosten | $750+ | $0 | +| Kostenvergelijking | Zelf hosten | De Graph Netwerk | +|:--------------------------:|:---------------------------------------:|:------------------------------------------------------------------------------------------------:| +| Maandelijkse serverkosten | $350 per maand | $0 | +| Querykosten | $0+ | $0 per month | +| Onderhoud tijd | $400 per maand | Geen, deze kosten worden opgevangen door het wereldwijd gedistribueerde netwerk van indexeerders | +| Aantal queries per maand | Beperkt tot infrastructuurcapaciteiten | 100,000 (Free Plan) | +| Kosten per query | $0 | $0 | +| Infrastructuur | Gecentraliseerd | Gedecentraliseerd | +| Geografische redundantie | $750+ per extra node | Inbegrepen | +| Uptime | Wisselend | 99,9%+ | +| Totale maandelijkse kosten | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Kostenvergelijking | Zelf hosten | De Graph Netwerk | -| :-: | :-: | :-: | -| Maandelijkse serverkosten | $350 per maand | $0 | -| Querykosten | $500 per maand | $120 per month | -| Onderhoud | $800 per maand | Geen, deze kosten worden opgevangen door het wereldwijd gedistribueerde netwerk van indexeerders | -| Aantal queries per maand | Beperkt tot infrastructuurcapaciteiten | ~3,000,000 | -| Kosten per query | $0 | $0.00004 | -| Infrastructuur | Gecentraliseerd | Gedecentraliseerd | -| Technische personeelskosten | $200 per uur | Inbegrepen | -| Geografische redundantie | $1200 totale kosten per extra node | Inbegrepen | -| Uptime | Wisselend | 99,9%+ | -| Totale maandelijkse kosten | $1650+ | $120 | +| Kostenvergelijking | Zelf hosten | De Graph Netwerk | +|:---------------------------:|:-----------------------------------------:|:------------------------------------------------------------------------------------------------:| +| Maandelijkse serverkosten | $350 per maand | $0 | +| Querykosten | $500 per maand | $120 per month | +| Onderhoud | $800 per maand | Geen, deze kosten worden opgevangen door het wereldwijd gedistribueerde netwerk van indexeerders | +| Aantal queries per maand | Beperkt tot infrastructuurcapaciteiten | ~3,000,000 | +| Kosten per query | $0 | $0.00004 | +| Infrastructuur | Gecentraliseerd | Gedecentraliseerd | +| Technische personeelskosten | $200 per uur | Inbegrepen | +| Geografische redundantie | $1200 totale kosten per extra node | Inbegrepen | +| Uptime | Wisselend | 99,9%+ | +| Totale maandelijkse kosten | $1650+ | $120 | ## High Volume User (~30M queries per month) -| Kostenvergelijking | Zelf hosten | De Graph Netwerk | -| :-: | :-: | :-: | -| Maandelijkse serverkosten | $1100 per maand, per node | $0 | -| Querykosten | $4000 | $1,200 per month | -| Aantal benodigde nodes | 10 | 
Niet van toepassing | -| Onderhoud | $6000 of meer per maand | Geen, deze kosten worden opgevangen door het wereldwijd gedistribueerde netwerk van indexeerders | -| Aantal queries per maand | Beperkt tot infrastructuurcapaciteiten | ~30,000,000 | -| Kosten per query | $0 | $0.00004 | -| Infrastructuur | Gecentraliseerd | Gedecentraliseerd | -| Geografische redundantie | $1200 in totale kosten per extra node | Inbegrepen | -| Uptime | Wisselend | 99,9%+ | -| Totale maandelijkse kosten | $11000+ | $1,200 | +| Kostenvergelijking | Zelf hosten | De Graph Netwerk | +|:--------------------------:|:------------------------------------------:|:------------------------------------------------------------------------------------------------:| +| Maandelijkse serverkosten | $1100 per maand, per node | $0 | +| Querykosten | $4000 | $1,200 per month | +| Aantal benodigde nodes | 10 | Niet van toepassing | +| Onderhoud | $6000 of meer per maand | Geen, deze kosten worden opgevangen door het wereldwijd gedistribueerde netwerk van indexeerders | +| Aantal queries per maand | Beperkt tot infrastructuurcapaciteiten | ~30,000,000 | +| Kosten per query | $0 | $0.00004 | +| Infrastructuur | Gecentraliseerd | Gedecentraliseerd | +| Geografische redundantie | $1200 in totale kosten per extra node | Inbegrepen | +| Uptime | Wisselend | 99,9%+ | +| Totale maandelijkse kosten | $11000+ | $1,200 | \*inclusief kosten voor een back-up: $50-$100 per maand diff --git a/website/pages/nl/network/curating.mdx b/website/pages/nl/network/curating.mdx index 870fc3f4e54e..a985be3fe49a 100644 --- a/website/pages/nl/network/curating.mdx +++ b/website/pages/nl/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. 
If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Signaleren voor een specifieke versie is vooral handig wanneer één subgraph do Automatische migratie van je signalering naar de nieuwste subgraphversie kan waardevol zijn om ervoor te zorgen dat je querykosten blijft ontvangen. Elke keer dat je signaleert, wordt een curatiebelasting van 1% in rekening gebracht. Je betaalt ook een curatiebelasting van 0,5% bij elke migratie. Subgraphontwikkelaars worden ontmoedigd om vaak nieuwe versies te publiceren - ze moeten een curatiebelasting van 0,5% betalen voor alle automatisch gemigreerde curatie-aandelen. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Risico's 1. De querymarkt is nog jong bij het Graph Netwerk en er bestaat een risico dat je %APY lager kan zijn dan je verwacht door opkomende marktdynamiek. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. Een subgraph kan stuk gaan door een bug. Een subgraph die stuk is gegenereerd geen querykosten. Als gevolg hiervan moet je wachten tot de ontwikkelaar de bug repareert en een nieuwe versie implementeert. - Als je bent geabonneerd op de nieuwste versie van een subgraph, worden je curatieaandelen automatisch gemigreerd naar die nieuwe versie. 
Er is een curatiebelasting van 0,5%. @@ -65,7 +65,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th Het vinden van subgraphs van hoge kwaliteit is een complexe taak, maar het kan op vele verschillende manieren worden benadert. Als curator wil je op zoek gaan naar betrouwbare subgraphs die veel queries genereren. Een betrouwbare subgraph kan waardevol zijn als deze compleet, nauwkeurig is en voldoet aan de gegevensbehoeften van een dApp. Een slecht geconstrueerde subgrafiek moet mogelijk worden herzien of opnieuw worden gepubliceerd en kan ook stuk gaan. Het is essentieel voor curatoren om de architectuur of code van een subgraph te beoordelen om te bepalen of een subgraph waardevol is. Als gevolg daarvan kunnen curatoren: -- Hun begrip van een netwerk gebruiken om te proberen voorspellen hoe een individuele subgraph in de toekomst mogelijk een hoger of lager queryvolume zal genereren +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### Wat zijn de kosten voor het updaten van een subgraph? @@ -78,50 +78,14 @@ Het wordt aanbevolen om je subgraphs niet te vaak bij te werken. Zie de bovensta ### Kan ik mijn curatieaandelen verkopen? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. 
- -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Bonding Curve 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Prijs per aandeel](/img/price-per-share.png) - -Hierdoor neemt de prijs lineair toe, wat betekent dat het in de loop van de tijd duurder wordt om een curatie-aandeel te kopen. Voor een voorbeeld van wat we precies bedoelen, zie de bonding curve hieronder: - -![Bonding Curve](/img/bonding-curve.png) - -Stel dat we twee curatoren hebben die curatie-aandelen voor een subgraph maken: - -- Curator A is de eerste die signaleert op de subgraph. Door 120.000 GRT aan de curve toe te voegen, kan hij 2000 aandelen maken. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Aangezien beide curatoren de helft van het totale aantal curatie-aandelen hebben, zullen ze een gelijke hoeveelheid curator royalties ontvangen. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- De overgebleven curator zou nu alle curator royalties voor die subgraph ontvangen. Als ze hun aandelen zouden "burnen" om GRT op te nemen, zouden ze 120.000 GRT ontvangen. -- **Samenvattend:** De GRT-waardering van curatie-aandelen wordt bepaald door de bonding curve en kan volatiel zijn. Er is een mogelijkheid om grote verliezen te leiden. Vroeg signaleren betekent dat je minder GRT inlegt voor elk aandeel. Als gevolg hiervan verdien je meer curator royalties per GRT dan latere curatoren voor dezelfde subgraph. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -In het geval van The Graph wordt gebruik gemaakt van de implementatie van een [bonding curve-formule van Bancor.](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA). - Nog in de war? Bekijk onze Curatie videogids hieronder: diff --git a/website/pages/nl/network/delegating.mdx b/website/pages/nl/network/delegating.mdx index 80f125053f68..940cec6d92ae 100644 --- a/website/pages/nl/network/delegating.mdx +++ b/website/pages/nl/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegeren --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. 
-Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Delegeerder Gids -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,64 +34,84 @@ Hieronder staan de belangrijkste risico's van het zijn van een Delegeerder in he Delegeerders kunnen niet worden gestraft voor slecht gedrag, maar er is een belasting voor Delegeerders om slechte besluitvorming die de integriteit van het netwerk kan schaden te ontmoedigen. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### Delegatieontbindingsperiode Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
    - ![Delegation unbonding](/img/Delegation-Unbonding.png) _Let op de 0,5% belasting in de Delegatie UI, evenals de - 28-daagse ontbindingsperiode_ + ![Delegation unbonding](/img/Delegation-Unbonding.png) _Let op de 0,5% belasting in de Delegatie UI, evenals de 28-daagse ontbindingsperiode_
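The guidance above suggests working out how long it takes to earn back the 0.5% delegation tax before the 28-day unbonding period becomes relevant. The sketch below only illustrates that arithmetic: the annual reward rate and the Indexer's reward cut are assumed, illustrative inputs, not values defined by the protocol.

```typescript
// Rough sketch: estimate how many days it takes to earn back the 0.5% delegation tax.
// The assumed annual reward rate is illustrative only — actual returns depend on the
// Indexer's parameters, allocations, and network conditions.

const DELEGATION_TAX = 0.005; // 0.5% tax, burned when you delegate

function daysToBreakEven(
  delegatedGrt: number,
  assumedAnnualRewardRate: number, // e.g. 0.08 for an assumed 8% — not a protocol guarantee
  indexingRewardCut: number        // portion the Indexer keeps; 0.8 means the Delegator receives 20%
): number {
  const taxPaid = delegatedGrt * DELEGATION_TAX;          // 1,000 GRT delegated burns 5 GRT
  const netDelegated = delegatedGrt - taxPaid;            // stake that actually earns rewards
  const delegatorShare = 1 - indexingRewardCut;
  const rewardsPerDay =
    (netDelegated * assumedAnnualRewardRate * delegatorShare) / 365;
  return taxPaid / rewardsPerDay;
}

// With these assumed inputs, this prints the approximate number of days
// needed to earn back the 5 GRT burned when delegating 1,000 GRT.
console.log(daysToBreakEven(1_000, 0.08, 0.8).toFixed(1));
```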
    ### Het kiezen van een betrouwbare Indexeerder met een eerlijke beloningsuitbetaling voor Delegeerders -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
    - ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *De bovenste Indexeerder geeft Delegeerders 90% van de - beloningen. De middelste geeft Delegeerders 20%. De onderste geeft Delegeerders ~83%.* + ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *De bovenste Indexeerder geeft Delegeerders 90% van de beloningen. De middelste geeft Delegeerders 20%. De onderste geeft Delegeerders ~83%.*
    -- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant. +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. + +As you can see, in order to choose the right Indexer, you must consider multiple things. -As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. +- It is highly recommend that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently. +- Many Indexers are very active in Discord and will be happy to answer your questions. +- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. -### Het berekenen van de verwachte opbrengst van Delegeerders +## Calculating Delegators Expected Return -A Delegator must consider a lot of factors when determining the return. These include: +A Delegator must consider the following factors to determine a return: -- Een technische delegeerder kan ook kijken of een Indexeerder de aan hen beschikbaar gestelde gedelegeerde tokens wel volledig gebruikt. Als een Indexeerder niet alle beschikbare tokens toewijst, verdienen zij niet de maximale winst die mogelijk is voor zichzelf of hun Delegeerders. -- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Consider an Indexer's ability to use the Delegated tokens available to them. + - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Pay attention to the first few days of delegating. + - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low. ### Rekening houden met de query fee aandeel en indexing fee aandeel -Zoals beschreven in de bovenstaande secties, moet je een Indexeerder kiezen die transparant en eerlijk is over het instellen van hun Query Fee Aandeel en Indexing Fee Aandeel. Een Delegeerder moet ook kijken naar de parameter "Cooldown Time" om te zien hoeveel van een tijdsbuffer ze hebben. Nadat dat is gedaan, is het vrij eenvoudig om de hoeveelheid beloningen te berekenen die Delegators ontvangen. 
De formule is: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Delegatie Figuur 3](/img/Delegation-Reward-Formula.png) ### Rekening houden met de delegatie pool van een Indexeerder -Een andere factor waarmee een Delegeerder rekening moet houden, is welk deel van de Delegatiepool zij bezitten. Alle beloningen uit delegatie worden gelijk verdeeld, overeenkomstig een eenvoudige herverdeling van de pool, die bepaald wordt door de hoeveelheid die de Delegeerder in de pool heeft gestort. Dit geeft de Delegeerder een aandeel in de pool: +Delegators should consider the proportion of the Delegation Pool they own. -![Deel Formule](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Deel Formule](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Rekening houden met de delegatiecapaciteit -Een andere factor om te overwegen is de delegatiecapaciteit. Momenteel is de Delegatie Ratio ingesteld op 16. Dit betekent dat als een Indexeerder 1.000.000 GRT heeft gestaked, hun Delegatiecapaciteit 16.000.000 GRT aan gedelegeerde tokens is die ze kunnen gebruiken in het protocol. Elke gedelegeerde token boven dit bedrag zal voor alle Delegeerders de beloningen verwateren. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -89,16 +119,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### MetaMask "Pending Transaction" Bug -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? 
+ +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. + +#### Example -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. -For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will only be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## Videohandleiding voor de netwerk-gebruikersomgeving +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/nl/network/developing.mdx b/website/pages/nl/network/developing.mdx index 351296bb6d9d..20c665bdf750 100644 --- a/website/pages/nl/network/developing.mdx +++ b/website/pages/nl/network/developing.mdx @@ -2,52 +2,88 @@ title: Ontwikkelen --- -Ontwikkelaars zijn de vraagzijde van The Graph ecosysteem. Ontwikkelaars bouwen subgraphs en publiceren deze naar het Graph Netwerk. Vervolgens sturen ze query's naar de subgraphs met GraphQL om hun applicaties aan te sturen. +To start coding right away, go to [Developer Quick Start](/quick-start/). + +## Overview + +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. + +On The Graph, you can: + +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. + +### What is GraphQL? + +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. 
The Graph uses GraphQL to query subgraphs. + +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? + +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `subgraph.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). +- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). ## Subgraph Levenscyclus -Subgraphs op het netwerk hebben een gedefinieerde levenscyclus. +Here is a general overview of a subgraph’s lifecycle: -### Lokaal Bouwen +![Subgraph Lifecycle](/img/subgraph-lifecycle.png) -Net als bij alle subgraph-ontwikkeling, begint het met lokaal ontwikkelen en testen. Ontwikkelaars kunnen dezelfde lokale setup gebruiken, ongeacht of ze bouwen voor het Graph-netwerk, de hosted service of een lokale Graph Node, waarbij ze `graph-cli` en `graph-ts` gebruiken om hun subgraph te bouwen. Ontwikkelaars wordt aangemoedigd om tools zoals [Matchstick](https://github.com/LimeChain/matchstick) te gebruiken voor unit testing om de robuustheid van hun subgraphs te verbeteren. +### Lokaal Bouwen -> Er zijn bepaalde beperkingen op The Graph-netwerk, qua functies en netwerkondersteuning. Alleen subgraphs op [ondersteunde netwerken](/developing/supported-networks) komen in aanmerking voor indexeringsbeloningen, en subgraphs die data ophalen uit IPFS komen niet in aanmerking. +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Deploy to Subgraph Studio -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. - -### Publiceren op het netwerk +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -Wanneer de ontwikkelaars tevreden zijn met hun subgraph, kunnen ze deze publiceren op The Graph-netwerk. Dit is een on-chain actie, die de subgraph registreert zodat deze door Indexers kan worden ontdekt. Gepubliceerde subgraphs hebben een bijbehorende NFT, die gemakkelijk overdraagbaar is. 
De gepubliceerde subgraph heeft bijbehorende metadata, die andere netwerkdeelnemers context en informatie bieden. +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. -### Signaal Geven om Indexering te Stimuleren +### Publiceren op het netwerk -Gepubliceerde subgraphs worden waarschijnlijk niet opgepikt door Indexers zonder toevoeging van signaal. Signaal is vergrendelde GRT die is gekoppeld aan een gegeven subgraph die aan Indexeerders aangeeft dat een bepaalde subgraph queryvolume zal ontvangen, dit draagt ook bij aan de indexeringsbeloningen die beschikbaar zijn voor het verwerken ervan. Subgraph-ontwikkelaars zullen over het algemeen signaal toevoegen aan hun eigen subgraph, om indexering aan te moedigen. Derde partij Curatoren kunnen ook signaal geven op een bepaalde subgraph, als zij van mening zijn dat de subgraph waarschijnlijk queryvolume zal genereren. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Query's versturen & Applicatieontwikkeling +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -Zodra een subgraph is verwerkt door Indexers en beschikbaar is voor bevraging, kunnen ontwikkelaars de subgraph in hun applicaties gaan gebruiken. Ontwikkelaars bevragen subgraphs via een gateway, die hun queries doorstuurt naar een Indexer die de subgraph heeft verwerkt, en betalen querykosten in GRT. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Subgraphs Bijwerken +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. 
+### Query's versturen & Applicatieontwikkeling -Zodra de Subgraph-ontwikkelaar klaar is om bij te werken, kunnen ze een transactie starten om hun subgraph naar de nieuwe versie te wijzen. Het bijwerken van de subgraph migreert elk signaal naar de nieuwe versie (ervan uitgaande dat de gebruiker die het signaal heeft toegepast, "auto-migrate" heeft geselecteerd), wat ook een migratiebelasting met zich meebrengt. Deze signaalmigratie zou Indexers moeten aanzetten om de nieuwe versie van de subgraph te gaan indexeren, dus deze zou snel beschikbaar moeten zijn voor queries. +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Subgraphs Uitfaseren +Learn more about [querying subgraphs](/querying/querying-the-graph/). -Op een gegeven moment kan een ontwikkelaar besluiten dat ze een gepubliceerde subgraph niet langer nodig hebben. Op dat moment kunnen ze de subgraph uitfaseren, wat eventuele gesignaleerde GRT aan de Curatoren retourneert. +### Subgraphs Bijwerken -### Diverse Ontwikkelaarsrollen +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Sommige ontwikkelaars zullen zich bezighouden met de volledige subgraph-levenscyclus op het netwerk, publiceren, bevragen en itereren op hun eigen subgraphs. Sommigen kunnen zich richten op subgraph-ontwikkeling, het bouwen van open API's waar anderen op kunnen bouwen. Sommigen kunnen zich richten op applicatieontwikkeling, en bevragen subgraphs die door anderen zijn geïmplementeerd. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Ontwikkelaars en Netwerkeconomie +### Deprecating & Transferring Subgraphs -Ontwikkelaars zijn een belangrijke economische speler in het netwerk die GRT vastzetten om indexering te stimuleren en cruciaal zijn voor het bevragen van subgraphs, wat de primaire waarde-uitwisseling van het netwerk is. Subgraph-ontwikkelaars verbranden ook GRT wanneer een subgraph wordt bijgewerkt. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/nl/network/explorer.mdx b/website/pages/nl/network/explorer.mdx index e372bdaa81d3..cb2be6378d2a 100644 --- a/website/pages/nl/network/explorer.mdx +++ b/website/pages/nl/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. 
+ +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgraphs -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. +After you just finish deploying and publishing your subgraph in Subgraph Studio, click on the "subgraphs tab” at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![Explorer Afbeelding 1](/img/Subgraphs-Explorer-Landing.png) -Wanneer u op een subgraph klikt, kunt u query's testen in de playground en netwerkdetails gebruiken om geïnformeerde beslissingen te nemen. U kunt ook GRT signaleren op uw eigen subgraph of de subgraph van anderen om indexeerders bewust te maken van het belang en de kwaliteit ervan. Dit is cruciaal, want het signaleren op een subgraph stimuleert het om geïndexeerd te worden, wat betekent dat het op het netwerk zal verschijnen om uiteindelijk query's te serveren. +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![Explorer Afbeelding 2](/img/Subgraph-Details.png) -Op elke pagina toegewijd aan een subgraph, worden verschillende details weergegeven. Bijvoorbeeld: +On each subgraph’s dedicated page, you can do the following: - Het toevoegen/weghalen van signaal op een subgraph - Details zoals grafieken, huidige implementatie-ID en andere metadata @@ -31,26 +45,32 @@ Op elke pagina toegewijd aan een subgraph, worden verschillende details weergege ## Deelnemers -Binnen dit tabblad krijgt u een vogelvlucht van alle mensen die deelnemen aan de netwerkactiviteiten, zoals Indexeerders, Delegeerders en Curatoren. Hieronder zullen we dieper ingaan op wat elk tabblad voor u betekent. +This section provides a bird' s-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. Indexers ![Explorer Afbeelding 4](/img/Indexer-Pane.png) -Laten we beginnen met de Indexers. Indexers zijn de ruggengraat van het protocol, zij zijn degenen die Grt staken op subgraphs, deze indexeren en query's leveren aan iedereen die de subgraphs gebruikt. In de tabel van de Indexers kunt u de delegatieparameters van een Indexer, hun inzet, hoeveel stake ze op elke subgraph en hoeveel inkomsten ze hebben gegenereerd uit queryvergoedingen en indexeringsbeloningen zien. Meer details hieronder: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. 
-- Query Fee Cut - het % van de queryvergoedingen dat de Indexer behoudt na het delen met Delegeerders -- Effective Reward Aandeel - de indexeringsbeloning die wordt toegepast op de Delegatie pool. Als dit negatief is, betekent dit dat de Indexeerder een deel van hun beloningen weggeeft. Als dit positief is, betekent dit dat de Indexeerder een deel van hun beloningen behoudt -- Cooldown Remaining - de resterende tijd tot de Indexeerder de bovenstaande delegatieparameters kan wijzigen. Cooldown-perioden worden ingesteld door Indexeerders wanneer ze hun delegatieparameters bijwerken -- Owned - Dit is de eigen stake van de Indexer, die kan worden verlaagd vanwege kwaadwillig of incorrect gedrag -- Delegated - Dit is de stake van de Delegeerders die door de Indexeerders kunnen worden gebruikt, maar niet kunnen worden afgenomen bij kwaadwillig of incorrect gedrag van de Indexeerder -- Allocated - Stake die Indexers actief alloceren aan de subgraphs die ze indexeren -- Available Delegation Capacity - het bedrag van gedelegeerde inzet dat de Indexeerders nog kunnen ontvangen voordat ze overgedelegeerd worden +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - het maximale bedrag aan gedelegeerde inzet dat de Indexer productief kan accepteren. Een teveel aan gedelegeerde inzet kan niet worden gebruikt voor allocaties of beloningsberekeningen. -- Query Fees - dit zijn de totale kosten die eindgebruikers hebben betaald voor query's van een Indexer +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Indexer Rewards - dit zijn de totale indexerbeloningen die door de Indexer en hun Delegeerders zijn verdiend. Indexerbeloningen worden uitbetaald via de uitgifte van GRT. -Indexeerders kunnen zowel queryvergoedingen als indexeringsbeloningen verdienen. Functioneel gebeurt dit wanneer netwerkdeelnemers GRT aan een Indexeerder delegeren. Hierdoor kunnen Indexeerders queryvergoedingen en beloningen ontvangen, afhankelijk van hun Indexeerder parameters. Indexeerparameters worden ingesteld door aan de rechterkant van de tabel te klikken, of door naar het profiel van een Indexeerder te gaan en op de knop "Delegate" te klikken. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. 
+ +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. Om meer te weten te komen over hoe je een Indexer kunt worden, kun je een kijkje nemen in de [officiële documentatie](/network/indexing) of de Indexeerdersgidsen van [The Graph Academy](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ Om meer te weten te komen over hoe je een Indexer kunt worden, kun je een kijkje ### 2. Curatoren -Curatoren analyseren subgraphs om te identificeren welke subgraphs van de hoogste kwaliteit zijn. Zodra een Curator een potentieel aantrekkelijke subgraph heeft gevonden, kunnen ze deze cureren door te signaleren, met Grt, op de bonding curve. Door dit te doen, laten Curatoren Indexeerders weten welke subgraphs van hoge kwaliteit zijn en geïndexeerd moeten worden. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -Curatoren kunnen leden zijn van de community, dataconsumenten, of zelf subgraph-ontwikkelaars die op hun eigen subgraphs signaleren door GRT-tokens in een bonding curve te storten. Door GRT te storten, maken Curatoren curator-aandelen van een subgraph aan. Als gevolg hiervan komen Curatoren in aanmerking om een deel van de querykosten te verdienen die de subgraph waarop zijn hebben gesignaleerd gerenereert. De bonding curve stimuleert Curatoren om de hoogste kwaliteit data te cureren. De Curator-tabel in deze sectie laart je zien: +In the The Curator table listed below you can see: - De datum waarop de curator is begonnen met cureren - Het gestorte aantal GRT @@ -68,34 +92,36 @@ Curatoren kunnen leden zijn van de community, dataconsumenten, of zelf subgraph- ![Explorer Afbeelding 6](/img/Curation-Overview.png) -Als je meer wilt weten over de rol van de Curator, kun je de volgende websites bezoeken: [The Graph Academy](https://thegraph.academy/curators/) of de [officiële documentatie.](/network/curating) +If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. Delegators -Delegators spelen een belangrijke rol in het bewaken van de veiligheid en decentralisatie van het Graph Netwerk. Ze nemen deel aan het netwerk door hun GRT-tokens te delegeren aan één of meerdere Indexeerders. Zonder Delegators krijgen Indexeerders minder beloningen en vergoedingen. Daarom proberen Indexeerders Delegators aan te trekken door hen een deel van de indexeringsbeloningen en query-vergoedingen die zij verdienen aan te bieden. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. 
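+
+Before delegating, it can be useful to pull Indexers' published delegation parameters straight from The Graph Network subgraph and compare them side by side. The sketch below is illustrative only: the field names (`queryFeeCut`, `indexingRewardCut`, `stakedTokens`) are assumptions about the network subgraph's schema, so confirm them against the deployed schema before using the query.
+
+```graphql
+# Sketch: compare delegation parameters across Indexers.
+# Field names are assumptions - verify them in the network subgraph's schema.
+{
+  indexers(first: 10) {
+    id
+    queryFeeCut
+    indexingRewardCut
+    stakedTokens
+  }
+}
+```
+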
-Delegators kiezen op hun beurt Indexeerders op basis van verschillende variabelen, zoals eerdere prestaties, hoeveelheid indexeringsbeloningen en hoeveelheid query-vergoedingen. Reputatie binnen de community kan hierbij ook een rol spelen. Het is aanbevolen om contact te leggen met de inderxers via [de Graph Discord](https://discord.gg/graphprotocol) of [het Graph Forum](https://forum.thegraph.com/)!

+- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees.
+- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
+- Reputation within the community can also be a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)!

![Explorer Afbeelding 7](/img/Delegation-Overview.png)

-De tabel met Delegators geeft je een overzicht van de actieve Delegators in de community, het laat ook de volgende statistieken zien:
+In the Delegators table you can see the active Delegators in the community and important metrics:

- Het aantal Indexeerders waaraan een Delegator heeft gedelegeerd
- De oorspronkelijke delegatie van een Delegator
- De beloningen die ze hebben opgebouwd, maar nog niet hebben opgenomen uit het protocol
- De gerealiseerde beloningen die ze uit het protocol hebben opgenomen
- Het totale bedrag aan GRT dat ze momenteel in het protocol hebben
-- De datum waarop ze voor het laatst hebben gedelegeerd
+- The date they last delegated

-Wil je meer leren over hoe je een Delegator kunt worden? Zoek niet verder! Het enige wat je hoeft te doen is naar de [official documentatie](/network/delegating) of [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers) te gaan.
+If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).

## Netwerk

-In het "Netwerk" gedeelte ziet u de prestatie-indicatoren, evenals de mogelijkheid om te wisselen naar een per-epoch basis en netwerkmetrieken gedetailleerder te analyseren. Deze details geven u een beeld van hoe het netwerk presteert over een bepaalde tijd.
+In this section, you can see global KPIs, switch to a per-epoch view, and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.

### Overview

-The overview section has all the current network metrics as well as some cumulative metrics over time.
Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - De huidige totale hoeveelheid GRT in het netwerk - De verdeling van de GRT tussen Indexeerders en hun Delegators @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - Protocolparameters zoals curatiebeloningen, inflatiepercentage, en meer - Huidige epoch beloningen en kosten -Een paar belangrijke details die het vermelden waard zijn: +A few key details to note: -- **Querykosten vertegenwoordigen de kosten die door de consumenten worden gegenereerd** en kunnen (of niet) door de indexeerders worden geclaimd na een periode van ten minste 7 epochs (zie hieronder) nadat hun allocaties naar de subgraphs zijn gesloten en de data die zij hebben geleverd is gevalideerd door de consumenten. -- **Indexeringsbeloningen vertegenwoordigen de hoeveelheid beloningen die de Indexers hebben geclaimd van de netwerk uitgifte tijdens de epoch.** Hoewel de uitgifte van nieuwe Grt vaststaat, worden de beloningen pas gemunt nadat de Indexers hun allocaties naar de subgraphs die ze hebben geïndexeerd, sluiten. Daarom varieert het aantal beloningen per epoch (Bijvoorbeeld, tijdens sommige epochs, hebben Indexers mogelijk gezamenlijk allocaties gesloten die al vele dagen open stonden). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Afbeelding 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ In de Epochs afdeling kun je per epoch verschillende metrieken analyseren, zoals - De actieve epoch is degene waarin Indexers momenteel Grt toewijzen en querykosten verzamelen - De afhandelende epochs zijn die waarin de state channels worden afgehandeld. Dit betekent dat Indexers te maken kunnen krijgen met slashing als consumenten geschillen tegen hen openen. - De distribuerende epochs zijn de epochs waarin de state channels voor de epochs worden afgehandeld en Indexeerders hun querykostenkorting kunnen claimen. - - De afgeronde epochs zijn de epochs waarin geen querykostenkortingen meer te claimen zijn door de Indexeerders, waardoor ze als afgerond worden beschouwd. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![Explorer Afbeelding 9](/img/Epoch-Stats.png) ## Uw Gebruikersprofiel -Nu we het hebben gehad over de netwerkstatistieken, laten we verder gaan met uw persoonlijk profiel. Uw persoonlijk profiel is de plek waar u uw netwerkactiviteit kunt zien, ongeacht hoe u deelneemt aan het netwerk. Uw crypto wallet fungeert als uw gebruikersprofiel, en met het gebruikersdashboard kunt u het volgende zien: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. 
Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Profieloverzicht -Hier kunt u zien welke acties u onlangs hebt ondernomen. Dit is ook waar u uw profielinformatie, beschrijving, en website (als u er een hebt toegevoegd) kunt vinden. +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![Explorer Afbeelding 10](/img/Profile-Overview.png) ### Subgraph Tab -Als u op de Subgraphs tab klikt, ziet u uw gepubliceerde subgraphs. Dit bevat niet de subgraphs die geïmplementeerd zijn met de CLI voor testdoeleinden - subgraphs worden alleen weergegeven als ze zijn gepubliceerd op het gedecentraliseerde netwerk. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Afbeelding 11](/img/Subgraphs-Overview.png) ### Indexing Tab -Als u op de Indexing tab klikt, vindt u een tabel met alle actieve en historische allocaties op de subgraphs, evenals grafieken waarmee u uw eerder prestaties als Indexeerder kunt analyseren. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Dit gedeelte bevat ook details over uw netto indexeringsbeloningen en netto querykosten. U zult de volgende metrics zien: @@ -158,7 +189,9 @@ Dit gedeelte bevat ook details over uw netto indexeringsbeloningen en netto quer ### Delegating Tab -Delegators zijn belangrijk voor The Graph Network. Een Delegator moet hun kennis gebruiken om een Indexer te kiezen die een gezond rendement op beloningen zal bieden. Hier kunt u details vinden van uw actieve en historische delegaties, samen met de metrics van de Indexeerders waarnaar u hebt gedelegeerd. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. In de eerste helft van de pagina kunt u uw delegatiegrafiek zien, evenals de alleen-beloningen-grafiek. Aan de linkerkant kun u de KPI's zien die uw huidige delegatiemetrics weerspiegelen. diff --git a/website/pages/nl/network/indexing.mdx b/website/pages/nl/network/indexing.mdx index da03dc3c88d9..17bd1d30f2f5 100644 --- a/website/pages/nl/network/indexing.mdx +++ b/website/pages/nl/network/indexing.mdx @@ -42,7 +42,7 @@ Het RewardsManager-contract heeft een read-only [getRewards](https://github.com/ Veel van de door de community gemaakte dashboards bevatten waarden van ongerealiseerde beloningen en ze kunnen gemakkelijk handmatig worden gecontroleerd door deze stappen te volgen: -1. Query de [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) om de ID's te krijgen voor alle actieve allocaties: +1. 
Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -112,12 +112,12 @@ Indexeerders kunnen zich onderscheiden door geavanceerde technieken toe te passe - **Middel** - Indexer die 100 subgraphs ondersteund en 200-500 query's per seconde verwerkt. - **Groot** - Voorbereid om alle momenteel gebruikte subgraphs te indexeren en de bijbehorende query's te verwerken. -| Setup | Postgres
    (CPUs) | Postgres
    (Geheugen in GBs) | Postgres
    (schijf in TBs) | VMs
    (CPUs) | VMs
    (geheugen in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Klein | 4 | 8 | 1 | 4 | 16 | -| Standaard | 8 | 30 | 1 | 12 | 48 | -| Middel | 16 | 64 | 2 | 32 | 64 | -| Groot | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
    (CPUs) | Postgres
    (Geheugen in GBs) | Postgres
    (schijf in TBs) | VMs
    (CPUs) | VMs
    (geheugen in GBs) | +| --------- |:--------------------------:|:-------------------------------------:|:-----------------------------------:|:---------------------:|:--------------------------------:| +| Klein | 4 | 8 | 1 | 4 | 16 | +| Standaard | 8 | 30 | 1 | 12 | 48 | +| Middel | 16 | 64 | 2 | 32 | 64 | +| Groot | 72 | 468 | 3.5 | 48 | 184 | ### Wat zijn enkele basisveiligheidsmaatregelen die een Indexeerder moet nemen? @@ -149,20 +149,20 @@ Tip: Om wendbare schaalvergroting te ondersteunen, wordt aanbevolen om Query- en #### Graph Node -| Poort | Doel | Routes | CLI-Argument | Omgevingsvariabele | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (voor subgraph query's) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (voor subgraph abonnementen) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (voor het beheren van implementaties) | / | --admin-port | - | -| 8030 | Subgraph indexeerstatus API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Poort | Doel | Routes | CLI-Argument | Omgevingsvariabele | +| ----- | --------------------------------------------------------- | ---------------------------------------------------- | ----------------- | ------------------ | +| 8000 | GraphQL HTTP server
    (voor subgraph query's) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (voor subgraph abonnementen) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (voor het beheren van implementaties) | / | --admin-port | - | +| 8030 | Subgraph indexeerstatus API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Service -| Poort | Doel | Routes | CLI-Argument | Omgevingsvariabele | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
    (voor betaalde subgraph query's) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Poort | Doel | Routes | CLI-Argument | Omgevingsvariabele | +| ----- | --------------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
    (voor betaalde subgraph query's) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -545,7 +545,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additonal argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Queue allocation action diff --git a/website/pages/nl/network/overview.mdx b/website/pages/nl/network/overview.mdx index 16214028dbc9..0779d9a6cb00 100644 --- a/website/pages/nl/network/overview.mdx +++ b/website/pages/nl/network/overview.mdx @@ -2,14 +2,20 @@ title: Network Overview --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Overview +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Token Economics](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20 used to allocate resources in the network. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/nl/new-chain-integration.mdx b/website/pages/nl/new-chain-integration.mdx index 35b2bc7c8b4a..534d6701efdb 100644 --- a/website/pages/nl/new-chain-integration.mdx +++ b/website/pages/nl/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integrating New Networks +title: New Chain Integration --- -Graph Node can currently index data from the following chain types: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via a [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via a [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via an [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -If you are interested in any of those chains, integration is a matter of Graph Node configuration and testing. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. For more information, refer to [Testing an EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Testing an EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Difference between EVM JSON-RPC & Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -While the two are suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](substreams/), like building [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). 
In addition, Firehose allows for improved indexing speeds when compared to JSON-RPC. +### 2. Firehose Integration -New EVM chain integrators may also consider the Firehose-based approach, given the benefits of substreams and its massive parallelized indexing capabilities. Supporting both allows developers to choose between building substreams or subgraphs for the new chain. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **NOTE**: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that eth_calls are [not a good practice for developers](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Testing an EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON RPC methods: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. 
-- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(optionally required for Graph Node to support call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graph Node Configuration +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Start by preparing your local environment** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graph Node Configuration + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON RPC compliant URL - > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ -**Test the integration by locally deploying a subgraph** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Create a simple example subgraph. Some options are below: - 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point - 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. 
It must remain `ethereum` even if the network name is different. -Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integrating a new Firehose-enabled chain +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Create a simple example subgraph. Some options are below: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. -Integrating a new chain is also possible using the Firehose approach. This is currently the best option for non-EVM chains and a requirement for substreams support. Additional documentation focuses on how Firehose works, adding Firehose support for a new chain and integrating it with Graph Node. Recommended docs for integrators: +## Substreams-powered Subgraphs -1. [General docs on Firehose](firehose/) -2. [Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrating Graph Node with a new chain via Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/nl/operating-graph-node.mdx b/website/pages/nl/operating-graph-node.mdx index d2117bcf07d5..7524b17721c9 100644 --- a/website/pages/nl/operating-graph-node.mdx +++ b/website/pages/nl/operating-graph-node.mdx @@ -77,13 +77,13 @@ A complete Kubernetes example configuration can be found in the [indexer reposit When it is running Graph Node exposes the following ports: -| Poort | Doel | Routes | CLI-Argument | Omgevingsvariabele | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (voor subgraph query's) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (voor subgraph abonnementen) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (voor het beheren van implementaties) | / | --admin-port | - | -| 8030 | Subgraph indexeerstatus API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Poort | Doel | Routes | CLI-Argument | Omgevingsvariabele | +| ----- | --------------------------------------------------------- | ---------------------------------------------------- | ----------------- | ------------------ | +| 8000 | GraphQL HTTP server
    (voor subgraph query's) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (voor subgraph abonnementen) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (voor het beheren van implementaties) | / | --admin-port | - | +| 8030 | Subgraph indexeerstatus API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | > **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. diff --git a/website/pages/nl/querying/graphql-api.mdx b/website/pages/nl/querying/graphql-api.mdx index 2bbc71b5bb9c..d8671e53a77c 100644 --- a/website/pages/nl/querying/graphql-api.mdx +++ b/website/pages/nl/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Queries +## What is GraphQL? -In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Examples @@ -21,7 +29,7 @@ Query for a single `Token` entity defined in your schema: } ``` -> **Note:** When querying for a single entity, the `id` field is required, and it must be a string. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Query all `Token` entities: @@ -36,7 +44,10 @@ Query all `Token` entities: ### Sorting -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### Example @@ -53,7 +64,7 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. -In the following example, we sort the tokens by the name of their owner: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ In the following example, we sort the tokens by the name of their owner: ### Pagination -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. - -Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. 
+When querying a collection, it's best to: -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### Example using `first` @@ -106,7 +118,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect #### Example using `first` and `id_ge` -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtering -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Example using `where` @@ -155,7 +168,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### Example for block filtering -You can also filter entities by the `_change_block(number_gte: Int)` - this filters entities which were updated in or after the specified block. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). @@ -193,7 +206,7 @@ As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/release ##### `AND` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. 
```graphql { @@ -208,7 +221,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ``` > **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ##### `OR` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. 
#### Example @@ -322,12 +335,12 @@ Fulltext search queries have one required field, `text`, for supplying search te Fulltext search operators: -| Symbol | Operator | Description | -| --- | --- | --- | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | +| Symbol | Operator | Description | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `Follow by` | Specify the distance between two words. | +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | #### Examples @@ -376,11 +389,11 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 ## Schema -The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### Entities diff --git a/website/pages/nl/querying/querying-best-practices.mdx b/website/pages/nl/querying/querying-best-practices.mdx index 32d1415b20fa..5654cf9e23a5 100644 --- a/website/pages/nl/querying/querying-best-practices.mdx +++ b/website/pages/nl/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: Querying Best Practices --- -The Graph provides a decentralized way to query data from blockchains. +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. -The Graph network's data is exposed through a GraphQL API, making it easier to query data with the GraphQL language. 
- -This page will guide you through the essential GraphQL language rules and GraphQL queries best practices. +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQL is a language and set of conventions that transport over HTTP. It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). -However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), we recommend you to use our `graph-client` that supports unique features such as: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() More GraphQL client alternatives are covered in ["Querying from an Application"](/querying/querying-from-an-application). -Now that we covered the basic rules of GraphQL queries syntax, let's now look at the best practices of GraphQL query writing. - --- ## Best Practices @@ -164,11 +160,11 @@ Doing so brings **many advantages**: - **Variables can be cached** at server-level - **Queries can be statically analyzed by tools** (more on this in the following sections) -**Note: How to include fields conditionally in static queries** +### How to include fields conditionally in static queries -We might want to include the `owner` field only on a particular condition. +You might want to include the `owner` field only on a particular condition. -For this, we can leverage the `@include(if:...)` directive as follows: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Note: The opposite directive is `@skip(if: ...)`. +> Note: The opposite directive is `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL became famous for its "Ask for what you want" tagline. For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. -When querying GraphQL APIs, always think of querying only the fields that will be actually used. - -A common cause of over-fetching is collections of entities. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. Queries should therefore almost always set first explicitly, and make sure they only fetch as many entities as they actually need. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- When querying GraphQL APIs, always think of querying only the fields that will be actually used. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. 
For example, in the following query: @@ -337,8 +332,8 @@ query { Such repeated fields (`id`, `active`, `status`) bring many issues: -- harder to read for more extensive queries -- when using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. A refactored version of the query would be the following: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) but also will result in better TypeScript types generation. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). ### GraphQL Fragment do's and don'ts -**Fragment base must be a type** +### Fragment base must be a type A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. -**How to spread a Fragment** +#### How to spread a Fragment Fragments are defined on specific types and should be used accordingly in queries. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { It is not possible to spread a fragment of type `Vote` here. -**Define Fragment as an atomic business unit of data** +#### Define Fragment as an atomic business unit of data -GraphQL Fragment must be defined based on their usage. +GraphQL `Fragment`s must be defined based on their usage. For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. -Here is a rule of thumb for using Fragment: +Here is a rule of thumb for using fragments: -- when fields of the same type are repeated in a query, group them in a Fragment -- when similar but not the same fields are repeated, create multiple fragments, ex: +- When fields of the same type are repeated in a query, group them in a `Fragment`. +- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## The essential tools +## The Essential Tools ### GraphQL web-based explorers @@ -473,11 +468,11 @@ This will allow you to **catch errors without even testing queries** on the play The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets -- go to definition for fragments and input types +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. 
@@ -485,9 +480,9 @@ If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketp The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -More information on this [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) that showcases all the plugin's main features. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/nl/quick-start.mdx b/website/pages/nl/quick-start.mdx index 0d0c1c067ba6..092480e192a6 100644 --- a/website/pages/nl/quick-start.mdx +++ b/website/pages/nl/quick-start.mdx @@ -2,24 +2,18 @@ title: Snelle Start --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Ensure that your subgraph will be indexing data from a [supported network](/developing/supported-networks). - -This guide is written assuming that you have: +## Prerequisites for this guide - A crypto wallet -- A smart contract address on the network of your choice - -## 1. Create a subgraph on Subgraph Studio - -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Install the Graph CLI +### 1. Installeer de Graph CLI -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. On your local machine, run one of the following commands: @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. 
Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -When you initialize your subgraph, the CLI tool will ask you for the following information: +When you initialize your subgraph, the CLI will ask you for the following information: -- Protocol: choose the protocol your subgraph will be indexing data from -- Subgraph slug: create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- Directory to create the subgraph in: choose your local directory -- Ethereum network(optional): you may need to specify which EVM-compatible network your subgraph will be indexing data from -- Contract address: Locate the smart contract address you’d like to query data from -- ABI: If the ABI is not autopopulated, you will need to input it manually as a JSON file -- Start Block: it is suggested that you input the start block to save time while your subgraph indexes blockchain data. You can locate the start block by finding the block where your contract was deployed. -- Contract Name: input the name of your contract -- Index contract events as entities: it is suggested that you set this to true as it will automatically add mappings to your subgraph for every emitted event -- Add another contract(optional): you can add another contract +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. See the following screenshot for an example for what to expect when initializing your subgraph: ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -The previous commands create a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. 
-- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -Once your subgraph is written, run the following commands: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. Once your subgraph is written, run the following commands: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. Test your subgraph - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -The logs will tell you if there are any errors with your subgraph. The logs of an operational subgraph will look like this: - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. 
Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -To save on gas costs, you can curate your subgraph in the same transaction that you published it by selecting this button when you publish your subgraph to The Graph’s decentralized network: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -Now, you can query your subgraph by sending GraphQL queries to your subgraph’s Query URL, which you can find by clicking on the query button. +### 8. 
Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/nl/release-notes/assemblyscript-migration-guide.mdx b/website/pages/nl/release-notes/assemblyscript-migration-guide.mdx index 85f6903a6c69..17224699570d 100644 --- a/website/pages/nl/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/nl/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - You'll need to rename your duplicate variables if you had variable shadowing. - ### Null Comparisons - By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - To solve you can simply change the `if` statement to something like this: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript diff --git a/website/pages/nl/sps/introduction.mdx b/website/pages/nl/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/nl/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. 
+ +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/nl/sps/triggers-example.mdx b/website/pages/nl/sps/triggers-example.mdx new file mode 100644 index 000000000000..8e4f96eba14a --- /dev/null +++ b/website/pages/nl/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## Prerequisites + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: + +```ts +import { Protobuf } from 'as-proto/assembly' +import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events' +import { MyTransfer } from '../generated/schema' + +export function handleTriggers(bytes: Uint8Array): void { + const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode) + + for (let i = 0; i < input.data.length; i++) { + const event = input.data[i] + + if (event.transfer != null) { + let entity_id: string = `${event.txnId}-${i}` + const entity = new MyTransfer(entity_id) + entity.amount = event.transfer!.instruction!.amount.toString() + entity.source = event.transfer!.accounts!.source + entity.designation = event.transfer!.accounts!.destination + + if (event.transfer!.accounts!.signer!.single != null) { + entity.signers = [event.transfer!.accounts!.signer!.single!.signer] + } else if (event.transfer!.accounts!.signer!.multisig != null) { + entity.signers = event.transfer!.accounts!.signer!.multisig!.signers + } + entity.save() + } + } +} +``` + +## Step 5: Generate Protobuf Files + +To generate Protobuf objects in AssemblyScript, run the following command: + +```bash +npm run protogen +``` + +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. + +## Conclusion + +You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case. + +For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana). diff --git a/website/pages/nl/sps/triggers.mdx b/website/pages/nl/sps/triggers.mdx new file mode 100644 index 000000000000..ed19635d4768 --- /dev/null +++ b/website/pages/nl/sps/triggers.mdx @@ -0,0 +1,37 @@ +--- +title: Substreams Triggers +--- + +Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework. + +> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container. + +The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. + +```tsx +export function handleTransactions(bytes: Uint8Array): void { + let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).trasanctions // 1. + if (transactions.length == 0) { + log.info('No transactions found', []) + return + } + + for (let i = 0; i < transactions.length; i++) { + // 2. + let transaction = transactions[i] + + let entity = new Transaction(transaction.hash) // 3. + entity.from = transaction.from + entity.to = transaction.to + entity.save() + } +} +``` + +Here's what you’re seeing in the `mappings.ts` file: + +1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object +2. 
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/nl/substreams.mdx b/website/pages/nl/substreams.mdx index 710e110012cc..a838a6924e2f 100644 --- a/website/pages/nl/substreams.mdx +++ b/website/pages/nl/substreams.mdx @@ -4,9 +4,11 @@ title: Substreams ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## How Substreams Works in 4 Steps @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/nl/sunrise.mdx b/website/pages/nl/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/nl/sunrise.mdx +++ b/website/pages/nl/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. 
Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? 
-Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? 
- -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. 
The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/nl/supported-network-requirements.mdx b/website/pages/nl/supported-network-requirements.mdx index 9bfbc8d0fefd..aea52116a3a5 100644 --- a/website/pages/nl/supported-network-requirements.mdx +++ b/website/pages/nl/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Netwerk | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| Netwerk | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVMe preferred)<br />
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/nl/tap.mdx b/website/pages/nl/tap.mdx new file mode 100644 index 000000000000..872ad6231e9c --- /dev/null +++ b/website/pages/nl/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## Overview + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
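The aggregation step can be pictured as folding newer receipts into a running total, which is why an updated RAV always carries a value greater than or equal to the one it replaces. The sketch below is purely conceptual; the type names and fields are illustrative assumptions rather than the actual `tap-agent` data structures.

```typescript
// Conceptual sketch only; not the real tap-agent implementation.
interface Receipt {
  allocationId: string
  value: bigint // fee for a single query
}

interface Rav {
  allocationId: string
  valueAggregate: bigint // total value of every receipt aggregated so far
}

// Folding pending receipts into the previous RAV yields a new RAV whose value
// never decreases, which is what makes it safe to redeem on-chain exactly once.
function aggregate(previous: Rav | null, pending: Receipt[], allocationId: string): Rav {
  const base = previous ? previous.valueAggregate : 0n
  const added = pending.reduce((sum, receipt) => sum + receipt.value, 0n)
  return { allocationId, valueAggregate: base + added }
}
```

In the real protocol the sender's aggregator signs the resulting RAV, and `indexer-agent` only redeems the RAV marked as `last` once the allocation has been closed, as described above.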
+ +## Blockchain Addresses + +### Contracts + +| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| ------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | +| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | +| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | + +### Gateway + +| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| ---------- | --------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | + +### Requirements + +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. + +- [Graph TAP Aribtrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) + +> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployement. As a result, you have to index it manually. + +## Migration Guide + +### Software versions + +| Component | Version | Image Link | +| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- | +| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) | +| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) | +| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) | + +### Steps + +1. **Indexer Agent** + + - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). + - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + +2. **Indexer Service** + + - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless. + +3. **TAP Agent** + + - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + +4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Notes: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/pl/about.mdx b/website/pages/pl/about.mdx index bc56c6ce8854..f669bb650fd6 100644 --- a/website/pages/pl/about.mdx +++ b/website/pages/pl/about.mdx @@ -2,46 +2,66 @@ title: Więcej o The Graph --- -Ta strona ma na celu wyjaśnienie czym jest The Graph i jak możesz zacząć go używać. - ## Co to jest The Graph? -The Graph jest zdecentralizowanym protokołem ideksującym dane na blockchainie i wysyłającym zapytania o te dane. The Graph umożliwia tworzenie zapytań o dane, które są bezpośrenio trudne do odpytania. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
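To make the do-it-yourself option concrete, here is a minimal sketch of what server-side processing of historical `transfer` events can look like. It assumes ethers.js, a placeholder JSON-RPC endpoint, and an illustrative batch size; even after this scan, every token's metadata still has to be fetched from IPFS before you can filter by a characteristic.

```typescript
import { ethers } from 'ethers'

// Placeholder RPC endpoint and the standard ERC-721 Transfer event; illustrative only.
const provider = new ethers.JsonRpcProvider('https://eth-rpc.example.com')
const bayc = new ethers.Contract(
  '0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d',
  ['event Transfer(address indexed from, address indexed to, uint256 indexed tokenId)'],
  provider
)

// Replay every Transfer ever emitted to rebuild current token ownership.
async function scanTransfers(fromBlock: number, toBlock: number): Promise<Map<string, string>> {
  const owners = new Map<string, string>() // tokenId -> current owner
  // RPC providers cap the block range per request, so the scan must be batched
  // across millions of blocks, which alone can take hours.
  for (let start = fromBlock; start <= toBlock; start += 2000) {
    const end = Math.min(start + 1999, toBlock)
    const events = await bayc.queryFilter('Transfer', start, end)
    for (const event of events) {
      const [, to, tokenId] = (event as ethers.EventLog).args
      owners.set(tokenId.toString(), to)
    }
  }
  return owners
}
```

This is exactly the kind of pipeline that a subgraph handles for you, as described in the next section.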
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -Projekty wykorzystujące kompleksowe smart kontrakty jak [Uniswap](https://uniswap.org/) i inicjatywy NFT jak [Bored Ape Yacht Club](https://boredapeyachtclub.com/) przechowują dane na blockchainie Ethereum, co sprawia, że bardzo trudno jest odczytać cokolwiek poza bardzo podstawowymi danymi dezpośrednio z danej sieci blockchain. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -Możesz równieź zbudować własny serwer, przetwarzać na nim tranzakcje, zapisaywać je w bazie danych i wykorzystywać punkt końcowy API w celu tworzenia zapytań o dane. Jednak ta opcja [wymaga dużych nakładów finansowych](/network/benefits/), regularnej konserwacji i utrzymania, a mimo to stanowi ona pojedyńczy punkt podatności na awarię i narusza warunki bezpieczeństwa wymagane w procesie decentralizacji. +### How The Graph Functions -**Indeksowanie danych na blockchainie jest bardzo, bardzo trudne.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. 
Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## Jak działa The Graph +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graph uczy się co i jak należy indeksować spośród danych sieci Ethereum na podstawie opisów subgraphów, zwanych manifestami. Opis subgraphu definiuje smart kontrakty, które leżą w obszarze zainteresowania danego subgraphu, zdarzenia w tych kontraktach, na które należy zwracać uwagę, oraz sposób mapowania danych zdarzeń na dane przechowywane w bazie danych The Graph. +- When creating a subgraph, you need to write a subgraph manifest. -Po napisaniu `manifestu subgraphu` można użyć narzędzia Graph CLI, aby przechować definicję w protokole IPFS i poinformować dowolnego indeksera o możliwości rozpoczęcia indeksowania danych dla tego subgraphu. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -Ten diagram przedstawia bardziej szczegółowo przepływ danych po wdrożeniu manifestu subgraphu, kiedy mamy do czynienia z transakcjami Ethereum: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![Grafika wyjaśniająca sposób w jaki protokół The Graph wykorzystuje węzeł Graph Node by obsługiwać zapytania dla konsumentów danych](/img/graph-dataflow.png) Proces ten przebiega według poniższych kroków: -1. Aplikacja dApp dodaje dane do sieci Ethereum za pomocą transakcji w smart kontrakcie. -2. Inteligentny kontrakt emituje jedno lub więcej zdarzeń podczas przetwarzania transakcji. -3. Graph Node nieprzerwanie skanuje sieć Ethereum w poszukiwaniu nowych bloków i danych dla Twojego subgraphu, które mogą one zawierać. -4. Graph Node znajduje zdarzenia Ethereum dla Twojego subgraphu w tych blokach i uruchamia dostarczone przez Ciebie procedury mapowania. Mapowanie to moduł WASM, który tworzy lub aktualizuje jednostki danych przechowywane przez węzeł Graph Node w odpowiedzi na zdarzenia Ethereum. -5. Aplikacja dApp wysyła zapytanie do węzła Graph Node o dane zindeksowane na blockchainie, korzystając z [punktu końcowego GraphQL](https://graphql.org/learn/). Węzeł Graph Node przekształca zapytania GraphQL na zapytania do swojego podstawowego magazynu danych w celu pobrania tych danych, wykorzystując zdolności indeksowania magazynu. Aplikacja dApp wyświetla te dane w interfejsie użytkownika dla użytkowników końcowych, którzy używają go do tworzenia nowych transakcji w sieci Ethereum. Cykl się powtarza. +1. Aplikacja dApp dodaje dane do sieci Ethereum za pomocą transakcji w smart kontrakcie. +2. Inteligentny kontrakt emituje jedno lub więcej zdarzeń podczas przetwarzania transakcji. +3. Graph Node nieprzerwanie skanuje sieć Ethereum w poszukiwaniu nowych bloków i danych dla Twojego subgraphu, które mogą one zawierać. +4. Graph Node znajduje zdarzenia Ethereum dla Twojego subgraphu w tych blokach i uruchamia dostarczone przez Ciebie procedury mapowania. Mapowanie to moduł WASM, który tworzy lub aktualizuje jednostki danych przechowywane przez węzeł Graph Node w odpowiedzi na zdarzenia Ethereum. +5. 
Aplikacja dApp wysyła zapytanie do węzła Graph Node o dane zindeksowane na blockchainie, korzystając z [punktu końcowego GraphQL](https://graphql.org/learn/). Węzeł Graph Node przekształca zapytania GraphQL na zapytania do swojego podstawowego magazynu danych w celu pobrania tych danych, wykorzystując zdolności indeksowania magazynu. Aplikacja dApp wyświetla te dane w interfejsie użytkownika dla użytkowników końcowych, którzy używają go do tworzenia nowych transakcji w sieci Ethereum. Cykl się powtarza. ## Kolejne kroki -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/pl/arbitrum/arbitrum-faq.mdx b/website/pages/pl/arbitrum/arbitrum-faq.mdx index 575b56c610f0..85aa42a5b6f1 100644 --- a/website/pages/pl/arbitrum/arbitrum-faq.mdx +++ b/website/pages/pl/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum - najczęściej zadawane pytania Kliknij [tutaj](#billing-on-Arbitrum-faqs), jeśli chcesz przejść do najczęściej zadawanych pytań dotyczących rozliczeń w sieci Arbitrum. -## Dlaczego The Graph implementuje rozwiązanie L2 (ang. Layer 2)? +## Why did The Graph implement an L2 Solution? -Dzięki procesowi skalowania protokołu The Graph na L2, uczestnicy ekosystemu mogą liczyć na: +By scaling The Graph on L2, network participants can now benefit from: - Ponad 26-krotną oszczędność na opłatach za gaz @@ -14,7 +14,7 @@ Dzięki procesowi skalowania protokołu The Graph na L2, uczestnicy ekosystemu m - Bezpieczeństwo jako spuścizna sieci Ethereum -Skalowanie smart kontraktów protokołu na L2 pozwala uczestnikom sieci na częstsze interakcje przy niższych opłatach za gaz. Na przykład, Indekserzy mogą otwierać i zamykać alokacje w celu indeksowania większej liczby subgrafów z większą częstotliwością, Deweloperzy mogą wdrażać i aktualizować subgrafy z większą łatwością, delegatorzy mogą delegować GRT ze zwiększoną częstotliwością, a kuratorzy mogą dodawać lub usuwać sygnał do większej liczby subgrafów - działania wcześniej uważane za zbyt kosztowne do częstego wykonywania ze względu na gaz. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. W zeszłym roku społeczność The Graph postanowiła pójść o krok do przodu z Arbitrum po wynikach dyskusji [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). 
@@ -41,27 +41,21 @@ By w pełni wykorzystać wszystkie zalety używania protokołu The Graph na L2 w ## Co powinien wiedzieć na ten temat subgraf developer, konsument danych, indekser, kurator lub delegator? -Nie jest wymagane natychmiastowe podjęcie działań, jednak uczestnicy sieci są zachęcani do rozpoczęcia przenoszenia się do sieci Arbitrum, aby skorzystać z zalet L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -Główne zespoły programistów pracują nad stworzeniem narzędzi do transferu L2, które znacznie ułatwią przeniesienie delegowania, kuratorowania i subgrafów do Arbitrum. Uczestnicy sieci mogą liczyć na dostęp do narzędzi transferu L2 do lata 2023 roku. +All indexing rewards are now entirely on Arbitrum. -Od 10 kwietnia 2023 roku 5% wszystkich nagród za indeksowanie jest emitowane w sieci Arbitrum. W miarę wzrostu udziału w sieci i zatwierdzenia przez Radę Fundacji The Graph, nagrody za indeksowanie stopniowo będą przenoszone z sieci Ethereum (L1) do sieci Arbitrum (L2), a ostatecznie całkowicie przeniosą się na Arbitrum. - -## Co trzeba zrobić by zacząć uczestniczyć w sieci Arbitrum (L2)? - -Prosimy o pomoc w [testowaniu sieci](https://testnet.thegraph.com/explorer) na L2 i zgłaszanie opinii na temat swoich doświadczeń na platformie [Discord](https://discord.gg/graphprotocol). - -## Czy w związku ze skalowaniem sieci do L2 wiąże się jakieś ryzyko? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Wszystko zostało dokładnie przetestowane i przygotowano plan awaryjny, aby zapewnić bezpieczne i płynne przeniesienie. Szczegóły można znaleźć [tutaj](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Czy subgrafy funkcjonujące już w sieci Ethereum, będą dalej działać? +## Are existing subgraphs on Ethereum working? -Tak. kontrakty z The Graph Network będą funkcjonować równolegle w sieciach Ethereum i Arbitrum dopóki nie nastąpi całkowite przeniesienie do sieci Arbitrum na późniejszym etapie. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## Czy dla tokenu GRT zostanie wdrożony nowy smart kontrakt w sieci Arbitrum? +## Does GRT have a new smart contract deployed on Arbitrum? Tak, GRT ma dodatkowy [smart kontrakt na Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). Jednak główna sieć Ethereum [kontrakt GRT](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) będzie nadal funkcjonować. diff --git a/website/pages/pl/arbitrum/l2-transfer-tools-faq.mdx b/website/pages/pl/arbitrum/l2-transfer-tools-faq.mdx index e74cfdb59413..7ce69aeab929 100644 --- a/website/pages/pl/arbitrum/l2-transfer-tools-faq.mdx +++ b/website/pages/pl/arbitrum/l2-transfer-tools-faq.mdx @@ -118,9 +118,9 @@ Aby przesłać delegację, należy wykonać następujące kroki: If the Indexer to whom you're delegating is still operating on L1, when you transfer to Arbitrum you will forfeit any delegation rewards from open allocations on Ethereum mainnet. This means that you will lose the rewards from, at most, the last 28-day period. 
If you time the transfer right after the Indexer has closed allocations you can make sure this is the least amount possible. If you have a communication channel with your Indexer(s), consider discussing with them to find the best time to do your transfer. -### Co się stanie, jeśli indeksator, do którego obecnie deleguję, nie jest dostępny w Arbitrum One? +### What happens if the Indexer I currently delegate to isn't on Arbitrum One? -Narzędzie przesyłania L2 aktywuje się tylko wtedy, gdy delegowany przez Ciebie indeksator prześle swój stake do Arbitrum. +The L2 transfer tool will only be enabled if the Indexer you have delegated to has transferred their own stake to Arbitrum. ### Czy Delegaci mają możliwość delegowania do innego Indeksera? diff --git a/website/pages/pl/billing.mdx b/website/pages/pl/billing.mdx index 37f9c840d00b..dec5cfdadc12 100644 --- a/website/pages/pl/billing.mdx +++ b/website/pages/pl/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. 
- If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/pl/chain-integration-overview.mdx b/website/pages/pl/chain-integration-overview.mdx index 2fe6c2580909..a142f3f817f9 100644 --- a/website/pages/pl/chain-integration-overview.mdx +++ b/website/pages/pl/chain-integration-overview.mdx @@ -6,12 +6,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 1. Technical Integration -- Teams work on a Graph Node integration and Firehose for non-EVM based chains. [Here's how](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. 
- Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. ## Stage 2. Integration Validation -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON RPC or Firehose endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph Indexers test the integration on The Graph's testnet. - Core developers and Indexers monitor stability, performance, and data determinism. @@ -38,7 +38,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. -### 3. How much time will this process take? +### 3. How much time will the process of reaching full protocol support take? The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. @@ -46,4 +46,4 @@ Protocol support for indexing rewards depends on the stakeholders' bandwidth to ### 4. How will priorities be handled? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. 
diff --git a/website/pages/pl/cookbook/arweave.mdx b/website/pages/pl/cookbook/arweave.mdx index 15538454e3ff..b079da30a013 100644 --- a/website/pages/pl/cookbook/arweave.mdx +++ b/website/pages/pl/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/pl/cookbook/base-testnet.mdx b/website/pages/pl/cookbook/base-testnet.mdx index 3a1d98a44103..0cc5ad365dfd 100644 --- a/website/pages/pl/cookbook/base-testnet.mdx +++ b/website/pages/pl/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ Your subgraph slug is an identifier for your subgraph. The CLI tool will walk yo The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph. - AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). diff --git a/website/pages/pl/cookbook/cosmos.mdx b/website/pages/pl/cookbook/cosmos.mdx index 5e9edfd82931..a8c359b3098c 100644 --- a/website/pages/pl/cookbook/cosmos.mdx +++ b/website/pages/pl/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/pl/cookbook/grafting.mdx b/website/pages/pl/cookbook/grafting.mdx index 6b4f419390d5..6c3b85419af9 100644 --- a/website/pages/pl/cookbook/grafting.mdx +++ b/website/pages/pl/cookbook/grafting.mdx @@ -22,7 +22,7 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic usecase. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). 
Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ In this tutorial, we will be covering a basic usecase. We will replace an existi ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. ## Grafting Manifest Definition @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. ## Additional Resources -If you want more experience with grafting, here's a few examples for popular contracts: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/pl/cookbook/near.mdx b/website/pages/pl/cookbook/near.mdx index 28486f8bb0be..a4f27caf6f3c 100644 --- a/website/pages/pl/cookbook/near.mdx +++ b/website/pages/pl/cookbook/near.mdx @@ -37,7 +37,7 @@ There are three aspects of subgraph definition: **schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph#the-graphql-schema). -**AssemblyScript Mappings:** [AssemblyScript code](/developing/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. During subgraph development there are two key commands: @@ -98,7 +98,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). 
-NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/assemblyscript-api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/developing/assemblyscript-api) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/assemblyscript-api#json-api) to allow developers to easily process these logs. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph diff --git a/website/pages/pl/cookbook/subgraph-uncrashable.mdx b/website/pages/pl/cookbook/subgraph-uncrashable.mdx index 989310a3f9a0..0cc91a0fa2c3 100644 --- a/website/pages/pl/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/pl/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Safe Subgraph Code Generator - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. These logs can be viewed in the The Graph's hosted service under the 'Logs' section. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. diff --git a/website/pages/pl/cookbook/upgrading-a-subgraph.mdx b/website/pages/pl/cookbook/upgrading-a-subgraph.mdx index 5502b16d9288..a546f02c0800 100644 --- a/website/pages/pl/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/pl/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ Make sure **Update Subgraph Details in Explorer** is checked and click on **Save ## Deprecating a Subgraph on The Graph Network -Follow the steps [here](/managing/deprecating-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## Querying a Subgraph + Billing on The Graph Network diff --git a/website/pages/pl/deploying/multiple-networks.mdx b/website/pages/pl/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..dc2b8e533430 --- /dev/null +++ b/website/pages/pl/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. 
To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## Deploying the subgraph to multiple networks + +In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +This is what your networks config file should look like: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Now we can run one of the following commands: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. 
+ +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Using subgraph.yaml template + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +and + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... + "mustache": "^3.1.0" + } +} +``` + +To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Sepolia: +yarn prepare:sepolia && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. + +## Subgraph Studio subgraph archive policy + +A subgraph version in Studio is archived if and only if it meets the following criteria: + +- The version is not published to the network (or pending publish) +- The version was created 45 or more days ago +- The subgraph hasn't been queried in 30 days + +In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. + +Every subgraph affected with this policy has an option to bring the version in question back. + +## Checking subgraph health + +If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. 
However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/pl/developing/creating-a-subgraph.mdx b/website/pages/pl/developing/creating-a-subgraph.mdx index b4a2f306d8ed..2a97c2f051a0 100644 --- a/website/pages/pl/developing/creating-a-subgraph.mdx +++ b/website/pages/pl/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: Creating a Subgraph --- -A subgraph extracts data from a blockchain, processing it and storing it so that it can be easily queried via GraphQL. +This detailed guide provides instructions to successfully create a subgraph. -![Defining a Subgraph](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -The subgraph definition consists of a few files: +![Defining a Subgraph](/img/defining-a-subgraph.png) -- `subgraph.yaml`: a YAML file containing the subgraph manifest +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: a GraphQL schema that defines what data is stored for your subgraph, and how to query it via GraphQL +## Getting Started -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. `mapping.ts` in this tutorial) +### Install the Graph CLI -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). 
-Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## Install the Graph CLI +On your local machine, run one of the following commands: -The Graph CLI is written in JavaScript, and you will need to install either `yarn` or `npm` to use it; it is assumed that you have yarn in what follows. +#### Using [npm](https://www.npmjs.com/) -Once you have `yarn`, install the Graph CLI by running +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Install with yarn:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## From An Existing Contract +### From an existing contract -The following command creates a subgraph that indexes all events of an existing contract. It attempts to fetch the contract ABI from Etherscan and falls back to requesting a local file path. If any of the optional arguments are missing, it takes you through an interactive form. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -The `` is the ID of your subgraph in Subgraph Studio, it can be found on your subgraph details page. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## From An Example Subgraph +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. 
The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Add New dataSources To An Existing Subgraph +## Add new `dataSources` to an existing subgraph -Since `v0.31.0` the `graph-cli` supports adding new dataSources to an existing subgraph through the `graph add` command. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
    [] @@ -78,22 +88,45 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -The `add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option), and will create a new `dataSource` in the same way that `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: + + - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- The contract `address` will be written to the `networks.json` for the relevant network. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: +## Components of a subgraph -- If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. -- If `false`: a new entity & event handler should be created with `${dataSourceName}{EventName}`. +### The Subgraph Manifest -The contract `address` will be written to the `networks.json` for the relevant network. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **Note:** When using the interactive cli, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. +The **subgraph definition** consists of the following files: -## The Subgraph Manifest +- `subgraph.yaml`: Contains the subgraph manifest -The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -For the example subgraph, `subgraph.yaml` is: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
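+
+To illustrate the multiple-contract point above, here is a minimal, hypothetical sketch of a `dataSources` array with two entries. The contract names, addresses, start blocks, and abbreviated mapping sections are placeholders, not part of the example subgraph shown next:
+
+```yaml
+# Hypothetical sketch only: one subgraph indexing two contracts on the same network
+dataSources:
+  - kind: ethereum/contract
+    name: ContractA
+    network: mainnet
+    source:
+      address: '0x0000000000000000000000000000000000000001'
+      abi: ContractA
+      startBlock: 14500000
+    mapping:
+      kind: ethereum/events
+      # entities, abis, eventHandlers and the mapping file for ContractA go here
+  - kind: ethereum/contract
+    name: ContractB
+    network: mainnet
+    source:
+      address: '0x0000000000000000000000000000000000000002'
+      abi: ContractB
+      startBlock: 15000000
+    mapping:
+      kind: ethereum/events
+      # entities, abis, eventHandlers and the mapping file for ContractB go here
+```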
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ A single subgraph can index data from multiple smart contracts. Add an entry for The triggers for a data source within a block are ordered using the following process: -1. Event and call triggers are first ordered by transaction index within the block. -2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. -3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. +1. Event and call triggers are first ordered by transaction index within the block. +2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. +3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. These ordering rules are subject to change. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. 
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| Version | Release notes | +|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### Getting The ABIs @@ -442,16 +475,16 @@ For some entity types the `id` is constructed from the id's of two other entitie We support the following scalars in our GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. 
| -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| Type | Description | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enums @@ -593,7 +626,7 @@ This more elaborate way of storing many-to-many relationships will result in les #### Adding comments to the schema -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. -> +> > If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. ### Data Source Context @@ -930,7 +963,7 @@ dataSources: ``` > **Note:** The contract creation block can be quickly looked up on Etherscan: -> +> > 1. Search for the contract by entering its address in the search bar. > 2. Click on the creation transaction hash in the `Contract Creator` section. > 3. Load the transaction details page where you'll find the start block for that contract. @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. 
`"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. ``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### Create a new handler to process files -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). The CID of the file as a readable string can be accessed via the `dataSource` as follows: diff --git a/website/pages/pl/developing/developer-faqs.mdx b/website/pages/pl/developing/developer-faqs.mdx index ce139b51a9d0..ac698801e333 100644 --- a/website/pages/pl/developing/developer-faqs.mdx +++ b/website/pages/pl/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: FAQs dla developerów --- -## 1. What is a subgraph? +This page summarizes some of the most common questions for developers building on The Graph. -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using the Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available to be queried by subgraph consumers. +## Subgraph Related -## 2. Can I delete my subgraph? +### 1. What is a subgraph? -It is not possible to delete subgraphs once they are created. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Can I change my subgraph name? +### 2. What is the first step to create a subgraph? -No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. +To successfully create a subgraph, you will need to install The Graph CLI. 
Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Can I change the GitHub account associated with my subgraph? +### 3. Can I still create a subgraph if my smart contracts don't have events? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. Am I still able to create a subgraph if my smart contracts don't have events? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are by far the fastest way to retrieve useful data. +### 4. Can I change the GitHub account associated with my subgraph? -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended, as performance will be significantly slower. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. How do I update a subgraph on mainnet? -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. How are templates different from data sources? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) upfront you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. 
+ +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates). -## 8. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -You can run the following command: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. How do I call a contract function or access a public state variable from my subgraph mappings? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +You can run the following command: -## 11. I want to contribute or add a GitHub issue. Where can I find the open source repositories? 
+```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -## 13. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 15. Can I delete my subgraph? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +You can find the list of the supported networks [here](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Yes. You can do this by importing `graph-ts` as per the example below: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Can I import ethers.js or other JS libraries into my subgraph mappings? - -Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +## Indexing & Querying Related -## 17. Is it possible to specify what block to start indexing on? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? 
My subgraph is taking a very long time to sync -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -102,19 +121,7 @@ Yes! Try the following command, substituting "organization/subgraphName" with th curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. What networks are supported by The Graph? - -You can find the list of the supported networks [here](/developing/supported-networks). - -## 21. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? - -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. - -## 22. Is this possible to use Apollo Federation on top of graph-node? - -Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. - -## 23. Is there a limit to how many objects The Graph can return per query? +### 22. Is there a limit to how many objects The Graph can return per query? By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: @@ -122,24 +129,19 @@ By default, query responses are limited to 100 items per collection. If you want someCollection(first: 1000, skip: ) { ... } ``` -## 24. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 25. Where do I go to find my current subgraph on the hosted service? - -Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? - -The Graph will never charge for the hosted service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. 
The hosted service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. - -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/pl/developing/graph-ts/api.mdx b/website/pages/pl/developing/graph-ts/api.mdx index 46442dfa941e..8fc1f4b48b61 100644 --- a/website/pages/pl/developing/graph-ts/api.mdx +++ b/website/pages/pl/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). 
Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## API Reference @@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. -| Version | Release notes | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Release notes | +| :-----: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Loading entities from the store @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Looking up entities created withing a block As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a Transaction from some on-chain event, and a later handler wants to access this transaction if it exists. In the case where the transaction does not exist, the subgraph will have to go to the database just to find out that the entity does not exist; if the subgraph author already knows that the entity must have been created in the same block, using loadInBlock avoids this database roundtrip. For some subgraphs, these missed lookups can contribute significantly to the indexing time. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. 
+- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Any other contract that is part of the subgraph can be imported from the generat #### Handling Reverted Calls -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Encoding/Decoding ABI diff --git a/website/pages/pl/developing/supported-networks.mdx b/website/pages/pl/developing/supported-networks.mdx index 6a85c1a2997b..6a9a3d7332fe 100644 --- a/website/pages/pl/developing/supported-networks.mdx +++ b/website/pages/pl/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/pl/developing/unit-testing-framework.mdx b/website/pages/pl/developing/unit-testing-framework.mdx index f826a5ccb209..308135181ccb 100644 --- a/website/pages/pl/developing/unit-testing-framework.mdx +++ b/website/pages/pl/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ The log output includes the test run duration. Here's an example: > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -This means you have used `console.log` in your code, which is not supported by AssemblyScript. 
Please consider using the [Logging API](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version. diff --git a/website/pages/pl/glossary.mdx b/website/pages/pl/glossary.mdx index cd24a22fd4d5..2978ecce3561 100644 --- a/website/pages/pl/glossary.mdx +++ b/website/pages/pl/glossary.mdx @@ -10,11 +10,9 @@ title: Glossary - **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **Indexers**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. @@ -24,17 +22,17 @@ title: Glossary - **Indexer's Self Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. 
+- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegators**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curators**: Network participants that identify high-quality subgraphs, and “curate” them (i.e., signal GRT on them) in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. Indexers earn indexing rewards proportional to the signal on a subgraph. We see a correlation between the amount of GRT signalled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. -- **Subgraph Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a subgraph. - **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. @@ -46,11 +44,11 @@ title: Glossary 1. **Active**: An allocation is considered active when it is created on-chain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. 
- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: Glossary - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. 
It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. @@ -78,10 +76,6 @@ title: Glossary - **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. -- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network. - -- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. diff --git a/website/pages/pl/index.json b/website/pages/pl/index.json index ba3d2a04875b..da068b79078f 100644 --- a/website/pages/pl/index.json +++ b/website/pages/pl/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "Jak stworzyć subgraf", "description": "Użyj aplikacji \"Studio\" by stworzyć subgraf" - }, - "migrateFromHostedService": { - "title": "Upgrade from the hosted service", - "description": "Upgrading subgraphs to The Graph Network" } }, "networkRoles": { @@ -60,10 +56,6 @@ "graphExplorer": { "title": "Graph Explorer", "description": "Eksploruj subgrafy i zacznij korzystać z protokołu" - }, - "hostedService": { - "title": "Hosted Service", - "description": "Create and explore subgraphs on the hosted service" } } }, diff --git a/website/pages/pl/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/pl/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..6bdd183f72d5 --- /dev/null +++ b/website/pages/pl/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## Transferring ownership of a subgraph + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on the ERC721 standard, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. + +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-address +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2.
Choose the address that you would like to transfer the subgraph to: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png) + +Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: + +![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) + +## Deprecating a subgraph + +Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. + +### Step-by-Step + +To deprecate your subgraph, do the following: + +1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). +2. Call `deprecateSubgraph` with your `SubgraphID` as the argument. +3. Your subgraph will no longer appear in searches on Graph Explorer. + +**Please note the following:** + +- The owner's wallet should call the `deprecateSubgraph` function. +- Curators will not be able to signal on the subgraph anymore. +- Curators that already signaled on the subgraph can withdraw their signal at an average share price. +- Deprecated subgraphs will show an error message. + +> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/pl/mips-faqs.mdx b/website/pages/pl/mips-faqs.mdx index ae460989f96e..1f7553923765 100644 --- a/website/pages/pl/mips-faqs.mdx +++ b/website/pages/pl/mips-faqs.mdx @@ -6,10 +6,6 @@ title: MIPs FAQs > Note: the MIPs program is closed as of May 2023. Thank you to all the Indexers who participated! -It's an exciting time to be participating in The Graph ecosystem! During [Graph Day 2022](https://thegraph.com/graph-day/2022/) Yaniv Tal announced the [sunsetting of the hosted service](https://thegraph.com/blog/sunsetting-hosted-service/), a moment The Graph ecosystem has been working towards for many years. - -To support the sunsetting of the hosted service and the migration of all of it's activity to the decentralized network, The Graph Foundation has announced the [Migration Infrastructure Providers (MIPs) program](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program). - The MIPs program is an incentivization program for Indexers to support them with resources to index chains beyond Ethereum mainnet and help The Graph protocol expand the decentralized network into a multi-chain infrastructure layer. The MIPs program has allocated 0.75% of the GRT supply (75M GRT), with 0.5% to reward Indexers who contribute to bootstrapping the network and 0.25% allocated to Network Grants for subgraph developers using multi-chain subgraphs.
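For readers who prefer to script the deprecation step from the transfer-and-deprecate guide above rather than use Arbiscan's Write Proxy UI, here is a minimal sketch. It assumes ethers v6; the RPC URL, the `OWNER_PRIVATE_KEY` environment variable, and the numeric subgraph ID are placeholders, and the single-argument `deprecateSubgraph(uint256)` ABI fragment is inferred from step 2 of that guide, so verify it against the verified proxy contract before sending a transaction.

```typescript
import { ethers } from "ethers";

// Arbitrum One GNS proxy address quoted in the deprecation guide above.
const GNS_ADDRESS = "0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec";
// Assumed ABI fragment; check the verified contract on Arbiscan before use.
const GNS_ABI = ["function deprecateSubgraph(uint256 _subgraphID)"];

async function deprecate(subgraphId: bigint): Promise<void> {
  const provider = new ethers.JsonRpcProvider("https://arb1.arbitrum.io/rpc"); // any Arbitrum One RPC
  const owner = new ethers.Wallet(process.env.OWNER_PRIVATE_KEY!, provider); // must be the subgraph owner's wallet
  const gns = new ethers.Contract(GNS_ADDRESS, GNS_ABI, owner);

  const tx = await gns.deprecateSubgraph(subgraphId); // irreversible: frees curator signal, delists the subgraph
  await tx.wait();
  console.log(`Deprecated subgraph ${subgraphId} in tx ${tx.hash}`);
}

deprecate(12345n).catch(console.error); // 12345n is a placeholder SubgraphID
```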
diff --git a/website/pages/pl/network/benefits.mdx b/website/pages/pl/network/benefits.mdx index 0167c34f3a67..5c2eeee3fdef 100644 --- a/website/pages/pl/network/benefits.mdx +++ b/website/pages/pl/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Cost Comparison | Self Hosted | Sieć The Graph | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $0+ | $0 per month | -| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | -| Cost per query | $0 | $0 | -| Infrastructure | Centralized | Decentralized | -| Geographic redundancy | $750+ per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $750+ | $0 | +| Cost Comparison | Self Hosted | Sieć The Graph | +|:----------------------------:|:---------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $0+ | $0 per month | +| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | +| Cost per query | $0 | $0 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $750+ per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Cost Comparison | Self Hosted | Sieć The Graph | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $500 per month | $120 per month | -| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~3,000,000 | -| Cost per query | $0 | $0.00004 | -| Infrastructure | Centralized | Decentralized | -| Engineering expense | $200 per hour | Included | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $1,650+ | $120 | +| Cost Comparison | Self Hosted | Sieć The Graph | +|:----------------------------:|:------------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $500 per month | $120 per month | +| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~3,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Engineering expense | $200 per hour | Included | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Cost Comparison | Self Hosted | Sieć The Graph | -| :-: | :-: | :-: | -| Monthly server cost\* | $1100 per month, per node | $0 | -| Query costs | $4000 | $1,200 per month | -| Number of nodes needed | 10 | Not applicable | -| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~30,000,000 | -| Cost per query | $0 | $0.00004 | -| 
Infrastructure | Centralized | Decentralized | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $11,000+ | $1,200 | +| Cost Comparison | Self Hosted | Sieć The Graph | +|:----------------------------:|:-------------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $1100 per month, per node | $0 | +| Query costs | $4000 | $1,200 per month | +| Number of nodes needed | 10 | Not applicable | +| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~30,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $11,000+ | $1,200 | \*including costs for backup: $50-$100 per month diff --git a/website/pages/pl/network/curating.mdx b/website/pages/pl/network/curating.mdx index fb2107c53884..b2864660fe8c 100644 --- a/website/pages/pl/network/curating.mdx +++ b/website/pages/pl/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. 
If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Signaling on a specific version is especially useful when one subgraph is used b Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Risks 1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. @@ -78,50 +78,14 @@ It’s suggested that you don’t update your subgraphs too frequently. See the ### 5. Can I sell my curation shares? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. 
The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Bonding Curve 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Price per shares](/img/price-per-share.png) - -As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. Here’s an example of what we mean, see the bonding curve below: - -![Bonding curve](/img/bonding-curve.png) - -Consider we have two curators that mint shares for a subgraph: - -- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. 
-- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. -- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signaling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. - Still confused? Check out our Curation video guide below: diff --git a/website/pages/pl/network/delegating.mdx b/website/pages/pl/network/delegating.mdx index 81824234e072..f7430c5746ae 100644 --- a/website/pages/pl/network/delegating.mdx +++ b/website/pages/pl/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegating --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Delegator Guide -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,15 +34,19 @@ Listed below are the main risks of being a Delegator in the protocol. 
Delegators cannot be slashed for bad behavior, but there is a tax on Delegators to disincentivize poor decision-making that could harm the integrity of the network. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### The delegation unbonding period Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
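To make the break-even estimate mentioned above concrete (how many days of rewards it takes to earn back the 0.5% delegation tax), here is a minimal TypeScript sketch. The 0.5% tax comes from this page; the Indexer's effective APR and the share of rewards passed on to Delegators are hypothetical inputs you would look up for the specific Indexer you are considering.

```typescript
// Minimal break-even sketch: days of rewards needed to recover the 0.5% delegation tax.
const DELEGATION_TAX = 0.005; // 0.5%, per this page

function daysToRecoverTax(
  delegatedGrt: number, // e.g. 1,000 GRT
  indexerAprPercent: number, // assumed yearly reward rate on delegated stake, e.g. 8
  shareToDelegators: number // assumed fraction passed on to Delegators, e.g. 0.8
): number {
  const taxPaid = delegatedGrt * DELEGATION_TAX; // 1,000 GRT -> 5 GRT burned
  const stakeAfterTax = delegatedGrt - taxPaid;
  const dailyRewards =
    (stakeAfterTax * (indexerAprPercent / 100) * shareToDelegators) / 365;
  return taxPaid / dailyRewards;
}

// With the assumed inputs: 5 / (995 * 0.08 * 0.8 / 365) ≈ 29 days to break even.
console.log(daysToRecoverTax(1_000, 8, 0.8).toFixed(1));
```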
    ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day @@ -41,47 +55,65 @@ Another thing to consider is how to choose an Indexer wisely. If you choose an I ### Choosing a trustworthy Indexer with a fair reward payout for Delegators -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
    ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top Indexer is giving Delegators 90% of the rewards. The middle one is giving Delegators 20%. The bottom one is giving Delegators ~83%.*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant. +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. + +As you can see, in order to choose the right Indexer, you must consider multiple things. -As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. +- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently. +- Many Indexers are very active in Discord and will be happy to answer your questions. +- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. -### Calculating Delegators expected return +## Calculating Delegators' Expected Return -A Delegator must consider a lot of factors when determining the return. These include: +A Delegator must consider the following factors to determine a return: -- A technical Delegator can also look at the Indexer's ability to use the Delegated tokens available to them. If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. -- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Consider an Indexer's ability to use the Delegated tokens available to them. + - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Pay attention to the first few days of delegating. + - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low. ### Considering the query fee cut and indexing fee cut -As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the Delegators are getting.
The formula is: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Delegation Image 3](/img/Delegation-Reward-Formula.png) ### Considering the Indexer's delegation pool -Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the Delegator a share of the pool: +Delegators should consider the proportion of the Delegation Pool they own. -![Share formula](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Share formula](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Considering the delegation capacity -Another thing to consider is the delegation capacity. Currently, the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -89,16 +121,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### MetaMask "Pending Transaction" Bug -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. 
+ +#### Example -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. -For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will not be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transaction attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## Video guide for the network UI +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/pl/network/developing.mdx b/website/pages/pl/network/developing.mdx index 1b76eb94ccca..81231c36ad59 100644 --- a/website/pages/pl/network/developing.mdx +++ b/website/pages/pl/network/developing.mdx @@ -2,52 +2,88 @@ title: Developing --- -Developers are the demand side of The Graph ecosystem. Developers build subgraphs and publish them to The Graph Network. Then, they query live subgraphs with GraphQL in order to power their applications. +To start coding right away, go to [Developer Quick Start](/quick-start/). + +## Overview + +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. + +On The Graph, you can: + +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. + +### What is GraphQL? + +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
+- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? + +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `subgraph.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). +- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). ## Subgraph Lifecycle -Subgraphs deployed to the network have a defined lifecycle. +Here is a general overview of a subgraph’s lifecycle: -### Build locally +![Subgraph Lifecycle](/img/subgraph-lifecycle.png) -As with all subgraph development, it starts with local development and testing. Developers can use the same local setup whether they are building for The Graph Network, the hosted service or a local Graph Node, leveraging `graph-cli` and `graph-ts` to build their subgraph. Developers are encouraged to use tools such as [Matchstick](https://github.com/LimeChain/matchstick) for unit testing to improve the robustness of their subgraphs. +### Build locally -> There are certain constraints on The Graph Network, in terms of feature and network support. Only subgraphs on [supported networks](/developing/supported-networks) will earn indexing rewards, and subgraphs which fetch data from IPFS are also not eligible. +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Deploy to Subgraph Studio -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. - -### Publish to the Network +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -When the developer is happy with their subgraph, they can publish it to The Graph Network. This is an on-chain action, which registers the subgraph so that it is discoverable by Indexers. Published subgraphs have a corresponding NFT, which is then easily transferable. The published subgraph has associated metadata, which provides other network participants with useful context and information. +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. 
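To make the GraphQL querying described in the overview above concrete, here is a minimal sketch of a query against a published subgraph. The gateway URL follows the `https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>` pattern, but the API key, subgraph ID, and the `tokens` entity are placeholders; the entities you can actually query depend on the schema of the subgraph you target.

```typescript
// Minimal sketch: POST a GraphQL query to a subgraph endpoint with fetch (Node 18+ or a browser).
const ENDPOINT =
  "https://gateway.thegraph.com/api/<YOUR_API_KEY>/subgraphs/id/<SUBGRAPH_ID>"; // placeholders

const query = /* GraphQL */ `
  {
    tokens(first: 5) {
      id
      symbol
    }
  }
`;

async function main(): Promise<void> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors)); // GraphQL-level errors
  console.log(data.tokens); // shape depends on the subgraph's schema
}

main().catch(console.error);
```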
-### Signal to Encourage Indexing +### Publish to the Network -Published subgraphs are unlikely to be picked up by Indexers without the addition of signal. Signal is locked GRT associated with a given subgraph, which indicates to Indexers that a given subgraph will receive query volume, and also contributes to the indexing rewards available for processing it. Subgraph developers will generally add signal to their subgraph, in order to encourage indexing. Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Querying & Application Development +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -Once a subgraph has been processed by Indexers and is available for querying, developers can start to use the subgraph in their applications. Developers query subgraphs via a gateway, which forwards their queries to an Indexer who has processed the subgraph, paying query fees in GRT. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Updating Subgraphs +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### Querying & Application Development -Once the Subgraph Developer is ready to update, they can initiate a transaction to point their subgraph at the new version. Updating the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. 
+Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Deprecating Subgraphs +Learn more about [querying subgraphs](/querying/querying-the-graph/). -At some point a developer may decide that they no longer need a published subgraph. At that point they may deprecate the subgraph, which returns any signalled GRT to the Curators. +### Updating Subgraphs -### Diverse Developer Roles +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Some developers will engage with the full subgraph lifecycle on the network, publishing, querying and iterating on their own subgraphs. Some may be focused on subgraph development, building open APIs which others can build on. Some may be application focused, querying subgraphs deployed by others. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Developers and Network Economics +### Deprecating & Transferring Subgraphs -Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is updated. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/pl/network/explorer.mdx b/website/pages/pl/network/explorer.mdx index 50c9a33dbbca..05b322667e92 100644 --- a/website/pages/pl/network/explorer.mdx +++ b/website/pages/pl/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgrafy -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. 
+ +Once you have finished deploying and publishing your subgraph in Subgraph Studio, click on the "Subgraphs" tab at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries. +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![Explorer Image 2](/img/Subgraph-Details.png) -On each subgraph’s dedicated page, several details are surfaced. These include: +On each subgraph’s dedicated page, you can do the following: - Signal/Un-signal on subgraphs - View more details such as charts, current deployment ID, and other metadata @@ -31,26 +45,32 @@ On each subgraph’s dedicated page, several details are surfaced. These include ## Participants -Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in-depth review of what each tab means for you. +This section provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. Indexers ![Explorer Image 4](/img/Indexer-Pane.png) -Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexer's delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators -- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards -- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters.
Cooldown periods are set up by Indexers when they update their delegation parameters -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking on the right-hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. To learn more about how to become an Indexer, you can take a look at the [official documentation](/network/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ To learn more about how to become an Indexer, you can take a look at the [offici ### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. 
Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: +In the Curator table listed below, you can see: - The date the Curator started curating - The number of GRT that was deposited @@ -68,34 +92,36 @@ Curators can be community members, data consumers, or even subgraph developers w ![Explorer Image 6](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/network/curating) +If you want to learn more about the Curator role, you can do so by visiting the [official documentation](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. Delegators -Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. -Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)!
![Explorer Image 7](/img/Delegation-Overview.png) -The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: +In the Delegators table you can see the active Delegators in the community and important metrics: - The number of Indexers a Delegator is delegating towards - A Delegator’s original delegation - The rewards they have accumulated but have not withdrawn from the protocol - The realized rewards they withdrew from the protocol - Total amount of GRT they have currently in the protocol -- The date they last delegated at +- The date they last delegated -If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## Network -In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. ### Overview -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - The current total network stake - The stake split between the Indexers and their Delegators @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - Protocol parameters such as curation reward, inflation rate, and more - Current epoch rewards and fees -A few key details that are worth mentioning: +A few key details to note: -- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. 
during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ In the Epochs section, you can analyze on a per-epoch basis, metrics such as: - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. - - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![Explorer Image 9](/img/Epoch-Stats.png) ## Your User Profile -Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Profile Overview -This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![Explorer Image 10](/img/Profile-Overview.png) ### Subgraphs Tab -If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: @@ -158,7 +189,9 @@ This section will also include details about your net Indexer rewards and net qu ### Delegating Tab -Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. +Delegators are important to the Graph Network. 
They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. diff --git a/website/pages/pl/network/indexing.mdx b/website/pages/pl/network/indexing.mdx index 77013e86a790..ea382714aeff 100644 --- a/website/pages/pl/network/indexing.mdx +++ b/website/pages/pl/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: +1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -112,12 +112,12 @@ Indexers may differentiate themselves by applying advanced techniques for making - **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. - **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. -| Setup | Postgres
    (CPUs) | Postgres
    (memory in GBs) | Postgres
    (disk in TBs) | VMs
    (CPUs) | VMs
    (memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
    (CPUs) | Postgres
    (memory in GBs) | Postgres
    (disk in TBs) | VMs
    (CPUs) | VMs
    (memory in GBs) | +| -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | ### What are some basic security precautions an Indexer should take? @@ -149,20 +149,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer #### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
    (for paid subgraph queries) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
    (for paid subgraph queries) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -545,7 +545,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additonal argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Queue allocation action diff --git a/website/pages/pl/network/overview.mdx b/website/pages/pl/network/overview.mdx index 16214028dbc9..0779d9a6cb00 100644 --- a/website/pages/pl/network/overview.mdx +++ b/website/pages/pl/network/overview.mdx @@ -2,14 +2,20 @@ title: Network Overview --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Overview +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Token Economics](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20 used to allocate resources in the network. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
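To make "querying a subgraph" concrete, here is a minimal sketch of what an application's request can look like. Both the endpoint and the queried field are placeholders: substitute the query URL of the subgraph you want to read and a field that actually exists in its schema.

```sh
# Placeholder endpoint and query; swap in a real subgraph query URL and fields from its schema.
curl -s https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID> \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ tokens(first: 5) { id } }"}'
```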
diff --git a/website/pages/pl/new-chain-integration.mdx b/website/pages/pl/new-chain-integration.mdx index 35b2bc7c8b4a..534d6701efdb 100644 --- a/website/pages/pl/new-chain-integration.mdx +++ b/website/pages/pl/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integrating New Networks +title: New Chain Integration --- -Graph Node can currently index data from the following chain types: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via a [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via a [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via an [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -If you are interested in any of those chains, integration is a matter of Graph Node configuration and testing. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. For more information, refer to [Testing an EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Testing an EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Difference between EVM JSON-RPC & Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -While the two are suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](substreams/), like building [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). 
In addition, Firehose allows for improved indexing speeds when compared to JSON-RPC. +### 2. Firehose Integration -New EVM chain integrators may also consider the Firehose-based approach, given the benefits of substreams and its massive parallelized indexing capabilities. Supporting both allows developers to choose between building substreams or subgraphs for the new chain. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **NOTE**: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that eth_calls are [not a good practice for developers](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Testing an EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON RPC methods: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. 
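Whichever extraction path a chain ends up using, it is worth sanity-checking early that the node actually answers the JSON-RPC calls Graph Node relies on (listed earlier on this page). A quick, illustrative batch request, assuming a node listening on `localhost:8545`, looks like this:

```sh
# Illustrative check only; adjust the URL for your node. Historical eth_call additionally requires an archive node.
curl -s http://localhost:8545 \
  -H 'Content-Type: application/json' \
  -d '[{"jsonrpc":"2.0","id":1,"method":"net_version","params":[]},
       {"jsonrpc":"2.0","id":2,"method":"eth_getBlockByNumber","params":["latest", false]}]'
```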
-- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(optionally required for Graph Node to support call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graph Node Configuration +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Start by preparing your local environment** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graph Node Configuration + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON RPC compliant URL - > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ -**Test the integration by locally deploying a subgraph** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Create a simple example subgraph. Some options are below: - 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point - 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. 
It must remain `ethereum` even if the network name is different. -Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integrating a new Firehose-enabled chain +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Create a simple example subgraph. Some options are below: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. -Integrating a new chain is also possible using the Firehose approach. This is currently the best option for non-EVM chains and a requirement for substreams support. Additional documentation focuses on how Firehose works, adding Firehose support for a new chain and integrating it with Graph Node. Recommended docs for integrators: +## Substreams-powered Subgraphs -1. [General docs on Firehose](firehose/) -2. [Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrating Graph Node with a new chain via Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/pl/operating-graph-node.mdx b/website/pages/pl/operating-graph-node.mdx index dbbfcd5fc545..fb3d538f952a 100644 --- a/website/pages/pl/operating-graph-node.mdx +++ b/website/pages/pl/operating-graph-node.mdx @@ -77,13 +77,13 @@ A complete Kubernetes example configuration can be found in the [indexer reposit When it is running Graph Node exposes the following ports: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | > **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. diff --git a/website/pages/pl/querying/graphql-api.mdx b/website/pages/pl/querying/graphql-api.mdx index 2bbc71b5bb9c..d8671e53a77c 100644 --- a/website/pages/pl/querying/graphql-api.mdx +++ b/website/pages/pl/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Queries +## What is GraphQL? -In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Examples @@ -21,7 +29,7 @@ Query for a single `Token` entity defined in your schema: } ``` -> **Note:** When querying for a single entity, the `id` field is required, and it must be a string. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Query all `Token` entities: @@ -36,7 +44,10 @@ Query all `Token` entities: ### Sorting -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### Example @@ -53,7 +64,7 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. -In the following example, we sort the tokens by the name of their owner: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ In the following example, we sort the tokens by the name of their owner: ### Pagination -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. - -Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. 
+When querying a collection, it's best to: -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### Example using `first` @@ -106,7 +118,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect #### Example using `first` and `id_ge` -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtering -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Example using `where` @@ -155,7 +168,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### Example for block filtering -You can also filter entities by the `_change_block(number_gte: Int)` - this filters entities which were updated in or after the specified block. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). @@ -193,7 +206,7 @@ As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/release ##### `AND` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. 
```graphql { @@ -208,7 +221,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ``` > **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ##### `OR` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. 
#### Example @@ -322,12 +335,12 @@ Fulltext search queries have one required field, `text`, for supplying search te Fulltext search operators: -| Symbol | Operator | Description | -| --- | --- | --- | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | +| Symbol | Operator | Description | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `Follow by` | Specify the distance between two words. | +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | #### Examples @@ -376,11 +389,11 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 ## Schema -The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### Entities diff --git a/website/pages/pl/querying/querying-best-practices.mdx b/website/pages/pl/querying/querying-best-practices.mdx index 32d1415b20fa..5654cf9e23a5 100644 --- a/website/pages/pl/querying/querying-best-practices.mdx +++ b/website/pages/pl/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: Querying Best Practices --- -The Graph provides a decentralized way to query data from blockchains. +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. -The Graph network's data is exposed through a GraphQL API, making it easier to query data with the GraphQL language. 
- -This page will guide you through the essential GraphQL language rules and GraphQL queries best practices. +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQL is a language and set of conventions that transport over HTTP. It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). -However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), we recommend you to use our `graph-client` that supports unique features such as: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() More GraphQL client alternatives are covered in ["Querying from an Application"](/querying/querying-from-an-application). -Now that we covered the basic rules of GraphQL queries syntax, let's now look at the best practices of GraphQL query writing. - --- ## Best Practices @@ -164,11 +160,11 @@ Doing so brings **many advantages**: - **Variables can be cached** at server-level - **Queries can be statically analyzed by tools** (more on this in the following sections) -**Note: How to include fields conditionally in static queries** +### How to include fields conditionally in static queries -We might want to include the `owner` field only on a particular condition. +You might want to include the `owner` field only on a particular condition. -For this, we can leverage the `@include(if:...)` directive as follows: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Note: The opposite directive is `@skip(if: ...)`. +> Note: The opposite directive is `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL became famous for its "Ask for what you want" tagline. For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. -When querying GraphQL APIs, always think of querying only the fields that will be actually used. - -A common cause of over-fetching is collections of entities. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. Queries should therefore almost always set first explicitly, and make sure they only fetch as many entities as they actually need. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- When querying GraphQL APIs, always think of querying only the fields that will be actually used. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. 
For example, in the following query: @@ -337,8 +332,8 @@ query { Such repeated fields (`id`, `active`, `status`) bring many issues: -- harder to read for more extensive queries -- when using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. A refactored version of the query would be the following: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) but also will result in better TypeScript types generation. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). ### GraphQL Fragment do's and don'ts -**Fragment base must be a type** +### Fragment base must be a type A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. -**How to spread a Fragment** +#### How to spread a Fragment Fragments are defined on specific types and should be used accordingly in queries. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { It is not possible to spread a fragment of type `Vote` here. -**Define Fragment as an atomic business unit of data** +#### Define Fragment as an atomic business unit of data -GraphQL Fragment must be defined based on their usage. +GraphQL `Fragment`s must be defined based on their usage. For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. -Here is a rule of thumb for using Fragment: +Here is a rule of thumb for using fragments: -- when fields of the same type are repeated in a query, group them in a Fragment -- when similar but not the same fields are repeated, create multiple fragments, ex: +- When fields of the same type are repeated in a query, group them in a `Fragment`. +- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## The essential tools +## The Essential Tools ### GraphQL web-based explorers @@ -473,11 +468,11 @@ This will allow you to **catch errors without even testing queries** on the play The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets -- go to definition for fragments and input types +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. 
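For `graphql-eslint` to validate operations against your subgraph's schema, ESLint also needs to know where the schema and documents live. The following is a minimal, illustrative override, assuming `@graphql-eslint/eslint-plugin` is installed and the schema is available locally as `schema.graphql`; check the plugin's documentation for the exact option names.

```js
// .eslintrc.cjs, illustrative sketch only; option names are assumptions, see the @graphql-eslint docs.
module.exports = {
  overrides: [
    {
      files: ['*.graphql'],
      parser: '@graphql-eslint/eslint-plugin',
      plugins: ['@graphql-eslint'],
      extends: ['plugin:@graphql-eslint/operations-recommended'],
      parserOptions: { schema: './schema.graphql' },
    },
  ],
}
```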
@@ -485,9 +480,9 @@ If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketp The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -More information on this [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) that showcases all the plugin's main features. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/pl/quick-start.mdx b/website/pages/pl/quick-start.mdx index 522fd48fbb07..22d41cd2b891 100644 --- a/website/pages/pl/quick-start.mdx +++ b/website/pages/pl/quick-start.mdx @@ -1,25 +1,19 @@ --- -title: ' Na start' +title: " Na start" --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Ensure that your subgraph will be indexing data from a [supported network](/developing/supported-networks). - -This guide is written assuming that you have: +## Prerequisites for this guide - A crypto wallet -- A smart contract address on the network of your choice - -## 1. Create a subgraph on Subgraph Studio - -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Install the Graph CLI +### 1. Install the Graph CLI -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. On your local machine, run one of the following commands: @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". 
+ +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -When you initialize your subgraph, the CLI tool will ask you for the following information: +When you initialize your subgraph, the CLI will ask you for the following information: -- Protocol: choose the protocol your subgraph will be indexing data from -- Subgraph slug: create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- Directory to create the subgraph in: choose your local directory -- Ethereum network(optional): you may need to specify which EVM-compatible network your subgraph will be indexing data from -- Contract address: Locate the smart contract address you’d like to query data from -- ABI: If the ABI is not autopopulated, you will need to input it manually as a JSON file -- Start Block: it is suggested that you input the start block to save time while your subgraph indexes blockchain data. You can locate the start block by finding the block where your contract was deployed. -- Contract Name: input the name of your contract -- Index contract events as entities: it is suggested that you set this to true as it will automatically add mappings to your subgraph for every emitted event -- Add another contract(optional): you can add another contract +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. See the following screenshot for an example for what to expect when initializing your subgraph: ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -The previous commands create a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. 
-- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -Once your subgraph is written, run the following commands: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. Once your subgraph is written, run the following commands: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. Test your subgraph - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -The logs will tell you if there are any errors with your subgraph. The logs of an operational subgraph will look like this: - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. 
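For reference, the version label can also be supplied up front instead of waiting for the interactive prompt. Recent versions of `graph-cli` accept a `--version-label` flag; verify the exact flag name with `graph deploy --help` for your installed version.

```sh
# <SUBGRAPH_SLUG> is your subgraph's slug from Subgraph Studio; the flag assumes a recent graph-cli.
graph deploy --studio <SUBGRAPH_SLUG> --version-label 0.0.1
```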
-Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -To save on gas costs, you can curate your subgraph in the same transaction that you published it by selecting this button when you publish your subgraph to The Graph’s decentralized network: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). 
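+
+> Note: As a rough worked example of the signal amounts above, and assuming the protocol's 1% curation tax, ending up with 3,000 GRT of active signal requires depositing roughly 3,030 GRT, because 1% of the deposit is burned (3,030 × 0.99 ≈ 3,000).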
+ +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -Now, you can query your subgraph by sending GraphQL queries to your subgraph’s Query URL, which you can find by clicking on the query button. +### 8. Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/pl/release-notes/assemblyscript-migration-guide.mdx b/website/pages/pl/release-notes/assemblyscript-migration-guide.mdx index 85f6903a6c69..17224699570d 100644 --- a/website/pages/pl/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/pl/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - You'll need to rename your duplicate variables if you had variable shadowing. - ### Null Comparisons - By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - To solve you can simply change the `if` statement to something like this: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript diff --git a/website/pages/pl/sps/introduction.mdx b/website/pages/pl/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/pl/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. 
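+
+For orientation, a triggers-based data source entry in a subgraph manifest looks roughly like the sketch below. The package, module, and network names are placeholders rather than a definitive configuration; a complete working example appears in the Solana trigger tutorial elsewhere in these docs.
+
+```yaml
+dataSources:
+  - kind: substreams
+    name: example_substreams_source # placeholder name
+    network: mainnet # any network supported by your Substreams package
+    source:
+      package:
+        moduleName: map_events # placeholder module defined in your .spkg
+        file: ./example-package-v0.1.0.spkg
+    mapping:
+      apiVersion: 0.0.7
+      kind: substreams/graph-entities
+      file: ./src/mappings.ts
+      handler: handleTriggers # subgraph handler that receives the module's output as raw bytes
+```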
+ +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/pl/sps/triggers-example.mdx b/website/pages/pl/sps/triggers-example.mdx new file mode 100644 index 000000000000..8e4f96eba14a --- /dev/null +++ b/website/pages/pl/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## Prerequisites + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: + +```ts +import { Protobuf } from 'as-proto/assembly' +import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events' +import { MyTransfer } from '../generated/schema' + +export function handleTriggers(bytes: Uint8Array): void { + const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode) + + for (let i = 0; i < input.data.length; i++) { + const event = input.data[i] + + if (event.transfer != null) { + let entity_id: string = `${event.txnId}-${i}` + const entity = new MyTransfer(entity_id) + entity.amount = event.transfer!.instruction!.amount.toString() + entity.source = event.transfer!.accounts!.source + entity.designation = event.transfer!.accounts!.destination + + if (event.transfer!.accounts!.signer!.single != null) { + entity.signers = [event.transfer!.accounts!.signer!.single!.signer] + } else if (event.transfer!.accounts!.signer!.multisig != null) { + entity.signers = event.transfer!.accounts!.signer!.multisig!.signers + } + entity.save() + } + } +} +``` + +## Step 5: Generate Protobuf Files + +To generate Protobuf objects in AssemblyScript, run the following command: + +```bash +npm run protogen +``` + +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. + +## Conclusion + +You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case. + +For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana). diff --git a/website/pages/pl/sps/triggers.mdx b/website/pages/pl/sps/triggers.mdx new file mode 100644 index 000000000000..ed19635d4768 --- /dev/null +++ b/website/pages/pl/sps/triggers.mdx @@ -0,0 +1,37 @@ +--- +title: Substreams Triggers +--- + +Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework. + +> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container. + +The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. + +```tsx +export function handleTransactions(bytes: Uint8Array): void { + let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).trasanctions // 1. + if (transactions.length == 0) { + log.info('No transactions found', []) + return + } + + for (let i = 0; i < transactions.length; i++) { + // 2. + let transaction = transactions[i] + + let entity = new Transaction(transaction.hash) // 3. + entity.from = transaction.from + entity.to = transaction.to + entity.save() + } +} +``` + +Here's what you’re seeing in the `mappings.ts` file: + +1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object +2. 
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/pl/substreams.mdx b/website/pages/pl/substreams.mdx index 710e110012cc..a838a6924e2f 100644 --- a/website/pages/pl/substreams.mdx +++ b/website/pages/pl/substreams.mdx @@ -4,9 +4,11 @@ title: Substreams ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## How Substreams Works in 4 Steps @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/pl/sunrise.mdx b/website/pages/pl/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/pl/sunrise.mdx +++ b/website/pages/pl/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. 
Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? 
-Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? 
- -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. 
The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/pl/supported-network-requirements.mdx b/website/pages/pl/supported-network-requirements.mdx index df15ef48d762..afbf755c0a5a 100644 --- a/website/pages/pl/supported-network-requirements.mdx +++ b/website/pages/pl/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Network | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| Network | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVMe preferred)
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/pl/tap.mdx b/website/pages/pl/tap.mdx new file mode 100644 index 000000000000..872ad6231e9c --- /dev/null +++ b/website/pages/pl/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## Overview + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
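+
+As a simplified worked example of this flow: if a gateway sends 10,000 receipts worth 0.0001 GRT each for individual queries, `tap-agent` aggregates them off-chain into a single RAV worth 1 GRT, and only that one RAV is redeemed on-chain once the allocation closes, rather than 10,000 separate transactions. (The figures here are illustrative only.)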
+ +## Blockchain Addresses + +### Contracts + +| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| ------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | +| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | +| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | + +### Gateway + +| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| ---------- | --------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | + +### Requirements + +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. + +- [Graph TAP Aribtrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) + +> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployement. As a result, you have to index it manually. + +## Migration Guide + +### Software versions + +| Component | Version | Image Link | +| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- | +| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) | +| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) | +| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) | + +### Steps + +1. **Indexer Agent** + + - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). + - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + +2. **Indexer Service** + + - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless. + +3. **TAP Agent** + + - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + +4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Notes: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be set according to the [Blockchain addresses section](/tap/#contracts), using the appropriate chain ID.
+
+**Log Level**
+
+- You can set the log level by using the `RUST_LOG` environment variable.
+- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`.
+
+## Monitoring
+
+### Metrics
+
+All components expose port 7300, which can be scraped by Prometheus.
+
+### Grafana Dashboard
+
+You can download the [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import it.
+
+### Launchpad
+
+Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer).
diff --git a/website/pages/pt/about.mdx b/website/pages/pt/about.mdx
index b76fd3f3da93..f244df9f2b9d 100644
--- a/website/pages/pt/about.mdx
+++ b/website/pages/pt/about.mdx
@@ -2,46 +2,66 @@ title: Sobre o The Graph
---
 
-Esta página explicará o que é o The Graph e como pode começar.
-
## O que é o The Graph?
 
-O The Graph é um protocolo descentralizado para indexação e queries de dados de blockchains. O The Graph possibilita a consulta de dados que são difíceis de consultar diretamente.
+O The Graph é um protocolo descentralizado poderoso que permite a consulta e indexação rápida de dados em blockchain. Ele simplifica o processo complexo de queries de dados de blockchain, o que facilita e acelera a programação de dApps.
+
+## Entenda o Básico
 
Projetos com contratos inteligentes complexos, como o [Uniswap](https://uniswap.org/) e iniciativas de NFTs como o [Bored Ape Yacht Club](https://boredapeyachtclub.com/), armazenam dados na blockchain Ethereum, o que torna muito difícil ler qualquer coisa que não seja dados básicos diretamente da blockchain.
 
-No caso do Bored Ape Yacht Club, podemos realizar operações básicas de leitura no [contrato](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code), como buscar o dono de um certo Ape, buscar a URI de conteúdo de um Ape com base na sua ID, ou na reserva total. Isto é possível porque estas operações de leitura são programadas diretamente no contrato inteligente, mas consultas e operações no mundo real mais avançadas, como agregação, busca, relacionamentos, e filtragem não-trivial _não_ são possíveis. Por exemplo, se quiséssemos consultar por Apes que são de um certo endereço, e filtrar por uma das suas características, nós não poderíamos pegar essa informação ao interagir diretamente com o próprio contrato.
+### Desafios sem o The Graph
+
+No caso do exemplo listado acima, o Bored Ape Yacht Club, é possível realizar operações básicas [no contrato](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). Pode-se ver o dono de um certo Ape, ler a URI de um Ape com base na sua ID, ou ler a reserva total.
+
+- Isto pode ser feito porque estas operações de leitura são programadas diretamente no próprio contrato inteligente. Porém, queries e operações mais avançadas e específicas do mundo real, como agregação, busca, relacionamentos e filtros não triviais, **não são possíveis**.
+
+- Por exemplo, se alguém quisesse ver Apes em posse de um endereço específico e refinar a sua busca com base numa característica particular, não seria possível obter aquela informação ao interagir diretamente com o próprio contrato.
+ +- Para conseguir mais dados, seria necessário processar todo evento de [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) emitido na história, ler os metadados do IPFS usando a ID e o hash IPFS do token, e depois agregá-los. + +### Por que isto é um problema? + +Levariam **horas, ou até mesmo dias**, para que um aplicativo descentralizado (dApp) executado em um navegador conseguisse uma resposta a estas questões simples. + +Como alternativa, haveria a opção de construir o seu próprio servidor, processar as transações, salvá-las num banco de dados, e construir um endpoint de API sobre tudo isso tudo para poder fazer o query dos dados. Porém, esta opção [consome muitos recursos](/network/benefits/), precisa de manutenção, apresenta um único ponto de falha, e quebra propriedades de segurança importantes obrigatórias para a descentralização. + +Propriedades de blockchain, como finalidade, reorganizações de chain, ou blocos uncle, complicam ainda mais este processo, e não apenas o tornam longo e cansativo, mas dificultam conceitualmente a retirada de resultados precisos de queries dos dados da blockchain. + +## The Graph Providencia uma Solução + +O The Graph resolve este desafio com um protocolo descentralizado que indexa e permite queries eficientes e de alto desempenho de dados de blockchain. Estas APIs ("subgraphs" indexados) podem então ser consultados num query com uma API GraphQL padrão. -Para conseguir estes dados, seria necessário processar todo evento de [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) emitido na história, ler os metadados do IPFS usando a ID e o hash IPFS do token, e depois agregá-los. Levariam **horas, ou até mesmo dias,** para que um aplicativo descentralizado (dApp) executado em um navegador conseguisse uma resposta a estas questões simples. +Hoje, há um protocolo descentralizado apoiado pela implementação de código aberto do [Graph Node](https://github.com/graphprotocol/graph-node) que facilita este processo. -Também valeria construir o seu próprio servidor, processar as transações lá, salvá-las em um banco de dados, e construir um endpoint de API sobre tudo isso tudo para poder fazer o query dos dados. Porém, esta opção [consome muitos recursos](/network/benefits/), precisa de manutenção, apresenta um único ponto de falha, e quebra propriedades de segurança importantes obrigatórias para a descentralização. +### Como o The Graph Funciona -**Indexar dados de blockchain é muito, muito difícil.** +Indexar dados em blockchain é um processo difícil, mas facilitado pelo The Graph. O The Graph aprende como indexar dados no Ethereum com o uso de subgraphs. Subgraphs são APIs personalizadas construídas com dados de blockchain, que extraem, processam e armazenam dados de uma blockchain para poderem ser consultadas suavemente via GraphQL. -Propriedades de blockchain, como finalidade, reorganizações de chain, ou blocos uncle, complicam ainda mais este processo, e não apenas o tornam longo e cansativo, mas conceitualmente dificultam a retirada de resultados corretos de queries dos dados da blockchain. +#### Especificações -O The Graph fornece uma solução com um protocolo descentralizado que indexa e permite queries eficientes e de alto desempenho de dados de blockchain. Estas APIs ("subgraphs" indexados) podem então ser consultados num query com uma API GraphQL padrão. Hoje, há um serviço hospedado, e também um protocolo descentralizado com as mesmas capabilidades. 
Ambos são apoiados pela implementação de código aberto do [Graph Node](https://github.com/graphprotocol/graph-node). +- O The Graph usa descrições de subgraph, conhecidas como "manifests de subgraph" dentro do subgraph. -## Como o The Graph Funciona +- A descrição do subgraph contorna os contratos inteligentes de interesse para o mesmo, os eventos dentro destes contratos para focar, e como mapear dados de evento para dados que o The Graph armazenará no seu banco de dados. -O The Graph aprende quais dados indexar, e como indexar os dados na Ethereum com base em descrições de subgraph — conhecidas como manifests de subgraph. A descrição do subgraph define os contratos inteligentes de interesse para o mesmo, os eventos nestes contratos para prestar atenção, e como mapear dados de evento para dados que o The Graph armazenará no seu banco de dados. +- Ao criar um subgraph, primeiro é necessário escrever um manifest de subgraph. -Quando tiver escrito um `subgraph manifest`, use o Graph CLI para armazenar a definição no IPFS e mandar o indexador começar a indexar dados para o subgraph. +- Após escrever o `subgraph manifest`, é possível usar o Graph CLI para armazenar a definição no IPFS e instruir o Indexador para começar a indexar dados para o subgraph. -Este diagrama dá mais detalhes sobre o fluxo de dados quando um manifest de subgraph for lançado, na questão de transações na Ethereum: +O diagrama abaixo dá informações mais detalhadas sobre o fluxo de dados quando um manifest de subgraph for lançado com transações no Ethereum. ![Um gráfico que explica como o The Graph utiliza Graph Nodes para servir queries para consumidores de dados](/img/graph-dataflow.png) O fluxo segue estes passos: -1. Um dApp adiciona dados à Ethereum através de uma transação em contrato inteligente. -2. O contrato inteligente emite um ou mais eventos enquanto processa a transação. -3. O Graph Node escaneia continuamente a Ethereum por novos blocos e os dados que podem conter para o seu subgraph. -4. O Graph Node encontra eventos na Ethereum para o seu subgraph nestes blocos e executa os handlers de mapeamento que forneceu. O mapeamento é um módulo WASM que cria ou atualiza as entidades de dados que o Graph Node armazena em resposta a eventos na Ethereum. -5. O dApp consulta o Graph Node para dados indexados da blockchain, através do [endpoint GraphQL](https://graphql.org/learn/) do node. O Graph Node, por sua vez, traduz os queries GraphQL em queries para o seu armazenamento subjacente de dados para poder retirar estes dados, com o uso das capacidades de indexação do armazenamento. O dApp exibe estes dados em uma interface rica para utilizadores finais, que eles usam para emitir novas transações na Ethereum. E o ciclo se repete. +1. Um dApp adiciona dados à Ethereum através de uma transação em contrato inteligente. +2. O contrato inteligente emite um ou mais eventos enquanto processa a transação. +3. O Graph Node escaneia continuamente a Ethereum por novos blocos e os dados que podem conter para o seu subgraph. +4. O Graph Node encontra eventos na Ethereum para o seu subgraph nestes blocos e executa os handlers de mapeamento que forneceu. O mapeamento é um módulo WASM que cria ou atualiza as entidades de dados que o Graph Node armazena em resposta a eventos na Ethereum. +5. O dApp consulta o Graph Node para dados indexados da blockchain, através do [endpoint GraphQL](https://graphql.org/learn/) do node. 
O Graph Node, por sua vez, traduz os queries GraphQL em queries para o seu armazenamento subjacente de dados para poder retirar estes dados, com o uso das capacidades de indexação do armazenamento. O dApp exibe estes dados em uma interface rica para utilizadores finais, que eles usam para emitir novas transações na Ethereum. E o ciclo se repete. ## Próximos Passos -As seguintes secções explicam em mais detalhes como definir subgraphs, como lançá-los, e como buscar dados dos indexes que o Graph Node constrói. +As seguintes secções providenciam um olhar mais íntimo nos subgraphs, na sua publicação e no query de dados. -Antes de começar a escrever o seu próprio subgraph, confira o [Graph Explorer](https://thegraph.com/explorer) e explore alguns dos subgraphs que já foram lançados. A página para cada subgraph contém um playground que permite-lhe consultar os dados desse subgraph com queries no GraphQL. +Antes de escrever o seu próprio subgraph, é recomendado explorar o [Graph Explorer](https://thegraph.com/explorer) e revir alguns dos subgraphs já publicados. A página de todo subgraph inclui um ambiente de teste em GraphQL que lhe permite consultar os dados dele. diff --git a/website/pages/pt/arbitrum/arbitrum-faq.mdx b/website/pages/pt/arbitrum/arbitrum-faq.mdx index 42fa0be68967..873be7d055ef 100644 --- a/website/pages/pt/arbitrum/arbitrum-faq.mdx +++ b/website/pages/pt/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Perguntas Frequentes do Arbitrum Clique [aqui](#billing-on-arbitrum-faqs) para pular até as Perguntas Frequentes de Cobranças no Arbitrum. -## Por que o The Graph está a implementar uma Solução L2? +## Why did The Graph implement an L2 Solution? -Ao escalar o The Graph na L2, os participantes da rede podem: +By scaling The Graph on L2, network participants can now benefit from: - Poupar até 26x em taxas de gas @@ -14,7 +14,7 @@ Ao escalar o The Graph na L2, os participantes da rede podem: - Herdar segurança do Ethereum -A escala dos contratos inteligentes do protocolo à L2 permite que os participantes da rede interajam com mais frequência por menos custos em taxas de gás. Por exemplo, os Indexadores podem abrir e fechar alocações para indexar um número maior de subgraphs com mais frequência; os programadores podem lançar e atualizar subgraphs com mais facilidade; os Delegadores podem delegar GRT com mais frequência; e os Curadores podem adicionar ou retirar sinais de um número maior de subgraphs–ações que, antigamente, eram consideradas caras demais para realizar com frequência devido ao gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. A comunidade do The Graph prosseguiu com o Arbitrum no ano passado, após o resultado da discussão [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -41,27 +41,21 @@ Para aproveitar o The Graph na L2, use este switcher de dropdown para alternar e ## Como um programador, consumidor de dados, Indexador, Curador ou Delegante, o que devo fazer agora? 
-Não é necessário fazer nada imediatamente, mas participantes na rede são bem-vindos para começar a mudança ao Arbitrum para aproveitar os benefícios da L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -As equipas centrais de programação estão a trabalhar para criar ferramentas de transferência para L2, que facilitarão muito o movimento de delegação, curadoria e subgraphs ao Arbitrum. Os participantes da rede podem esperar que ferramentas de transferência estejam disponíveis até o fim de 2023. +All indexing rewards are now entirely on Arbitrum. -Até 10 de abril de 2023, 5% de todas as recompensas de indexação eram mintadas no Arbitrum. À medida que a participação na rede aumenta, e que o Conselho a aprova, as recompensas de indexação serão movidas lentamente do Ethereum até o Arbitrum, até finalizar por completo a mudança ao Arbitrum. - -## Se eu quiser participar na rede na L2, o que devo fazer? - -Por favor, ajude a [testar a rede](https://testnet.thegraph.com/explorer) na L2 e informe-nos sobre a sua experiência no [Discord](https://discord.gg/graphprotocol). - -## Há algum risco associado ao escalamento da rede à L2? +## Were there any risks associated with scaling the network to L2? Todos os contratos inteligentes já foram devidamente [auditados](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Tudo foi testado exaustivamente, e já está pronto um plano de contingência para garantir uma transição segura e suave. Mais detalhes [aqui](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Os subgraphs existentes no Ethereum continuarão a funcionar? +## Are existing subgraphs on Ethereum working? -Sim. Os contratos na Graph Network operarão em paralelo, tanto no Ethereum quanto no Arbitrum, até a migração completa ao Arbitrum numa data futura. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## O GRT terá um novo contrato inteligente lançado no Arbitrum? +## Does GRT have a new smart contract deployed on Arbitrum? Sim, o GRT tem um [contrato inteligente adicional no Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). Porém, o [contrato do GRT](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) na mainnet do Ethereum continuará em operação. diff --git a/website/pages/pt/billing.mdx b/website/pages/pt/billing.mdx index 45abd7514130..a9694e876d05 100644 --- a/website/pages/pt/billing.mdx +++ b/website/pages/pt/billing.mdx @@ -14,7 +14,7 @@ Há dois planos disponíveis para queries de subgraphs na Graph Network. ## Pagamentos de Queries com cartão de crédito -- Para configurar opções de pagamento no cartão, os utilizadores deverão acessar o Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Entre na [página de Cobranças do Subgraph Studio](https://thegraph.com/studio/billing/). 2. Clique no botão "Connect Wallet" (Conectar Carteira) no canto superior direito da página. Isto levará à página de seleção de carteira; lá, selecione a sua carteira e clique em "Connect". 3. 
Escolha "atualizar plano" se está a atualizar do Plano Grátis, ou escolha "Gerir plano" se já adicionou GRT ao seu saldo de cobrança no passado. Depois, é possível estimar o número de queries para conseguir uma estimativa de preço, mas isto não é obrigatório. @@ -69,7 +69,7 @@ Quando fizeres bridge do GRT, será possível adicioná-lo ao seu saldo de cobra 1. Entre na [página de Cobranças do Subgraph Studio](https://thegraph.com/studio/billing/). 2. Clique no botão "Connect Wallet" (Conectar Carteira) no canto superior direito da página, selecione a sua carteira e clique em "Connect". -3. Clique no botão "Manage" (Gerir) no canto superior direito da página. Selecione "Withdraw GRT" (Sacar GRT). Um painel lateral aparecerá. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Insira a quantia de GRT que quer sacar. 5. Clique em 'Withdraw GRT' (Sacar GRT) para sacar o GRT do seu saldo. Assine a transação associada na sua carteira — isto custa gas. O GRT será enviado à sua carteira Arbitrum. 6. Quando a transação for confirmada, verá o GRT sacado do seu saldo na sua carteira Arbitrum. @@ -83,7 +83,7 @@ Quando fizeres bridge do GRT, será possível adicioná-lo ao seu saldo de cobra - Para sugestões sobre o número de queries que deve usar, veja a nossa página de **Perguntas Frequentes**. 5. Escolha "Criptomoedas". Atualmente, o GRT é a única criptomoeda aceita na Graph Network. 6. Selecione o número de meses que deseja pagar antecipadamente. - - Pagamentos antecipados não te comprometem a um uso futuro. Você só será cobrado pelo que usa, e pode sacar o seu saldo a qualquer hora. + - Pagamentos antecipados não te comprometem a usos futuros. Você só será cobrado pelo que usa, e pode sacar o seu saldo a qualquer hora. 7. Escolha a rede da qual o seu GRT será depositado. GRT do Arbitrum e do Ethereum são aceitáveis. Clique "Allow GRT Access" (Permitir Acesso ao GRT) e depois especifique a quantidade de GRT que pode ser retirada da sua carteira. - Se pagar múltiplos meses antecipadamente, permita o acesso à quantia que corresponde àquela quantidade. Esta interação não custará gas. 8. Por último, clique em "Add GRT to Billing Balance" (Adicionar GRT ao Saldo de Cobranças). Esta transação precisará de GRT no Arbitrum para cobrir os custos de gas. @@ -127,7 +127,7 @@ Este é um guia passo a passo para comprar GRT na Binance. 7. Verifique a sua compra e clique em "Comprar GRT". 8. Confirme a sua compra, e logo o seu GRT aparecerá na sua Carteira Spot da Binance. 9. É possível transferir o GRT da sua conta à sua carteira preferida, como o [MetaMask](https://metamask.io/). - - [Para sacar](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) o GRT à sua carteira, adicione o endereço da sua carteira à whitelist de saques. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Clique no botão "wallet", clique em "sacar", e selecione GRT. - Insira a quantia de GRT que deseja enviar, e o endereço da carteira na whitelist à qual quer enviar. - Clique em "Continuar" e confirme a sua transação. @@ -198,7 +198,7 @@ Saiba mais sobre como adquirir ETH na Binance [aqui](https://www.binance.com/en/ ### De quantas queries precisarei? -Não é necessário saber com antecedência quantos queries serão necessários. 
Você só será cobrado pelo que usar, e poderá sacar GRT da sua conta a qualquer hora. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. Recomendamos estimar mais queries do que necessário para que não precise encher o seu saldo com frequência. Uma boa estimativa para aplicativos pequenos ou médios é começar com 1 a 2 milhões de queries por mês e monitorar atenciosamente o uso nas primeiras semanas. Para aplicativos maiores, uma boa estimativa consiste em utilizar o número de visitas diárias ao seu site multiplicado ao número de queries que a sua página mais ativa faz ao abrir. @@ -208,6 +208,6 @@ Claro que todos os utilizadores, novatos ou experientes, podem contactar a equip Sim, sempre é possível sacar GRT que não já foi usado para queries do seu saldo de cobrança. O contrato inteligente só é projetado para bridgear GRT da mainnet Ethereum até a rede Arbitrum. Se quiser transferir o seu GRT do Arbitrum de volta à mainnet Ethereum, precisará usar a [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### O que acontece quando o meu saldo de cobrança se esgota? Receberei um aviso? +### What happens when my billing balance runs out? Will I get a warning? Serão enviadas várias notificações de email antes do seu saldo de cobrança ser esvaziado. diff --git a/website/pages/pt/chain-integration-overview.mdx b/website/pages/pt/chain-integration-overview.mdx index 98fd802e80bf..6b239622281f 100644 --- a/website/pages/pt/chain-integration-overview.mdx +++ b/website/pages/pt/chain-integration-overview.mdx @@ -6,12 +6,12 @@ Um processo de integração transparente e baseado em governança foi desenhado ## Fase 1. Integração Técnica -- Equipas constroem uma integração com o Graph Node e com o Firehose para chains sem base em EVM. [Aqui está](/new-chain-integration/). +- Por favor, visite a página de [Integração de Novas Chains](/new-chain-integration) para informações sobre o apoio do `graph-node` para chains novas. - Equipas iniciam o processo de integração de protocolo com a criação de um tópico de Fórum [aqui](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (Nova subcategoria de Fontes de Dados sob Governança e GIPs). O uso do modelo padrão do Fórum é obrigatório. ## Fase 2. Validação de Integração -- Equipas colaboram com o núcleo de programadores, com a Graph Foundation, e com operadores de interfaces gráficas e gateways de redes, como o [Subgraph Studio](https://thegraph.com/studio/), para garantir um processo de integração suave. Isto envolve a providência da infraestrutura de backend necessária, como o JSON RPC da chain a ser integrada ou os endpoints do Firehose. Equipas que querem evitar a autohospedagem de tal infraestrutura podem usar a comunidade de operadores de nodes do The Graph (Indexadores) para fazê-lo, com qual a Foundation pode oferecer ajuda. +- Equipas colaboram com o núcleo de programadores, com a Graph Foundation, e com operadores de interfaces gráficas e gateways de redes, como o [Subgraph Studio](https://thegraph.com/studio/), para garantir um processo de integração suave. Isto envolve a providência da infraestrutura de backend necessária, como o JSON RPC da chain a ser integrada, Firehose ou os endpoints de Substreams. Equipas que querem evitar a autohospedagem de tal infraestrutura podem usar a comunidade de operadores de nodes do The Graph (Indexadores) para fazê-lo, com qual a Foundation pode oferecer ajuda. 
- Indexadores do Graph testam a integração na testnet do The Graph.
- O núcleo de programadores e os Indexadores monitoram a estabilidade, a performance e o determinismo dos dados.
@@ -38,7 +38,7 @@ Este processo é relacionado ao Serviço de Dados de Subgraph, no momento aplic
Isto só impactaria o apoio do protocolo a recompensas de indexação em subgraphs movidos a Substreams. A nova implementação do Firehose precisaria de testes na testnet, seguindo a metodologia sublinhada na Fase 2 deste GIP. De maneira parecida, ao assumir que a implementação seja confiável e de bom desempenho, um PR no [Matrix de Apoio de Funções](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) seria requerido (A função de Subgraph `Substreams data sources`), assim como um novo GIP para apoio do protocolo a recompensas de indexação. Qualquer pessoa pode criar o PR e a GIP; a Foundation ajudaria com o apoio do Conselho.
 
-### 3. Quanto tempo este processo levará?
+### 3. Quanto tempo demora a conclusão do processo até o apoio total ao protocolo?
 
Espera-se que leve várias semanas, com variação a depender do tempo da programação da integração, da necessidade de pesquisas adicionais, testes e bugfixes, e como sempre, o timing do processo de governança que exige deliberações da comunidade.
 
@@ -46,4 +46,4 @@ O apoio do protocolo às recompensas de indexação depende da banda dos acionis
 
### 4. Como as prioridades serão administradas?
 
-Similar ao #3, dependará do preparo geral e da banda dos acionistas envolvidos. Por exemplo, uma nova chain, com uma implementação nova do Firehose, pode demorar mais que as integrações que já foram testadas ou estão mais adiantadas no processo de governança. Isto vale especialmente para chains antes apoiadas no [serviço hospedado](https://thegraph.com/hosted-service) ou daquelas que dependem de stacks já testados.
+Assim como no passo #3, dependerá do preparo geral e da banda dos acionistas envolvidos. Por exemplo, uma nova chain, com uma implementação nova do Firehose, pode demorar mais que as integrações que já foram testadas ou estão mais adiantadas no processo de governança.
diff --git a/website/pages/pt/cookbook/arweave.mdx b/website/pages/pt/cookbook/arweave.mdx
index d40f53a8683a..c1a729b5d940 100644
--- a/website/pages/pt/cookbook/arweave.mdx
+++ b/website/pages/pt/cookbook/arweave.mdx
@@ -86,7 +86,7 @@ dataSources:
- A rede deve corresponder a uma rede no Graph Node que a hospeda. No Subgraph Studio, a mainnet do Arweave é `arweave-mainnet`
- Fontes de dados no Arweave introduzem um campo `source.owner` opcional, a chave pública de uma carteira no Arweave
 
-Fontes de dados no Arweave apoiam duas categorias de _handlers_:
+Fontes de dados no Arweave apoiam duas categorias de *handlers*:
 
- `blockHandlers` - Executar em cada bloco novo no Arweave. Nenhum `source.owner` é exigido.
- `transactionHandlers` — Executar em todas as transações onde o `source.owner` da fonte de dados é o dono. Atualmente, um dono é exigido para o `transactionHandlers`; caso utilizadores queiram processar todas as transações, eles devem providenciar "" como o `source.owner`
@@ -97,15 +97,15 @@ Fontes de dados no Arweave apoiam duas categorias de *handlers*:
 
> Nota: Transações no [Irys (antigo Bundlr)](https://bundlr.network/) não são apoiadas presentemente.
 
-## Definição de _Schema_
+## Definição de *Schema*
 
A definição de Schema descreve a estrutura do banco de dados resultado do subgraph, e os relacionamentos entre entidades. Isto é agnóstico da fonte de dados original. 
Há mais detalhes na definição de schema de subgraph [aqui](/developing/creating-a-subgraph/#the-graphql-schema). ## Mapeamentos de AssemblyScript -Os _handlers_ para eventos de processamento são escritos em [AssemblyScript](https://www.assemblyscript.org/). +Os *handlers* para eventos de processamento são escritos em [AssemblyScript](https://www.assemblyscript.org/). -O _indexing_ do Arweave introduz categorias de dados específicas ao Arweave ao [API do AssemblyScript](/developing/assemblyscript-api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { @@ -146,7 +146,7 @@ class Transaction { } ``` -_Handlers_ de bloco recebem um `Block`, enquanto transações recebem uma `Transaction`. +*Handlers* de bloco recebem um `Block`, enquanto transações recebem uma `Transaction`. Escrever os mapeamentos de um Subgraph no Arweave é muito similar à escrita dos mapeamentos de um Subgraph no Ethereum. Para mais informações, clique [aqui](/developing/creating-a-subgraph/#writing-mappings). diff --git a/website/pages/pt/cookbook/base-testnet.mdx b/website/pages/pt/cookbook/base-testnet.mdx index e9f4e14606fa..4a1611488e9b 100644 --- a/website/pages/pt/cookbook/base-testnet.mdx +++ b/website/pages/pt/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ O seu subgraph slug é uma identidade para o seu subgraph. A ferramenta de CLI g O comando anterior cria um subgraph de apoio que pode ser usado como ponto inicial para a construção do seu subgraph. Ao fazer mudanças ao subgraph, o trabalho principal será com três arquivos: - Manifest (subgraph.yaml) — O manifest define quais fontes de dados seus subgraphs indexarão. Certifique-se de adicionar `base-sepolia` como o nome da rede no arquivo manifest, para lançar o seu subgraph na testnet Base Sepolia. -- Schema (schema.graphql) — O schema GraphQL define quais dados desejas retirar do subgraph. +- Schema (schema.graphql) - O schema GraphQL define quais dados deseja retirar do subgraph. - Mapeamentos em AssemblyScript (mapping.ts) — Este é o código que traduz dados das suas fontes de dados às entidades definidas no schema. -Se quiser indexar dados adicionais, precisa estender o manifest, o schema e os mapeamentos. +If you want to index additional data, you will need to extend the manifest, schema and mappings. Para mais informações sobre como escrever o seu subgraph, veja [Criando um Subgraph](/desenvolvimento/criando-um-subgraph). diff --git a/website/pages/pt/cookbook/cosmos.mdx b/website/pages/pt/cookbook/cosmos.mdx index 3bd8c8dd3e11..eb72410727d1 100644 --- a/website/pages/pt/cookbook/cosmos.mdx +++ b/website/pages/pt/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ A definição do schema descreve a estrutura do banco de dados do subgraph resul Os handlers para o processamento de eventos são escritos em [AssemblyScript](https://www.assemblyscript.org/). -A indexação do Cosmos introduz categorias de dados específicas ao Cosmos ao [API do AssemblyScript](/developing/assemblyscript-api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/pt/cookbook/grafting.mdx b/website/pages/pt/cookbook/grafting.mdx index c92b438ecfdd..70bffd53aae8 100644 --- a/website/pages/pt/cookbook/grafting.mdx +++ b/website/pages/pt/cookbook/grafting.mdx @@ -30,7 +30,7 @@ Neste tutorial, cobriremos um caso de uso básico. Substituiremos um contrato ex ### Qual a Importância Disto? 
-O enxerto é uma ferramenta poderosa que lhe permite "enxertar" um subgraph em outro — a fim de, efetivamente, transferir dados históricos do subgraph existente a uma nova versão. Enquanto isto é uma forma eficaz de preservar dados e poupar tempo de indexação, enxertos podem causar complexidades e possíveis problemas ao migrar de um ambiente hospedado até a rede descentralizada. Não é possível enxertar um subgraph da Graph Network de volta ao serviço hospedado ou ao Subgraph Studio. +Isto é um recurso poderoso que permite que os programadores "enxertem" um subgraph em outro, o que, efetivamente, transfere dados históricos do subgraph existente até uma versão nova. Não é possível enxertar um subgraph da Graph Network de volta ao Subgraph Studio. ### Boas práticas @@ -80,7 +80,7 @@ dataSources: ``` - A fonte de dados `Lock` é o abi e o endereço do contrato que receberemos ao compilar e lançar o contrato -- A rede deve corresponder a uma rede indexada a ser consultada em query. Como executamos na testnet Sepolia, a rede é `sepolia` +- A rede deve corresponder a uma rede indexada que está a ser consultada em query. Como executamos na testnet Sepolia, a rede é `sepolia` - A seção `mapping` define os gatilhos de interesse e as funções que devem ser executadas em resposta àqueles gatilhos. Neste caso, esperamos o evento `Withdrawal` e chamaremos a função `handleWithdrawal` quando o evento for emitido. ## Definição de Manifest de Enxertos diff --git a/website/pages/pt/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx b/website/pages/pt/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx index 03b424569c4c..5bb8b017c0bc 100644 --- a/website/pages/pt/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx +++ b/website/pages/pt/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx @@ -2,7 +2,7 @@ title: Como Proteger Chaves de API com Componentes do Servidor Next.js --- -## Visão Geral +## Visão geral Podemos proteger a nossa chave API no frontend do nosso dApp com [componentes do servidor Next.js](https://nextjs.org/docs/app/building-your-application/rendering/server-components). Para ainda mais segurança, também podemos [restringir a nossa chave API a certos domínios ou subgraphs no Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). diff --git a/website/pages/pt/cookbook/near.mdx b/website/pages/pt/cookbook/near.mdx index 43435f23f20f..4f5d449d3ecb 100644 --- a/website/pages/pt/cookbook/near.mdx +++ b/website/pages/pt/cookbook/near.mdx @@ -37,7 +37,7 @@ Há três aspectos de definição de subgraph: **schema.graphql:** um arquivo schema que define quais dados são armazenados para o seu subgraph, e como consultá-los via GraphQL. Os requerimentos para subgraphs no NEAR são cobertos pela [documentação existente](/developing/creating-a-subgraph#the-graphql-schema). -**Mapeamentos do AssemblyScript:** [Código em AssemblyScript](/developing/assemblyscript-api) que traduz dos dados do evento às entidades definidas no seu schema. O apoio à NEAR introduz tipos de dados específicos ao NEAR e uma nova funcionalidade de análise de JSON. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. 
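To make the NEAR-specific pieces mentioned above concrete, here is a minimal, hypothetical receipt-handler sketch. The `Greeting` entity, its `message` field, and the assumption that the indexed contract emits stringified-JSON logs are illustrative only; adapt the names to your own schema and contract:

```typescript
import { near, json } from '@graphprotocol/graph-ts'
// Hypothetical entity assumed to be defined in schema.graphql and generated by `graph codegen`.
import { Greeting } from '../generated/schema'

export function handleReceipt(receiptWithOutcome: near.ReceiptWithOutcome): void {
  const logs = receiptWithOutcome.outcome.logs
  for (let i = 0; i < logs.length; i++) {
    // Assumes the contract logs stringified JSON such as {"message": "gm"}.
    const parsed = json.fromString(logs[i]).toObject()
    const message = parsed.get('message')
    if (message == null) {
      continue
    }
    // Use the receipt id plus the log index as a unique entity id.
    const id = receiptWithOutcome.receipt.id.toBase58() + '-' + i.toString()
    const greeting = new Greeting(id)
    greeting.message = message.toString()
    greeting.save()
  }
}
```

The same general pattern applies to block handlers, which receive a `near.Block` instead of a `near.ReceiptWithOutcome`.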
Existem dois comandos importantes durante o desenvolvimento de um subgraph: @@ -98,7 +98,7 @@ A definição do schema descreve a estrutura do banco de dados do subgraph resul Os handlers para o processamento de eventos são escritos em [AssemblyScript](https://www.assemblyscript.org/). -A indexação da NEAR introduz categorias de dados específicas à plataforma ao [API do AssemblyScript](/developing/assemblyscript-api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ Estes tipos são repassados para handlers de blocos e recibos: - Handlers de blocos receberão um `Block` - Handlers de recibos receberão um `ReceiptWithOutcome` -Caso contrário, o resto da [API do AssemblyScript](/developing/assemblyscript-api) está à disposição dos programadores de subgraph no Near, durante a execução dos mapeamentos. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -Isto inclui uma nova função de análise em JSON: logs na NEAR são frequentemente emitidos como JSONs em string. A nova função `json.fromString(...)` está disponível como parte da [API JSON](/developing/assemblyscript-api#json-api) para que programadores processem estes logs com facilidade. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## Lançando um Subgraph na NEAR diff --git a/website/pages/pt/cookbook/subgraph-uncrashable.mdx b/website/pages/pt/cookbook/subgraph-uncrashable.mdx index 4ffce19ebc87..19defed68524 100644 --- a/website/pages/pt/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/pt/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ O [Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrasha - A estrutura também inclui uma maneira (através do arquivo de configuração) de criar funções personalizadas, mas seguras, para configurar grupos de variáveis de entidade. Desta maneira, é impossível que o utilizador carregue/use uma entidade de graph obsoleta, e também é impossível esquecer de salvar ou determinar uma variável exigida pela função. -- Logs de aviso são gravados como logs que indicam onde há uma brecha na lógica do subgraph para ajudar a solucionar o problema e garantir a precisão dos dados. Estes logs podem ser visualizados no serviço hospedado do The Graph, na seção 'Logs'. +- Logs de aviso são registrados como logs que indicam onde há uma quebra de lógica no subgraph, para ajudar a consertar o problema e garantir a segurança dos dados. A Subgraph Uncrashable pode ser executada como flag opcional usando o comando codegen no Graph CLI. diff --git a/website/pages/pt/cookbook/upgrading-a-subgraph.mdx b/website/pages/pt/cookbook/upgrading-a-subgraph.mdx index cd93c722f1e8..070020c2995a 100644 --- a/website/pages/pt/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/pt/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ Marque a opção **Update Subgraph Details in Explorer\* (Atualizar Detalhes do ## Como Depreciar um Subgraph na The Graph Network -Siga os passos [aqui](/managing/deprecating-a-subgraph) para depreciar o seu subgraph e retirá-lo da The Graph Network. +Siga os passos [aqui](/managing/transfer-and-deprecate-a-subgraph) para depreciar o seu subgraph e retirá-lo da The Graph Network. 
## Queries em um Subgraph + Cobrança na The Graph Network diff --git a/website/pages/pt/deploying/multiple-networks.mdx b/website/pages/pt/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..91bdd9f3e94a --- /dev/null +++ b/website/pages/pt/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## Como lançar o subgraph a várias redes + +Em alguns casos, irá querer lançar o mesmo subgraph a várias redes sem duplicar o seu código completo. O grande desafio nisto é que os endereços de contrato nestas redes são diferentes. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // nome da rede + "dataSource1": { // nome do dataSource + "address": "0xabc...", // endereço do contrato (opcional) + "startBlock": 123456 // bloco inicial (opcional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +O seu arquivo de config de redes deve ficar assim: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Agora podemos executar um dos seguintes comandos: + +```sh +# Usar o arquivo networks.json padrão +yarn build --network sepolia + +# Usar arquivo com nome personalizado +yarn build --network sepolia --network-file local/do/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' 
+      abi: Gravity
+    mapping:
+      kind: ethereum/events
+```
+
+Now you are ready to `yarn deploy`.
+
+> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option:
+
+```sh
+# Usar o arquivo networks.json padrão
+yarn deploy --network sepolia
+
+# Usar arquivo com nome personalizado
+yarn deploy --network sepolia --network-file local/do/config
+```
+
+### Como usar o template subgraph.yaml
+
+One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/).
+
+Por exemplo, vamos supor que um subgraph deve ser lançado à mainnet e à Sepolia, através de diferentes endereços de contratos. Então, seria possível definir dois arquivos de config ao fornecer os endereços para cada rede:
+
+```json
+{
+  "network": "mainnet",
+  "address": "0x123..."
+}
+```
+
+e
+
+```json
+{
+  "network": "sepolia",
+  "address": "0xabc..."
+}
+```
+
+Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`:
+
+```yaml
+# ...
+dataSources:
+  - kind: ethereum/contract
+    name: Gravity
+    network: mainnet
+    network: {{network}}
+    source:
+      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC'
+      address: '{{address}}'
+      abi: Gravity
+    mapping:
+      kind: ethereum/events
+```
+
+In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`:
+
+```json
+{
+  ...
+  "scripts": {
+    ...
+    "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml",
+    "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml"
+  },
+  "devDependencies": {
+    ...
+    "mustache": "^3.1.0"
+  }
+}
+```
+
+Para lançar este subgraph à mainnet ou à Sepolia, apenas um dos seguintes comandos precisaria ser executado:
+
+```sh
+# Mainnet:
+yarn prepare:mainnet && yarn deploy
+
+# Sepolia:
+yarn prepare:sepolia && yarn deploy
+```
+
+A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759).
+
+**Note:** This approach can also be applied to more complex situations where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs are also generated from templates.
+
+## Política de arquivamento do Subgraph Studio
+
+Uma versão de subgraph no Studio é arquivada se, e apenas se, atender aos seguintes critérios:
+
+- A versão não foi publicada na rede (ou tem a publicação pendente)
+- A versão foi criada há 45 dias ou mais
+- O subgraph não foi consultado em 30 dias
+
+Além disto, quando uma nova versão é editada, se o subgraph ainda não foi publicado, então a versão N-2 do subgraph é arquivada.
+
+Todos os subgraphs afetados por esta política têm a opção de trazer de volta a versão em questão. 
+ +## Como conferir a saúde do subgraph + +Se um subgraph for sincronizado com sucesso, isto indica que ele continuará a rodar bem para sempre. Porém, novos gatilhos na rede podem revelar uma condição de erro não testada, ou ele pode começar a se atrasar por problemas de desempenho ou com os operadores de nodes. + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/pt/developing/creating-a-subgraph.mdx b/website/pages/pt/developing/creating-a-subgraph.mdx index ccdc1318423a..3c25d115207c 100644 --- a/website/pages/pt/developing/creating-a-subgraph.mdx +++ b/website/pages/pt/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: Como criar um Subgraph --- -Um subgraph extrai dados de uma blockchain, os processa e os armazena para poderem ser consultados facilmente via GraphQL. +This detailed guide provides instructions to successfully create a subgraph. -![Como definir um Subgraph](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -A definição de subgraph consiste de alguns arquivos: +![Como definir um Subgraph](/img/defining-a-subgraph.png) -- `subgraph.yaml`: um arquivo YAML que contém o manifest do subgraph +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: um schema GraphQL que define quais dados são armazenados para o seu subgraph, e como consultá-los em query via GraphQL +## Como Começar -- `AssemblyScript Mappings`: código em [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) que traduz dos dados de eventos às entidades definidas no seu schema (por ex., `mapping.ts` neste tutorial) +### Como instalar o Graph CLI -> Para utilizar o seu subgraph na rede descentralizada do The Graph, será necessário [criar uma chave API](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). É recomendado [adicionar um sinal](/network/curating/#how-to-signal) ao seu subgraph com, no mínimo, [3000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). 
+To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Antes de se aprofundar nos conteúdos do arquivo manifest, instale o [Graph CLI](https://github.com/graphprotocol/graph-tooling), que será necessário para construir e adicionar um subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## Como instalar o Graph CLI +Execute um dos seguintes comandos na sua máquina local: -O Graph CLI é escrito em JavaScript, e só pode ser usado após instalar o `yarn` ou o `npm`; vamos supor que tens o yarn daqui em diante. +#### Using [npm](https://www.npmjs.com/) -Quando tiver o `yarn`, instale o Graph CLI com o seguinte +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Instalação com o yarn:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**Instalação com o npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Instalado, o comando `graph init` pode preparar um novo projeto de subgraph, seja de um contrato existente ou de um exemplo de subgraph. Este comando serve para criar um subgraph no Subgraph Studio ao passar o `graph init --product subgraph-studio`. Se já tem um contrato inteligente lançado na mainnet do Ethereum ou uma de suas testnets, inicializar um novo subgraph daquele contrato pode ser um bom começo. +## Create a subgraph -## De um Contrato Existente +### From an existing contract -O seguinte comando cria um subgraph que indexa todos os eventos de um contrato existente. Ele tenta buscar a ABI de contrato do Etherscan e resolve solicitar um local de arquivo. Se quaisquer dos argumentos opcionais estiverem a faltar, ele levará-te a um formulário interativo. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -O `` é a ID do seu subgraph no Subgraph Studio, visível na página dos detalhes do seu subgraph. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## De um Exemplo de Subgraph +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -O segundo modo que o `graph init` apoia é criar um projeto a partir de um exemplo de subgraph. 
O seguinte comando faz isso: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -O [subgraph de exemplo](https://github.com/graphprotocol/example-subgraph) é baseado no contrato Gravity por Dani Grant, que administra avatares de usuários e emite eventos `NewGravatar` ou `UpdateGravatar` sempre que avatares são criados ou atualizados. O subgraph lida com estes eventos ao escrever entidades `Gravatar` ao armazenamento do Graph Node e garantir que estes são atualizados de acordo com os eventos. As seguintes secções lidarão com os arquivos que compõem o manifest do subgraph para este exemplo. +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Como Adicionar Novos dataSources para um Subgraph Existente +## Add new `dataSources` to an existing subgraph -Desde a `v0.31.0`, o `graph-cli` apoia a adição de novos dataSources para um subgraph existente, através do comando `graph add`. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
    [] @@ -78,22 +88,45 @@ Opções: --network-file Caminho ao arquivo de configuração das redes (padrão: "./networks.json") ``` -O comando `add` pegará a ABI do Etherscan (a não ser que um caminho para a ABI seja especificado com a opção `--abi`), e criará um novo `dataSource` da mesma maneira que o comando `graph init` cria um `dataSource` `--from-contract`, a atualizar o schema e os mapeamentos de acordo. +### Especificações + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- A opção `--merge entities` identifica como o programador gostaria de lidar com nomes de conflito em `entity` e `event`: + + - Se for `true`: o novo `dataSource` deve usar `eventHandlers` & `entities` existentes. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- O endereço (`address`) será escrito ao `networks.json` para a rede relevante. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -A opção `--merge entities` identifica como o programador gostaria de lidar com nomes de conflito em `entity` e `event`: +## Components of a subgraph -- Se for `true`: o novo `dataSource` deve usar `eventHandlers` & `entities` existentes. -- Se for `false`: um novo handler de entidades & eventos deve ser criado com `${dataSourceName}{EventName}`. +### O Manifest do Subgraph -O endereço (`address`) será escrito ao `networks.json` para a rede relevante. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **Nota:** Quando usar a cli interativa, após executar o `graph init` com êxito, receberá uma solicitação para adicionar um novo `dataSource`. +The **subgraph definition** consists of the following files: -## O Manifest do Subgraph +- `subgraph.yaml`: Contains the subgraph manifest -O manifest do subgraph `subgraph.yaml` define os contratos inteligentes indexados pelo seu subgraph; a quais eventos destes contratos prestar atenção; e como mapear dados de eventos a entidades que o Graph Node armazena e permite queries. Veja a especificação completa para manifests de subgraph [aqui](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -Para o subgraph de exemplo, o `subgraph.yaml` é: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ Um único subgraph pode indexar dados de vários contratos inteligentes. Adicion Os gatilhos para uma fonte de dados dentro de um bloco são ordenados com o seguinte processo: -1. Gatilhos de evento e chamada são, primeiro, ordenados por índice de transação no bloco. -2. Gatilhos de evento e chamada dentro da mesma transação são ordenados a usar uma convenção: primeiro, gatilhos de evento, e depois, de chamada, cada tipo a respeitar a ordem em que são definidos no manifest. -3. Gatilhos de blocos são executados após gatilhos de evento e chamada, na ordem em que são definidos no manifest. +1. Gatilhos de evento e chamada são, primeiro, ordenados por índice de transação no bloco. +2. Gatilhos de evento e chamada dentro da mesma transação são ordenados a usar uma convenção: primeiro, gatilhos de evento, e depois, de chamada, cada tipo a respeitar a ordem em que são definidos no manifest. +3. Gatilhos de blocos são executados após gatilhos de evento e chamada, na ordem em que são definidos no manifest. Estas regras de organização estão sujeitas à mudança. @@ -190,19 +223,19 @@ Estas regras de organização estão sujeitas à mudança. ### Filtros de Argumentos Indexados / Filtros de Tópicos -> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0` +> **Requer**: [SpecVersion](#specversion-releases) >= `1.2.0` -Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments. +Filtros de tópico, também conhecidos como filtros de argumentos indexados, permitem que os utilizadores filtrem eventos de blockchain com alta precisão, em base nos valores dos seus argumentos indexados. -- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data. +- Estes filtros ajudam a isolar eventos específicos de interesse do fluxo vasto de eventos na blockchain, o que permite que subgraphs operem com mais eficácia ao focarem apenas em dados relevantes. -- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain. +- Isto serve para criar subgraphs pessoais que rastreiam endereços específicos e as suas interações com vários contratos inteligentes na blockchain. #### Como Filtros de Tópicos Funcionam -When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments. +Quando um contrato inteligente emite um evento, quaisquer argumentos que forem marcados como indexados podem ser usados como filtros no manifest de um subgraph. Isto permite que o subgraph preste atenção seletiva para eventos que correspondam a estes argumentos indexados. -- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event. +- O primeiro argumento indexado do evento corresponde ao `topic1`, o segundo ao `topic2`, e por aí vai até o `topic3`, já que a Máquina Virtual de Ethereum (EVM) só permite até três argumentos indexados por evento. 
```solidity
// SPDX-License-Identifier: MIT
@@ -223,7 +256,7 @@ contract Token {

Neste exemplo:

- O evento `Transfer` é usado para gravar transações de tokens entre endereços.
-- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses.
+- Os parâmetros `from` e `to` são indexados, o que permite que ouvidores de eventos filtrem e monitorizem transferências que envolvem endereços específicos.
- A função `transfer` é uma representação simples de uma ação de transferência de token, e emite o evento Transfer sempre que é chamada.

#### Configuração em Subgraphs
@@ -249,7 +282,7 @@ Neste cenário:

- Dentro de um Tópico Único: A lógica funciona como uma condição OR. O evento será processado se corresponder a qualquer dos valores listados num tópico.
- Entre Tópicos Diferentes: A lógica funciona como uma condição AND. Um evento deve atender a todas as condições especificadas em vários tópicos para acionar o handler associado.

-#### Example 1: Tracking Direct Transfers from Address A to Address B
+#### Exemplo 1: Como Rastrear Transferências Diretas do Endereço A ao Endereço B

```yaml
eventHandlers:
@@ -265,75 +298,75 @@ Nesta configuração:

- `topic2` é configurado para filtrar eventos `Transfer` onde `0xAddressB` é o destinatário.
- O subgraph só indexará transações que ocorrerem diretamente do `0xAddressA` ao `0xAddressB`.

-#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses
+#### Exemplo 2: Como Rastrear Transações em Qualquer Direção Entre Dois ou Mais Endereços

```yaml
eventHandlers:
  - event: Transfer(indexed address,indexed address,uint256)
    handler: handleTransferToOrFrom
-    topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Sender Address
-    topic2: ['0xAddressB', '0xAddressC'] # Receiver Address
+    topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Endereço do Remetente
+    topic2: ['0xAddressB', '0xAddressC'] # Endereço do Destinatário
```

Nesta configuração:

-- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender.
-- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver.
-- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses.
+- O `topic1` é configurado para filtrar eventos `Transfer` onde `0xAddressA`, `0xAddressB` ou `0xAddressC` é o remetente.
+- O `topic2` é configurado para filtrar eventos `Transfer` onde `0xAddressB` ou `0xAddressC` é o destinatário.
+- O subgraph indexará transações que ocorrerem em qualquer direção entre vários endereços, o que permite uma monitorização abrangente das interações que envolvem todos os endereços.

## eth_call declarada

-> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0`. Currently, `eth_calls` can only be declared for event handlers.
+> **Requer**: [SpecVersion](#specversion-releases) >= `1.2.0`. Atualmente, `eth_calls` só podem ser declaradas para handlers de eventos.

-Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+`eth_calls` declarativas são valiosas para subgraphs, por permitirem que `eth_calls` sejam executadas previamente para que o `graph-node` as execute em paralelo.
-This feature does the following: +Esta ferramenta faz o seguinte: -- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency. -- Allows faster data fetching, resulting in quicker query responses and a better user experience. -- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient. +- Aumenta muito o desempenho do retiro de dados da blockchain Ethereum ao reduzir o tempo total para múltiplas chamadas e otimizar a eficácia geral do subgraph. +- Permite retiros de dados mais rápidos, o que resulta em respostas de query aceleradas e uma experiência de utilizador melhorada. +- Reduz tempos de espera para aplicativos que precisam agregar dados de várias chamadas no Ethereum, o que aumenta a eficácia do processo de retiro de dados. -### Key Concepts +### Conceitos Importantes -- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially. -- Parallel Execution: Instead of waiting for one call to finish before starting the next, multiple calls can be initiated simultaneously. -- Time Efficiency: The total time taken for all the calls changes from the sum of the individual call times (sequential) to the time taken by the longest call (parallel). +- `eth_calls` declarativas: Chamadas no Ethereum definidas para serem executadas em paralelo, e não em sequência. +- Execução Paralela: Ao invés de esperar o término de uma chamada para começar a próxima, várias chamadas podem ser iniciadas simultaneamente. +- Eficácia de Tempo: O total de tempo levado para todas as chamadas muda da soma dos tempos de chamadas individuais (sequencial) para o tempo levado para a chamada mais longa (paralelo). -### Scenario without Declarative `eth_calls` +### Cenário sem `eth_calls` Declarativas -Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings. +Imagina que tens um subgraph que precisa fazer três chamadas no Ethereum para retirar dados sobre as transações, o saldo e as posses de token de um utilizador. -Traditionally, these calls might be made sequentially: +Tradicionalmente, estas chamadas podem ser realizadas em sequência: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Chamada 1 (Transações): Leva 3 segundos +2. Chamada 2 (Saldo): Leva 2 segundos +3. Chamada 3 (Posses de Token): Leva 4 segundos -Total time taken = 3 + 2 + 4 = 9 seconds +Total de tempo: 3 + 2 + 4 = 9 segundos -### Scenario with Declarative `eth_calls` +### Cenário com `eth_calls` Declarativas -With this feature, you can declare these calls to be executed in parallel: +Com esta ferramenta, é possível declarar que estas chamadas sejam executadas em paralelo: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Chamada 1 (Transações): Leva 3 segundos +2. Chamada 2 (Saldo): Leva 2 segundos +3. Chamada 3 (Posses de Token): Leva 4 segundos -Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. +Como estas chamadas são executadas em paralelo, o total de tempo é igual ao tempo gasto pela chamada mais longa. 
-Total time taken = max (3, 2, 4) = 4 seconds +Total de tempo = max (3, 2, 4) = 4 segundos -### How it Works +### Como Funciona -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Definição Declarativa: No manifest do subgraph, as chamadas no Ethereum são declaradas de maneira que indique que elas possam ser executadas em paralelo. +2. Motor de Execução Paralela: O motor de execução do Graph Node reconhece estas declarações e executa as chamadas simultaneamente. +3. Agregação de Resultado: Quando todas as chamadas forem completadas, os resultados são agregados e usados pelo subgraph para mais processos. ### Exemplo de Configuração no Manifest do Subgraph -Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`. +`eth_calls` declaradas podem acessar o `event.address` do evento subjacente junto com todos os `event.params`. -`Subgraph.yaml` using `event.address`: +`Subgraph.yaml` que usa o `event.address`: ```yaml eventHandlers: @@ -344,14 +377,14 @@ calls: global1X128: Pool[event.address].feeGrowthGlobal1X128() ``` -Details for the example above: +Detalhes para o exemplo acima: -- `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. -- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` -- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. +- `global0X128` é a `eth_call` declarada. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. +- O texto (`Pool[event.address].feeGrowthGlobal0X128()`) é a `eth_call` a ser executada, que está na forma do `Contract[address].function(arguments)` +- O `address` e o `arguments` podem ser substituídos por variáveis que serão disponibilizadas quando o handler for executado. -`Subgraph.yaml` using `event.params` +`Subgraph.yaml` que usa o `event.params` ```yaml calls: @@ -360,17 +393,17 @@ calls: ### Versões do SpecVersion -| Versão | Notas de atualização | -| :-: | --- | -| 1.2.0 | Adicionado apoio a [Filtragem de Argumentos Indexados](/#indexed-argument-filters--topic-filters) & `eth_call` declarado | -| 1.1.0 | Apoio a [Séries de Tempo & Agregações](#timeseries-and-aggregations). Apoio adicionado ao tipo `Int8` para `id`. | -| 1.0.0 | Apoia o recurso [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) para fazer pruning de subgraphs | -| 0.0.9 | Apoio ao recurso `endBlock` | -| 0.0.8 | Adicionado apoio ao polling de [Handlers de Bloco](developing/creating-a-subgraph/#polling-filter) e [Handlers de Inicialização](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Adicionado apoio a [Fontes de Arquivos de Dados](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Apoio à variante de calculação de [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi). | -| 0.0.5 | Adicionado apoio a handlers de eventos com acesso a recibos de transação. 
| -| 0.0.4 | Adicionado apoio à gestão de recursos de subgraph. | +| Versão | Notas de atualização | +|:------:| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| 1.2.0 | Adicionado apoio a [Filtragem de Argumentos Indexados](/#indexed-argument-filters--topic-filters) & `eth_call` declarado | +| 1.1.0 | Apoio a [Séries de Tempo & Agregações](#timeseries-and-aggregations). Apoio adicionado ao tipo `Int8` para `id`. | +| 1.0.0 | Apoia o recurso [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) para fazer pruning de subgraphs | +| 0.0.9 | Apoio ao recurso `endBlock` | +| 0.0.8 | Adicionado apoio ao polling de [Handlers de Bloco](developing/creating-a-subgraph/#polling-filter) e [Handlers de Inicialização](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Adicionado apoio a [Fontes de Arquivos de Dados](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Apoio à variante de calculação de [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi). | +| 0.0.5 | Adicionado apoio a handlers de eventos com acesso a recibos de transação. | +| 0.0.4 | Adicionado apoio à gestão de recursos de subgraph. | ### Como Obter as ABIs @@ -442,16 +475,16 @@ Para alguns tipos de entidade, o `id` é construído das id's de duas outras ent Nós apoiamos os seguintes escalares na nossa API do GraphQL: -| Tipo | Descrição | -| --- | --- | -| `Bytes` | Arranjo de bytes, representado como string hexadecimal. Usado frequentemente por hashes e endereços no Ethereum. | -| `String` | Escalar para valores `string`. Caracteres nulos são removidos automaticamente. | -| `Boolean` | Escalar para valores `boolean`. | -| `Int` | A especificação do GraphQL define o `Int` como um inteiro assinado de 32 bits. | -| `Int8` | Um número inteiro assinado em 8 bits, também conhecido como um número inteiro assinado em 64 bits, pode armazenar valores de -9,223,372,036,854,775,808 a 9,223,372,036,854,775,807. Prefira usar isto para representar o `i64` do ethereum. | -| `BigInt` | Números inteiros grandes. Usados para os tipos `uint32`, `int64`, `uint64`, ..., `uint256` do Ethereum. Nota: Tudo abaixo de `uint32`, como `int32`, `uint24` ou `int8` é representado como `i32`. | -| `BigDecimal` | `BigDecimal` Decimais de alta precisão representados como um significando e um exponente. O alcance de exponentes é de -6143 até +6144. Arredondado para 34 dígitos significantes. | -| `Timestamp` | É um valor `i64` em microssegundos. Usado frequentemente para campos `timestamp` para séries de tempo e agregações. | +| Tipo | Descrição | +| ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Arranjo de bytes, representado como string hexadecimal. Usado frequentemente por hashes e endereços no Ethereum. | +| `String` | Escalar para valores `string`. Caracteres nulos são removidos automaticamente. | +| `Boolean` | Escalar para valores `boolean`. | +| `Int` | A especificação do GraphQL define o `Int` como um inteiro assinado de 32 bits. | +| `Int8` | Um número inteiro assinado em 8 bits, também conhecido como um número inteiro assinado em 64 bits, pode armazenar valores de -9,223,372,036,854,775,808 a 9,223,372,036,854,775,807. 
Prefira usar isto para representar o `i64` do ethereum. | +| `BigInt` | Números inteiros grandes. Usados para os tipos `uint32`, `int64`, `uint64`, ..., `uint256` do Ethereum. Nota: Tudo abaixo de `uint32`, como `int32`, `uint24` ou `int8` é representado como `i32`. | +| `BigDecimal` | `BigDecimal` Decimais de alta precisão representados como um significando e um exponente. O alcance de exponentes é de -6143 até +6144. Arredondado para 34 dígitos significantes. | +| `Timestamp` | É um valor `i64` em microssegundos. Usado frequentemente para campos `timestamp` para séries de tempo e agregações. | #### Enums @@ -593,7 +626,7 @@ Esta maneira mais elaborada de armazenar relacionamentos vários-com-vários arm #### Como adicionar comentários ao schema -Pela especificação do GraphQL, é possível adicionar comentários acima de atributos de entidade do schema com o símbolo de hash `#`. Isto é ilustrado no exemplo abaixo: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Nota:** Uma nova fonte de dados só processará as chamadas e eventos para o bloco onde ele foi criado e todos os blocos a seguir. Porém, não serão processados dados históricos, por ex, contidos em blocos anteriores. -> +> > Se blocos anteriores conterem dados relevantes à nova fonte, é melhor indexá-los ao ler o estado atual do contrato e criar entidades que representem aquele estado na hora que a nova fonte de dados for criada. ### Contextos de Fontes de Dados @@ -930,7 +963,7 @@ dataSources: ``` > **Nota:** O bloco da criação do contrato pode ser buscado rapidamente no Etherscan: -> +> > 1. Procure pelo contrato ao inserir o seu endereço na barra de busca. > 2. Clique no hash da transação da criação na seção `Contract Creator`. > 3. Carregue a página dos detalhes da transação, onde encontrará o bloco inicial para aquele contrato. @@ -945,9 +978,9 @@ A configuração `indexerHints`, no manifest de um subgraph, providencia diretiv `indexerHints.prune`: Define a retenção de dados históricos de bloco para um subgraph. As opções incluem: -1. `"never"`: Nenhum pruning de dados históricos; retém o histórico completo. -2. `"auto"`: Retém o histórico mínimo necessário determinado pelo Indexador e otimiza o desempenho das queries. -3. Um número específico: Determina um limite personalizado no número de blocos históricos a guardar. +1. `"never"`: Nenhum pruning de dados históricos; retém o histórico completo. +2. `"auto"`: Retém o histórico mínimo necessário determinado pelo Indexador e otimiza o desempenho das queries. +3. Um número específico: Determina um limite personalizado no número de blocos históricos a guardar. ``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -É possível verificar o bloco mais antigo (com estado histórico) para um subgraph ao fazer um query da [API de Estado de Indexação](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note que o `earliestBlock` é o bloco mais antigo com dados históricos, que será mais recente que o `startBlock` (bloco inicial) especificado no manifest, se o subgraph tiver passado por pruning. 
- ## Handlers de Eventos Handlers de eventos em um subgraph reagem a eventos específicos emitidos por contratos inteligentes na blockchain e acionam handlers definidos no manifest do subgraph. Isto permite que subgraphs processem e armazenem dados conforme a lógica definida. @@ -1382,7 +1392,7 @@ O subgraph enxertado pode usar um schema GraphQL que não é idêntico ao schema > **[Gerenciamento de Recursos](#experimental-features):** O `grafting` deve ser declarado sob `features` no manifest do subgraph. -## IPFS/Arweave File Data Sources +## Fontes de Dados de Arquivos em IPFS/Arweave Fontes de dados de arquivos são uma nova funcionalidade de subgraph para acessar dados off-chain de forma robusta e extensível. As fontes de dados de arquivos apoiam o retiro de arquivos do IPFS e do Arweave. @@ -1390,7 +1400,7 @@ Fontes de dados de arquivos são uma nova funcionalidade de subgraph para acessa ### Visão geral -Rather than fetching files "in line" during handler execution, this introduces templates which can be spawned as new data sources for a given file identifier. These new data sources fetch the files, retrying if they are unsuccessful, running a dedicated handler when the file is found. +Em vez de buscar arquivos "em fila" durante a execução do handler, isto introduz modelos que podem ser colocados como novas fontes de dados para um identificador de arquivos. Estas novas fontes de dados pegam os arquivos e tentam novamente caso não obtenham êxito; quando o arquivo é encontrado, executam um handler dedicado. Isto é parecido com os [modelos de fontes de dados existentes](/developing/creating-a-subgraph/#data-source-templates), usados para dinamicamente criar fontes de dados baseadas em chains. @@ -1477,7 +1487,7 @@ A fonte de dados de arquivos deve mencionar especificamente todos os tipos de en #### Criar um novo handler para processar arquivos -Este handler deve aceitar um parâmetro `Bytes`, que consistirá dos conteúdos do arquivo; quando encontrado, este poderá ser acessado. Isto costuma ser um arquivo JSON, que pode ser processado com helpers `graph-ts` ([documentação](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). A CID do arquivo como um string legível pode ser acessada através do `dataSource` a seguir: diff --git a/website/pages/pt/developing/developer-faqs.mdx b/website/pages/pt/developing/developer-faqs.mdx index 62729dbd0af0..86e65c294b8b 100644 --- a/website/pages/pt/developing/developer-faqs.mdx +++ b/website/pages/pt/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: Perguntas Frequentes dos Programadores --- -## 1. O que é um subgraph? +This page summarizes some of the most common questions for developers building on The Graph. -Um subgraph é uma API personalizada construída em dados de blockchains. Subgraphs são consultados com a linguagem GraphQL e lançados a um Graph Node usando o Graph CLI. Quando lançados e editados à rede descentralizada do The Graph, os Indexadores processam subgraphs e os disponibilizam para serem consultados em query por consumidores de subgraphs. +## Subgraph Related -## 2. Posso apagar o meu subgraph? +### 1. O que é um subgraph? -Não é possível apagar subgraphs após a sua criação. +A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Posso mudar o nome do meu subgraph? +### 2. What is the first step to create a subgraph? -Não. Quando um subgraph é criado, não é possível mudar o seu nome. Pense com cuidado antes de criar o seu subgraph para poder ser facilmente buscável e identificável por outros dapps. +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Posso mudar a conta do GitHub associada ao meu subgraph? +### 3. Can I still create a subgraph if my smart contracts don't have events? -Não. Quando um subgraph é criado, não há mais como mudar a conta do GitHub associada a ele. Pense nisto com cuidado antes de criar o seu subgraph. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. Eu ainda posso criar um subgraph se os meus contratos inteligentes não tiverem eventos? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -É altamente recomendado que estruture os seus contratos inteligentes para terem eventos associados com dados que tens interesse de consultar em query. Handlers de eventos no subgraph são ativados por eventos de contratos e são, de longe, a forma mais rápida de conseguir dados úteis. +### 4. Posso mudar a conta do GitHub associada ao meu subgraph? -Se os contratos com os quais trabalha não contêm eventos, o seu subgraph pode usar handlers de chamadas e blocos para ativar o indexing. Porém, isto não é recomendado, porque retarda muito o desempenho do subgraph. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. É possível lançar um subgraph com o mesmo nome para várias redes? +### 5. How do I update a subgraph on mainnet? -Precisará fazer nomes separados para várias redes. Enquanto não podes ter subgraphs diferentes sob o mesmo nome, há várias maneiras convenientes de ter uma única base de código para várias redes. Leia mais na nossa documentação: [Como Relançar um Subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. Quais são as diferenças entre modelos e fontes de dados? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Modelos (templates) permitem criar fontes de dados na hora, enquanto o seu subgraph está no processo de indexação. Pode ser que o seu contrato gerará novos contratos enquanto as pessoas interagem com ele, e como queres saber o formato destes contratos (ABI, eventos, etc.) 
à vista, pode definir como quer indexá-los em um modelo; e quando gerados, o seu subgraph criará uma fonte de dados dinâmica ao fornecer o endereço do contrato. +Deve relançar o subgraph, mas se a ID do subgraph (hash IPFS) não mudar, ele não precisará sincronizar do começo. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Confira o estado `Access to smart contract` dentro da seção [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +Dentro de um subgraph, os eventos são sempre processados na ordem em que aparecem nos blocos, mesmo sendo ou não através de vários contratos. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Confira a seção "como instanciar um modelo de fontes de dados" em: [Modelos de fontes de dados](/developing/creating-a-subgraph#data-source-templates). -## 8. Como posso garantir que estou a usar a versão mais recente do graph-node para os meus lançamentos locais? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -Podes executar o seguinte comando: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**NOTA:** O docker / docker-compose sempre usará a versão do graph-node que foi puxada na primeira vez que a executou, então é importante fazer isto para garantir que está em dia com a versão mais recente do graph-node. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. Como chamo uma função de contrato ou acesso uma variável de estado público dos meus mapeamentos de subgraph? +Primeiro, handlers de eventos e chamadas são organizados pelo índice de transações dentro do bloco. Handlers de evento e chamada dentro da mesma transação são organizados com uma convenção: handlers de eventos primeiro e depois handlers de chamadas, com cada tipo a respeitar a ordem em que são definidos no manifest. Handlers de blocos são executados após handlers de eventos e chamadas, na ordem em que são definidos no manifest. Estas regras de organizações estão sujeitas a mudanças. -Confira o estado `Access to smart contract` dentro da seção [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +Com a criação de novas fontes de dados dinâmicas, os handlers definidos para fontes de dados dinâmicas só começarão a processar após o processamento dos handlers das fontes, e se repetirão na mesma sequência sempre que acionados. -## 10. 
É possível preparar um subgraph através de `graph init` a partir do `graph-cli` com dois contratos? Ou devo adicionar, manualmente, outra fonte de dados no `subgraph.yaml` após executar o `graph init`? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Sim. No próprio comando `graph init`, é possível adicionar várias fontes de dados ao inserir contratos um após o outro. O comando `graph add` também pode adicionar uma nova fonte de dados. +Podes executar o seguinte comando: -## 11. Quero contribuir ou adicionar um problema no GitHub. Onde posso encontrar os repositórios de código aberto? +```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. Qual é a forma recomendada de construir ids "autogeradas" para uma entidade ao lidar com eventos? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? Se só uma entidade for criada durante o evento e não houver nada melhor disponível, então o hash da transação + o index do log será original. Podes ofuscá-los ao converter aquilo em Bytes e então o colocar pelo `crypto.keccak256`, mas isto não o fará mais original. -## 13. Ao escutar vários contratos, é possível selecionar a ordem do contrato para escutar eventos? +### 15. Can I delete my subgraph? -Dentro de um subgraph, os eventos são sempre processados na ordem em que aparecem nos blocos, mesmo sendo ou não através de vários contratos. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. É possível diferenciar entre redes (mainnet, Sepolia, local) de dentro de handlers de eventos? +## Network Related + +### 16. What networks are supported by The Graph? + +Veja a lista das redes apoiadas [aqui](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Sim. Isto é possível ao importar o `graph-ts` como no exemplo abaixo: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Vocês apoiam handlers de bloco e chamada no Sepolia? +### 18. Do you support block and call handlers on Sepolia? Sim. O Sepolia apoia handlers de blocos, chamadas e eventos. Vale notar que handlers de eventos têm desempenho muito melhor do que os outros dois e têm apoio em todas as redes compatíveis com EVMs. -## 16. Posso importar ethers.js ou outras bibliotecas JS nos meus mapeamentos de subgraph? - -Não no momento, já que mapeamentos são escritos em AssemblyScript. Outra solução seria armazenar dados puros em entidades e desempenhar lógicas que requerem bibliotecas JS no cliente. +## Indexing & Querying Related -## 17. É possível especificar em qual bloco começar a indexação? +### 19. Is it possible to specify what block to start indexing on? -Sim. `dataSources.source.startBlock` no arquivo `subgraph.yaml` especifica o número do bloco que a fonte de dados começa a indexar. 
Na maioria dos casos, sugerimos usar o bloco no qual o contrato foi criado: [Blocos de início](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Alguma dica para aumentar o desempenho da indexação? O meu subgraph demora muito para sincronizar +### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync -Sim. Confira o recurso opcional de bloco inicial (start blcok) para começar a indexar do bloco em que o contrato foi lançado: [Blocos iniciais](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Há como consultar diretamente o subgraph para determinar o número do último bloco que ele indexou? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Sim! Execute o seguinte comando, com "organization/subgraphName" substituído com a organização sob a qual ele foi publicado e o nome do seu subgraph: @@ -102,44 +121,27 @@ Sim! Execute o seguinte comando, com "organization/subgraphName" substituído co curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. Quais redes são apoiadas pelo The Graph? - -Veja a lista das redes apoiadas [aqui](/developing/supported-networks). - -## 21. É possível duplicar um subgraph para outra conta ou ponto final sem relançá-lo? - -Deve relançar o subgraph, mas se a ID do subgraph (hash IPFS) não mudar, ele não precisará sincronizar do começo. - -## 22. É possível usar a Apollo Federation em cima do graph-node? +### 22. Is there a limit to how many objects The Graph can return per query? -A Federation ainda tem apoio, mas queremos apoiá-la no futuro. No momento, vale usar costura de schemas no cliente ou através de um serviço proxy. - -## 23. Há um limite de quantos objetos o Graph pode retornar por consulta? - -Normalmente, respostas a consultas são limitadas a 100 itens por coleção. Se quiser receber mais, pode subir para até 1000 itens por coleção; além disto, pode paginar com: +By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -## 24. Se a frontend do meu dApp usa o The Graph para consultas, eu preciso escrever a minha chave de query diretamente no frontend? E se pagarmos taxas de query para utilizadores — utilizadores maliciosos podem aumentar muito estas taxas? - -Atualmente, a abordagem recomendada para um dApp é adicionar a chave ao frontend e expô-la para utilizadores finais. Dito isto, pode limitar aquela chave a um hostname, como _seudapp.io_ e um subgraph. A gateway está atualmente a ser executada pelo Edge & Node. Parte da responsabilidade de uma gateway é monitorar comportamentos abusivos e bloquear tráfego de clientes maliciosos. - -## 25. Onde encontro o meu subgraph atual no serviço hospedado? 
- -Vá para o Serviço Hospedado para achar subgraphs lançados por você ou outros ao Serviço Hospedado. Veja [aqui](https://thegraph.com/hosted-service). - -## 26. O serviço hospedado começará a cobrar taxas de query? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -The Graph nunca cobrará pelo Serviço Hospedado. Este é um protocolo descentralizado, e cobrar por um serviço centralizado não condiz com os valores do Graph. O Serviço Hospedado sempre foi um degrau temporário para chegar à rede descentralizada; os programadores terão tempo suficiente para migrar à rede descentralizada quando estiverem preparados. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 27. Como atualizar um subgraph na mainnet? +## Miscellaneous -Se for um programador de subgraph, você pode lançar uma nova versão do seu subgraph ao Subgraph Studio com a CLI. O subgraph será privado até lá, mas se estiver contente com ele, você pode publicá-lo no Graph Explorer descentralizado. Isto criará uma nova versão do seu subgraph em que Curadores podem começar a sinalizar. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. Em qual ordem os handlers de evento, bloco e chamada são ativados para uma fonte de dados? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Primeiro, handlers de eventos e chamadas são organizados pelo índice de transações dentro do bloco. Handlers de evento e chamada dentro da mesma transação são organizados com uma convenção: handlers de eventos primeiro e depois handlers de chamadas, com cada tipo a respeitar a ordem em que são definidos no manifest. Handlers de blocos são executados após handlers de eventos e chamadas, na ordem em que são definidos no manifest. Estas regras de organizações estão sujeitas a mudanças. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -Com a criação de novas fontes de dados dinâmicas, os handlers definidos para fontes de dados dinâmicas só começarão a processar após o processamento dos handlers das fontes, e se repetirão na mesma sequência sempre que acionados. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/pt/developing/graph-ts/api.mdx b/website/pages/pt/developing/graph-ts/api.mdx index f60daaf3ce3b..acc557d643ab 100644 --- a/website/pages/pt/developing/graph-ts/api.mdx +++ b/website/pages/pt/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: API AssemblyScript --- -> Nota: Se criou um subgraph antes da versão `0.22.0` do `graph-cli`/`graph-ts`, está a usar uma versão mais antiga do AssemblyScript. Favor conferir o [Guia de Migração](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. 
It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -Esta página documenta quais APIs embutidas podem ser usadas ao escrever mapeamentos de subgraph. Há dois tipos de API disponíveis do início: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- a [biblioteca do Graph TypeScript](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) e -- códigos gerados a partir dos arquivos do subgraph por `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -Também é possível adicionar outras bibliotecas como dependências, desde que sejam compatíveis com o [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Como esta é a linguagem na qual são escritos os mapeamentos, a [wiki do AssemblyScript](https://github.com/AssemblyScript/assemblyscript/wiki) é uma boa referência para a linguagem e as características comuns das bibliotecas. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## Referência da API @@ -27,16 +29,16 @@ A biblioteca `@graphprotocol/graph-ts` fornece as seguintes APIs: No manifest do subgraph, `apiVersion` especifica a versão da API de mapeamento, executada pelo Graph Node para um subgraph. -| Versão | Notas de atualização | -| :-: | --- | -| 0.0.9 | Adiciona novas funções de host [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adiciona validação para existência de campos no schema ao salvar uma entidade. | -| 0.0.7 | Classes `TransactionReceipt` e `Log` adicionadas aos tipos do EthereumCampo
    Campo `receipt` adicionado ao objeto Ethereum Event | -| 0.0.6 | Campo `nonce` adicionado ao objeto Ethereum TransactionCampo
    `baseFeePerGas` adicionado ao objeto Ethereum Block | -| 0.0.5 | AssemblyScript atualizado à versão 0.19.10 (inclui mudanças recentes, favor ler o [`Guia de Migração`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renomeado para `ethereum.transaction.gasLimit` | -| 0.0.4 | Campo `functionSignature` adicionado ao objeto Ethereum SmartContractCall | -| 0.0.3 | Campo `from` adicionado ao objeto Ethereum
    `Calletherem.call.address` renomeado para `ethereum.call.to` | -| 0.0.2 | Campo `input` adicionado ao objeto Ethereum Transaction | +| Versão | Notas de atualização | +| :----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adiciona novas funções de host [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adiciona validação para existência de campos no schema ao salvar uma entidade. | +| 0.0.7 | Classes `TransactionReceipt` e `Log` adicionadas aos tipos do EthereumCampo
Campo `receipt` adicionado ao objeto Ethereum Event |
+| 0.0.6  | Campo `nonce` adicionado ao objeto Ethereum Transaction
    `baseFeePerGas` adicionado ao objeto Ethereum Block | +| 0.0.5 | AssemblyScript atualizado à versão 0.19.10 (inclui mudanças recentes, favor ler o [`Guia de Migração`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renomeado para `ethereum.transaction.gasLimit` | +| 0.0.4 | Campo `functionSignature` adicionado ao objeto Ethereum SmartContractCall | +| 0.0.3 | Campo `from` adicionado ao objeto Ethereum
    `Calletherem.call.address` renomeado para `ethereum.call.to` | +| 0.0.2 | Campo `input` adicionado ao objeto Ethereum Transaction | ### Tipos Embutidos @@ -164,7 +166,8 @@ _Matemática_ import { TypedMap } from '@graphprotocol/graph-ts' ``` -O `TypedMap` pode servir para armazenar pares de chave e valor (key e value ). Confira [este exemplo](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). +O `TypedMap` pode servir para armazenar pares de chave e valor (key e value +). Confira [este exemplo](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). A classe `TypedMap` tem a seguinte API: @@ -252,7 +255,9 @@ export function handleTransfer(event: TransferEvent): void { Quando um evento `Transfer` é encontrado durante o processamento da chain, ele é passado para o handler de evento `handleTransfer` com o tipo `Transfer` gerado (apelidado de `TransferEvent` aqui, para evitar confusões com o tipo de entidade). Este tipo permite o acesso a dados como a transação parente do evento e seus parâmetros. -Cada entidade deve ter um ID única para evitar colisões com outras entidades. É bem comum que parâmetros de eventos incluam um identificador único a ser usado. Nota: usar o mesmo hash de transação como ID presume que nenhum outro evento na mesma transação criará entidades a usar este hash como o ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Como carregar entidades a partir do armazenamento @@ -268,15 +273,18 @@ if (transfer == null) { // Use a entidade Transfer como antes ``` -Como a entidade pode ainda não existir no armazenamento, o método `load` retorna um valor de tipo `Transfer | null`. Portanto, é bom prestar atenção ao caso `null` antes de usar o valor. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Nota:** Só é necessário carregar entidades se as mudanças feitas no mapeamento dependem dos dados anteriores de uma entidade. Veja a próxima seção para ver as duas maneiras de atualizar entidades existentes. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Como consultar entidades criadas dentro de um bloco Desde o `graph-node` v0.31.0, o `@graphprotocol/graph-ts` v0.30.0 e o `@graphprotocol/graph-cli v0.49.0`, o método `loadInBlock` está disponível em todos os tipos de entidade. -A API do armazenamento facilita o resgate de entidades que foram criadas ou atualizadas no bloco atual. Um caso comum: um handler cria uma Transação de algum evento on-chain, e um handler seguinte quer acessar esta transação caso ela exista. Se a transação não existe, o subgraph deve acessar o banco de dados para descobrir que a entidade não existe; se o autor do subgraph já souber que a entidade deve ter sido criada no mesmo bloco, o uso do loadInBlock evita esta volta pelo banco de dados. Para alguns subgraphs, estas consultas perdidas podem contribuir muito para o tempo de indexação. 
+The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // ou como a ID for construída @@ -503,7 +511,9 @@ Qualquer outro contrato que seja parte do subgraph pode ser importado do código #### Como Lidar com Chamadas Revertidas -Se os métodos de apenas-leitura do seu contrato forem revertidos, chame o método do contrato gerado prefixado com `try_`. Por exemplo, o contrato do Gravity expõe o método `gravatarToOwner`. Este código poderia lidar com uma reversão naquele método: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +525,7 @@ if (callResult.reverted) { } ``` -Note que um Graph Node conectado a um cliente Geth ou Infura pode não detectar todas as reversões; se depender disto, recomendamos usar um Graph Node conectado a um cliente Parity. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. 
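For reference, here is a fuller sketch of the `try_` pattern inside an event handler, assuming the Gravity binding and a `Gravatar` entity as in the examples above (the log message and fallback behaviour are illustrative):

```typescript
import { log } from '@graphprotocol/graph-ts'
import { Gravity, UpdatedGravatar } from '../generated/Gravity/Gravity'
import { Gravatar } from '../generated/schema'

export function handleUpdatedGravatar(event: UpdatedGravatar): void {
  let id = event.params.id.toHex()
  let gravatar = Gravatar.load(id)
  if (gravatar == null) {
    gravatar = new Gravatar(id)
  }

  // The read-only call may revert, so use the generated try_ variant
  let gravity = Gravity.bind(event.address)
  let ownerResult = gravity.try_gravatarToOwner(event.params.id)
  if (ownerResult.reverted) {
    // Keep the previously stored owner and record the revert instead of aborting
    log.info('gravatarToOwner reverted for gravatar {}', [id])
  } else {
    gravatar.owner = ownerResult.value
  }

  gravatar.save()
}
```

Falling back to the previously stored value keeps the handler from failing the whole subgraph when a single call reverts.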
#### ABI de Codificação/Decodificação @@ -761,44 +771,44 @@ Quando o tipo de um valor é confirmado, ele pode ser convertido num [tipo embut ### Referência de Conversões de Tipos -| Fonte(s) | Destino | Função de conversão | -| -------------------- | -------------------- | ---------------------------- | -| Address | Bytes | nenhum | -| Address | String | s.toHexString() | -| BigDecimal | String | s.toString() | -| BigInt | BigDecimal | s.toBigDecimal() | -| BigInt | String (hexadecimal) | s.toHexString() ou s.toHex() | -| BigInt | String (unicode) | s.toString() | -| BigInt | i32 | s.toI32() | -| Boolean | Boolean | nenhum | -| Bytes (assinado) | BigInt | BigInt.fromSignedBytes(s) | -| Bytes (não assinado) | BigInt | BigInt.fromUnsignedBytes(s) | -| Bytes | String (hexadecimal) | s.toHexString() ou s.toHex() | -| Bytes | String (unicode) | s.toString() | -| Bytes | String (base58) | s.toBase58() | -| Bytes | i32 | s.toI32() | -| Bytes | u32 | s.toU32() | -| Bytes | JSON | json.fromBytes(s) | -| int8 | i32 | nenhum | -| int32 | i32 | nenhum | -| int32 | BigInt | BigInt.fromI32(s) | -| uint24 | i32 | nenhum | -| int64 - int256 | BigInt | nenhum | -| uint32 - uint256 | BigInt | nenhum | -| JSON | boolean | s.toBool() | -| JSON | i64 | s.toI64() | -| JSON | u64 | s.toU64() | -| JSON | f64 | s.toF64() | -| JSON | BigInt | s.toBigInt() | -| JSON | string | s.toString() | -| JSON | Array | s.toArray() | -| JSON | Object | s.toObject() | -| String | Address | Address.fromString(s) | -| Bytes | Address | Address.fromBytes(s) | -| String | BigInt | BigInt.fromString(s) | -| String | BigDecimal | BigDecimal.fromString(s) | -| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | -| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | +| Fonte(s) | Destino | Função de conversão | +| ------------------------ | -------------------- | ------------------------------ | +| Address | Bytes | nenhum | +| Address | String | s.toHexString() | +| BigDecimal | String | s.toString() | +| BigInt | BigDecimal | s.toBigDecimal() | +| BigInt | String (hexadecimal) | s.toHexString() ou s.toHex() | +| BigInt | String (unicode) | s.toString() | +| BigInt | i32 | s.toI32() | +| Boolean | Boolean | nenhum | +| Bytes (assinado) | BigInt | BigInt.fromSignedBytes(s) | +| Bytes (não assinado) | BigInt | BigInt.fromUnsignedBytes(s) | +| Bytes | String (hexadecimal) | s.toHexString() ou s.toHex() | +| Bytes | String (unicode) | s.toString() | +| Bytes | String (base58) | s.toBase58() | +| Bytes | i32 | s.toI32() | +| Bytes | u32 | s.toU32() | +| Bytes | JSON | json.fromBytes(s) | +| int8 | i32 | nenhum | +| int32 | i32 | nenhum | +| int32 | BigInt | BigInt.fromI32(s) | +| uint24 | i32 | nenhum | +| int64 - int256 | BigInt | nenhum | +| uint32 - uint256 | BigInt | nenhum | +| JSON | boolean | s.toBool() | +| JSON | i64 | s.toI64() | +| JSON | u64 | s.toU64() | +| JSON | f64 | s.toF64() | +| JSON | BigInt | s.toBigInt() | +| JSON | string | s.toString() | +| JSON | Array | s.toArray() | +| JSON | Object | s.toObject() | +| String | Address | Address.fromString(s) | +| Bytes | Address | Address.fromBytes(s) | +| String | BigInt | BigInt.fromString(s) | +| String | BigDecimal | BigDecimal.fromString(s) | +| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) | +| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | ### Metadados de Fontes de Dados diff --git a/website/pages/pt/developing/substreams-powered-subgraphs-faq.mdx b/website/pages/pt/developing/substreams-powered-subgraphs-faq.mdx index 45792250dc05..5fc4b8f0c2f6 100644 
--- a/website/pages/pt/developing/substreams-powered-subgraphs-faq.mdx +++ b/website/pages/pt/developing/substreams-powered-subgraphs-faq.mdx @@ -66,7 +66,7 @@ A [documentação do Substreams](/substreams) lhe ensinará como construir módu A [documentação de subgraphs movidos a Substreams](/cookbook/substreams-powered-subgraphs/) lhe ensinará como empacotá-los para a publicação no The Graph. -The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code. +A [ferramenta de Codegen no Substreams mais recente](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) permitirá ao programador inicializar um projeto no Substreams sem a necessidade de código. ## Qual é o papel de módulos em Rust no Substreams? diff --git a/website/pages/pt/developing/supported-networks.mdx b/website/pages/pt/developing/supported-networks.mdx index 2e750d53de96..7a1a1d46a772 100644 --- a/website/pages/pt/developing/supported-networks.mdx +++ b/website/pages/pt/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integração com o Graph Node: `evm`, `near`, `cosmos`, `osmosis` e `ar` têm apoio nativo a handlers e tipos no Graph Node. Chains compatíveis com Firehose e Substreams podem utilizar a integração generalizada de [subgraphs movidos a Substreams](/cookbook/substreams-powered-subgraphs) (isto inclui as redes `evm` e `near`). ⁠ apoia o lançamento de [subgraphs movidos a Substreams](/cookbook/substreams-powered-subgraphs). - O Subgraph Studio depende da estabilidade e da confiança das tecnologias subjacentes, por exemplo, JSON-RPC, Firehose e endpoints dos Substreams. -- Subgraphs que indexam a Gnosis Chain podem agora ser lançados com o identificador de rede `gnosis`. O `xdai` ainda é apoiado para subgraphs já existentes no serviço hospedado. +- Subgraphs que indexam a Gnosis Chain agora podem ser lançados com a identificadora de rede `gnosis`. - Se um subgraph foi publicado via a CLI e visto por um Indexador, ele pode tecnicamente ser consultado mesmo sem apoio, e esforços estão a ser feitos para simplificar ainda mais a integração de novas redes. - Para uma lista completa de recursos apoiados na rede descentralizada, veja [esta página](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/pt/developing/unit-testing-framework.mdx b/website/pages/pt/developing/unit-testing-framework.mdx index 3595d096dc1f..4d82e3a3d56d 100644 --- a/website/pages/pt/developing/unit-testing-framework.mdx +++ b/website/pages/pt/developing/unit-testing-framework.mdx @@ -1103,7 +1103,7 @@ test('ethereum/contract dataSource creation example', () => { assert.dataSourceCount('GraphTokenLockWallet', 0) // Crie uma nova fonte de dados GraphTokenLockWallet com o endereço - 0xa16081f360e3847006db660bae1c6d1b2e17ec2a + 0xA16081F360e3847006dB660bae1c6d1b2e17eC2A GraphTokenLockWallet.create(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2A')) // Garanta que a dataSource foi criada @@ -1369,18 +1369,18 @@ A saída do log inclui a duração do teste. Veja um exemplo: > `Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined` -Isto significa que usou o `console.log` no seu código, que não é apoiado pelo AssemblyScript. 
Considere usar a [API de Logging](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > `ERROR TS2554: Expected ? arguments, but got ?.` -> +> > `return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt);` -> +> > `in ~lib/matchstick-as/assembly/defaults.ts(18,12)` -> +> > `ERROR TS2554: Expected ? arguments, but got ?.` -> +> > `return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt);` -> +> > `in ~lib/matchstick-as/assembly/defaults.ts(24,12)` A diferença nos argumentos é causada pela diferença no `graph-ts` e no `matchstick-as`. Problemas como este são melhor resolvidos ao atualizar tudo para a versão mais recente. diff --git a/website/pages/pt/glossary.mdx b/website/pages/pt/glossary.mdx index 75cc00cb710e..763b5ef41ec4 100644 --- a/website/pages/pt/glossary.mdx +++ b/website/pages/pt/glossary.mdx @@ -10,11 +10,9 @@ title: Glossário - **Endpoint** (ponto final): Um URL para consultar um subgraph. O endpoint de testes para o Subgraph Studio é `https://api.studio.thegraph.com/query///` e o endpoint do Graph Explorer é `https://gateway.thegraph.com/api//subgraphs/id/`. O endpoint do Graph Explorer é utilizado para consultar subgraphs na rede descentralizada do The Graph. -- **Subgraph**: Uma API aberta que extrai dados de uma blockchain, os processa e os armazena para que possem ser consultados com facilidade via GraphQL. Programadores podem construir e editar subgraphs à Graph Network. Depois, os Indexadores podem começar a indexar subgraphs para disponibilizá-los para queries por qualquer pessoa. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Serviço hospedado**: Um suporte temporário para construir e consultar subgraphs, enquanto a rede descentralizada do The Graph amadurece o seu custo e qualidade de serviço e experiência de programação. - -- **Indexadores**: Participantes da rede que executam nodes de indexação para indexar dados de blockchains e servir consultas em GraphQL. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Fontes de Renda de Indexadores**: Os indexadores são recompensados em GRT com dois componentes: rebates de taxa de query e recompensas de indexação. @@ -24,17 +22,17 @@ title: Glossário - **Auto-Stake (Stake Próprio) do Indexador**: A quantia de GRT que os Indexadores usam para participar na rede descentralizada. A quantia mínima é 100.000 GRT, e não há limite máximo. -- **Indexador de Atualizações**: Um Indexador temporário feito para agir como uma reserva para queries de subgraphs não servidos por outros Indexadores na rede. Ele garante uma transição suave para subgraphs que atualizam do serviço hospedado à Graph Network. O Indexador de atualização não é competitivo com outros Indexadores. 
Ele apoia várias blockchains que antes estavam disponíveis apenas no serviço hospedado. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegantes**: Participantes na rede que são titulares de GRT, e delegam o seu GRT aos Indexadores. Isto permite aos Indexadores aumentar o seu stake nos subgraphs da rede. Em troca, os Delegantes recebem uma porção das Recompensas de Indexação que os Indexadores recebem por processar subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Taxa de Delegação**: Uma taxa de 0.5% paga pelos Delegantes ao delegar GRT aos Indexadores. O GRT usado para pagar a taxa é queimado. -- **Curadores**: Participantes na rede que identificam subgraphs de alta qualidade, e os "curam" (por ex., sinalizam GRT neles) em troca de ações de curadoria. Quando Indexadores reivindicam taxas de query em um subgraph, 10% delas é distribuído aos Curadores daquele subgraph. Os Indexadores ganham recompensas de indexação proporcionais ao sinal em um subgraph. Perceba uma correlação entre a quantia de GRT sinalizada e o número de Indexadores que indexam um subgraph. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Taxa de Curadoria**: Uma taxa de 1% paga pelos Curadores quando sinalizam GRT em subgraphs. O GRT usado para pagar a taxa é queimado. -- **Consumidor de Subgraph**: Qualquer aplicativo ou utilizador que consulta um subgraph. +- **Data Consumer**: Any application or user that queries a subgraph. - **Programador de Subgraph**: Um programador que constrói e lança um subgraph à rede descentralizada do The Graph. @@ -46,11 +44,11 @@ title: Glossário 1. **Ativa**: Uma alocação é considerada ativa quando é criada on-chain. Isto se chama abrir de uma alocação, e indica à rede que o Indexador está a indexar e servir consultas ativamente para um subgraph particular. Alocações ativas acumulam recompensas de indexação proporcionais ao sinal no subgraph, e à quantidade de GRT alocada. - 2. **Fechada**: Um Indexador pode resgatar as recompensas acumuladas em um subgraph selecionado ao enviar uma Prova de Indexação (POI) recente e válida. Isto se chama "fechar uma alocação". Uma alocação deve ter ficado aberta por, no mínimo, um epoch antes que possa ser fechada. O período máximo de alocação é de 28 epochs; se um indexador deixar uma alocação aberta por mais que isso, ela se torna uma alocação obsoleta. Quando uma alocação está **Fechada**, um Pescador ainda pode abrir uma disputa contra um Indexador por servir dados falsos. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. 
If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **Subgraph Studio**: um dApp poderoso para a construção, lançamento e edição de subgraphs. -- **Pescadores**: Um papel na Graph Network cumprido por participantes que monitoram a precisão e integridade dos dados servidos pelos Indexadores. Quando um Pescador identifica uma resposta de query ou uma POI que acreditam ser incorreta, ele pode iniciar uma disputa contra o Indexador. Se a disputa der um veredito a favor do Pescador, o Indexador é cortado, ou seja, perderá 2.5% do seu auto-stake de GRT. Desta quantidade, 50% é dado ao Pescador como recompensa pela sua vigilância, e os 50% restantes são retirados da circulação (queimados). Este mecanismo é desenhado para encorajar Pescadores a ajudar a manter a confiança na rede ao garantir que Indexadores sejam responsabilizados pelos dados que providenciam. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Árbitros**: Participantes da rede apontados por um processo de governança. O papel do Árbitro é decidir o resultado de disputas de indexação e consultas, e a sua meta é maximizar a utilidade e confiança da Graph Network. @@ -62,11 +60,11 @@ title: Glossário - **GRT**: O token de utilidade do The Graph, que oferece incentivos económicos a participantes da rede por contribuir. -- **POI ou Prova de Indexação**: Quando um Indexador fecha a sua alocação e quer reivindicar as suas recompensas de indexação acumuladas em um certo subgraph, ele deve providenciar uma Prova de Indexação (POI) válida e recente. Os Pescadores podem disputar a POI providenciada por um Indexador; disputas resolvidas a favor do Pescador causam um corte para o Indexador. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: O componente que indexa subgraphs e disponibiliza os dados resultantes abertos a queries através de uma API GraphQL. Assim ele é essencial ao stack de indexadores, e operações corretas de um Graph Node são cruciais para executar um indexador com êxito. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Agente de Indexador**: Parte do stack do indexador. 
Ele facilita as interações do Indexer on-chain, inclusive registos na rede, gestão de lançamentos de Subgraph ao(s) seu(s) Graph Node(s), e gestão de alocações. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: Uma biblioteca para construir dApps baseados em GraphQL de maneira descentralizada. @@ -78,10 +76,6 @@ title: Glossário - **Ferramentas de Transferência para L2**: Contratos inteligentes e interfaces que permitem que os participantes na rede transfiram ativos relacionados à rede da mainnet da Ethereum ao Arbitrum One. Os participantes podem transferir GRT delegado, subgraphs, ações de curadoria, e o autostake do Indexador. -- **_Atualização_ de um subgraph à Graph Network**: O processo de migrar um subgraph do serviço hospedado à Graph Network. - -- **_Atualização_ de um subgraph**: O processo de lançar uma nova versão de subgraph com atualizações ao manifest, schema e mapeamentos do subgraph. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migração**: O processo de movimentar ações de curadoria da versão antiga de um subgraph a uma versão nova de um subgraph (por ex., quando a v.0.0.1 é atualizada à v.0.0.2). - -- **Janela de Atualização**: O período para que utilizadores do serviço hospedado atualizem o(s) seu(s) subgraph(s) à Graph Network começa em 11 de abril e termina em 12 de junho de 2024. diff --git a/website/pages/pt/index.json b/website/pages/pt/index.json index be45383678a4..7a9d618567bb 100644 --- a/website/pages/pt/index.json +++ b/website/pages/pt/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "Crie um Subgraph", "description": "Use o Studio para criar subgraphs" - }, - "migrateFromHostedService": { - "title": "Migração do serviço hospedado", - "description": "Atualização de subgraphs à Graph Network" } }, "networkRoles": { diff --git a/website/pages/pt/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/pt/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..9cd4ac4ee60e --- /dev/null +++ b/website/pages/pt/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transferência e Depreciação de Subgraphs +--- + +## Como transferir a titularidade de um subgraph + +Subgraphs publicados na rede descentralizada terão um NFT mintado no endereço que publicou o subgraph. O NFT é baseado no padrão ERC-721, que facilita transferências entre contas na Graph Network. + +**Lembre-se do seguinte:** + +- O dono do NFT controla o subgraph. +- Se o dono atual decidir vender ou transferir o NFT, ele não poderá mais editar ou atualizar aquele subgraph na rede. +- É possível transferir o controle de um subgraph para uma multisig. +- Um membro da comunidade pode criar um subgraph no nome de uma DAO. + +### Como visualizar o seu subgraph como um NFT + +Para visualizar o seu subgraph como um NFT, visite um mercado de NFTs como o **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Ou um explorador de carteiras, como o **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-addres +``` + +### Passo a Passo + +Para transferir a titularidade de um subgraph, faça o seguinte: + +1. 
Use a interface embutida no Subgraph Studio: + + ![Transferência de Titularidade de Subgraph](/img/subgraph-ownership-transfer-1.png) + +2. Escolha o endereço para o qual gostaria de transferir o subgraph: + + ![Transferência de Titularidade de Subgraph](/img/subgraph-ownership-transfer-2.png) + +Também é possível usar a interface embutida de mercados de NFT, como o OpenSea: + +![Transferência de Titularidade de Subgraph de um mercado de NFT](/img/subgraph-ownership-transfer-nft-marketplace.png) + +## Como Depreciar um Subgraph + +Embora não seja possível deletar um subgraph, é possível depreciá-lo no Graph Explorer. + +### Passo a Passo + +Para depreciar o seu subgraph, faça o seguinte: + +1. Visite o endereço do contrato para subgraphs no Arbitrum One [aqui](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). +2. Faça uma chamada de `deprecateSubgraph`, com o seu `SubgraphID` como argumento. +3. O seu subgraph não aparecerá mais em buscas no Graph Explorer. + +**Lembre-se do seguinte:** + +- A carteira do dono deve chamar a função `deprecateSubgraph`. +- Os curadores não poderão mais sinalizar no subgraph depreciado. +- Curadores que já sinalizaram no subgraph poderão retirar a sua sinalização a um preço de ação normal. +- Subgraphs depreciados demonstrarão uma mensagem de erro. + +> Se tiver interagido com o subgraph depreciado, poderá achá-lo no seu perfil de utilizador sob a aba "Subgraphs", Indexação ("Indexing") ou Curadoria ("Curating"), respetivamente. diff --git a/website/pages/pt/mips-faqs.mdx b/website/pages/pt/mips-faqs.mdx index 1408b61422ac..926a8544e89b 100644 --- a/website/pages/pt/mips-faqs.mdx +++ b/website/pages/pt/mips-faqs.mdx @@ -6,10 +6,6 @@ title: Perguntas frequentes sobre Provedores de Infraestrutura de Migração (MI > Nota: O programa de MIPs fechou em maio de 2023. Agradecemos a todos os Indexadores que participaram! -É uma boa época para participar do ecossistema do The Graph! Durante o [Graph Day 2022](https://thegraph.com/graph-day/2022/), Yaniv Tal anunciou a [aposentadoria do serviço hospedado](https://thegraph.com/blog/sunsetting-hosted-service/), um momento para o qual o ecossistema do The Graph se preparou por muitos anos. - -Para apoiar o desligamento do serviço hospedado e a migração de toda a sua atividade à rede descentralizada, a Graph Foundation anunciou o [programa de Provedores de Infraestrutura de Migração (MIPs)](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program). - O programa de MIPs é um programa de incentivos para Indexadores, para apoiá-los com recursos para indexar chains além da mainnet Ethereum e ajudar o protocolo The Graph a expandir a rede descentralizada numa camada de infraestrutura multi-chain. O programa de MIPs alocou 0,75% da reserva de GRT (75 milhões de GRT), com 0.5% reservados para recompensar Indexadores que contribuam à inicialização da rede e 0.25% alocados a bolsas de rede para programadores de subgraph a usar subgraphs multi-chain. 
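The deprecation steps above call `deprecateSubgraph` through Arbiscan's write interface, but the same call can be made programmatically. Below is a minimal, non-authoritative sketch using ethers v6: the contract address and function name come from the steps above, while the one-line ABI fragment, the RPC URL, the environment variable, and the example `SubgraphID` are placeholders or assumptions. Verify the exact signature on the verified contract in Arbiscan before sending a real transaction.

```typescript
import { ethers } from "ethers";

// GNS proxy address on Arbitrum One, as linked in the deprecation steps above.
const GNS_ADDRESS = "0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec";

// Assumed single-function ABI fragment; confirm it against the verified
// contract on Arbiscan before use.
const GNS_ABI = ["function deprecateSubgraph(uint256 _subgraphID)"];

async function deprecate(subgraphId: bigint): Promise<void> {
  // Placeholders: use your own Arbitrum RPC endpoint and the owner's key.
  const provider = new ethers.JsonRpcProvider("https://arb1.arbitrum.io/rpc");
  const owner = new ethers.Wallet(process.env.OWNER_PRIVATE_KEY!, provider);

  const gns = new ethers.Contract(GNS_ADDRESS, GNS_ABI, owner);

  // Must be sent from the wallet that owns the subgraph NFT.
  const tx = await gns.deprecateSubgraph(subgraphId);
  await tx.wait();
  console.log(`Subgraph ${subgraphId} deprecated in tx ${tx.hash}`);
}

// 1234n is a hypothetical SubgraphID; use your own.
deprecate(1234n).catch(console.error);
```

As the steps above note, only the owner wallet can make this call, and Curators who already signaled can still withdraw their signal afterwards.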
diff --git a/website/pages/pt/network/benefits.mdx b/website/pages/pt/network/benefits.mdx index a806ae77758e..2f2bb4210483 100644 --- a/website/pages/pt/network/benefits.mdx +++ b/website/pages/pt/network/benefits.mdx @@ -27,54 +27,53 @@ Os custos de query podem variar; o custo citado é o normal até o fechamento da ## Utilizador de Baixo Volume (menos de 100 mil queries por mês) -| Comparação de Custos | Auto-hospedagem | The Graph Network | -| :-: | :-: | :-: | -| Custo mensal de servidor\* | $350 por mês | $0 | -| Custos de query | $0+ | $0 por mês | -| Tempo de engenharia | $400 por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente | -| Queries por mês | Limitadas pelas capabilidades da infra | 100 mil (Plano Grátis) | -| Custo por query | $0 | $0 | -| Infraestrutura | Centralizada | Descentralizada | -| Redundância geográfica | $750+ por node adicional | Incluída | -| Uptime (disponibilidade) | Varia | 99.9%+ | -| Custos mensais totais | $750+ | $0 | +| Comparação de Custos | Auto-hospedagem | The Graph Network | +|:-------------------------------:|:---------------------------------------:|:-----------------------------------------------------------------:| +| Custo mensal de servidor\* | $350 por mês | $0 | +| Custos de query | $0+ | $0 por mês | +| Tempo de engenharia | $400 por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente | +| Queries por mês | Limitadas pelas capabilidades da infra | 100 mil (Plano Grátis) | +| Custo por query | $0 | $0 | +| Infraestrutura | Centralizada | Descentralizada | +| Redundância geográfica | $750+ por node adicional | Incluída | +| Uptime (disponibilidade) | Varia | 99.9%+ | +| Custos mensais totais | $750+ | $0 | ## Utilizador de Volume Médio (cerca de 3 milhões de queries por mês) -| Comparação de Custos | Auto-hospedagem | The Graph Network | -| :-: | :-: | :-: | -| Custo mensal de servidor\* | $350 por mês | $0 | -| Custos de query | $500 por mês | $120 por mês | -| Tempo de engenharia | $800 por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente | -| Queries por mês | Limitadas pelas capabilidades da infra | ~3 milhões | -| Custo por query | $0 | $0.00004 | -| Infraestrutura | Centralizada | Descentralizada | -| Custo de engenharia | $200 por hora | Incluído | -| Redundância geográfica | $1.200 em custos totais por node adicional | Incluída | -| Uptime (disponibilidade) | Varia | 99.9%+ | -| Custos mensais totais | $1.650+ | $120 | +| Comparação de Custos | Auto-hospedagem | The Graph Network | +|:-------------------------------:|:------------------------------------------:|:-----------------------------------------------------------------:| +| Custo mensal de servidor\* | $350 por mês | $0 | +| Custos de query | $500 por mês | $120 por mês | +| Tempo de engenharia | $800 por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente | +| Queries por mês | Limitadas pelas capabilidades da infra | ~3 milhões | +| Custo por query | $0 | $0.00004 | +| Infraestrutura | Centralizada | Descentralizada | +| Custo de engenharia | $200 por hora | Incluído | +| Redundância geográfica | $1.200 em custos totais por node adicional | Incluída | +| Uptime (disponibilidade) | Varia | 99.9%+ | +| Custos mensais totais | $1.650+ | $120 | ## Utilizador de Volume Alto (cerca de 30 milhões de queries por mês) -| Comparação de Custos | Auto-hospedagem | The Graph Network | -| :-: | :-: | :-: | -| Custo mensal de servidor\* | $1.100 por mês, por node | $0 | -| Custos de query | $4.000 | 
$1,200 por mês | -| Número de nodes necessário | 10 | Não se aplica | -| Tempo de engenharia | $6.000 ou mais por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente | -| Queries por mês | Limitadas pelas capabilidades da infra | Cerca de 30 milhões | -| Custo por query | $0 | $0.00004 | -| Infraestrutura | Centralizada | Descentralizada | -| Redundância geográfica | $1.200 em custos totais por node adicional | Incluída | -| Uptime (disponibilidade) | Varia | 99.9%+ | -| Custos mensais totais | $11.000+ | $1.200 | +| Comparação de Custos | Auto-hospedagem | The Graph Network | +|:-------------------------------:|:-------------------------------------------:|:-----------------------------------------------------------------:| +| Custo mensal de servidor\* | $1.100 por mês, por node | $0 | +| Custos de query | $4.000 | $1,200 por mês | +| Número de nodes necessário | 10 | Não se aplica | +| Tempo de engenharia | $6.000 ou mais por mês | Nenhum, embutido na rede com Indexadores distribuídos globalmente | +| Queries por mês | Limitadas pelas capabilidades da infra | Cerca de 30 milhões | +| Custo por query | $0 | $0.00004 | +| Infraestrutura | Centralizada | Descentralizada | +| Redundância geográfica | $1.200 em custos totais por node adicional | Incluída | +| Uptime (disponibilidade) | Varia | 99.9%+ | +| Custos mensais totais | $11.000+ | $1.200 | \*com custos de backup incluídos: $50-$100 por mês Tempo de engenharia baseado numa hipótese de $200 por hora -Reflete o custo ao consumidor de dados. Taxas de query ainda são pagas a Indexadores por queries do Plano -Grátis. +Reflete o custo ao consumidor de dados. Taxas de query ainda são pagas a Indexadores por queries do Plano Grátis. Os custos estimados são apenas para subgraphs na Mainnet do Ethereum — os custos são maiores ao auto-hospedar um `graph-node` em outras redes. Alguns utilizadores devem atualizar o seu subgraph a uma versão mais recente. Até o fechamento deste texto, devido às taxas de gas do Ethereum, uma atualização custa cerca de 50 dólares. Note que as taxas de gás no [Arbitrum](/arbitrum/arbitrum-faq) são muito menores que as da mainnet do Ethereum. diff --git a/website/pages/pt/network/curating.mdx b/website/pages/pt/network/curating.mdx index 852fc054dd5d..b43e5adca528 100644 --- a/website/pages/pt/network/curating.mdx +++ b/website/pages/pt/network/curating.mdx @@ -10,7 +10,7 @@ Antes que consumidores possam indexar um subgraph, ele deve ser indexado. É aqu Os Indexadores podem confiar no sinal de um Curador porque ao sinalizar, Curadores mintam uma ação de curadoria para o subgraph, o que dá aos Curadores uma porção de taxas de query futuras que o subgraph move. -Curadores dão eficiência à Graph Network, e a [sinalização](#how-to-signal) é o processo que curadores usam para dizer aos Indexadores que um subgraph é bom para indexar; onde GRT é adicionado a uma bonding curve para um subgraph. Os Indexadores podem confiar no sinal de um Curador porque ao sinalizar, Curadores mintam uma ação de curadoria para o subgraph, o que dá aos Curadores uma porção de taxas de query futuras que o subgraph move. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. 
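A quick cross-check of the cost-comparison tables above: the network-side monthly cost is simply the query volume multiplied by the quoted per-query price. The sketch below only reproduces the table figures (roughly $0.00004 per query); it is not an official pricing formula, and the low-volume tier stays at $0 because it falls inside the 100k-query Free Plan.

```typescript
// Per-query price quoted in the cost-comparison tables above (USD).
const PRICE_PER_QUERY = 0.00004;

function monthlyQueryCost(queriesPerMonth: number): number {
  return queriesPerMonth * PRICE_PER_QUERY;
}

console.log(monthlyQueryCost(3_000_000));  // ≈ 120  ("$120 por mês", medium volume)
console.log(monthlyQueryCost(30_000_000)); // ≈ 1200 ("$1.200 por mês", high volume)
```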
Sinais de curador são representados como tokens ERC20 chamados de Ações de Curadoria do Graph (GCS). Quem quiser ganhar mais taxas de query devem sinalizar o seu GRT a subgraphs que apostam que gerará um fluxo forte de taxas á rede. Curadores não podem ser cortados por mau comportamento, mas há uma taxa de depósito em Curadores para desincentivar más decisões que possam ferir a integridade da rede. Curadores também ganharão menos taxas de query se curarem um subgraph de baixa qualidade, já que haverão menos queries a processar ou menos Indexadores para processá-las. @@ -18,7 +18,7 @@ O [Indexador de Atualização do Nascer do Sol](/sunrise/#what-is-the-upgrade-in Ao sinalizar, Curadores podem decidir entre sinalizar numa versão específica do subgraph ou sinalizar com a automigração. Caso sinalizem com a automigração, as ações de um curador sempre serão atualizadas à versão mais recente publicada pelo programador. Se decidirem sinalizar numa versão específica, as ações sempre permanecerão nesta versão específica. -Para ajudar equipas que transitam subgraphs do serviço hospedado à Graph Network, o suporte da curadoria foi lançado. Se precisar de ajuda com curadoria para melhorar a qualidade do serviço, mande um pedido à equipa da Edge & Node em support@thegraph.zendesk.com e especifique os subgraphs com os quais você precisa de ajuda. +Se precisar de ajuda com a curadoria para melhorar a qualidade do serviço, mande um pedido à equipa da Edge & Node em support@thegraph.zendesk.com e especifique os subgraphs com que precisa de ajuda. Os indexadores podem achar subgraphs para indexar com base em sinais de curadoria que veem no Graph Explorer (imagem abaixo). @@ -34,7 +34,7 @@ Sinalizar numa versão específica serve muito mais quando um subgraph é usado Ter um sinal que migra automaticamente à build mais recente de um subgraph pode ser bom para garantir o acúmulo de taxas de consulta. Toda vez que cura, é incorrida uma taxa de 1% de curadoria. Também pagará uma taxa de 0.5% em toda migração. É recomendado que rogramadores de subgraphs evitem editar novas versões com frequência - eles devem pagar uma taxa de curadoria de 0.5% em todas as ações de curadoria auto-migradas. -> **Nota:** O primeiro endereço a sinalizar um subgraph particular é considerado o primeiro curador e deverá realizar tarefas muito mais intensivas em gas do que o resto dos seguintes curadores — porque o primeiro curador inicializa os tokens de ação de curadoria, inicializa o bonding curve (até no Arbitrum), e também transfere tokens no proxy do Graph. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Como Sacar o Seu GRT @@ -49,7 +49,7 @@ Porém, é recomendado que curadores deixem o seu GRT no lugar, não apenas para ## Riscos 1. O mercado de consulta é jovem por natureza no The Graph, e há sempre o risco do seu rendimento anual ser menor que o esperado devido às dinâmicas nascentes do mercado. -2. Taxa de Curadoria — quando um curador sinaliza GRT em um subgraph, ele incorre uma taxa de 1% de curadoria. Esta taxa é queimada, e o resto é depositado na reserva da bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. 
(Só para o Ethereum) Quando os curadores queimam as suas ações para sacar GRT, o valor das ações restantes em GRT é diminuído. Saiba que, em alguns casos, os curadores podem decidir queimar todas as suas ações **de uma vez só**. Isto pode ser comum se um programador de dApp parar de versionar/melhorar e consultar seu subgraph, ou se um subgraph falhar. Portanto, os curadores restantes podem não poder sacar mais do que uma fração do seu GRT inicial. Para um papel de rede com um perfil de risco menor, veja [Delegados](/network/delegating). 4. Um subgraph pode falhar devido a um erro de código. Um subgraph falho não acumula taxas de consulta. Portanto, espere até o programador consertar o erro e lançar uma nova versão. - Caso se inscreva à versão mais recente de um subgraph, suas ações migrarão automaticamente a esta versão nova. Isto incorrerá uma taxa de curadoria de 0.5%. @@ -65,8 +65,8 @@ Ao sinalizar em um subgraph, ganhará parte de todas as taxas de query geradas p Achar subgraphs de alta qualidade é uma tarefa complexa, mas ela pode ser abordada de várias formas diferentes. Como Curador, vale procurar subgraphs com boa reputação que movem volumes de consulta. Um subgraph confiável pode ser valioso se for completo, preciso, e apoiar as necessidades de dados de um dApp. Um subgraph mal arquitetado pode precisar de revisão ou reedição, e também tem risco de falhar. É crítico que os Curadores verifiquem a arquitetura ou código de um subgraph, para avaliar se ele é valioso. Portanto: -- Os curadores podem usar o seu conhecimento de uma rede para tentar prever como um subgraph individual pode gerar um volume maior ou menor de queries no futuro -- Os curadores também devem entender as métricas disponíveis através do Graph Explorer. Métricas como volume passado de consultas e quem é o programador do subgraph podem ajudar a determinar se um subgraph vale ou não o sinal. +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future +- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. Qual o custo de atualizar um subgraph? @@ -78,50 +78,14 @@ Não atualize os seus subgraphs com frequência excessiva. Veja a questão acima ### 5. Posso vender as minhas ações de curadoria? -Ações de curadoria não podem ser "compradas" ou "vendidas", como outros tokens ERC20 que você deve conhecer. Eles só podem ser cunhados (criados) ou queimados (destruídos) dentro da bonding curve para um subgraph particular. A quantidade de GRT necessária para cunhar um novo sinal, e a quantidade de GRT que você recebe ao queimar o seu sinal existente, são determinados por aquela bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- Como um Curador no Ethereum, você precisa saber que quando queimar as suas ações de curadoria para sacar GRT, pode acabar com mais ou menos GRT do que o depositado inicialmente. -- Como um Curador no Arbitrum, é garantido que você receberá o GRT que depositou inicialmente (menos a taxa). +Como um Curador no Arbitrum, é garantido que você receberá o GRT que depositou inicialmente (menos a taxa). ### 6. Tenho direito a uma bolsa de curadoria? Bolsas de curadoria são determinadas individualmente. 
Se precisar de ajuda com a curadoria, entre em contacto em support@thegraph.zendesk.com. -## Diferenças Entre Curar no Ethereum x Arbitrum - -O comportamento do mecanismo de curadoria difere com base no lançamento da chain do protocolo, particularmente, como o preço de uma ação de subgraph é calculado. - -O lançamento original da Graph Network no Ethereum usa bonding curves para determinar o preço de ações: **o preço de cada ação de subgraph aumenta com cada token investido** e **o preço de cada ação diminui com cada token vendido.** Isto significa que a curadoria coloca o seu principal em risco, já que não há garantia de poder vender as suas ações e retornar o seu investimento original. - -No Arbitrum, curar subgraphs fica muito mais simples. As bonding curves são "achatadas" e o seu efeito é anulado, o que significa que nenhum Curador poderá realizar ganhos à custa dos outros. Isto permite que Curadores sinalizem ou dessinalizem em subgraphs a qualquer hora, a um custo consistente e com risco muito limitado. - -Se tens interesse em curar no Ethereum e quer entender os detalhes de bonding curves e seus efeitos, veja [Bonding Curve 101](#bonding-curve-101). Seja diligente; garanta que curará subgraphs de confiança. Criar um subgraph é um processo livre de permissões, para que o povo possa criar subgraphs e chamá-los do nome que quiser. Para mais conselhos sobre riscos de curadoria, confira o [Guia de Curadoria da Graph Academy.](https://thegraph.academy/curators/) - -## Os Básicos da Bonding Curve - -> **Nota**: esta secção só se aplica à curadoria no Ethereum, já que bonding curves são planas e não têm efeito no Arbitrum. - -Todo subgraph tem uma bonding curve, onde são cunhadas ações de curadoria quando um usuário adiciona sinais **dentro** da curva. A bonding curve de cada subgraph é única; cada curve é arquitetada para que o preço da cunhagem de uma ação de curadoria num subgraph cresça de forma linear, sobre o número de ações cunhadas. - -![Preço por ações](/img/price-per-share.png) - -Como resultado, o preço aumenta de forma linear, o que significa que a compra de uma ação ficará mais cara temporalmente. Aqui está um exemplo; veja a bonding curve abaixo: - -![Bonding curve](/img/bonding-curve.png) - -Considere que temos dois curadores que cunham ações para um subgraph: - -- O Curador A é o primeiro a sinalizar no subgraph. Ao adicionar 120.000 GRT à curve, ele pode cunhar 2000 ações. -- O sinal do Curador B chega no subgraph algum tempo depois. Para receber a mesma quantidade de ações que o Curador A, ele deveria adicionar 360.000 na curve. -- Como ambos os curadores têm metade do total das ações de curadoria, eles receberiam uma quantia igual de ‘royalties’ de curadoria. -- Se qualquer curador resolvesse queimar as suas 2000 ações de curadoria, ele receberia 360.000 GRT. -- O curador restante passaria a receber todos os ‘royalties’ de curadoria daquele subgraph. Caso ele queimasse as suas ações para sacar GRT, ele receberia 120.000 GRT. -- **RESUMINDO:** A valorização em GRT de ações de curadoria é determinada pela bonding curve, e pode ser volátil. Há o potencial para grandes prejuízos. Sinalizar precocemente significa que você coloca menos GRT para cada ação. Por tabela, isto significa que ganhas mais royalties de curadoria por GRT do que curadores que chegarem mais tarde para o mesmo subgraph. - -Em geral, uma bonding curve é uma curva matemática que define o relacionamento entre a reserva de token e o preço do ativo. 
No caso específico de curadoria de subgraphs, **o preço de cada ação de subgraph aumenta com cada token investido** e o **preço de cada ação cai com cada token vendido.** - -No caso do The Graph, é usada [a implementação da Bancor de uma fórmula de bonding curve](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA). - Ainda não percebeu? Confira o nosso guia em vídeo sobre a Curadoria abaixo: diff --git a/website/pages/pt/network/delegating.mdx b/website/pages/pt/network/delegating.mdx index b321854faa52..443228259bd1 100644 --- a/website/pages/pt/network/delegating.mdx +++ b/website/pages/pt/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegação --- -Os Delegantes são participantes da rede que delegam (por ex., "stake") GRT a um ou mais Indexadores. Estes contribuem à segurança da rede sem executar um Graph Node por conta própria. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Os Delegantes ganham uma porção das taxas de query e recompensas do Indexador ao delegar a ele. A quantidade de queries que um Indexador pode processar depende do próprio stake, do stake delegado e do preço que o Indexador cobra por cada consulta, portanto, quanto mais stake for alocado a um Indexador, mais queries ele pode processar. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Guia do Delegante -Este guia explicará como ser um Delegante eficaz na Graph Network. Os Delegantes dividem a renda do protocolo com todos os Indexadores, com base no seu stake delegado. Um Delegante deve raciocinar bem para escolher Indexadores, baseado em vários fatores. Perceba que este guia não abordará passos como a configuração apropriada do MetaMask, já que essa informação está amplamente disponível na internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. Há três secções neste guia: @@ -24,64 +34,84 @@ Veja abaixo os riscos principais de ser um Delegante no protocolo. Os Delegantes não podem ser punidos por mau comportamento, mas há uma taxa sobre Delegantes, para desencorajar más decisões que possam ferir a integridade da rede. -É importante entender que, sempre que delega, o Delegante será cobrado 0.5%. Ou seja, se delegar 1000 GRT, automaticamente queimará 5 GRT. +As a Delegator, it's important to understand the following: -Por questões de segurança, um Delegante deve calcular o seu retorno potencial ao delegar a um Indexador. Por exemplo, um Delegante pode calcular quantos dias demorará até conseguir quitar a taxa de 0.5% na sua delegação. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. 
For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### O período de separação da delegação Se um Delegado quiser cancelar a sua delegação, seus tokens estarão sujeitos a um período de 28 dias sem transferência, e também não poderá ganhar quaisquer recompensas por este período. -Considere escolher um Indexador com muito cuidado. Se escolher um Indexador que não é de confiança, ou não fez um bom trabalho, talvez queira cancelar a sua delegação, o que implica em sacrificar muitas oportunidades de recompensa, o que pode ser tão lamentável quanto queimar GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
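To make the 0.5% delegation-tax point concrete, here is a minimal sketch of the break-even estimate suggested above. The 0.5% tax is from the text; the effective annual reward rate is a purely hypothetical assumption, since actual returns depend on the Indexer's parameters and network conditions.

```typescript
// 0.5% of every delegation is burned (from the text above).
const DELEGATION_TAX = 0.005;

// Rough estimate of how many days of rewards it takes to earn the tax back.
// `annualRewardRate` is a hypothetical effective rate, not a protocol constant.
function breakEvenDays(delegatedGrt: number, annualRewardRate: number): number {
  const burned = delegatedGrt * DELEGATION_TAX;             // e.g. 1,000 GRT -> 5 GRT
  const working = delegatedGrt - burned;                     // GRT actually delegated
  const rewardsPerDay = (working * annualRewardRate) / 365;  // simplistic, no compounding
  return burned / rewardsPerDay;
}

console.log(breakEvenDays(1_000, 0.1).toFixed(1)); // ≈ 18.3 days at an assumed 10%/year
```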
    - ![Delegation unbonding](/img/Delegation-Unbonding.png) _Perceba a taxa de 0.5% na interface da Delegação, além do - período de separação de 28 dias._ + ![Delegation unbonding](/img/Delegation-Unbonding.png) _Perceba a taxa de 0.5% na interface da Delegação, além do período de separação de 28 dias._
    ### Como escolher um Indexador de confiança, com um pagamento justo para Delegantes -Este é um aspecto importante. Primeiro, vamos discutir três valores muito importantes: os Parâmetros de Delegação. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Porção da Recompensa de Indexação - Esta é a porção das recompensas que o Indexador guardará para si. Isto significa que, se esta for configurada para 100%, o Delegante receberá 0 recompensas de indexação. Se ver 80% na interface, isto significa que, como Delegado, receberá 20%. Importante saber que no começo da rede, as Recompensas de Indexação contarão como a maioria das recompensas. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
    - ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *O Indexador acima está a dar 90% das recompensas aos Delegantes, - o do meio dá 20%, e o de baixo dá cerca de 83%.* + ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *O Indexador acima está a dar 90% das recompensas aos Delegantes, o do meio dá 20%, e o de baixo dá cerca de 83%.*
    -- Porção da Taxa de Recompensa - Isto funciona exatamente como a Porção da Recompensa de Indexação. Porém, isto aplica-se explicitamente a retornos nas taxas de consulta coletadas pelo Indexador. Perceba que, no começo da rede, os retornos de taxas de query serão muito menores que a recompensa de indexação. Vale prestar atenção na rede para determinar quando as taxas de consulta na rede começarão a ser mais significantes. +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. + +As you can see, in order to choose the right Indexer, you must consider multiple things. -Como pode-se ver, há muito a considerar ao escolher o Indexador certo. É por isto que vale a pena explorar o [Discord do The Graph](https://discord.gg/graphprotocol) para determinar quais Indexadores tem as melhores reputações sociais e técnicas, a fim de recompensar Delegantes com consistência. Muitos dos Indexadores são ativos no Discord e estarão prontos para responder as suas perguntas. Muitos deles indexam há meses na testnet e fazem o seu melhor para ajudar os Delegantes a ganhar bons retornos, pois isto melhora a saúde e o êxito da rede. +- It is highly recommend that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently. +- Many Indexers are very active in Discord and will be happy to answer your questions. +- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. -### Como calcular o retorno esperado dos Delegados +## Calculating Delegators Expected Return -Um Delegante tem muito a considerar ao determinar o retorno. Os fatores incluem: +A Delegator must consider the following factors to determine a return: -- Um Delegante técnico também pode examinar a habilidade do Indexador de usar os tokens Delegados a ele disponíveis. Se um Indexador não aloca todos os tokens disponíveis, eles não estão a ganhar o lucro máximo que poderia ser para si mesmo ou para os seus Delegados. -- Agora mesmo, na rede, um Indexador pode escolher fechar uma alocação e resgatar as recompensas entre 1 e 28 dias. Então, é possível que um Indexador tenha muitas recompensas a resgatar, assim, diminuindo as suas recompensas totais. Isto deve ser considerado nos primeiros dias. +- Consider an Indexer's ability to use the Delegated tokens available to them. + - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Pay attention to the first few days of delegating. + - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low. ### Como considerar as porções das taxas de query e de indexação -Como dito nas seções acima, deve escolher um Indexador que seja transparente e honesto sobre a configuração das suas porções de taxas de query e de indexação. Um Delegante também deve verificar o tempo de recarga dos parâmetros, para ver quanto tempo de preparo ele tem. Tudo isto feito, é simples calcular as recompensas que os Delegantes ganham. A fórmula é: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. 
You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Imagem de delegação 3](/img/Delegation-Reward-Formula.png) ### Como considerar o pool de delegação do Indexador -Um Delegante também deve considerar a proporção do Pool de Delegação que tem. Todas as recompensas de delegação são divididas igualmente, com um rebalanço simples do pool determinado pela quantidade que o Delegado nele depositou. Isto dá ao delegado uma porção: +Delegators should consider the proportion of the Delegation Pool they own. -![Fórmula de ações](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Esta fórmula mostra que é possível que um Indexador que oferece apenas 20% para os Delegantes, dê uma recompensa melhor que um que dá 90%. +This gives the Delegator a share of the pool: + +![Fórmula de ações](/img/Share-Forumla.png) -Um Delegante pode então fazer as contas para determinar que o Indexador que oferece 20% aos Delegantes oferece um retorno melhor. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Como considerar a capacidade de delegação -Atualmente, a Proporção de Delegação é configurada em 16. Portanto, caso um Indexador tenha feito stake em 1.000.000 GRT, sua Capacidade de Delegação é 16.000.000 de tokens delegados que eles podem usar no protocolo. Quaisquer tokens delegados acima desta quantidade diluirão todas as recompensas do Delegante. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine que um Indexador tem 100.000.000 GRT delegados a ele, e a sua capacidade é de apenas 16.000.000 GRT. Efetivamente, 84.000.000 tokens GRT não estão em uso para ganhar tokens. E todos os Delegantes, e o Indexador, ganham muito menos recompensas do que deveriam ganhar. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Assim, um Delegante deve sempre considerar a Capacidade de Delegação de um Indexador, e levá-la em conta ao tomar decisões. @@ -89,16 +119,21 @@ Assim, um Delegante deve sempre considerar a Capacidade de Delegação de um Ind ### Erro de "Transação Pendente" no MetaMask -**Quando eu tento delegar a minha transação no MetaMask, ela aparece como "Pendente" ou "Na Fila" por mais tempo que o esperado. O que devo fazer?** +1. Quando eu tento delegar a minha transação no MetaMask, ela aparece como "Pendente" ou "Na Fila" por mais tempo que o esperado. O que devo fazer? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. + +#### Exemplo -Às vezes, tentativas de delegar a Indexadores via MetaMask podem falhar e causar que tentativas de transação fiquem "Pendentes" ou "Em Fila" por períodos prolongados. 
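Putting the delegation parameters, the pool share, and the reward formula above together, a Delegator's indexing rewards for a period can be roughly approximated as the Indexer's rewards, times the portion passed on to Delegators, times your share of that Indexer's delegation pool. The sketch below is only an illustration of that reasoning under simplifying assumptions (it ignores query fees, cooldowns, and capacity); the formula images above remain the reference, and every number in the example is hypothetical. It also illustrates the earlier point that an Indexer passing only 20% to Delegators can still pay more than one passing 90%.

```typescript
// Simplified estimate of one period's delegation rewards; illustration only.
function delegatorRewards(
  indexerRewards: number,      // rewards the Indexer earned this period (GRT)
  portionToDelegators: number, // share passed on to the delegation pool (0..1)
  poolSize: number,            // total GRT delegated to this Indexer
  yourDelegation: number       // your GRT in that pool
): number {
  const poolRewards = indexerRewards * portionToDelegators;
  const yourShare = yourDelegation / poolSize;
  return poolRewards * yourShare;
}

// Hypothetical comparison: a small pool passing 20% vs. a crowded pool passing 90%.
const myDelegation = 10_000;
console.log(delegatorRewards(20_000, 0.2, 500_000, myDelegation));   // 80 GRT
console.log(delegatorRewards(20_000, 0.9, 5_000_000, myDelegation)); // 36 GRT
```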
+Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. -Por exemplo, um utilizador pode tentar delegar com uma taxa de gás insuficiente em relação aos preços atuais, fazendo com que a tentativa de transação fique como "Pendente" na sua carteira do MetaMask por mais de 15 minutos. Quando isto ocorre, um utilizador pode tentar mais transações, mas estas só serão processadas até a transação inicial for minerada, já que as transações para um endereço devem ser processadas em ordem. Em tais casos, estas transações podem ser canceladas no MetaMask, mas as tentativas de transação acumularão taxas de gas sem nenhuma garantia que as tentativas seguintes terão êxito. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will only be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -Uma solução mais simples para este bug é a reinicialização do navegador (por ex., usar "abort:restart" na barra de endereço), o que cancelará todas as tentativas anteriores sem subtrair gas da carteira. Vários utilizadores que encontraram este problema relataram êxito nas transações após reiniciar o seu navegador e tentar delegar. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## Guia de vídeo para a interface da rede +## Video Guide -Este guia em vídeo apresenta uma revisão completa desde documento e explica como considerar tudo nele durante as interações com a interface de utilizador. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/pt/network/developing.mdx b/website/pages/pt/network/developing.mdx index 2cdc95f3b451..06a747402735 100644 --- a/website/pages/pt/network/developing.mdx +++ b/website/pages/pt/network/developing.mdx @@ -2,52 +2,88 @@ title: Programação --- -Os programadores representam o lado de demanda do ecossistema do The Graph, constroem subgraphs e os editam à Graph Network. Então, eles consultam subgraphs ao vivo com queries no GraphQL, para abastecer os seus aplicativos. +To start coding right away, go to [Developer Quick Start](/quick-start/). + +## Visão geral + +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. + +On The Graph, you can: + +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. + +### What is GraphQL? + +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. 
+- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? + +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `subgraph.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). +- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). ## Ciclo de Vida de um Subgraph -Os subgraphs lançados à rede têm um ciclo de vida definido. +Here is a general overview of a subgraph’s lifecycle: -### Construção local +![Ciclo de Vida de um Subgraph](/img/subgraph-lifecycle.png) -Assim como toda programação de subgraph, ela começa com desenvolvimento e testes locais. Os programadores podem usar o mesmo setup local para construir o seu subgraph — seja ao construir para a Graph Network, o serviço hospedado, ou um Graph Node local, com `graph-cli` e o `graph-ts`. Vale usar ferramentas como o [Matchstick](https://github.com/LimeChain/matchstick) para testes de unidades, para deixar os subgraphs mais robustos. +### Construção local -> Há alguns limites de apoio a recursos e redes na Graph Network. Só subgraphs em [redes apoiadas](/developing/supported-networks) ganharão recompensas de indexação, e subgraphs que retiram dados do IPFS também não têm direito a estas. +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Lançamento no Subgraph Studio -Uma vez definido, o subgraph pode ser construído e lançado ao [Subgraph Studio](/deploying/subgraph-studio-faqs/). Este é um ambiente sandbox que indexará o subgraph lançado, e o disponibilizará para programação e testes (com rate limit). Com isto, os programadores têm a oportunidade de verificar que o seu subgraph funciona como esperado, sem qualquer erro na indexação. - -### Edição na rede +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -Quando o programador estiver satisfeito com o seu subgraph, pode editá-lo na Graph Network. Esta é uma ação on-chain, que registra o subgraph para que ele seja descoberto por Indexadores. Os subgraphs publicados têm NFTs correspondente, que são então facilmente transferíveis. Um subgraph editado tem metadados associados, que fornecem contexto e informações úteis a outros participantes da rede. +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. 
-### Sinalização para incentivar indexação +### Edição na rede -Subgraphs editados têm poucas chances de ser detectados por Indexadores sem a adição do sinal — um montante de GRT trancado associado a um subgraph, que indica aos Indexadores que um certo subgraph receberá volume de queries e também contribui às recompensas de indexação disponíveis por processá-los. Os programadores de subgraph geralmente adicionam o sinal ao próprio subgraph, para incentivar indexações. Curadores terceiros também podem sinalizar em um certo subgraph, se acharem que tem altas chances de movimentar o volume de queries. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Programação de Aplicativos & Queries +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -Quando um subgraph for processado pelos Indexadores e aberto a queries, os programadores podem começar a usar o subgraph nos seus aplicativos. Os programadores consultam subgraphs através de um gateway, que encaminha as suas consultas a um Indexador que processou o subgraph com uma taxa de query em GRT. +### Add Curation Signal for Indexing -Para poder fazer queries, os programadores devem gerar uma chave de API no Subgraph Studio. Esta chave deve ser bancada com GRT, para pagar taxas de query. Dá para configurar uma taxa máxima de query, para controlar os seus custos e limitar a sua chave de API a um único subgraph ou domínio de origem. O Subgraph Studio fornece dados aos programadores sobre o seu uso temporal de chaves de API. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Os programadores também podem expressar uma preferência de Indexador ao gateway, por exemplo, ao preferir Indexadores com resposta de query mais rápida, ou cujos dados são mais atualizados. Estes controlos são programados no Subgraph Studio. +#### What is signal? -### Atualização de Subgraphs +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -Após um tempo, um programador de subgraphs pode querer atualizar o seu subgraph, talvez para consertar um erro ou adicionar funcionalidades. O programador pode lançar uma(s) nova(s) versão (versões) do seu subgraph ao Subgraph Studio, para fins de programação e testes com rate limit. +### Programação de Aplicativos & Queries -Quando o Programador de Subgraph estiver pronto, ele pode iniciar uma transação para apontar seu subgraph à nova versão. Atualizar o subgraph migra qualquer sinal à versão nova (presumindo que o utilizador que aplicou o sinal selecionou "migrar automaticamente"), o que também incorre uma taxa de migração. 
Este sinal de migração deve incentivar os Indexadores a começar a indexar a nova versão do subgraph, para que ele logo fique aberto a queries. +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Depreciação de Subgraphs +Learn more about [querying subgraphs](/querying/querying-the-graph/). -Em algum ponto, pode um programador decidir que ele não precisa mais de um subgraph editado. Naquele ponto, ele pode depreciar o subgraph, o que devolve qualquer GRT sinalizado aos Curadores. +### Atualização de Subgraphs -### Outros Papeis de Programadores +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Alguns programadores engajarão com o ciclo de vida completo do subgraph na rede, com edição, queries, e iterações em seus próprios subgraphs. Alguns podem focar em programação de subgraphs, a construir APIs abertas que outros podem elaborar. Alguns podem focar em aplicativos, a consultar subgraphs lançados por outros. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Programadores e Economia da Rede +### Deprecating & Transferring Subgraphs -Programadores são agentes económicos importantes na rede, que trancam GRT para incentivar a indexação e fazem queries cruciais em subgraphs — a principal troca de valor da rede. Os programadores de subgraphs também queimam GRT quando um subgraph é atualizado. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/pt/network/explorer.mdx b/website/pages/pt/network/explorer.mdx index 11471e603cfa..31d87224dc8c 100644 --- a/website/pages/pt/network/explorer.mdx +++ b/website/pages/pt/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Seja bem vindo ao Graph Explorer — ou, como gostamos de chamá-lo, o seu portal descentralizado ao mundo de dados de subgraphs e redes. 👩🏽‍🚀 O Graph Explorer consiste de vários componentes onde você pode interagir com outros programadores de subgraphs e dApps, Curadores, Indexadores e Delegantes. Para um resumo geral do Graph Explorer, confira o vídeo abaixo (ou continue a ler): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgraphs -Primeiramente, se acabou de lançar e editar o seu subgraph no Subgraph Studio, pode ver os seus próprios subgraphs finalizados (e também os dos outros) na rede descentralizada, na aba Subgraphs no topo da barra de navegação. Aqui, poderá achar o subgraph exato que procura com base na data de criação, quantidade de sinais, ou nome. 
+After you finish deploying and publishing your subgraph in Subgraph Studio, click on the “Subgraphs” tab at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![Imagem do Explorer 1](/img/Subgraphs-Explorer-Landing.png) -Ao clicar em um subgraph, pode testar consultas no playground e aproveitar detalhes da rede para informar suas decisões. Também poderá sinalizar GRT no seu próprio subgraph, ou nos de outros, para que os indexadores tenham ciência da sua importância e qualidade. Isto é crítico, porque a sinalização em um subgraph o incentiva a ser indexado, o que significa que ele subirá na rede para eventualmente servir consultas. +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![Imagem do Explorer 2](/img/Subgraph-Details.png) -Vários detalhes são exibidos na página dedicada de cada subgraph. Eles incluem: +On each subgraph’s dedicated page, you can do the following: - Sinalização/cancelamento de sinais em subgraphs - Ver mais detalhes como gráficos, ID de lançamento atual, e outros metadados @@ -31,26 +45,32 @@ Vários detalhes são exibidos na página dedicada de cada subgraph. Eles inclue ## Participantes -Dentro desta aba, terá uma vista panorâmica de todas as pessoas que participam das atividades da rede, como Indexadores, Delegantes e Curadores. Abaixo, faremos uma revisão profunda do significado de cada aba. +This section provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. Indexadores ![Imagem do Explorer 4](/img/Indexer-Pane.png) -Vamos começar com os Indexadores. Estes são a coluna do protocolo; fazem staking em subgraphs, os indexam, e servem queries para qualquer consumidor de subgraphs. Na tábua Indexers, poderá ver os parâmetros de delegação dos Indexadores, os stakes deles, quantos stakes fizeram em cada subgraph, e a renda que ganharam de taxas de consulta e recompensas de indexação. Mais detalhes abaixo: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. -- Porção de Taxa de Query — a % dos rebates da taxa de query que o Indexador guarda ao dividir com os Delegantes -- Porção de Recompensa Efetiva — a porção de recompensa de indexação, aplicada ao pool de delegação. Se for negativa, ela significa que o Indexador está a distribuir parte das suas recompensas. Se for positiva, significa que o Indexador guarda um pouco das recompensas -- Tempo de Recarga — o tempo que resta até o Indexador poder mudar os parâmetros de delegação acima. 

Os períodos de recarga são configurados pelos Indexadores quando atualizam os seus parâmetros de delegação -- Títulos — Os stakes depositados pelo Indexador, que podem ser cortados por comportamento malicioso ou incorreto -- Delegado — Stake de Delegantes que pode ser alocado pelo Indexador, mas não pode ser cortado -- Alocado — O stake que os Indexadores estão a alocar ativamente aos subgraphs que indexam -- Capacidade Disponível de Delegação — a quantia de stakes delegados que os Indexadores ainda podem receber antes que sejam delegados demais +**Especificações** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Capacidade Máxima de Delegação — a quantidade máxima de stake delegado que o Indexador pode aceitar produtivamente. Um excesso de stake delegado não pode ser usado para alocações ou cálculos de recompensas. -- Taxas de Consulta — as taxas totais que os utilizadores finais pagaram por queries de um Indexador durante todo o seu tempo +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Recompensas de Indexador — as recompensas totais recebidas pelo Indexador e pelos Delegantes temporalmente. As recompensas de Indexador são pagas pela emissão de GRT. -Os indexadores podem ganhar taxas de consulta e recompensas de indexação. Funcionalmente, isto acontece quando participantes da rede delegam GRT a um Indexador. Isto permite que Indexadores recebam taxas e recompensas de consultas, a depender dos seus parâmetros de Indexador. Parâmetros de indexação são configurados no lado direito da tábua, ou ao entrar no perfil de um Indexador e clicar no botão "Delegate" (Delegar). +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. Para aprender mais sobre como tornar-se um Indexador, pode conferir a [documentação oficial](/network/indexing) ou [os guias de Indexador da Graph Academy.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ Para aprender mais sobre como tornar-se um Indexador, pode conferir a [documenta ### 2. Curadores -Os Curadores analisam subgraphs, para identificar quais são de maior qualidade. Quando um Curador achar um subgraph atraente, ele pode curá-lo com sinais na sua bonding curve. 
Ao fazê-lo, os Curadores avisam aos Indexadores quais subgraphs têm qualidade alta e merecem ser indexados. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -Os Curadores podem ser membros da comunidade, consumidores de dados, ou até mesmo programadores de subgraphs que sinalizam os seus próprios subgraphs com depósitos de tokens GRT em uma bonding curve. Ao depositar GRT, os Curadores cunham ações de curadoria em um subgraph. Como resultado, os Curadores têm direito a uma porção das taxas de consulta geradas pelo subgraph que sinalizaram. A bonding curve incentiva os Curadores a curar as fontes de dado de maior qualidade. A tábua de Curadoria nesta seção lhe mostrará: +In the Curator table listed below, you can see: - A data em que o Curador começou a curar - O número de GRT depositado @@ -68,34 +92,36 @@ Os Curadores podem ser membros da comunidade, consumidores de dados, ou até mes ![Imagem do Explorer 6](/img/Curation-Overview.png) -Se quiser saber mais sobre o papel de Curador, pode visitar os seguintes atalhos, da [Graph Academy](https://thegraph.academy/curators/) ou [da documentação oficial.](/network/curating) +If you want to learn more about the Curator role, you can do so by visiting the [official documentation](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. Delegantes -Os Delegantes têm um papel importante em manter a segurança e descentralização da Graph Network. Eles participam na rede com a delegação (por ex., "staking") de tokens GRT a um ou vários indexadores. Sem Delegantes, os Indexadores têm menos chances de atrair recompensas e taxas significativas. Então, os Indexadores procuram atrair Delegantes ao oferecê-los uma porção das recompensas de indexação e das taxas de query que ganham. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple Indexers. -Os Delegantes, por sua vez, selecionam Indexadores com base em um número de variáveis diferentes, como desempenho passado, recompensas de indexação, e porções de taxas de query. A reputação dentro da comunidade também importa! Vale a pena conectar-se com os indexadores selecionados através do [Discord](https://discord.gg/graphprotocol) ou do [Fórum](https://forum.thegraph.com/) do The Graph! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. 

It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! ![Imagem do Explorer 7](/img/Delegation-Overview.png) -A tábua de Delegantes permitirá-lhe ver os Delegantes ativos na comunidade, assim como métricas que incluem: +In the Delegators table, you can see the active Delegators in the community, along with important metrics: - O número de Indexadores aos quais um Delegante delega - A delegação original de um Delegante - As recompensas que acumularam, mas não sacaram, do protocolo - As recompensas realizadas que sacaram do protocolo - A quantidade total de GRT que têm no protocolo no momento - A última data em que delegaram +- The date they last delegated -Caso queira aprender mais sobre como tornar-se um Delegado, é só ir à [documentação oficial](/network/delegating) ou à [Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## Rede -Na seção de Rede, verá KPIs globais e poderá trocar para uma base por-epoch e analisar métricas de rede em mais detalhes. Estes detalhes darão-lhe uma ideia de como a rede se desempenha com o tempo. +In this section, you can see global KPIs, switch to a per-epoch basis, and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. -### Visão Geral +### Visão geral -A secção de resumo tem todas as estatísticas atuais da rede, além de estatísticas cumulativas temporais. Aqui, poderá ver coisas como: +The overview section shows all the current network metrics, as well as some cumulative metrics over time: - O stake total atual da rede - O stake dividido entre os Indexadores e os seus Delegantes @@ -104,10 +130,10 @@ A secção de resumo tem todas as estatísticas atuais da rede, além de estatí - Parâmetros de protocolo como recompensas de curadoria, ritmo de inflação, e mais - Recompensas e taxas na epoch atual -Vale mencionar alguns detalhes importantes: +A few key details to note: -- **As taxas de query representam as taxas geradas pelos consumidores**, e podem ser reivindicadas (ou não) pelos Indexadores após um período de pelo menos 7 epochs (veja abaixo), após as suas alocações aos subgraphs forem fechadas e os dados que serviram forem validados pelos consumidores. -- **As recompensas de indexação representam a quantidade de recompensas que os Indexadores conseguiram da emissão da rede durante a epoch.** Apesar da emissão do protocolo ser fixa, as recompensas só são cunhadas quando os Indexadores fecham as suas alocações aos subgraphs que indexaram. Assim, o número de recompensas por epoch varia (por ex., durante algumas epochs, Indexers podem ter coletivamente fechado alocações que estavam abertas por muitos dias). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. 

+- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e., during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Imagem do Explorer 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ Na seção de Epochs, pode analisar, numa base por epoch, métricas como: - A epoch ativa é aquela em que os Indexadores atualmente alocam stake e colecionam taxas de query - As epochs de estabelecimento são aquelas em que os canais de estado são estabelecidos. Portanto, os Indexadores são sujeitos a cortes caso os consumidores abram disputas contra eles. - Nas epochs de distribuição, são estabelecidos os canais de estado para as epochs, e os Indexadores podem reivindicar os seus rebates de taxas de query. - - As epochs finalizadas são aquelas que não têm mais rebates de taxas de consulta para serem reivindicadas pelos Indexadores; assim, finalizadas. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![Imagem do Explorer 9](/img/Epoch-Stats.png) ## Seu Perfil de Utilizador -Agora que já falamos sobre as estatísticas da rede, vamos para o seu perfil pessoal. Este é o seu lugar para ver a sua atividade na rede, sem se preocupar com a sua participação nela. A sua carteira de cripto será o seu perfil de utilizador, e com o Painel de Controlo do Utilizador, poderá ver: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Resumo do Perfil -Aqui, pode ver quaisquer ações atuais que tomou; também é onde pode achar as suas informações do perfil, sua descrição, e o seu ‘website’ (se tiver adicionado um). +In this section, you can view the following: + +- Any current actions you've taken. +- Your profile information, description, and website (if you added one). ![Imagem do Explorer 10](/img/Profile-Overview.png) ### Aba de Subgraphs -Se clicar na aba de Subgraphs, verá os seus subgraphs publicados. Isto não incluirá nenhum subgraph lançado com a CLI para fins de teste — os subgraphs só aparecerão após serem editados à rede descentralizada. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Imagem do Explorer 11](/img/Subgraphs-Overview.png) ### Aba de Indexação -Se clicar na aba de Indexação, verá uma tábua com todas as alocações ativas e históricas aos subgraphs, além de gráficos para analisar e ver o seu desempenho passado como um Indexador. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Esta secção também incluirá detalhes sobre as suas recompensas de Indexador e taxas de consulta. Verá as seguintes métricas: @@ -158,7 +189,9 @@ Esta secção também incluirá detalhes sobre as suas recompensas de Indexador ### Aba de Delegação -Os delegantes são importantes para a Graph Network. 

Um Delegador deve usar o seu conhecimento para escolher um Indexador que proverá um retorno saudável em recompensas. Aqui, pode achar detalhes das suas delegações ativas e históricas, além das métricas dos Indexadores aos quais delegou. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. Na primeira metade da página estão o seu gráfico de delegação e um gráfico só de recompensas. Na esquerda, pode ver os KPIs (indicadores de desempenho) que refletem as suas métricas atuais de delegação. diff --git a/website/pages/pt/network/indexing.mdx b/website/pages/pt/network/indexing.mdx index 9542ff878e3d..38318c43a5f3 100644 --- a/website/pages/pt/network/indexing.mdx +++ b/website/pages/pt/network/indexing.mdx @@ -42,7 +42,7 @@ O contrato RewardsManager tem uma função [getRewards](https://github.com/graph Muitos dos painéis feitos pela comunidade incluem valores pendentes de recompensas, que podem facilmente ser conferidos de forma manual ao seguir os seguintes passos: -1. Consulte a [subgraph da mainnet](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) para conseguir as IDs de todas as alocações ativas: +1. Consulte o [subgraph da mainnet](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) para conseguir as IDs para todas as alocações ativas: ```graphql query indexerAllocations { @@ -112,12 +112,12 @@ Os Indexadores podem se diferenciar ao aplicar técnicas avançadas para decidir - **Médio** — Indexador de Produção. Apoia 100 subgraphs e 200 – 500 pedidos por segundo. - **Grande** — Preparado para indexar todos os subgraphs usados atualmente e servir pedidos para o tráfego relacionado. -| Setup | Postgres
    (CPUs) | Postgres
    (memória em GBs) | Postgres
    (disco em TBs) | VMs
    (CPUs) | VMs
    (memória em GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Pequeno | 4 | 8 | 1 | 4 | 16 | -| Normal | 8 | 30 | 1 | 12 | 48 | -| Médio | 16 | 64 | 2 | 32 | 64 | -| Grande | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
    (CPUs) | Postgres
    (memória em GBs) | Postgres
    (disco em TBs) | VMs
    (CPUs) | VMs
    (memória em GBs) | +| ------- |:--------------------------:|:------------------------------------:|:----------------------------------:|:---------------------:|:-------------------------------:| +| Pequeno | 4 | 8 | 1 | 4 | 16 | +| Normal | 8 | 30 | 1 | 12 | 48 | +| Médio | 16 | 64 | 2 | 32 | 64 | +| Grande | 72 | 468 | 3.5 | 48 | 184 | ### Há alguma precaução básica de segurança que um Indexador deve tomar? @@ -149,20 +149,20 @@ Nota: Para apoiar o escalamento ágil, recomendamos que assuntos de consulta e i #### Graph Node -| Porta | Propósito | Rotas | Argumento CLI | Variável de Ambiente | -| --- | --- | --- | --- | --- | -| 8000 | Servidor HTTP GraphQL
    (para consultas de subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | WS GraphQL
    (para inscrições a subgraphs) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (para gerir lançamentos) | / | --admin-port | - | -| 8030 | API de status de indexamento do subgraph | /graphql | --index-node-port | - | -| 8040 | Métricas Prometheus | /metrics | --metrics-port | - | +| Porta | Propósito | Rotas | Argumento CLI | Variável de Ambiente | +| ----- | ------------------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | Servidor HTTP GraphQL
    (para consultas de subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | WS GraphQL
    (para inscrições a subgraphs) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (para gerir lançamentos) | / | --admin-port | - | +| 8030 | API de status de indexamento do subgraph | /graphql | --index-node-port | - | +| 8040 | Métricas Prometheus | /metrics | --metrics-port | - | #### Serviço Indexador -| Porta | Propósito | Rotas | Argumento CLI | Variável de Ambiente | -| --- | --- | --- | --- | --- | -| 7600 | Servidor HTTP GraphQL
    (para consultas de subgraph pagas) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Porta | Propósito | Rotas | Argumento CLI | Variável de Ambiente | +| ----- | ------------------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | Servidor HTTP GraphQL
    (para consultas de subgraph pagas) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Agente Indexador @@ -545,7 +545,7 @@ O **Indexer CLI** se conecta ao agente indexador, normalmente através do redire - `graph indexer rules maybe [options] ` — Configura a `decisionBasis` de um lançamento para obedecer o `rules`, comandando o agente indexador a usar regras de indexação para decidir se este lançamento será ou não indexado. -- `graph indexer actions get [options] ` - Retira uma ou mais ações usando o `all`, ou deixa o `action-id` vazio para mostrar todas as ações. Um argumento adicional `--status` pode ser usado para imprimir no console todas as ações de um certo status. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Fila da ação de alocação diff --git a/website/pages/pt/network/overview.mdx b/website/pages/pt/network/overview.mdx index f9004acd111c..f2cff6868d75 100644 --- a/website/pages/pt/network/overview.mdx +++ b/website/pages/pt/network/overview.mdx @@ -2,14 +2,20 @@ title: Visão Geral da Rede --- -A Graph Network é um protocolo descentralizado de indexação projetado para organizar dados de blockchain. Aplicativos usam o GraphQL para consultar APIs abertas chamadas de subgraphs, a fim de retirar dados indexados na rede. Com o The Graph, programadores podem construir aplicativos sem servidor, executados totalmente em infraestrutura pública. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Visão Geral +## How does it work? -A Graph Network consiste de Indexadores, Curadores e Delegantes que fornecem serviços à rede e servem dados para aplicativos Web3. Os consumidores usam os aplicativos e consomem os dados. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Especificações + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Economia de Token](/img/Network-roles@2x.png) -Para garantir a segurança económica da Graph Network, e a integridade de dados em consultas, os participantes depositam e usam Graph Tokens ([GRT](/tokenomics)). O GRT é um token de utilidade de trabalho ERC-20 usado para alocar recursos na rede. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Os Indexadores, Curadores e Delegantes ativos podem prover serviços e ganhar uma renda da rede, proporcional à quantidade de trabalho que realizam e ao seu stake em GRT. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/pt/new-chain-integration.mdx b/website/pages/pt/new-chain-integration.mdx index 02844784397e..9c4f0e35d292 100644 --- a/website/pages/pt/new-chain-integration.mdx +++ b/website/pages/pt/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integração de Novas Redes +title: Integração de Chains Novas --- -Atualmente, Graph Nodes podem indexar dados dos seguintes tipos de chain: +Chains podem trazer apoio a subgraphs para os seus ecossistemas ao iniciar uma nova integração de `graph-node`. Subgraphs são ferramentas poderosas de indexação que abrem infinitas possibilidades a programadores. O Graph Node já indexa dados das chains listadas aqui. Caso tenha interesse numa nova integração, há 2 estratégias para ela: -- Ethereum, através de EVM JSON-RPC e [Firehose Ethereum](https://github.com/streamingfast/firehose-ethereum) -- NEAR, através de um [Firehose NEAR](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, através de um [Firehose Cosmos](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, através de um [Firehose Arweave](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: Todas as soluções de integração do Firehose incluem Substreams, um motor de transmissão de grande escala com base no Firehose com apoio nativo ao `graph-node`, o que permite transformações paralelizadas. -Se tiver interesse em qualquer destas chains, a integração será uma questão de configuração e testes do Graph Node. +> Note que enquanto a abordagem recomendada é o desenvolvimento de um novo Firehose para todas as chains novas, ele só é requerido para chains que não sejam EVMs. -Caso tenha interesse num tipo diferente de chain, será necessária a criação de uma integração nova com o Graph Node. Recomendamos programar um novo Firehose para a chain em questão e então a integração daquele Firehose com o Graph Node. Mais informações abaixo. +## Estratégias de Integração -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -Se a blockchain for equivalente à EVM, e o cliente/node expor a API EVM JSON-RPC, o Graph Node deve ser capaz de indexar a nova chain. Para mais informações, confira [Como testar uma EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +Se a blockchain for equivalente a EVM e o cliente/node expor a API padrão de JSON-RPC, o Graph Node deve poder indexar a nova chain. -**2. Firehose** +#### Como testar uma EVM JSON-RPC -Para chains que não são baseadas em EVM, o Graph Node deverá ingerir dados de blockchain através da gRPC e definições de tipos conhecidas. Isto pode ser feito através do [Firehose](firehose/), uma nova tecnologia desenvolvida pelo [StreamingFast](https://www.streamingfast.io/) que providencia uma solução de indexação de blockchain altamente escalável com o uso de uma abordagem baseada em arquivos e que prioriza a transmissão de dados. Contacte a [equipe do StreamingFast](mailto:integrations@streamingfast.io/) caso precise de ajuda com a programação do Firehose. 
+Para que o Graph Node possa ingerir dados de uma chain EVM, o node RPC deve expor os seguintes métodos em EVM JSON-RPC: -## Diferenças entre EVM JSON-RPC e Firehose +- `eth_getLogs` +- `eth_call` (para blocos históricos, com EIP-1898 - requer node de arquivo) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, em um pedido conjunto em JSON-RPC +- `trace_filter` (opcional, para que o Graph Node tenha apoio a handlers de chamada)\* -Enquanto os dois são aptos para subgraphs, um Firehose é sempre exigido para programadores que querem construir com [Substreams](substreams/), como a construção de [subgraphs movidos a Substreams](cookbook/substreams-powered-subgraphs/). Além disso, o Firehose gera velocidades de indexação mais rápidas em comparação ao JSON-RPC. +### 2. Integração do Firehose -Novos integradores de chain EVM também podem considerar a abordagem com base no Firehose, com consideração aos benefícios do substreams e as suas imensas capacidades paralelas de indexação. Apoiar ambos permite que programadores escolham entre a construção de substreams ou subgraphs para a nova chain. +O [Firehose](https://firehose.streamingfast.io/firehose-setup/overview) é uma camada de extração de última geração, que coleta históricos em streams e arquivos planos em tempo real. A tecnologia do Firehose substitui estas chamadas de API com um fluxo de dados que utilizam um modelo de empurrão que envia dados ao node de indexação mais rapidamente. Isto ajuda a aumentar a velocidade da sincronização e da indexação. -> **NOTA**: Uma integração baseada no Firehose para chains EVM ainda exigirá que Indexadores executem o node RPC de arquivo da chain para indexar subgraphs corretamente. Isto se deve à inabilidade do Firehose para providenciar estados de contratos inteligentes que são tipicamente acessíveis pelo método RPC `eth_call`. (Vale lembrar que eth_calls [não são uma boa prática para programadores](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +O método principal de integrar o Firehose a chains é uma estratégia de polling de RPC. O nosso algoritmo de polling preverá quando um bloco novo irá chegar, e aumentará o ritmo em que ele verifica por um novo bloco quando se aproximar daquela hora, o que o faz uma solução de baixa latência muito eficiente. Para ajuda com a integração e a manutenção do Firehose, contacte a [equipa do StreamingFast](https://www.streamingfast.io/firehose-integration-program). Novas chains e os seus integrantes apreciarão a [consciência de fork](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) e as capacidades imensas de indexação paralelizada que o Firehose e os Substreams trazem ao seu ecossistema. ---- +> NOTA: Todas as integrações feitas pela equipa da StreamingFast incluem manutenção para o protocolo de réplica do Firehose no banco de código da chain. O StreamingFast rastreia todas as mudanças e lança binários quando o código é mudado, pelo programador ou pela StreamingFast. Isto inclui o lançamento de binários do Firehose/Substreams para o protocolo, a manutenção dos módulos de Substreams para o modelo de bloco da chain, e o lançamento de binários para o node da blockchain com a instrumentação, caso necessária. 
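As a quick, informal sanity check of the JSON-RPC methods listed at the start of this section, a candidate endpoint can be probed with a short script like the sketch below. This is only an illustration under stated assumptions (Node.js 18+ for the global `fetch`, and a hypothetical `RPC_URL`); it does not replace the Graph Node integration testing described later in this guide:

```typescript
// Minimal JSON-RPC probe sketch — spot-checks a couple of the required methods.
const RPC_URL = "http://localhost:8545" // hypothetical endpoint for the new chain

async function rpc(method: string, params: unknown[]): Promise<unknown> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  })
  const json = await res.json()
  if (json.error) throw new Error(`${method} failed: ${json.error.message}`)
  return json.result
}

async function main(): Promise<void> {
  // Two of the methods Graph Node relies on; the others can be probed the same way.
  console.log("net_version:", await rpc("net_version", []))
  console.log("latest block:", await rpc("eth_getBlockByNumber", ["latest", false]))
}

main().catch(console.error)
```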
-## Como testar uma EVM JSON-RPC +#### Instrumentação Específica do Firehose para chains EVM (`geth`) -Para que o Graph Node possa ingerir dados de uma chain EVM, o node RPC deve expor os seguintes métodos em EVM JSON-RPC: +Para chains EVM, há um nível mais profundo de dados que podem ser alcançados através do [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0) `geth`, uma colaboração entre a Go-Ethereum e a StreamingFast, na construção de um sistema de traços rico e de alto throughput. O Live Tracer é a solução mais compreensiva, o que resulta em detalhes de blocos [Estendidos](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425). Isto permite novos paradigmas de indexação, como correspondência de padrões de eventos com base em mudanças no estado, chamadas, árvores de chamadas de parentes, ou o acionamento de eventos com base nas mudanças nas próprias variáveis em um contrato inteligente. -- `eth_getLogs` -- `eth_call` \_(para blocos históricos, com EIP-1898 - requer node de arquivo): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, em um pedido conjunto em JSON-RPC -- _`trace_filter`_ _(opcional, para que o Graph Node tenha apoio a call handlers)_ +![Bloco base x bloco Estendido](/img/extended-vs-base-substreams-blocks.png) + +> NOTA: Esta melhoria no Firehose requer que chains usem o motor de EVM na `versão geth 1.13.0` adiante. + +## Considerações de EVM - Diferença entre JSON-RPC e Firehose + +Enquanto ambos o JSON-RPC e o Firehose são próprios para subgraphs, um Firehose é sempre necessário para programadores que querem construir com [Substreams](https://substreams.streamingfast.io). Apoiar os Substreams permite que programadores construam [subgraphs movidos a Substreams](/cookbook/substreams-powered-subgraphs) para a nova chain, e tem o potencial de melhorar o desempenho dos seus subgraphs. Além disto, o Firehose — como um substituto pronto para a camada de extração JSON-RPC do `graph-node` — reduz em 90% o número de chamadas RPC exigidas para indexação geral. + +- Todas essas chamadas `getLogs` e roundtrips são substituídas por um único fluxo que chega no coração do `graph-node`, um modelo de bloco único para todos os subgraphs que processa. -### Configuração do Graph Node +> NOTA: Uma integração baseada no Firehose para chains EVM ainda exigirá que os Indexadores executem o node RPC de arquivo da chain para indexar subgraphs corretamente. Isto é porque o Firehose não pode fornecer estados de contratos inteligentes que são tipicamente acessíveis pelo método RPC  `eth_call` . (Vale lembrar que eth_calls não são uma boa prática para programadores) -**Primeiro, prepare o seu ambiente local** +## Como Configurar um Graph Node + +Configurar um Graph Node é tão fácil quanto preparar o seu ambiente local. Quando o seu ambiente local estiver pronto, será possível testar a integração com a edição local de um subgraph. 1. [Clone o Graph Node](https://github.com/graphprotocol/graph-node) -2. Modifique [esta linha](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) para ela incluir o nome da nova rede e a URL do EVM JSON-RPC - > Não mude o nome do env var. Ele deve permanecer como `ethereum` mesmo se o nome da rede for diferente. -3. Execute um node IPFS ou use aquele usado pelo The Graph: https://api.thegraph.com/ipfs/ -**Teste a integração com o lançamento local de um subgraph** +2. 
Modifique [esta linha](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) para ela incluir o nome da nova rede e a URL do EVM JSON-RPC -1. Instale o [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Crie um subgraph de exemplo simples. Aqui estão algumas opções: - 1. O contrato inteligente e o subgraph [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) pré-inclusos são bons para começar - 2. Inicie um subgraph local de qualquer contrato inteligente existente ou de um ambiente de programação em solidity [com o uso do Hardhat com um plugin do Graph](https://github.com/graphprotocol/hardhat-graph) -3. Adapte o `subgraph.yaml` resultante com a mudança do `dataSources.network` para o mesmo nome passado anteriormente ao Graph Node. -4. Crie o seu subgraph no Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publique o seu subgraph no Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Não mude o nome do env var. Ele deve permanecer como `ethereum` mesmo se o nome da rede for diferente. -O Graph Node deve então sincronizar o subgraph lançado caso não haja erros. Dê um tempo para que ele sincronize, e depois envie alguns queries em GraphQL ao endpoint da API produzido pelos logs. +3. Execute um node IPFS ou use aquele usado pelo The Graph: https://api.thegraph.com/ipfs/ ---- +### Como testar um JSON-RPC com a edição local de um subgraph + +1. Instale a [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Crie um subgraph de exemplo simples. Aqui estão algumas opções: + 1. O contrato inteligente e o subgraph [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) pré-inclusos são bons pontos de partida + 2. Inicie um subgraph local a partir de qualquer contrato inteligente existente ou de um ambiente de programação em solidity [com o uso do Hardhat com um plugin do Graph](https://github.com/graphprotocol/hardhat-graph) +3. Adapte o `subgraph.yaml` resultante com a mudança do  `dataSources.network` para o mesmo nome passado anteriormente ao Graph Node. +4. Crie o seu subgraph no Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Edite o seu subgraph no Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` -## Integração de uma nova chain com o Firehose +O Graph Node deve então sincronizar o subgraph lançado caso não haja erros. Deixe-o sincronizar por um tempo, e depois envie alguns queries em GraphQL ao endpoint da API produzido pelos logs. -Integrar uma nova chain também é possível com a abordagem do Firehose. Esta é, atualmente, a melhor opção para chains não-EVM, e necessária para o apoio do substreams. Há mais documentações sobre como o Firehose funciona, como adicionar apoio ao Firehose para uma nova chain, e como integrá-la com o Graph Node. Documentos recomendados para integradores: +## Subgraphs movidos por Substreams -1. [Documentos gerais sobre o Firehose](firehose/) -2. [Como adicionar apoio do Firehose a uma nova chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integração do Graph Node com uma nova chain através do Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. 
decoded transactions, logs, and smart-contract events) and Substreams codegen tools are included. These tools make it possible to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/pt/operating-graph-node.mdx index 38202b73da9c..5eb6c795dcd5 100644 --- a/website/pages/pt/operating-graph-node.mdx +++ b/website/pages/pt/operating-graph-node.mdx @@ -77,13 +77,13 @@ Veja uma configuração de exemplo completa do Kubernetes no [repositório de in Durante a execução, o Graph Node expõe as seguintes portas: -| Porta | Propósito | Rotas | Argumento CLI | Variável de Ambiente | -| --- | --- | --- | --- | --- | -| 8000 | Servidor HTTP GraphQL
    (para queries de subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | WS GraphQL
    (para inscrições a subgraphs) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (para gerir lançamentos) | / | --admin-port | - | -| 8030 | API de status de indexamento do subgraph | /graphql | --index-node-port | - | -| 8040 | Métricas Prometheus | /metrics | --metrics-port | - | +| Porta | Propósito | Rotas | Argumento CLI | Variável de Ambiente | +| ----- | ----------------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | Servidor HTTP GraphQL
    (para queries de subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | WS GraphQL
    (para inscrições a subgraphs) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (para gerir lançamentos) | / | --admin-port | - | +| 8030 | API de status de indexamento do subgraph | /graphql | --index-node-port | - | +| 8040 | Métricas Prometheus | /metrics | --metrics-port | - | > **Importante:** Cuidado ao expor portas publicamente; as **portas de administração** devem ser trancadas a sete chaves. Isto inclui o endpoint JSON-RPC do Graph Node. diff --git a/website/pages/pt/publishing/publishing-a-subgraph.mdx b/website/pages/pt/publishing/publishing-a-subgraph.mdx index 11e9eb60f822..acbf7b78103f 100644 --- a/website/pages/pt/publishing/publishing-a-subgraph.mdx +++ b/website/pages/pt/publishing/publishing-a-subgraph.mdx @@ -2,48 +2,48 @@ title: Como Editar um Subgraph na Rede Descentralizada --- -Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio) and it's ready to go into production, you can publish it to the decentralized network. +Após [lançar o seu subgraph ao Subgraph Studio](/deploying/deploying-a-subgraph-to-studio) e prepará-lo para entrar em produção, será possível editá-lo na rede descentralizada. -When you publish a subgraph to the decentralized network, you make it available for: +Ao editar um subgraph à rede descentralizada, ele será disponibilizado para: -- [Curators](/network/curating) to begin curating it. -- [Indexers](/network/indexing) to begin indexing it. +- Curadoria por [Curadores](/network/curating). +- Indexação por [Indexadores](/network/indexing). -Check out the list of [supported networks](/developing/supported-networks). +Confira a lista de [redes apoiadas](/developing/supported-networks). -## Publishing from Subgraph Studio +## Edição do Subgraph Studio -1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard -2. Click on the **Publish** button -3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/). +1. Entre no painel do [Subgraph Studio](https://thegraph.com/studio/) +2. Clique no botão **Publish** (Editar) +3. O seu subgraph passará a ser visível no [Graph Explorer](https://thegraph.com/explorer/). -All published versions of an existing subgraph can: +Todas as versões editadas de um subgraph existente podem: -- Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/arbitrum/arbitrum-faq). +- Ser editadas ao Arbitrum One. [Aprenda mais sobre a Graph Network no Arbitrum](/arbitrum/arbitrum-faq). -- Index data on any of the [supported networks](/developing/supported-networks), regardless of the network on which the subgraph was published. +- Indexar dados em quaisquer das [redes apoiadas](/developing/supported-networks), independente da rede na qual o subgraph foi editado. ### Como atualizar metadados para um subgraph editado -- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio. -- Once you’ve saved your changes and published the updates, they will appear in Graph Explorer. -- It's important to note that this process will not create a new version since your deployment has not changed. +- Após editar o seu subgraph à rede descentralizada, será possível editar os metadados a qualquer hora no Subgraph Studio. +- Após salvar as suas mudanças e publicar as atualizações, elas aparecerão no Graph Explorer. +- É importante notar que este processo não criará uma nova versão, já que a sua edição não terá mudado. 
## Publicação da CLI -As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). +Desde a versão 0.73.0, é possível editar o seu subgraph com a [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -1. Open the `graph-cli`. -2. Use the following commands: `graph codegen && graph build` then `graph publish`. -3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. +1. Abra a `graph-cli`. +2. Use os seguintes comandos: `graph codegen && graph build` e depois `graph publish`. +3. Uma janela será aberta para o programador conectar a sua carteira, adicionar metadados e lançar o seu subgraph finalizado a uma rede de sua escolha. ![cli-ui](/img/cli-ui.png) -### Customizing your deployment +### Como personalizar o seu lançamento -You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags: +É possível enviar a sua build a um node IPFS específico e personalizar ainda mais o seu lançamento com as seguintes flags: ``` USAGE @@ -51,29 +51,29 @@ USAGE ] FLAGS - -h, --help Show CLI help. - -i, --ipfs= [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node. - --ipfs-hash= IPFS hash of the subgraph manifest to deploy. - --protocol-network=
  • + + - Добавление встроенного типа `nonnull/NonNullable` ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2)) + + ### Оптимизации @@ -37,15 +41,21 @@ title: Руководство по миграции AssemblyScript - Кэширование большего количества обращений к полям в std Map и Set ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8)) - Оптимизация по двум степеням в `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2)) + + ### Прочее - Тип литерала массива теперь можно определить по его содержимому ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0)) - Стандартная библиотека обновлена до версии Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) + + ## Как выполнить обновление? 1. Измените мэппинги `apiVersion` в `subgraph.yaml` на `0.0.6`: + + ```yaml ... dataSources: @@ -56,8 +66,11 @@ dataSources: ... ``` + 2. Обновите используемый Вами `graph-cli` до `latest` версии, выполнив: + + ```bash # если он у Вас установлен глобально npm install --global @graphprotocol/graph-cli@latest @@ -66,21 +79,31 @@ npm install --global @graphprotocol/graph-cli@latest npm install --save-dev @graphprotocol/graph-cli@latest ``` + 3. Сделайте то же самое для `graph-ts`, но вместо глобальной установки сохраните его в своих основных зависимостях: + + ```bash npm install --save @graphprotocol/graph-ts@latest ``` + 4. Следуйте остальной части руководства, чтобы исправить языковые изменения. 5. Снова запустите `codegen` и `deploy`. + + ## Критические изменения + + ### Обнуляемость В более старой версии AssemblyScript можно было создать такой код: + + ```typescript function load(): Value | null { ... } @@ -88,8 +111,11 @@ let maybeValue = load(); maybeValue.aMethod(); ``` + Однако в новой версии, поскольку значение обнуляемо, требуется проверка, например, такая: + + ```typescript let maybeValue = load() @@ -98,28 +124,39 @@ if (maybeValue) { } ``` + Или принудительно вот такая: + + ```typescript let maybeValue = load()! // прерывается во время выполнения, если значение равно null maybeValue.aMethod() ``` + Если Вы не уверены, что выбрать, мы рекомендуем всегда использовать безопасную версию. Если значение не существует, Вы можете просто выполнить раннее выражение if с возвратом в обработчике субграфа. + + ### Затенение переменных Раньше можно было сделать [затенение переменных](https://en.wikipedia.org/wiki/Variable_shadowing) и код, подобный этому, работал: + + ```typescript let a = 10 let b = 20 let a = a + b ``` + Однако теперь это больше невозможно, и компилятор возвращает эту ошибку: + + ```typescript ERROR TS2451: Cannot redeclare block-scoped variable 'a' @@ -128,12 +165,16 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' in assembly/index.ts(4,3) ``` + Вам нужно будет переименовать дублированные переменные, если Вы используете затенение переменных. + ### Нулевые сравнения Выполняя обновление своего субграфа, иногда Вы можете получить такие ошибки: + + ```typescript ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'. 
if (decimals == null) { @@ -141,8 +182,11 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i in src/mappings/file.ts(41,21) ``` + Чтобы решить эту проблему, Вы можете просто изменить оператор `if` на что-то вроде этого: + + ```typescript if (!decimals) { @@ -151,17 +195,23 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i if (decimals === null) { ``` + Подобное относится к случаям, когда вместо == используется !=. + + ### Кастинг Раньше для кастинга обычно использовалось ключевое слово `as`, например: + + ```typescript let byteArray = new ByteArray(10) let uint8Array = byteArray as Uint8Array // equivalent to: byteArray ``` + Однако это работает только в двух случаях: - Примитивный кастинг (между такими типами, как `u8`, `i32`, `bool`; например: `let b: isize = 10; b as usize`); @@ -169,6 +219,8 @@ let uint8Array = byteArray as Uint8Array // equivalent to: byteArray Примеры: + + ```typescript // примитивный кастинг let a: usize = 10 @@ -176,6 +228,9 @@ let b: isize = 5 let c: usize = a + (b as usize) ``` + + + ```typescript // укрупнение по наследованию классов class Bytes extends Uint8Array {} @@ -184,11 +239,14 @@ let bytes = new Bytes(2) // bytes // то же, что: bytes as Uint8Array ``` + Есть два сценария, в которых Вы можете захотеть выполнить преобразование, но использовать `as`/`var` **небезопасно**: - Понижение уровня наследования классов (superclass → subclass) - Между двумя типами, имеющими общий супер класс + + ```typescript // понижение уровня наследования классов class Bytes extends Uint8Array {} @@ -197,6 +255,9 @@ let uint8Array = new Uint8Array(2) // uint8Array // перерывы в работе :( ``` + + + ```typescript // между двумя типами, имеющими общий суперкласс class Bytes extends Uint8Array {} @@ -206,8 +267,11 @@ let bytes = new Bytes(2) // bytes // перерывы в работе :( ``` + В таких случаях можно использовать функцию `changetype`: + + ```typescript // понижение уровня наследования классов class Bytes extends Uint8Array {} @@ -216,6 +280,9 @@ let uint8Array = new Uint8Array(2) changetype(uint8Array) // работает :) ``` + + + ```typescript // между двумя типами, имеющими общий суперкласс class Bytes extends Uint8Array {} @@ -225,8 +292,11 @@ let bytes = new Bytes(2) changetype(bytes) // работает :) ``` + Если Вы просто хотите удалить значение NULL, Вы можете продолжать использовать оператор `as` (или `variable`), но помните, что значение не может быть нулевым, иначе оно сломается. 
+ + ```typescript // удалить значение NULL let previousBalance = AccountBalance.load(balanceId) // AccountBalance | null @@ -238,6 +308,7 @@ if (previousBalance != null) { let newBalance = new AccountBalance(balanceId) ``` + В случае обнуления мы рекомендуем Вам обратить внимание на [функцию проверки обнуления](https://www.assemblyscript.org/basics.html#nullability-checks), это сделает ваш код чище 🙂 Также мы добавили еще несколько статических методов в некоторые типы, чтобы облегчить кастинг: @@ -247,10 +318,14 @@ let newBalance = new AccountBalance(balanceId) - BigInt.fromByteArray - ByteArray.fromBigInt + + ### Проверка нулевого значения с доступом к свойству Чтобы применить [функцию проверки на нулевое значение](https://www.assemblyscript.org/basics.html#nullability-checks), Вы можете использовать операторы `if` или тернарный оператор (`?` и `:`) следующим образом: + + ```typescript let something: string | null = 'data' @@ -267,8 +342,11 @@ if (something) { } ``` + Однако это работает только тогда, когда Вы выполняете `if` / тернарную операцию для переменной, а не для доступа к свойству, например: + + ```typescript class Container { data: string | null @@ -280,8 +358,11 @@ container.data = 'data' let somethingOrElse: string = container.data ? container.data : 'else' // не компилируется ``` + В результате чего выдается ошибка: + + ```typescript ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'. @@ -289,8 +370,11 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` + Чтобы решить эту проблему, Вы можете создать переменную для доступа к этому свойству, чтобы компилятор мог выполнять проверку допустимости значений NULL: + + ```typescript class Container { data: string | null @@ -304,10 +388,15 @@ let data = container.data let somethingOrElse: string = data ? data : 'else' // компилируется просто отлично :) ``` + + + ### Перегрузка оператора при доступе к свойствам Если Вы попытаетесь суммировать (например) тип, допускающий значение Null (из доступа к свойству), с типом, не допускающим значение Null, компилятор AssemblyScript вместо того, чтобы выдать предупреждение об ошибке компиляции, предупреждающую, что одно из значений допускает значение Null, просто компилируется молча, давая возможность сломать код во время выполнения. + + ```typescript class BigInt extends Uint8Array { @operator('+') @@ -330,8 +419,11 @@ let wrapper = new Wrapper(y) wrapper.n = wrapper.n + x // не выдает ошибок времени компиляции, как это должно быть ``` + Мы открыли вопрос по этому поводу для компилятора AssemblyScript, но пока, если Вы выполняете подобные операции в своих мэппингах субграфов, Вам следует изменить их так, чтобы перед этим выполнялась проверка на нулевое значение. + + ```typescript let wrapper = new Wrapper(y) @@ -342,26 +434,37 @@ if (!wrapper.n) { wrapper.n = wrapper.n + x // теперь `n` гарантированно будет BigInt ``` + + + ### Инициализация значения Если у Вас есть такой код: + + ```typescript var value: Type // null value.x = 10 value.y = 'content' ``` + Он будет скомпилирован, но сломается во время выполнения. 
Это происходит из-за того, что значение не было инициализировано, поэтому убедитесь, что Ваш субграф инициализировал свои значения, например так: + + ```typescript var value = new Type() // initialized value.x = 10 value.y = 'content' ``` + Также, если у Вас есть свойства, допускающие значение NULL, в объекте GraphQL, например: + + ```graphql type Total @entity { id: Bytes! @@ -369,8 +472,11 @@ type Total @entity { } ``` + И у Вас есть код, аналогичный этому: + + ```typescript let total = Total.load('latest') @@ -381,8 +487,11 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` + Вам необходимо убедиться, что значение `total.amount` инициализировано, потому что, если Вы попытаетесь получить доступ к сумме, как в последней строке, произойдет сбой. Таким образом, Вы либо инициализируете его первым: + + ```typescript let total = Total.load('latest') @@ -394,8 +503,11 @@ if (total === null) { total.tokens = total.tokens + BigInt.fromI32(1) ``` + Или Вы можете просто изменить свою схему GraphQL, чтобы не использовать тип, допускающий значение NULL для этого свойства. Тогда мы инициализируем его нулем на этапе `codegen` 😉 + + ```graphql type Total @entity { id: Bytes! @@ -403,6 +515,9 @@ type Total @entity { } ``` + + + ```typescript let total = Total.load('latest') @@ -413,10 +528,15 @@ if (total === null) { total.amount = total.amount + BigInt.fromI32(1) ``` + + + ### Инициализация свойств класса Если Вы экспортируете какие-либо классы со свойствами, которые являются другими классами (декларированными Вами или стандартной библиотекой), то это выглядит следующим образом: + + ```typescript class Thing {} @@ -425,8 +545,11 @@ export class Something { } ``` + Компилятор выдаст ошибку, потому что Вам нужно либо добавить инициализатор для свойств, являющихся классами, либо добавить оператор `!`: + + ```typescript export class Something { constructor(public value: Thing) {} @@ -449,44 +572,63 @@ export class Something { } ``` + + + ### Инициализация массива Класс `Array` по-прежнему принимает число для инициализации длины списка, однако Вам следует соблюдать осторожность, поскольку такие операции, как `.push`, фактически увеличивают размер, а не добавляют его в начало, например: + + ```typescript let arr = new Array(5) // ["", "", "", "", ""] arr.push('something') // ["", "", "", "", "", "something"] // size 6 :( ``` + В зависимости от используемых типов, например, допускающих значение NULL, и способа доступа к ним, можно столкнуться с ошибкой времени выполнения, подобной этой: + + ``` ERRO Handler skipped due to execution failure, error: Mapping aborted at ~lib/array.ts, line 110, column 40, with message: Element type must be nullable if array is holey wasm backtrace: 0: 0x19c4 - !~lib/@graphprotocol/graph-ts/index/format 1: 0x1e75 - !~lib/@graphprotocol/graph-ts/common/collections/Entity#constructor 2: 0x30b9 - !node_modules/@graphprotocol/graph-ts/global/global/id_of_type ``` + Для того чтобы фактически начать, Вы должны либо инициализировать `Array` нулевым размером, следующим образом: + + ```typescript let arr = new Array(0) // [] arr.push('something') // ["something"] ``` + Или Вы должны изменить его через индекс: + + ```typescript let arr = new Array(5) // ["", "", "", "", ""] arr[0] = 'something' // ["something", "", "", "", ""] ``` + + + ### Схема GraphQL Это не прямое изменение AssemblyScript, но Вам, возможно, придется обновить файл `schema.graphql`. Теперь Вы больше не можете определять поля в своих типах, которые являются списками, не допускающими значение NULL. 
Если у Вас такая схема: + + ```graphql type Something @entity { id: Bytes! @@ -498,8 +640,11 @@ type MyEntity @entity { } ``` + Вам нужно добавить `!` к элементу типа List, например, так: + + ```graphql type Something @entity { id: Bytes! @@ -511,8 +656,11 @@ type MyEntity @entity { } ``` + Изменение произошло из-за различий в допустимости значений NULL между версиями AssemblyScript и связано с файлом `src/generated/schema.ts` (путь по умолчанию, возможно, Вы его изменили). + + ### Прочее - `Map#set` и `Set#add` согласованы со спецификацией, произведён возврат к `this` ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2)) diff --git a/website/pages/ru/release-notes/graphql-validations-migration-guide.mdx b/website/pages/ru/release-notes/graphql-validations-migration-guide.mdx index b7cb792259b3..25238b858a50 100644 --- a/website/pages/ru/release-notes/graphql-validations-migration-guide.mdx +++ b/website/pages/ru/release-notes/graphql-validations-migration-guide.mdx @@ -284,8 +284,8 @@ query { ```graphql query { - # В конце концов, у нас есть два определения "x", указывающие - # на разные поля! + # В конце концов, у нас есть два определения "x", указывающие + # на разные поля! ...A ...B } @@ -437,7 +437,7 @@ query { ```graphql query purposes { # Если в схеме "name" определено как "String", - # этот запрос не пройдёт валидацию. + # этот запрос не пройдёт валидацию. purpose(name: 1) { id } @@ -447,8 +447,8 @@ query purposes { query purposes($name: Int!) { # Если "name" определено в схеме как `String`, - # этот запрос не пройдёт валидацию, потому что - # используемая переменная имеет тип `Int` + # этот запрос не пройдёт валидацию, потому что + # используемая переменная имеет тип `Int` purpose(name: $name) { id } diff --git a/website/pages/ru/sps/introduction.mdx b/website/pages/ru/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/ru/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. 
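+
+For illustration only, a minimal Entity Changes module might look like the following Rust sketch, assuming a `Transfers` Protobuf message produced by an upstream Substreams module; the module path, entity name, and field names here are placeholders rather than part of any published package:
+
+```rust
+// Hypothetical `graph_out`-style module: emits entity changes that graph-node
+// stores directly as subgraph entities, instead of handling triggers in mappings.
+use substreams_entity_change::pb::entity::EntityChanges;
+use substreams_entity_change::tables::Tables;
+
+use crate::pb::example::Transfers; // placeholder: generated by `substreams protogen`
+
+#[substreams::handlers::map]
+fn graph_out(transfers: Transfers) -> Result<EntityChanges, substreams::errors::Error> {
+    let mut tables = Tables::new();
+
+    for transfer in transfers.transfers {
+        // Each row becomes a `Transfer` entity keyed by its id.
+        tables
+            .create_row("Transfer", &transfer.id)
+            .set("amount", &transfer.amount)
+            .set("to", &transfer.to);
+    }
+
+    Ok(tables.to_entity_changes())
+}
+```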
+ +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/ru/sps/triggers-example.mdx b/website/pages/ru/sps/triggers-example.mdx new file mode 100644 index 000000000000..82172537ad4c --- /dev/null +++ b/website/pages/ru/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## Предварительные требования + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+  const input: protoEvents = Protobuf.decode<protoEvents>(bytes, protoEvents.decode)
+
+  for (let i = 0; i < input.data.length; i++) {
+    const event = input.data[i]
+
+    if (event.transfer != null) {
+      let entity_id: string = `${event.txnId}-${i}`
+      const entity = new MyTransfer(entity_id)
+      entity.amount = event.transfer!.instruction!.amount.toString()
+      entity.source = event.transfer!.accounts!.source
+      entity.designation = event.transfer!.accounts!.destination
+
+      if (event.transfer!.accounts!.signer!.single != null) {
+        entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+      } else if (event.transfer!.accounts!.signer!.multisig != null) {
+        entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+      }
+      entity.save()
+    }
+  }
+}
+```
+
+## Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+
+## Conclusion
+
+You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/pages/ru/sps/triggers.mdx b/website/pages/ru/sps/triggers.mdx
new file mode 100644
index 000000000000..ed19635d4768
--- /dev/null
+++ b/website/pages/ru/sps/triggers.mdx
@@ -0,0 +1,37 @@
+---
+title: Substreams Triggers
+---
+
+Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework.
+
+> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container.
+
+The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+  let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+  if (transactions.length == 0) {
+    log.info('No transactions found', [])
+    return
+  }
+
+  for (let i = 0; i < transactions.length; i++) {
+    // 2.
+    let transaction = transactions[i]
+
+    let entity = new Transaction(transaction.hash) // 3.
+    entity.from = transaction.from
+    entity.to = transaction.to
+    entity.save()
+  }
+}
+```
+
+Here's what you’re seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; it can then be used like any other AssemblyScript object
+2.
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/ru/substreams.mdx b/website/pages/ru/substreams.mdx index a6878e2dc49e..350cb53d10c0 100644 --- a/website/pages/ru/substreams.mdx +++ b/website/pages/ru/substreams.mdx @@ -4,9 +4,11 @@ title: Подпотоки ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## How Substreams Works in 4 Steps @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/ru/sunrise.mdx b/website/pages/ru/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/ru/sunrise.mdx +++ b/website/pages/ru/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. 
Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? 
-Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? 
- -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. 
The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/ru/supported-network-requirements.mdx b/website/pages/ru/supported-network-requirements.mdx index 2677263a063f..21a6f62ebb20 100644 --- a/website/pages/ru/supported-network-requirements.mdx +++ b/website/pages/ru/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Сеть | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| Сеть | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Арбитрум | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/ru/tap.mdx b/website/pages/ru/tap.mdx new file mode 100644 index 000000000000..68b38d7bf6c2 --- /dev/null +++ b/website/pages/ru/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## Обзор + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
+
+## Blockchain Addresses
+
+### Contracts
+
+| Contract            | Arbitrum Sepolia (421614)                    | Arbitrum Mainnet (42161)                     |
+| ------------------- | -------------------------------------------- | -------------------------------------------- |
+| TAP Verifier        | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` |
+| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` |
+| Escrow              | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` |
+
+### Gateway
+
+| Component  | Edge and Node Mainnet (Arbitrum Mainnet)      | Edge and Node Testnet (Arbitrum Sepolia)      |
+| ---------- | --------------------------------------------- | --------------------------------------------- |
+| Sender     | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467`  | `0xC3dDf37906724732FfD748057FEBe23379b0710D`  |
+| Signers    | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211`  | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE`  |
+| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
+
+### Требования
+
+In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`.
+
+- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+
+> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+
+## Migration Guide
+
+### Software versions
+
+| Component       | Версия      | Image Link                                                                                                                 |
+| --------------- | ----------- | -------------------------------------------------------------------------------------------------------------------------- |
+| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) |
+| indexer-agent   | PR #995     | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80)          |
+| tap-agent       | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6)        |
+
+### Steps
+
+1. **Indexer Agent**
+
+   - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components).
+   - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs.
+
+2. **Indexer Service**
+
+   - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+   - Like the older version, you can easily scale Indexer Service horizontally. It is still stateless.
+
+3. **TAP Agent**
+
+   - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+
+4.
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Примечания: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/sv/about.mdx b/website/pages/sv/about.mdx index 464822d00a4a..a819e0212c3b 100644 --- a/website/pages/sv/about.mdx +++ b/website/pages/sv/about.mdx @@ -2,46 +2,66 @@ title: Om The Graph --- -Denna sida kommer att förklara vad The Graph är och hur du kan komma igång. - ## Vad är The Graph? -The Graph är en decentraliserad protokoll för indexering och frågning av blockkedjedata. The Graph möjliggör frågor på data som är svår att fråga direkt. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -Projekt med komplexa smarta kontrakt som [Uniswap](https://uniswap.org/) och NFT-initiativ som [Bored Ape Yacht Club](https://boredapeyachtclub.com/) lagrar data på Ethereum-blockkedjan, vilket gör det mycket svårt att läsa något annat än grundläggande data direkt från blockkedjan. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -Du skulle också kunna bygga din egen server, bearbeta transaktionerna där, spara dem i en databas och skapa en API-slutpunkt ovanpå alltihop för att fråga data. Men den här möjligheten är [resurskrävande](/network/benefits/), kräver underhåll, utgör en enskild felkälla och bryter viktiga säkerhetsegenskaper som krävs för decentralisering. +### How The Graph Functions -**Indexering av blockkedjedata är verkligen, verkligen svårt.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). 
+- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## Hur The Graph Fungerar +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graf lär sig vad och hur man indexerar Ethereum-data baserat på subgrafbeskrivningar, kända som subgraf-manifestet. Subgrafbeskrivningen definierar de intressanta smarta kontrakten för en subgraf, händelserna i dessa kontrakt att vara uppmärksam på och hur man kartlägger händelsedata till data som The Graf kommer att lagra i sin databas. +- When creating a subgraph, you need to write a subgraph manifest. -När du har skrivit ett `subgraf-manifest`, använder du Graf CLI för att lagra definitionen i IPFS och talar om för indexeringen att börja indexera data för den subgrafen. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -Denna diagram ger mer detaljer om datatillflödet när ett subgraf-manifest har distribuerats och hanterar Ethereum-transaktioner: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![En grafik som förklarar hur The Graf använder Graf Node för att servera frågor till datakonsumenter](/img/graph-dataflow.png) Följande steg följs: -1. En dapp lägger till data i Ethereum genom en transaktion på ett smart kontrakt. -2. Det smarta kontraktet sänder ut en eller flera händelser under bearbetningen av transaktionen. -3. Graf Node skannar kontinuerligt Ethereum efter nya block och den data för din subgraf de kan innehålla. -4. Graf Node hittar Ethereum-händelser för din subgraf i dessa block och kör de kartläggande hanterarna du tillhandahållit. Kartläggningen är en WASM-modul som skapar eller uppdaterar de dataenheter som Graph Node lagrar som svar på Ethereum-händelser. -5. Dappen frågar Graph Node om data som indexerats från blockkedjan med hjälp av nodens [GraphQL-slutpunkt](https://graphql.org/learn/). Graph Node översätter i sin tur GraphQL-frågorna till frågor för sin underliggande datalagring för att hämta dessa data, och använder lagrets indexeringsegenskaper. Dappen visar dessa data i ett användarvänligt gränssnitt för slutanvändare, som de använder för att utfärda nya transaktioner på Ethereum. Cykeln upprepas. +1. En dapp lägger till data i Ethereum genom en transaktion på ett smart kontrakt. +2. Det smarta kontraktet sänder ut en eller flera händelser under bearbetningen av transaktionen. +3. Graf Node skannar kontinuerligt Ethereum efter nya block och den data för din subgraf de kan innehålla. +4. Graf Node hittar Ethereum-händelser för din subgraf i dessa block och kör de kartläggande hanterarna du tillhandahållit. Kartläggningen är en WASM-modul som skapar eller uppdaterar de dataenheter som Graph Node lagrar som svar på Ethereum-händelser. +5. Dappen frågar Graph Node om data som indexerats från blockkedjan med hjälp av nodens [GraphQL-slutpunkt](https://graphql.org/learn/). Graph Node översätter i sin tur GraphQL-frågorna till frågor för sin underliggande datalagring för att hämta dessa data, och använder lagrets indexeringsegenskaper. Dappen visar dessa data i ett användarvänligt gränssnitt för slutanvändare, som de använder för att utfärda nya transaktioner på Ethereum. Cykeln upprepas. 
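+
+As a minimal, hypothetical sketch of step 5, a dapp could issue a GraphQL query against a Graph Node query endpoint along the following lines. The endpoint URL, the `gravatars` entity, and its fields are placeholders; use the names defined by the subgraph you actually query.
+
+```typescript
+// Minimal sketch: query a Graph Node GraphQL endpoint from a dapp.
+// The URL and the `gravatars` entity below are illustrative placeholders.
+const endpoint = "http://localhost:8000/subgraphs/name/org/subgraph";
+
+async function queryIndexedData(): Promise<void> {
+  const response = await fetch(endpoint, {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify({
+      query: "{ gravatars(first: 5) { id displayName imageUrl } }",
+    }),
+  });
+  const { data } = await response.json();
+  console.log(data.gravatars); // a dapp would render this in its UI
+}
+
+queryIndexedData();
+```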
## Nästa steg -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/sv/arbitrum/arbitrum-faq.mdx b/website/pages/sv/arbitrum/arbitrum-faq.mdx index f454a4d1e420..2afe8b430db2 100644 --- a/website/pages/sv/arbitrum/arbitrum-faq.mdx +++ b/website/pages/sv/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum Vanliga frågor Klicka [here](#billing-on-arbitrum-faqs) om du vill hoppa till Arbitrum Billing Vanliga frågor. -## Varför implementerar The Graf en L2 lösning? +## Why did The Graph implement an L2 Solution? -Genom att skala The Graf på L2 kan nätverksdeltagare förvänta sig: +By scaling The Graph on L2, network participants can now benefit from: - Uppemot 26x besparingar på gasavgifter @@ -14,7 +14,7 @@ Genom att skala The Graf på L2 kan nätverksdeltagare förvänta sig: - Säkerhet ärvt från Ethereum -Genom att skala protokollets smarta kontrakt till L2 kan nätverksdeltagare interagera oftare till en reducerad kostnad i gasavgifter. Till exempel kan indexerare öppna och stänga allokeringar för att indexera ett större antal subgrafer med högre frekvens, utvecklare kan distribuera och uppdatera subgrafer med större lätthet, delegatorer kan delegera GRT med ökad frekvens och curatorer kan lägga till eller ta bort signaler till ett större antal subgrafer – åtgärder som tidigare ansågs vara för kostsamma för att utföra ofta på grund av gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. Graph gemenskapen beslutade att gå vidare med Arbitrum förra året efter resultatet av diskussionen [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). @@ -41,27 +41,21 @@ För att dra fördel av att använda The Graph på L2, använd den här rullgard ## Som subgrafutvecklare, datakonsument, indexerare, curator eller delegator, vad behöver jag göra nu? -Det krävs inga omedelbara åtgärder, men nätverksdeltagare uppmuntras att börja flytta till Arbitrum för att dra nytta av fördelarna med L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -Kärnutvecklarteam arbetar med att skapa L2 överföringsverktyg som kommer att göra det betydligt lättare att flytta delegering, kurering och subgrafer till Arbitrum. 
Nätverksdeltagare kan förvänta sig att L2 överföringsverktyg ska vara tillgängliga till sommaren 2023. +All indexing rewards are now entirely on Arbitrum. -Från och med den 10 april 2023 präglas 5 % av alla indexeringsbelöningar på Arbitrum. När nätverksdeltagandet ökar, och när rådet godkänner det, kommer indexeringsbelöningar gradvis att flyttas från Ethereum till Arbitrum, och så småningom flyttas helt till Arbitrum. - -## Om jag skulle vilja delta i nätverket på L2, vad ska jag göra? - -Vänligen hjälp [testa nätverket](https://testnet.thegraph.com/explorer) på L2 och rapportera feedback om din upplevelse av [Discord](https://discord.gg/graphprotocol). - -## Finns det några risker med att skala nätverket till L2? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Allt har testats noggrant och en beredskapsplan finns på plats för att säkerställa en säker och sömlös övergång. Detaljer finns [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Kommer befintliga subgrafer på Ethereum att fortsätta att fungera? +## Are existing subgraphs on Ethereum working? -Ja, The Graph Nätverk kontrakt kommer att fungera parallellt på både Ethereum och Arbitrum tills de flyttas helt till Arbitrum vid ett senare tillfälle. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## Kommer GRT att ha ett nytt smart kontrakt utplacerat på Arbitrum? +## Does GRT have a new smart contract deployed on Arbitrum? Ja, GRT har ytterligare ett [smart kontrakt på Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). Ethereums huvudnät [GRT-kontrakt](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) kommer dock att fortsätta att fungera. diff --git a/website/pages/sv/billing.mdx b/website/pages/sv/billing.mdx index 96e596687dea..5c12aa5cd9c1 100644 --- a/website/pages/sv/billing.mdx +++ b/website/pages/sv/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Klicka på knappen "Anslut plånbok" längst upp till höger på sidan. Du kommer att omdirigeras till sidan för plånboksval. Välj din plånbok och klicka på "Anslut". 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. 
Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ Du kan lära dig mer om att få ETH på Binance [här](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. 
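+As a purely illustrative calculation, a site with 5,000 daily visits whose most active page issues 10 queries on load would budget roughly 5,000 × 10 × 30 ≈ 1.5M queries per month, which sits inside the 1M-2M starting range suggested above.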
@@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/sv/chain-integration-overview.mdx b/website/pages/sv/chain-integration-overview.mdx index 3511f1e5a650..5a81ed695be7 100644 --- a/website/pages/sv/chain-integration-overview.mdx +++ b/website/pages/sv/chain-integration-overview.mdx @@ -6,12 +6,12 @@ En transparent och styrbaserad integrationsprocess utformades för blockchain-te ## Fas 1. Teknisk Integration -- Team arbetar med en Graph Node-integration och Firehose för icke-EVM-baserade kedjor. [Här är hur](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Team startar protokollintegrationsprocessen genom att skapa en Forumtråd [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (Ny Datakällor underkategori under Governance & GIPs). Att använda standardforummallen är obligatoriskt. ## Fas 2. Integrationsvalidering -- Team samarbetar med kärnutvecklare, Graph Foundation och operatörer av GUI:er och nätverksportar, såsom [Subgraf Studio](https://thegraph.com/studio/), för att säkerställa en smidig integrationsprocess. Detta innebär att tillhandahålla nödvändig backend-infrastruktur, såsom den integrerande kedjans JSON RPC eller Firehose-endpoints. Team som vill undvika självhostning av sådan infrastruktur kan dra nytta av The Graphs gemenskap av nodoperatörer (Indexers) för att göra det, vilket Stiftelsen kan hjälpa till med. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graf Indexers testar integrationen på The Graphs testnät. - Kärnutvecklare och Indexers övervakar stabilitet, prestanda och datadeterminism. @@ -38,7 +38,7 @@ Denna process är relaterad till Subgraf Data Service och gäller endast nya Sub Detta skulle endast påverka protokollstödet för indexbelöningar på Substreams-drivna subgrafer. Den nya Firehose-implementeringen skulle behöva testas på testnätet, enligt den metodik som beskrivs för Fas 2 i detta GIP. På liknande sätt, förutsatt att implementationen är prestanda- och tillförlitlig, skulle en PR på [Funktionsstödsmatrisen](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) krävas (`Substreams data sources` Subgraf Feature), liksom en ny GIP för protokollstöd för indexbelöningar. Vem som helst kan skapa PR och GIP; Stiftelsen skulle hjälpa till med Rådets godkännande. -### 3. Hur lång tid tar denna process? +### 3. 
How much time will the process of reaching full protocol support take? Tiden till mainnet förväntas vara flera veckor, varierande baserat på tidpunkten för integrationsutveckling, om ytterligare forskning krävs, testning och buggfixar, och, som alltid, timingen av styrdighetsprocessen som kräver gemenskapens återkoppling. @@ -46,4 +46,4 @@ Protokollstöd för indexbelöningar beror på intressenternas bandbredd att for ### 4. Hur kommer prioriteringar att hanteras? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/sv/cookbook/arweave.mdx b/website/pages/sv/cookbook/arweave.mdx index 2ed14c71ee68..dd975d7f18f3 100644 --- a/website/pages/sv/cookbook/arweave.mdx +++ b/website/pages/sv/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Schema definition beskriver strukturen för den resulterande subgraf databasen o Hanterarna för bearbetning av händelser är skrivna i [AssemblyScript](https://www.assemblyscript.org/). -Arweave indexering introducerar Arweave-specifika datatyper till [AssemblyScript API](/developing/assemblyscript-api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/sv/cookbook/base-testnet.mdx b/website/pages/sv/cookbook/base-testnet.mdx index 87e4ccd8c8b4..604edb6199d9 100644 --- a/website/pages/sv/cookbook/base-testnet.mdx +++ b/website/pages/sv/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ Din subgraf snigel är en identifierare för din subgraf. CLI verktyget leder di The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- Schema (schema.graphql) - GraphQL schemat definierar vilken data du vill hämta från subgrafen. - AssemblyScript mappningar (mapping.ts) - Detta är koden som översätter data från dina datakällor till de enheter som definieras i schemat. -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). 
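+
+To make the mappings file described above more concrete, the sketch below shows what a minimal `mapping.ts` handler could look like. It is only illustrative: the `Transfer` event, its parameters, and the generated import paths are assumptions that depend on the contract ABI and schema your scaffold was generated from.
+
+```typescript
+// Hypothetical handler: turns a Transfer event into a Transfer entity.
+// The generated modules below exist only after running `graph codegen`
+// against your own ABI and schema; names will differ for your project.
+import { Transfer as TransferEvent } from "../generated/Contract/Contract";
+import { Transfer } from "../generated/schema";
+
+export function handleTransfer(event: TransferEvent): void {
+  // Build a unique id from the transaction hash and log index.
+  const id = event.transaction.hash.toHexString() + "-" + event.logIndex.toString();
+  const entity = new Transfer(id);
+  entity.from = event.params.from;
+  entity.to = event.params.to;
+  entity.value = event.params.value;
+  entity.save();
+}
+```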
diff --git a/website/pages/sv/cookbook/cosmos.mdx b/website/pages/sv/cookbook/cosmos.mdx index 0eb1879bbaa2..430f7f9d1920 100644 --- a/website/pages/sv/cookbook/cosmos.mdx +++ b/website/pages/sv/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and Hanterarna för bearbetning av händelser är skrivna i [AssemblyScript](https://www.assemblyscript.org/). -Cosmos indexering introducerar Cosmos specifika datatyper till [AssemblyScript API](/developing/assemblyscript-api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/sv/cookbook/grafting.mdx b/website/pages/sv/cookbook/grafting.mdx index aa498bdcaaa6..8ddfa857f8da 100644 --- a/website/pages/sv/cookbook/grafting.mdx +++ b/website/pages/sv/cookbook/grafting.mdx @@ -22,7 +22,7 @@ För mer information kan du kontrollera: - [Ympning](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -I den här handledningen kommer vi att täcka ett grundläggande användningsfall. Vi kommer att ersätta ett befintligt kontrakt med ett identiskt kontrakt (med en ny adress, men samma kod). Ympa sedan den befintliga subgrafen på "bas"-subgrafen som spårar det nya kontraktet. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Viktig anmärkning om ympning vid uppgradering till nätverket @@ -30,7 +30,7 @@ I den här handledningen kommer vi att täcka ett grundläggande användningsfal ### Varför är detta viktigt? -Ympning är en kraftfull funktion som gör det möjligt att "transplantera" en subgraph till en annan, och överföra historisk data från den befintliga subgraphen till en ny version. Även om detta är ett effektivt sätt att bevara data och spara tid på indexering, kan grafting introducera komplexiteter och potentiella problem vid övergången från en hostad miljö till det decentraliserade nätverket. Det är inte möjligt att använda grafting för att föra tillbaka en subgraph från The Graf Nätverk till den hostade tjänsten eller Subgraf Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Bästa praxis @@ -80,7 +80,7 @@ dataSources: ``` - `Lock`-datakällan är abi- och kontraktsadressen vi får när vi kompilerar och distribuerar kontraktet -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - Avsnittet `mappning` definierar utlösare av intresse och de funktioner som ska köras som svar på dessa utlösare. I det här fallet lyssnar vi efter händelsen `Withdrawal` och anropar funktionen `handleWithdrawal` när den sänds. ## Ympnings manifest Definition @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. 
## Ytterligare resurser -Om du vill ha mer erfarenhet av ympning, här är några exempel på populära kontrakt: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/sv/cookbook/near.mdx b/website/pages/sv/cookbook/near.mdx index 7ff18330d655..92f7b6979a38 100644 --- a/website/pages/sv/cookbook/near.mdx +++ b/website/pages/sv/cookbook/near.mdx @@ -37,7 +37,7 @@ Det finns tre aspekter av subgraf definition: **schema.graphql:** en schema fil som definierar vilken data som lagras för din subgraf, och hur man frågar den via GraphQL. Kraven för NEAR undergrafer täcks av [den befintliga dokumentationen](/developing/creating-a-subgraph#the-graphql-schema). -**AssemblyScript Mappings:**[AssemblyScript kod](/developing/assemblyscript-api) som översätter från händelsedata till de enheter som definieras i ditt schema. NEAR stöd introducerar NEAR specifika datatyper och ny JSON parsnings funktion. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. Under subgrafutveckling finns det två nyckelkommandon: @@ -98,7 +98,7 @@ Schemadefinition beskriver strukturen för den resulterande subgraf databasen oc Hanterarna för bearbetning av händelser är skrivna i [AssemblyScript](https://www.assemblyscript.org/). -NEAR indexering introducerar NEAR specifika datatyper till [AssemblyScript API](/developing/assemblyscript-api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ Dessa typer skickas till block & kvittohanterare: - Blockhanterare kommer att få ett `Block` - Kvittohanterare kommer att få ett `ReceiptWithOutcome` -Annars är resten av [AssemblyScript API](/developing/assemblyscript-api) tillgänglig för NEAR subgraf utvecklare under körning av mappning. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -Detta inkluderar en ny JSON parsnings funktion - loggar på NEAR sänds ofta ut som strängade JSON. En ny funktion `json.fromString(...)` är tillgänglig som en del av [JSON API](/developing/assemblyscript-api#json-api) för att tillåta utvecklare för att enkelt bearbeta dessa loggar. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## Utplacera en NEAR Subgraf diff --git a/website/pages/sv/cookbook/subgraph-uncrashable.mdx b/website/pages/sv/cookbook/subgraph-uncrashable.mdx index e6ef7ed8cc76..c77c02c2bee6 100644 --- a/website/pages/sv/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/sv/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Säker subgraf kodgenerator - Ramverket innehåller också ett sätt (via konfigurationsfilen) att skapa anpassade, men säkra, sätterfunktioner för grupper av entitetsvariabler. 
På så sätt är det omöjligt för användaren att ladda/använda en inaktuell grafenhet och det är också omöjligt att glömma att spara eller ställa in en variabel som krävs av funktionen. -- Varningsloggar registreras som loggar som indikerar var det finns ett brott mot subgraf logik för att hjälpa till att korrigera problemet för att säkerställa datanoggrannhet. Dessa loggar kan ses i The Graphs värdtjänst under avsnittet "Loggar". +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable kan köras som en valfri flagga med kommandot Graph CLI codegen. diff --git a/website/pages/sv/cookbook/upgrading-a-subgraph.mdx b/website/pages/sv/cookbook/upgrading-a-subgraph.mdx index fc848cc59124..f7ff1f580d49 100644 --- a/website/pages/sv/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/sv/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ Se till att **Uppdatera subgraf detaljer i Utforskaren** är markerad och klicka ## Avskrivning av en subgraf i The Graph Nätverk -Följ stegen [here](/managing/deprecating-a-subgraph) för att depreciera din subgraph och ta bort den från The Graph Nätverk. +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## Förfrågan om en undergraf + fakturering på The Graph Nätverk diff --git a/website/pages/sv/deploying/multiple-networks.mdx b/website/pages/sv/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..f35ab9327a4c --- /dev/null +++ b/website/pages/sv/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## Distribuera undergrafen till flera nätverk + +I vissa fall vill du distribuera samma undergraf till flera nätverk utan att duplicera all dess kod. Den största utmaningen med detta är att kontraktsadresserna på dessa nätverk är olika. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // nätverkets namn + "dataSource1": { // namn på datakällan + "address": "0xabc...", // Avtalets adress (frivillig uppgift) + "startBlock": 123456 // startBlock (valfritt) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... 
+} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Så här ska nätverkets konfigurationsfil se ut: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Nu kan vi köra något av följande kommandon: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Använda subgraph.yaml mallen + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +och + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... 
+ "mustache": "^3.1.0" + } +} +``` + +To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Sepolia: +yarn prepare:sepolia && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. + +## Subgraph Studio subgraf arkivpolitik + +A subgraph version in Studio is archived if and only if it meets the following criteria: + +- The version is not published to the network (or pending publish) +- The version was created 45 or more days ago +- The subgraph hasn't been queried in 30 days + +In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. + +Varje subgraf som påverkas av denna policy har en möjlighet att ta tillbaka versionen i fråga. + +## Kontroll av undergrafens hälsa + +Om en subgraf synkroniseras framgångsrikt är det ett gott tecken på att den kommer att fortsätta att fungera bra för alltid. Nya triggers i nätverket kan dock göra att din subgraf stöter på ett otestat feltillstånd eller så kan den börja halka efter på grund av prestandaproblem eller problem med nodoperatörerna. + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. 
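+
+As a small, hypothetical sketch of the check described above, the script below posts that status query to a locally running `graph-node` (port `8030`, as mentioned earlier) and prints how far the subgraph is behind the chain head. The subgraph name is a placeholder.
+
+```typescript
+// Poll the index-node status endpoint of a local graph-node.
+const statusEndpoint = "http://localhost:8030/graphql";
+
+async function checkSubgraphHealth(subgraphName: string): Promise<void> {
+  const query = `{
+    indexingStatusForCurrentVersion(subgraphName: "${subgraphName}") {
+      synced
+      health
+      chains { chainHeadBlock { number } latestBlock { number } }
+    }
+  }`;
+  const res = await fetch(statusEndpoint, {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify({ query }),
+  });
+  const { data } = await res.json();
+  const status = data.indexingStatusForCurrentVersion;
+  const chain = status.chains[0];
+  const blocksBehind =
+    Number(chain.chainHeadBlock.number) - Number(chain.latestBlock.number);
+  console.log(`health=${status.health} synced=${status.synced} blocksBehind=${blocksBehind}`);
+}
+
+checkSubgraphHealth("org/subgraph");
+```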
diff --git a/website/pages/sv/developing/creating-a-subgraph.mdx b/website/pages/sv/developing/creating-a-subgraph.mdx index f24bc403d338..d3ec07c5ad4c 100644 --- a/website/pages/sv/developing/creating-a-subgraph.mdx +++ b/website/pages/sv/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: Skapa en Subgraph --- -En subgraph extraherar data från en blockchain, bearbetar den och lagrar den så att den kan frågas enkelt via GraphQL. +This detailed guide provides instructions to successfully create a subgraph. -![Definiera en Subgraph](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -Subgraph-definitionen består av några filer: +![Definiera en Subgraph](/img/defining-a-subgraph.png) -- `subgraph.yaml`: en YAML-fil som innehåller subgraph-manifestet +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: ett GraphQL-schema som definierar vilka data som lagras för din subgraph och hur man frågar efter det via GraphQL +## Komma igång -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) kod som översätter från händelsedata till de enheter som är definierade i ditt schema (t.ex. `mapping.ts` i den här handledningen) +### Installera Graph CLI -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## Installera Graph CLI +Kör ett av följande kommandon på din lokala dator: -Graph CLI är skrivet i JavaScript, och du måste installera antingen `yarn` eller `npm` för att använda det; det antas att du har yarn i det följande. +#### Using [npm](https://www.npmjs.com/) -När du har `yarn`, installera Graph CLI genom att köra +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Installera med yarn:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**Installera med npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. 
+ +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## Från ett Befintligt kontrakt +### From an existing contract -Följande kommando skapar en subgraf som indexerar alla händelser i ett befintligt kontrakt. Det försöker hämta kontraktets ABI från Etherscan och faller tillbaka till att begära en lokal filsökväg. Om något av de valfria argumenten saknas tar det dig genom ett interaktivt formulär. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -`` är ID för din subgraf i Subgraf Studio, det kan hittas på din subgraf detaljsida. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. + +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -## Från ett Exempel Subgraph +### From an example subgraph -Det andra läget som `graph init` stöder är att skapa ett nytt projekt från ett exempel på en undergraf. Följande kommando gör detta: +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. -## Lägg till nya datakällor i en befintlig Subgraf +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -Från och med `v0.31.0` stöder `graph-cli` att lägga till nya datakällor i en befintlig subgraf genom kommandot `graph add`. +## Add new `dataSources` to an existing subgraph + +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
    [] @@ -78,22 +88,45 @@ Options: --network-file Sökväg till konfigurationsfil för nätverk (standard: "./networks.json") ``` -Kommandot `add` hämtar ABI: en från Etherscan (om inte en ABI-sökväg anges med alternativet `--abi`) och skapar en ny `dataSource` på samma sätt som kommandot `graph init` skapar en `dataSource` `--from-contract`, och uppdaterar schemat och mappningarna därefter. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- Alternativet `--merge-entities` identifierar hur utvecklaren vill hantera konflikter med `entity`- och `event`-namn: + + - Om `true`: den nya `dataSource` ska använda befintliga `eventHandlers` & `entities`. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- Kontraktsadressen kommer att skrivas till `networks.json` för den relevanta nätverket. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. + +## Components of a subgraph + +### Subgrafens manifest -Alternativet `--merge-entities` identifierar hur utvecklaren vill hantera konflikter med `entity`- och `event`-namn: +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -- Om `true`: den nya `dataSource` ska använda befintliga `eventHandlers` & `entities`. -- Om `false`: en ny entitet och händelsehanterare ska skapas med `${dataSourceName}{EventName}`. +The **subgraph definition** consists of the following files: -Kontraktsadressen kommer att skrivas till `networks.json` för den relevanta nätverket. +- `subgraph.yaml`: Contains the subgraph manifest -> **Obs:** När du använder det interaktiva kommandoraden, efter att ha kört `graph init` framgångsrikt, kommer du att bli ombedd att lägga till en ny `dataSource`. +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -## Subgrafens manifest +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) -Subgrafens manifest `subgraph.yaml` definierar de smarta kontrakten som din subgraf indexerar, vilka händelser från dessa kontrakt som ska uppmärksammas och hur man kartlägger händelsedata till entiteter som Graph Node lagrar och tillåter att fråga. Den fullständiga specifikationen för subgrafens manifest finns [här](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +A single subgraph can: -För exempelsubgrafen är `subgraph.yaml`: +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ En enskild subgraf kan indexera data från flera smarta kontrakt. Lägg till en Utlösarna för en datakälla inom ett block ordnas med hjälp av följande process: -1. Händelse- och anropsutlösare ordnas först efter transaktionsindex inom blocket. -2. Händelse- och anropsutlösare inom samma transaktion ordnas med hjälp av en konvention: händelseutlösare först, sedan anropsutlösare, varje typ respekterar ordningen de definieras i manifestet. -3. Blockutlösare körs efter händelse- och anropsutlösare, i den ordning de definieras i manifestet. +1. Händelse- och anropsutlösare ordnas först efter transaktionsindex inom blocket. +2. Händelse- och anropsutlösare inom samma transaktion ordnas med hjälp av en konvention: händelseutlösare först, sedan anropsutlösare, varje typ respekterar ordningen de definieras i manifestet. +3. Blockutlösare körs efter händelse- och anropsutlösare, i den ordning de definieras i manifestet. Dessa ordningsregler kan komma att ändras. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. 
@@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| Version | Versionsanteckningar | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| Version | Versionsanteckningar | +|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### Hämta ABI: erna @@ -442,16 +475,16 @@ För vissa entitetstyper konstrueras `id` från id:erna hos två andra entiteter Vi stödjer följande skalartyper i vår GraphQL API: -| Typ | Beskrivning | -| --- | --- | -| `Bytes` | Bytematris, representerad som en hexadecimal sträng. Vanligt används för Ethereum-hashar och adresser. | -| `String` | Skalär för `string`-värden. Nolltecken stöds inte och tas automatiskt bort. | -| `Boolean` | Skalär för `boolean`-värden. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Stora heltal. Används för Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` typer. Observera: Allt under `uint32`, som `int32`, `uint24` eller `int8` representeras som `i32`. | -| `BigDecimal` | `BigDecimal` Högprecisionsdecimaler representerade som en signifikant och en exponent. Exponentområdet är från −6143 till +6144. Avrundat till 34 signifikanta siffror. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. 
| +| Typ | Beskrivning | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Bytematris, representerad som en hexadecimal sträng. Vanligt används för Ethereum-hashar och adresser. | +| `String` | Skalär för `string`-värden. Nolltecken stöds inte och tas automatiskt bort. | +| `Boolean` | Skalär för `boolean`-värden. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Stora heltal. Används för Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` typer. Observera: Allt under `uint32`, som `int32`, `uint24` eller `int8` representeras som `i32`. | +| `BigDecimal` | `BigDecimal` Högprecisionsdecimaler representerade som en signifikant och en exponent. Exponentområdet är från −6143 till +6144. Avrundat till 34 signifikanta siffror. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enums @@ -593,7 +626,7 @@ Detta mer avancerade sätt att lagra många-till-många-relationer kommer att le #### Lägga till kommentarer i schemat -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -617,7 +650,12 @@ type _Schema_ name: "bandSearch" language: en algorithm: rank - include: [{ entity: "Band", fields: [{ name: "name" }, { name: "description" }, { name: "bio" }] }] + include: [ + { + entity: "Band" + fields: [{ name: "name" }, { name: "description" }, { name: "bio" }] + } + ] ) type Band @entity { @@ -776,13 +814,13 @@ import { // The events classes: NewGravatar, UpdatedGravatar, -} from '../generated/Gravity/Gravity' +} from "../generated/Gravity/Gravity"; ``` Utöver detta genereras en klass för varje entitetstyp i subgrafens GraphQL-schema. Dessa klasser tillhandahåller typsäker entitetsladdning, läs- och skrivåtkomst till entitetsfält samt en `save()`-metod för att skriva entiteter till lagret. Alla entitetsklasser skrivs till `/schema.ts`, vilket gör att mappningar kan importera dem med ```javascript -import { Gravatar } from '../generated/schema' +import { Gravatar } from "../generated/schema" ``` > **Observera:** Kodgenerering måste utföras igen efter varje ändring av GraphQL-schemat eller ABIn som ingår i manifestet. Det måste också utföras minst en gång innan du bygger eller distribuerar subgrafet. @@ -805,7 +843,7 @@ dataSources: name: Factory network: mainnet source: - address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + address: "0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95" abi: Factory mapping: kind: ethereum/events @@ -863,17 +901,17 @@ templates: I det sista steget uppdaterar du mappningen av huvudkontraktet för att skapa en dynamisk datakällinstans från en av mallarna. I det här exemplet ändrar du mappningen av huvudkontraktet för att importera mallen `Exchange` och anropar metoden `Exchange.create(address)` för att börja indexera det nya växlingskontraktet. 
```typescript -import { Exchange } from '../generated/templates' +import { Exchange } from "../generated/templates"; export function handleNewExchange(event: NewExchange): void { // Start indexing the exchange; `event.params.exchange` is the // address of the new exchange contract - Exchange.create(event.params.exchange) + Exchange.create(event.params.exchange); } ``` > ** Notera:** En ny datakälla bearbetar endast anrop och händelser för det block där den skapades och alla efterföljande block, men bearbetar inte historiska data, dvs. data som finns i tidigare block. -> +> > Om tidigare block innehåller data som är relevanta för den nya datakällan, är det bäst att indexera dessa data genom att läsa kontraktets aktuella status och skapa enheter som representerar denna status vid den tidpunkt då den nya datakällan skapas. ### Kontext för datakälla @@ -881,22 +919,22 @@ export function handleNewExchange(event: NewExchange): void { Datakällans kontext gör det möjligt att skicka extra konfiguration när en mall instansieras. I vårt exempel kan vi säga att börser är associerade med ett visst handelspar, vilket ingår i händelsen `NewExchange`. Den informationen kan skickas till den instansierade datakällan, så här: ```typescript -import { Exchange } from '../generated/templates' +import { Exchange } from "../generated/templates"; export function handleNewExchange(event: NewExchange): void { - let context = new DataSourceContext() - context.setString('tradingPair', event.params.tradingPair) - Exchange.createWithContext(event.params.exchange, context) + let context = new DataSourceContext(); + context.setString("tradingPair", event.params.tradingPair); + Exchange.createWithContext(event.params.exchange, context); } ``` Inuti en mappning av mallen `Exchange` kan kontexten sedan nås: ```typescript -import { dataSource } from '@graphprotocol/graph-ts' +import { dataSource } from "@graphprotocol/graph-ts"; -let context = dataSource.context() -let tradingPair = context.getString('tradingPair') +let context = dataSource.context(); +let tradingPair = context.getString("tradingPair") ``` Det finns sättare och hämtare som `setString` och `getString` för alla värdestyper. @@ -911,7 +949,7 @@ dataSources: name: ExampleSource network: mainnet source: - address: '0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95' + address: "0xc0a47dFe034B400B47bDaD5FecDa2621de6c4d95" abi: ExampleContract startBlock: 6627917 mapping: @@ -930,7 +968,7 @@ dataSources: ``` > **Observera:** Blocket där kontraktet skapades kan snabbt sökas upp på Etherscan: -> +> > 1. Sök efter kontraktet genom att ange dess adress i sökfältet. > 2. Klicka på transaktionshashen för skapandet i avsnittet `Kontraktsskapare`. > 3. Ladda sidan med transaktionsdetaljer där du hittar startblocket för det kontraktet. @@ -945,9 +983,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. 
``` indexerHints: @@ -982,29 +1020,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1057,7 +1072,7 @@ dataSources: name: Gravity network: mainnet source: - address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + address: "0x731a10897d267e19b34503ad902d0a29173ba4b1" abi: Gravity mapping: kind: ethereum/events @@ -1081,15 +1096,15 @@ dataSources: Varje anropsbehandlare tar en enda parameter med en typ som motsvarar namnet på den kallade funktionen. I det ovanstående exempelsubgrafet innehåller kartläggningen en hanterare för när funktionen `createGravatar` anropas och tar emot en `CreateGravatarCall`-parameter som ett argument: ```typescript -import { CreateGravatarCall } from '../generated/Gravity/Gravity' -import { Transaction } from '../generated/schema' +import { CreateGravatarCall } from "../generated/Gravity/Gravity"; +import { Transaction } from "../generated/schema"; export function handleCreateGravatar(call: CreateGravatarCall): void { - let id = call.transaction.hash - let transaction = new Transaction(id) - transaction.displayName = call.inputs._displayName - transaction.imageUrl = call.inputs._imageUrl - transaction.save() + let id = call.transaction.hash; + let transaction = new Transaction(id); + transaction.displayName = call.inputs._displayName; + transaction.imageUrl = call.inputs._imageUrl; + transaction.save(); } ``` @@ -1120,7 +1135,7 @@ dataSources: name: Gravity network: dev source: - address: '0x731a10897d267e19b34503ad902d0a29173ba4b1' + address: "0x731a10897d267e19b34503ad902d0a29173ba4b1" abi: Gravity mapping: kind: ethereum/events @@ -1172,9 +1187,9 @@ Den definierade hanteraren med filtret once kommer att anropas endast en gång i ```ts export function handleOnce(block: ethereum.Block): void { - let data = new InitialData(Bytes.fromUTF8('initial')) - data.data = 'Setup data here' - data.save() + let data = new InitialData(Bytes.fromUTF8("initial")); + data.data = "Setup data here"; + data.save(); } ``` @@ -1183,12 +1198,12 @@ export function handleOnce(block: ethereum.Block): void { Mappningsfunktionen tar emot ett `ethereum.Block` som sitt enda argument. Liksom mappningsfunktioner för händelser kan denna funktion komma åt befintliga subgrafiska enheter i lagret, anropa smarta kontrakt och skapa eller uppdatera enheter. 
```typescript -import { ethereum } from '@graphprotocol/graph-ts' +import { ethereum } from "@graphprotocol/graph-ts"; export function handleBlock(block: ethereum.Block): void { - let id = block.hash - let entity = new Block(id) - entity.save() + let id = block.hash; + let entity = new Block(id); + entity.save(); } ``` @@ -1477,7 +1492,7 @@ The file data source must specifically mention all the entity types which it wil #### Skapa en ny hanterare för att bearbeta filer -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). CID för filen som en läsbar sträng kan nås via `dataSource` enligt följande: @@ -1488,26 +1503,26 @@ const cid = dataSource.stringParam() Exempel på hanterare: ```typescript -import { json, Bytes, dataSource } from '@graphprotocol/graph-ts' -import { TokenMetadata } from '../generated/schema' +import { json, Bytes, dataSource } from "@graphprotocol/graph-ts"; +import { TokenMetadata } from "../generated/schema"; export function handleMetadata(content: Bytes): void { - let tokenMetadata = new TokenMetadata(dataSource.stringParam()) - const value = json.fromBytes(content).toObject() + let tokenMetadata = new TokenMetadata(dataSource.stringParam()); + const value = json.fromBytes(content).toObject(); if (value) { - const image = value.get('image') - const name = value.get('name') - const description = value.get('description') - const externalURL = value.get('external_url') + const image = value.get("image"); + const name = value.get("name"); + const description = value.get("description"); + const externalURL = value.get("external_url"); if (name && image && description && externalURL) { - tokenMetadata.name = name.toString() - tokenMetadata.image = image.toString() - tokenMetadata.externalURL = externalURL.toString() - tokenMetadata.description = description.toString() + tokenMetadata.name = name.toString(); + tokenMetadata.image = image.toString(); + tokenMetadata.externalURL = externalURL.toString(); + tokenMetadata.description = description.toString(); } - tokenMetadata.save() + tokenMetadata.save(); } } ``` @@ -1526,29 +1541,29 @@ For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave b Exempel: ```typescript -import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' +import { TokenMetadata as TokenMetadataTemplate } from "../generated/templates"; -const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' +const ipfshash = "QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm"; //Denna exempelkod är för en undergraf för kryptosamverkan. Ovanstående ipfs-hash är en katalog med tokenmetadata för alla kryptosamverkande NFT:er. 
export function handleTransfer(event: TransferEvent): void { - let token = Token.load(event.params.tokenId.toString()) + let token = Token.load(event.params.tokenId.toString()); if (!token) { - token = new Token(event.params.tokenId.toString()) - token.tokenID = event.params.tokenId + token = new Token(event.params.tokenId.toString()); + token.tokenID = event.params.tokenId; - token.tokenURI = '/' + event.params.tokenId.toString() + '.json' - const tokenIpfsHash = ipfshash + token.tokenURI + token.tokenURI = "/" + event.params.tokenId.toString() + ".json"; + const tokenIpfsHash = ipfshash + token.tokenURI; //Detta skapar en sökväg till metadata för en enskild Crypto coven NFT. Den konkaterar katalogen med "/" + filnamn + ".json" - token.ipfsURI = tokenIpfsHash + token.ipfsURI = tokenIpfsHash; - TokenMetadataTemplate.create(tokenIpfsHash) + TokenMetadataTemplate.create(tokenIpfsHash); } - token.updatedAtTimestamp = event.block.timestamp - token.owner = event.params.to.toHexString() - token.save() + token.updatedAtTimestamp = event.block.timestamp; + token.owner = event.params.to.toHexString(); + token.save(); } ``` diff --git a/website/pages/sv/developing/developer-faqs.mdx b/website/pages/sv/developing/developer-faqs.mdx index 4c799cd8c0c5..ca0d80d667e6 100644 --- a/website/pages/sv/developing/developer-faqs.mdx +++ b/website/pages/sv/developing/developer-faqs.mdx @@ -2,99 +2,118 @@ title: Vanliga frågor för utvecklare --- -## 1. Vad är en subgraf? +This page summarizes some of the most common questions for developers building on The Graph. -En subgraf är en anpassad API byggd på blockkedjedata. Subgrafer frågas med hjälp av GraphQL-frågespråket och distribueras till en Graph Node med hjälp av Graph CLI. När de har distribuerats och publicerats till The Graphs decentraliserade nätverk, bearbetar Indexers subgrafer och gör dem tillgängliga för frågekonsumenter av subgrafer. +## Subgraph Related -## 2. Kan jag ta bort min subgraf? +### 1. Vad är en subgraf? -Det är inte möjligt att ta bort subgrafer efter att de har skapats. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Kan jag ändra namnet på min subgraf? +### 2. What is the first step to create a subgraph? -Nej. När en subgraf har skapats kan namnet inte ändras. Se till att tänka noga på detta innan du skapar din subgraf så att den är lätt sökbar och identifierbar av andra dappar. +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Kan jag ändra det GitHub-konto som är kopplat till min subgraf? +### 3. Can I still create a subgraph if my smart contracts don't have events? -Nej. När en subgraf har skapats kan det associerade GitHub-kontot inte ändras. Tänk noggrant på detta innan du skapar din subgraf. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. Kan jag fortfarande skapa en subgraf om mina smarta kontrakt inte har händelser? 
+If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -Det rekommenderas starkt att strukturera dina smarta kontrakt så att de har händelser som är kopplade till data du är intresserad av att fråga. Händelsehanterare i subgrafen utlöses av kontrakthändelser och är överlägset det snabbaste sättet att hämta användbar data. +### 4. Kan jag ändra det GitHub-konto som är kopplat till min subgraf? -Om de kontrakt du arbetar med inte innehåller händelser kan din subgraf använda sig av uppropshanterare och blockhanterare för att utlösa indexering. Detta rekommenderas dock inte, eftersom prestandan kommer att vara betydligt långsammare. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. Är det möjligt att distribuera en subgraf med samma namn för flera nätverk? +### 5. How do I update a subgraph on mainnet? -Du behöver separata namn för flera nätverk. Även om du inte kan ha olika subgrafer under samma namn finns det bekväma sätt att ha en enda kodbas för flera nätverk. Läs mer om detta i vår dokumentation: [Omdistribuera en subgraf](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. Hur skiljer sig mallar från datakällor? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Mallar låter dig skapa datakällor på flykt medan din subgraf indexerar. Det kan vara så att ditt kontrakt kommer att skapa nya kontrakt när människor interagerar med det, och eftersom du känner till formen av dessa kontrakt (ABI, händelser osv.) i förväg kan du definiera hur du vill indexera dem i en mall och när de skapas kommer din subgraf att skapa en dynamisk datakälla genom att tillhandahålla kontraktsadressen. +Du måste distribuera om subgrafen, men om subgrafens ID (IPFS-hash) inte ändras behöver den inte synkroniseras från början. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +Inom en subgraf behandlas händelser alltid i den ordning de visas i blocken, oavsett om det är över flera kontrakt eller inte. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. 
When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Kolla in avsnittet "Instansiera en mall för datakälla" på: [Mallar för datakällor](/developing/creating-a-subgraph#data-source-templates). -## 8. Hur ser jag till att jag använder den senaste versionen av graph-node för mina lokala distributioner? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -Du kan köra följande kommando: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**OBS:** Docker / docker-compose kommer alltid att använda den graph-node-version som hämtades första gången du körde det, så det är viktigt att göra detta för att se till att du är uppdaterad med den senaste versionen av graph-node. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. Hur anropar jag en kontraktsfunktion eller får åtkomst till en offentlig statisk variabel från mina subgraf-mappningar? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. Är det möjligt att ställa in en subgraf med hjälp av `graph init` från `graph-cli` med två kontrakt? Eller bör jag manuellt lägga till en annan datakälla i `subgraph.yaml` efter att jag har kört `graph init`? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +Du kan köra följande kommando: -## 11. Jag vill bidra eller lägga till ett GitHub-ärende. Var hittar jag öppna källkodsrepository? +```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. Vad är det rekommenderade sättet att bygga "automatiskt genererade" ID för en entitet när man hanterar händelser? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? Om endast en entitet skapas under händelsen och om inget bättre är tillgängligt, skulle transaktionshashen + loggindexet vara unikt. 
Du kan förvränga dessa genom att konvertera dem till bytes och sedan skicka dem genom `crypto.keccak256`, men detta kommer inte att göra dem mer unika. -## 13. När du lyssnar på flera kontrakt, är det möjligt att välja kontraktsordningen för att lyssna på händelser? +### 15. Can I delete my subgraph? -Inom en subgraf behandlas händelser alltid i den ordning de visas i blocken, oavsett om det är över flera kontrakt eller inte. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +Du kan hitta listan över de stödda nätverken [här](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Ja, du kan göra detta genom att importera `graph-ts` enligt exemplet nedan: ```javascript -import { dataSource } from '@graphprotocol/graph-ts' +import { dataSource } from "@graphprotocol/graph-ts" dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Kan jag importera ethers.js eller andra JS-bibliotek i mina subgraf-mappningar? - -För närvarande inte, eftersom mappningar är skrivna i AssemblyScript. En möjlig alternativ lösning på detta är att lagra rådata i enheter och utföra logik som kräver JS-bibliotek på klienten. +## Indexing & Querying Related -## 17. Är det möjligt att specificera vilket block som ska börja indexera? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Finns det några tips för att öka prestandan vid indexering? Min subgraf tar väldigt lång tid att synkronisera +### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync -Ja, du bör titta på den valfria funktionen för startblock för att börja indexera från det block där kontraktet distribuerades: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Finns det ett sätt att direkt fråga subgrafen för att ta reda på det senaste blocknumret den har indexerat? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Ja! 
Prova följande kommando och ersätt "organization/subgraphName" med organisationen under vilken den är publicerad och namnet på din subgraf: @@ -102,44 +121,27 @@ Ja! Prova följande kommando och ersätt "organization/subgraphName" med organis curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. Vilka nätverk stöds av The Graph? - -Du kan hitta listan över de stödda nätverken [här](/developing/supported-networks). - -## 21. Är det möjligt att duplicera en subgraf till ett annat konto eller en annan slutpunkt utan att distribuera om? - -Du måste distribuera om subgrafen, men om subgrafens ID (IPFS-hash) inte ändras behöver den inte synkroniseras från början. - -## 22. Är det möjligt att använda Apollo Federation ovanpå graph-node? +### 22. Is there a limit to how many objects The Graph can return per query? -Federation stöds ännu inte, även om vi planerar att stödja det i framtiden. För närvarande kan du använda schema stitching, antingen på klienten eller via en proxytjänst. - -## 23. Finns det en begränsning för hur många objekt The Graph kan returnera per fråga? - -Som standard är frågesvar begränsade till 100 objekt per samling. Om du vill ha fler kan du gå upp till 1000 objekt per samling och bortom det kan du paginera med: +By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -## 24. Om min dapp-frontänd använder The Graph för frågor, måste jag skriva in min frågenyckel direkt i frontänden? Vad händer om vi betalar frågeavgifter för användare – kommer skadliga användare att orsaka mycket höga frågeavgifter? - -För närvarande är det rekommenderade tillvägagångssättet för en dapp att lägga till nyckeln i frontänden och exponera den för slutanvändare. Med det sagt kan du begränsa den nyckeln till en värdnamn, som _yourdapp.io_ och subgraphen. Gatewayen drivs för närvarande av Edge & Node. En del av gatewayens ansvar är att övervaka missbruk och blockera trafik från skadliga klienter. - -## 25. Where do I go to find my current subgraph on the hosted service? - -Gå till hosted service för att hitta subgrafer som du eller andra har distribuerat till hosted service. Du hittar den [här](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -The Graph kommer aldrig ta ut avgifter för hosted service. The Graph är en decentraliserad protokoll, och att ta ut avgifter för en centraliserad tjänst är inte i linje med The Graphs värderingar. Hosted service var alltid ett tillfälligt steg för att hjälpa till att nå det decentraliserade nätverket. Utvecklare kommer att ha tillräckligt med tid att uppgradera till det decentraliserade nätverket när de är bekväma med det. +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. 
Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/sv/developing/graph-ts/api.mdx b/website/pages/sv/developing/graph-ts/api.mdx index 227b7f3ee2bc..2dbb19d8f5cf 100644 --- a/website/pages/sv/developing/graph-ts/api.mdx +++ b/website/pages/sv/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: API för AssemblyScript --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -Denna sida dokumenterar vilka inbyggda API: er som kan användas när man skriver mappningar av undergrafer. Två typer av API: er är tillgängliga från start: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. 
+You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## API-referens @@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. -| Version | Versionsanteckningar | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Versionsanteckningar | +| :-----: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Inbyggda typer @@ -161,7 +163,7 @@ _Math_ #### TypedMap ```typescript -import { TypedMap } from '@graphprotocol/graph-ts' +import { TypedMap } from "@graphprotocol/graph-ts"; ``` `TypedMap` can be used to store key-value pairs. See [this example](https://github.com/graphprotocol/aragon-subgraph/blob/29dd38680c5e5104d9fdc2f90e740298c67e4a31/individual-dao-subgraph/mappings/constants.ts#L51). @@ -177,7 +179,7 @@ The `TypedMap` class has the following API: #### Bytes ```typescript -import { Bytes } from '@graphprotocol/graph-ts' +import { Bytes } from "@graphprotocol/graph-ts"; ``` `Bytes` is used to represent arbitrary-length arrays of bytes. This includes Ethereum values of type `bytes`, `bytes32`, etc. @@ -203,7 +205,7 @@ _Operators_ #### Address ```typescript -import { Address } from '@graphprotocol/graph-ts' +import { Address } from "@graphprotocol/graph-ts"; ``` `Address` extends `Bytes` to represent Ethereum `address` values. @@ -216,7 +218,7 @@ It adds the following method on top of the `Bytes` API: ### Store API ```typescript -import { store } from '@graphprotocol/graph-ts' +import { store } from "@graphprotocol/graph-ts"; ``` The `store` API allows to load, save and remove entities from and to the Graph Node store. @@ -229,60 +231,65 @@ Följande är ett vanligt mönster för att skapa entiteter från Ethereum-händ ```typescript // Importera händelseklassen Transfer som genererats från ERC20 ABI -import { Transfer as TransferEvent } from '../generated/ERC20/ERC20' +import { Transfer as TransferEvent } from "../generated/ERC20/ERC20"; // Importera entitetstypen Transfer som genererats från GraphQL-schemat -import { Transfer } from '../generated/schema' +import { Transfer } from "../generated/schema"; // Händelsehanterare för överföring export function handleTransfer(event: TransferEvent): void { // Skapa en Transfer-entitet, med transaktionshash som enhets-ID - let id = event.transaction.hash - let transfer = new Transfer(id) + let id = event.transaction.hash; + let transfer = new Transfer(id); // Ange egenskaper för entiteten med hjälp av händelseparametrarna - transfer.from = event.params.from - transfer.to = event.params.to - transfer.amount = event.params.amount + transfer.from = event.params.from; + transfer.to = event.params.to; + transfer.amount = event.params.amount; // Spara entiteten till lagret - transfer.save() + transfer.save(); } ``` When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Varje entitet måste ha en unik ID för att undvika kollisioner med andra entiteter. Det är ganska vanligt att händelsens parametrar inkluderar en unik identifierare som kan användas. Observera: Att använda transaktionshashen som ID förutsätter att inga andra händelser i samma transaktion skapar entiteter med denna hash som ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. 
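If one transaction can emit the same event more than once, a common way around this (shown here as a sketch, assuming a `graph-ts` version recent enough to provide `Bytes.concatI32`) is to append the log index to the transaction hash:

```typescript
// Sketch: the log index is unique per event within a transaction, so the
// combined value stays unique even when several Transfer events share a hash.
let id = event.transaction.hash.concatI32(event.logIndex.toI32());
let transfer = new Transfer(id);
```

A string-based variant such as `event.transaction.hash.toHex() + "-" + event.logIndex.toString()` works as well when the entity's `id` is a `String` rather than `Bytes`.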
#### Ladda entiteter från lagret Om en entitet redan finns kan den laddas från lagret med följande: ```typescript -let id = event.transaction.hash // eller hur ID konstrueras -let transfer = Transfer.load(id) +let id = event.transaction.hash; // eller hur ID konstrueras +let transfer = Transfer.load(id); if (transfer == null) { - transfer = new Transfer(id) + transfer = new Transfer(id); } // Använd överföringsenheten som tidigare ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Sökning av entiteter skapade inom ett block As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -API:et för lagret underlättar hämtningen av entiteter som skapades eller uppdaterades i det aktuella blocket. En vanlig situation för detta är att en hanterare skapar en transaktion från någon händelse på kedjan, och en senare hanterare vill komma åt denna transaktion om den finns. I det fall då transaktionen inte finns, måste subgraphen gå till databasen bara för att ta reda på att entiteten inte finns; om subgraphförfattaren redan vet att entiteten måste ha skapats i samma block, undviker man detta databasbesök genom att använda loadInBlock. För vissa subgrapher kan dessa missade sökningar bidra avsevärt till indexeringstiden. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript -let id = event.transaction.hash // eller hur ID konstrueras -let transfer = Transfer.loadInBlock(id) +let id = event.transaction.hash; // eller hur ID konstrueras +let transfer = Transfer.loadInBlock(id); if (transfer == null) { - transfer = new Transfer(id) + transfer = new Transfer(id); } // Använd överföringsenheten som tidigare @@ -336,7 +343,7 @@ transfer.amount = ... 
Det är också möjligt att avaktivera egenskaper med en av följande två instruktioner: ```typescript -transfer.from.unset() +transfer.from.unset(); transfer.from = null ``` @@ -346,14 +353,14 @@ Updating array properties is a little more involved, as the getting an array fro ```typescript // Detta kommer inte att fungera -entity.numbers.push(BigInt.fromI32(1)) -entity.save() +entity.numbers.push(BigInt.fromI32(1)); +entity.save(); // Detta kommer att fungera -let numbers = entity.numbers -numbers.push(BigInt.fromI32(1)) -entity.numbers = numbers -entity.save() +let numbers = entity.numbers; +numbers.push(BigInt.fromI32(1)); +entity.numbers = numbers; +entity.save(); ``` #### Ta bort entiteter från lagret @@ -391,12 +398,12 @@ type Transfer @entity { and a `Transfer(address,address,uint256)` event signature on Ethereum, the `from`, `to` and `amount` values of type `address`, `address` and `uint256` are converted to `Address` and `BigInt`, allowing them to be passed on to the `Bytes!` and `BigInt!` properties of the `Transfer` entity: ```typescript -let id = event.transaction.hash -let transfer = new Transfer(id) -transfer.from = event.params.from -transfer.to = event.params.to -transfer.amount = event.params.amount -transfer.save() +let id = event.transaction.hash; +let transfer = new Transfer(id); +transfer.from = event.params.from; +transfer.to = event.params.to; +transfer.amount = event.params.amount; +transfer.save(); ``` #### Händelser och Block/Transaktionsdata @@ -482,16 +489,19 @@ En vanlig mönster är att komma åt kontraktet från vilket en händelse härst ```typescript // Importera den genererade kontraktsklassen och den genererade klassen för överföringshändelser -import { ERC20Contract, Transfer as TransferEvent } from '../generated/ERC20Contract/ERC20Contract' +import { + ERC20Contract, + Transfer as TransferEvent, +} from "../generated/ERC20Contract/ERC20Contract"; // Importera den genererade entitetsklassen -import { Transfer } from '../generated/schema' +import { Transfer } from "../generated/schema"; export function handleTransfer(event: TransferEvent) { // Bind kontraktet till den adress som skickade händelsen - let contract = ERC20Contract.bind(event.address) + let contract = ERC20Contract.bind(event.address); // Åtkomst till tillståndsvariabler och funktioner genom att anropa dem - let erc20Symbol = contract.symbol() + let erc20Symbol = contract.symbol(); } ``` @@ -503,19 +513,21 @@ Andra kontrakt som är en del av subgraphen kan importeras från den genererade #### Hantering av återkallade anrop -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. 
This code would be able to handle a revert in that method: ```typescript -let gravitera = gravitera.bind(event.address) -let callResult = gravitera_gravatarToOwner(gravatar) +let gravitera = gravitera.bind(event.address); +let callResult = gravitera_gravatarToOwner(gravatar); if (callResult.reverted) { - log.info('getGravatar reverted', []) + log.info("getGravatar reverted", []); } else { - let owner = callResult.value + let owner = callResult.value; } ``` -Observera att en Graf-nod ansluten till en Geth eller Infura klient kanske inte upptäcker alla återkallade anrop. Om du förlitar dig på detta rekommenderar vi att du använder en Graph nod som är ansluten till en Parity klient. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Kodning/Dekodning av ABI @@ -570,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false ### API för loggning ```typescript -import { log } from '@graphprotocol/graph-ts' +import { log } from "@graphprotocol/graph-ts"; ``` The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument. @@ -586,7 +598,11 @@ The `log` API includes the following functions: The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on. ```typescript -log.info('Message to be displayed: {}, {}, {}', [value.toString(), anotherValue.toString(), 'already a string']) +log.info("Message to be displayed: {}, {}, {}", [ + value.toString(), + anotherValue.toString(), + "already a string", +]); ``` #### Loggning av ett eller flera värden @@ -609,11 +625,11 @@ export function handleSomeEvent(event: SomeEvent): void { I exemplet nedan loggas endast det första värdet i argument arrayen, trots att arrayen innehåller tre värden. ```typescript -let myArray = ['A', 'B', 'C'] +let myArray = ["A", "B", "C"]; export function handleSomeEvent(event: SomeEvent): void { // Visar : "Mitt värde är: A" (Även om tre värden skickas till `log.info`) - log.info('Mitt värde är: {}', myArray) + log.info("Mitt värde är: {}", myArray); } ``` @@ -622,11 +638,14 @@ export function handleSomeEvent(event: SomeEvent): void { Each entry in the arguments array requires its own placeholder `{}` in the log message string. The below example contains three placeholders `{}` in the log message. Because of this, all three values in `myArray` are logged. 
```typescript -let myArray = ['A', 'B', 'C'] +let myArray = ["A", "B", "C"]; export function handleSomeEvent(event: SomeEvent): void { // Visar: "Mitt första värde är: A, andra värdet är: B, tredje värdet är: C" - log.info('My first value is: {}, second value is: {}, third value is: {}', myArray) + log.info( + "My first value is: {}, second value is: {}, third value is: {}", + myArray + ); } ``` @@ -637,7 +656,7 @@ För att visa ett specifikt värde i arrayen måste det indexeras och tillhandah ```typescript export function handleSomeEvent(event: SomeEvent): void { // Visar : "Mitt tredje värde är C" - log.info('My third value is: {}', [myArray[2]]) + log.info("My third value is: {}", [myArray[2]]); } ``` @@ -646,21 +665,21 @@ export function handleSomeEvent(event: SomeEvent): void { I exemplet nedan loggas blocknummer, blockhash och transaktionshash från en händelse: ```typescript -import { log } from '@graphprotocol/graph-ts' +import { log } from "@graphprotocol/graph-ts"; export function handleSomeEvent(event: SomeEvent): void { - log.debug('Block number: {}, block hash: {}, transaction hash: {}', [ + log.debug("Block number: {}, block hash: {}, transaction hash: {}", [ event.block.number.toString(), // "47596000" event.block.hash.toHexString(), // "0x..." event.transaction.hash.toHexString(), // "0x..." - ]) + ]); } ``` ### IPFS API ```typescript -import { ipfs } from '@graphprotocol/graph-ts' +import { ipfs } from "@graphprotocol/graph-ts" ``` Smart contracts occasionally anchor IPFS files on chain. This allows mappings to obtain the IPFS hashes from the contract and read the corresponding files from IPFS. The file data will be returned as `Bytes`, which usually requires further processing, e.g. with the `json` API documented later on this page. @@ -669,13 +688,13 @@ För att läsa en fil från IPFS med en given IPFS-hash eller sökväg görs fö ```typescript // Placera detta i en händelsehanterare i mappningen -let hash = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D' -let data = ipfs.cat(hash) +let hash = "QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D"; +let data = ipfs.cat(hash); // Sökvägar som `QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile` // som inkluderar filer i kataloger stöds också -let path = 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile' -let data = ipfs.cat(path) +let path = "QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/Makefile"; +let data = ipfs.cat(path); ``` **Note:** `ipfs.cat` is not deterministic at the moment. If the file cannot be retrieved over the IPFS network before the request times out, it will return `null`. Due to this, it's always worth checking the result for `null`. @@ -683,31 +702,31 @@ let data = ipfs.cat(path) It is also possible to process larger files in a streaming fashion with `ipfs.map`. 
The function expects the hash or path for an IPFS file, the name of a callback, and flags to modify its behavior: ```typescript -import { JSONValue, Value } from '@graphprotocol/graph-ts' +import { JSONValue, Value } from "@graphprotocol/graph-ts"; export function processItem(value: JSONValue, userData: Value): void { // Se JSONValue-dokumentationen för mer information om hur man hanterar // med JSON-värden - let obj = value.toObject() - let id = obj.get('id') - let title = obj.get('title') + let obj = value.toObject(); + let id = obj.get("id"); + let title = obj.get("title"); if (!id || !title) { - return + return; } // Callbacks kan också skapa enheter - let newItem = new Item(id) - newItem.title = title.toString() - newitem.parent = userData.toString() // Ange parent till "parentId" - newitem.save() + let newItem = new Item(id); + newItem.title = title.toString(); + newitem.parent = userData.toString(); // Ange parent till "parentId" + newitem.save(); } // Placera detta i en händelsehanterare i mappningen -ipfs.map('Qm...', 'processItem', Value.fromString('parentId'), ['json']) +ipfs.map("Qm...", "processItem", Value.fromString("parentId"), ["json"]); // Alternativt kan du använda `ipfs.mapJSON`. -ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId')) +ipfs.mapJSON("Qm...", "processItem", Value.fromString("parentId")); ``` The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited. @@ -717,7 +736,7 @@ On success, `ipfs.map` returns `void`. If any invocation of the callback causes ### Crypto API ```typescript -import { crypto } from '@graphprotocol/graph-ts' +import { crypto } from "@graphprotocol/graph-ts"; ``` The `crypto` API makes a cryptographic functions available for use in mappings. Right now, there is only one: @@ -727,7 +746,7 @@ The `crypto` API makes a cryptographic functions available for use in mappings. ### JSON API ```typescript -import { json, JSONValueKind } from '@graphprotocol/graph-ts' +import { json, JSONValueKind } from "@graphprotocol/graph-ts" ``` JSON data can be parsed using the `json` API: diff --git a/website/pages/sv/developing/supported-networks.mdx b/website/pages/sv/developing/supported-networks.mdx index 47341ff9f563..c8bf3ab7a332 100644 --- a/website/pages/sv/developing/supported-networks.mdx +++ b/website/pages/sv/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. 
-- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - För en fullständig lista över vilka funktioner som stöds på det decentraliserade nätverket, se [den här sidan](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/sv/developing/unit-testing-framework.mdx b/website/pages/sv/developing/unit-testing-framework.mdx index 67c3d602ef32..6214463b9c84 100644 --- a/website/pages/sv/developing/unit-testing-framework.mdx +++ b/website/pages/sv/developing/unit-testing-framework.mdx @@ -52,7 +52,7 @@ eller /node_modules/gluegun/build/index.js:13 throw up; ``` -Se till att du använder en nyare version av Node.js eftersom graph-cli inte längre stöder **v10.19.0**, och det är fortfarande standardversionen för nya Ubuntu-bilder på WSL. Till exempel är Matchstick bekräftat fungerande på WSL med **v18.1.0**. Du kan byta till den antingen via** nvm ** eller genom att uppdatera din globala Node.js. Glöm inte att ta bort `node_modules` och köra `npm install`igen efter att du har uppdaterat Node.js! Sedan, se till att du har **libpq** installerat, du kan göra det genom att köra +Se till att du använder en nyare version av Node.js eftersom graph-cli inte längre stöder **v10.19.0**, och det är fortfarande standardversionen för nya Ubuntu-bilder på WSL. Till exempel är Matchstick bekräftat fungerande på WSL med **v18.1.0**. Du kan byta till den antingen via** nvm ** eller genom att uppdatera din globala Node.js. Glöm inte att ta bort `node_modules` och köra ` npm install `igen efter att du har uppdaterat Node.js! Sedan, se till att du har **libpq** installerat, du kan göra det genom att köra ``` sudo apt-get install libpq-dev @@ -1368,18 +1368,18 @@ Loggutmatningen innehåller testkörningens varaktighet. Här är ett exempel: > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -Det betyder att du har använt `console.log` i din kod, som inte stöds av AssemblyScript. Överväg att använda [Logging API](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) Motsägelsen i argumenten beror på en motsägelse i `graph-ts` och `matchstick-as`. 
Det bästa sättet att åtgärda problem som detta är att uppdatera allt till den senaste utgivna versionen. diff --git a/website/pages/sv/glossary.mdx b/website/pages/sv/glossary.mdx index 5623c1c1b9fc..a967f04e7c60 100644 --- a/website/pages/sv/glossary.mdx +++ b/website/pages/sv/glossary.mdx @@ -10,11 +10,9 @@ title: Ordlista - **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **Indexers**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. @@ -24,17 +22,17 @@ title: Ordlista - **Indexer's Self Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegators**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegationsavgift **: En avgift på 0,5% som betalas av Delegatorer när de delegerar GRT till Indexers. Det GRT som används för att betala avgiften bränns. -- **Curators**: Network participants that identify high-quality subgraphs, and “curate” them (i.e., signal GRT on them) in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. 
Indexers earn indexing rewards proportional to the signal on a subgraph. We see a correlation between the amount of GRT signalled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. -- **Subgraph Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a subgraph. - **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. @@ -46,11 +44,11 @@ title: Ordlista 1. **Aktiv**: En allokering anses vara aktiv när den skapas på kedjan. Detta kallas att öppna en allokering och indikerar för nätverket att Indexer aktivt indexerar och betjänar frågor för en särskild subgraf. Aktiva allokeringar ackumulerar indexbelöningar proportionellt mot signalen på subgrafen och mängden GRT som allokerats. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **SubGraf Studio**: En kraftfull dapp för att bygga, distribuera och publicera subgrafer. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. 
Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: Ordlista - **GRT**: The Graph's arbetsnytto-token. GRT tillhandahåller ekonomiska incitament för nätverksdeltagare att bidra till nätverket. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Grafnod**: Graph Node är komponenten som indexerar subgrafer och gör den resulterande datan tillgänglig för frågor via ett GraphQL API. Som sådan är den central för indexeringsstacken och korrekt drift av Graph Node är avgörande för att köra en framgångsrik Indexer. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer-agent**: Indexer-agenten är en del av indexeringsstacken. Den underlättar Indexers interaktioner på kedjan, inklusive registrering på nätverket, hantering av subgrafers distributioner till dess Graph Node(s), och hantering av allokeringar. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Klient**: Ett bibliotek för att bygga decentraliserade dappar baserade på GraphQL. @@ -78,10 +76,6 @@ title: Ordlista - **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. -- **_Uppgradering_ av en subgraf till The Graf Nätverk**: Processen att flytta en subgraf från hosted service till The Graph Nätverk. - -- **_Uppdatering_ av en subgraf**: Processen att släppa en ny subgrafversion med uppdateringar av subgrafens manifest, schema eller avbildning. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). 
-
-- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024.
diff --git a/website/pages/sv/index.json b/website/pages/sv/index.json
index 6556272356e0..0bf12ec4ac63 100644
--- a/website/pages/sv/index.json
+++ b/website/pages/sv/index.json
@@ -21,10 +21,6 @@
     "createASubgraph": {
       "title": "Skapa en Subgraf",
       "description": "Använd Studio för att skapa subgrafer"
-    },
-    "migrateFromHostedService": {
-      "title": "Upgrade from the hosted service",
-      "description": "Upgrading subgraphs to The Graph Network"
     }
   },
   "networkRoles": {
diff --git a/website/pages/sv/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/sv/managing/transfer-and-deprecate-a-subgraph.mdx
new file mode 100644
index 000000000000..4b0ec665a9e2
--- /dev/null
+++ b/website/pages/sv/managing/transfer-and-deprecate-a-subgraph.mdx
@@ -0,0 +1,65 @@
+---
+title: Transfer and Deprecate a Subgraph
+---
+
+## Överföring av ägande för en subgraf
+
+Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
+
+**Please note the following:**
+
+- Whoever owns the NFT controls the subgraph.
+- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network.
+- You can easily move control of a subgraph to a multi-sig.
+- A community member can create a subgraph on behalf of a DAO.
+
+### View your subgraph as an NFT
+
+To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
+
+```
+https://opensea.io/your-wallet-address
+```
+
+Or a wallet explorer like **Rainbow.me**:
+
+```
+https://rainbow.me/your-wallet-address
+```
+
+### Step-by-Step
+
+To transfer ownership of a subgraph, do the following:
+
+1. Use the UI built into Subgraph Studio:
+
+   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png)
+
+2. Choose the address that you would like to transfer the subgraph to:
+
+   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png)
+
+Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea:
+
+![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png)
+
+## Deprecating a subgraph
+
+Although you cannot delete a subgraph, you can deprecate it on Graph Explorer.
+
+### Step-by-Step
+
+To deprecate your subgraph, do the following:
+
+1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract).
+2. Call `deprecateSubgraph` with your `SubgraphID` as your argument.
+3. Your subgraph will no longer appear in searches on Graph Explorer.
+
+**Please note the following:**
+
+- The owner's wallet should call the `deprecateSubgraph` function.
+- Kuratorer kommer inte längre kunna signalera på subgrafen.
+- Curators that already signaled on the subgraph can withdraw their signal at an average share price.
+- Deprecated subgraphs will show an error message.
+
+> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively.
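+
+For reference, the `deprecateSubgraph` call above can also be made from a script instead of the Arbiscan UI. The sketch below is illustrative only: it assumes an ethers.js v6 environment, a hypothetical `PRIVATE_KEY` environment variable for the owner's wallet, and a placeholder subgraph ID, and the exact function ABI should always be verified against the proxy contract on Arbiscan.
+
+```typescript
+import { ethers } from "ethers";
+
+// GNS proxy on Arbitrum One (same address as in step 1 above).
+const GNS_ADDRESS = "0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec";
+// Assumed ABI fragment – confirm the signature on Arbiscan before using.
+const GNS_ABI = ["function deprecateSubgraph(uint256 _subgraphID)"];
+
+async function deprecateSubgraph(subgraphId: bigint): Promise<void> {
+  const provider = new ethers.JsonRpcProvider("https://arb1.arbitrum.io/rpc");
+  // Must be the wallet that owns the subgraph NFT.
+  const owner = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);
+  const gns = new ethers.Contract(GNS_ADDRESS, GNS_ABI, owner);
+
+  const tx = await gns.deprecateSubgraph(subgraphId);
+  await tx.wait(); // once mined, the subgraph stops appearing in Graph Explorer searches
+}
+
+deprecateSubgraph(1234n); // hypothetical SubgraphID
+```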
diff --git a/website/pages/sv/mips-faqs.mdx b/website/pages/sv/mips-faqs.mdx index 2f53debe4124..89e9045114c8 100644 --- a/website/pages/sv/mips-faqs.mdx +++ b/website/pages/sv/mips-faqs.mdx @@ -6,10 +6,6 @@ title: Vanliga Frågor om MIPs > Observera: MIPs-programmet är avslutat sedan maj 2023. Tack till alla Indexers som deltog! -Det är en spännande tid att delta i The Graph-ekosystemet! Under [Graph Day 2022](https://thegraph.com/graph-day/2022/) tillkännagav Yaniv Tal [avslutningen av den hostade tjänsten](https://thegraph.com/blog/sunsetting-hosted-service/), ett ögonblick som The Graph-ekosystemet har arbetat mot i många år. - -För att stödja avslutningen av den hostade tjänsten och migrationen av all dess aktivitet till det decentraliserade nätverket har The Graph Foundation tillkännagivit [Migration Infrastructure Providers (MIPs) programmet](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program). - MIPs-programmet är ett incitamentsprogram för Indexers för att stödja dem med resurser att indexera kedjor bortom Ethereum-huvudnätet och hjälpa The Graph-protokollet att expandera det decentraliserade nätverket till en flerlagers infrastruktur. MIPs-programmet har allokerat 0,75% av GRT-försörjningen (75M GRT), med 0,5% för att belöna Indexers som bidrar till att starta nätverket och 0,25% som tilldelats Network Grants för subgraph-utvecklare som använder flerlags-subgraphs. diff --git a/website/pages/sv/network/benefits.mdx b/website/pages/sv/network/benefits.mdx index 262dada1844c..4a43665d6fa8 100644 --- a/website/pages/sv/network/benefits.mdx +++ b/website/pages/sv/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Kostnadsjämförelse | Egen Värd | The Graph Nätverk | -| :-: | :-: | :-: | -| Månatlig kostnad för server\* | $350 per månad | $0 | -| Kostnad för frågor | $0+ | $0 per month | -| Konstruktionstid | $400 per månad | Ingen, inbyggd i nätverket med globalt distribuerade Indexers | -| Frågor per månad | Begränsad till infra kapacitet | 100,000 (Free Plan) | -| Kostnad per fråga | $0 | $0 | -| Infrastruktur | Centraliserad | Decentraliserad | -| Geografisk redundans | $750+ per extra nod | Inkluderat | -| Drifttid | Varierande | 99.9%+ | -| Total Månadskostnad | $750+ | $0 | +| Kostnadsjämförelse | Egen Värd | The Graph Nätverk | +|:-------------------------------:|:---------------------------------------:|:-------------------------------------------------------------:| +| Månatlig kostnad för server\* | $350 per månad | $0 | +| Kostnad för frågor | $0+ | $0 per month | +| Konstruktionstid | $400 per månad | Ingen, inbyggd i nätverket med globalt distribuerade Indexers | +| Frågor per månad | Begränsad till infra kapacitet | 100,000 (Free Plan) | +| Kostnad per fråga | $0 | $0 | +| Infrastruktur | Centraliserad | Decentraliserad | +| Geografisk redundans | $750+ per extra nod | Inkluderat | +| Drifttid | Varierande | 99.9%+ | +| Total Månadskostnad | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Kostnadsjämförelse | Egen Värd | The Graph Nätverk | -| :-: | :-: | :-: | -| Månadskostnad för server\* | $350 per månad | $0 | -| Kostnad för frågor | $500 per månad | $120 per month | -| Ingenjörstid | $800 per månad | Ingen, inbyggd i nätverket med globalt distribuerade Indexers | -| Frågor per månad | Begränsad till infra kapacitet | ~3,000,000 | -| Kostnad per fråga | $0 | $0.00004 | -| Infrastruktur | Centraliserad | 
Decentraliserad | -| Kostnader för ingenjörsarbete | $200 per timme | Inkluderat | -| Geografisk redundans | $1,200 i totala kostnader per extra nod | Inkluderat | -| Drifttid | Varierar | 99.9%+ | -| Total Månadskostnad | $1,650+ | $120 | +| Kostnadsjämförelse | Egen Värd | The Graph Nätverk | +|:-----------------------------:|:------------------------------------------:|:-------------------------------------------------------------:| +| Månadskostnad för server\* | $350 per månad | $0 | +| Kostnad för frågor | $500 per månad | $120 per month | +| Ingenjörstid | $800 per månad | Ingen, inbyggd i nätverket med globalt distribuerade Indexers | +| Frågor per månad | Begränsad till infra kapacitet | ~3,000,000 | +| Kostnad per fråga | $0 | $0.00004 | +| Infrastruktur | Centraliserad | Decentraliserad | +| Kostnader för ingenjörsarbete | $200 per timme | Inkluderat | +| Geografisk redundans | $1,200 i totala kostnader per extra nod | Inkluderat | +| Drifttid | Varierar | 99.9%+ | +| Total Månadskostnad | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Kostnadsjämförelse | Egen Värd | The Graph Nätverk | -| :-: | :-: | :-: | -| Månadskostnad för server\* | $1100 per månad, per nod | $0 | -| Kostnad för frågor | $4000 | $1,200 per month | -| Antal noder som behövs | 10 | Ej tillämpligt | -| Ingenjörstid | $6,000 eller mer per månad | Ingen, inbyggd i nätverket med globalt distribuerade Indexers | -| Frågor per månad | Begränsad till infra kapacitet | ~30,000,000 | -| Kostnad per fråga | $0 | $0.00004 | -| Infrastruktur | Centraliserad | Decentraliserad | -| Geografisk redundans | $1,200 i totala kostnader per extra nod | Inkluderat | -| Drifttid | Varierar | 99.9%+ | -| Total Månadskostnad | $11,000+ | $1,200 | +| Kostnadsjämförelse | Egen Värd | The Graph Nätverk | +|:----------------------------:|:-------------------------------------------:|:-------------------------------------------------------------:| +| Månadskostnad för server\* | $1100 per månad, per nod | $0 | +| Kostnad för frågor | $4000 | $1,200 per month | +| Antal noder som behövs | 10 | Ej tillämpligt | +| Ingenjörstid | $6,000 eller mer per månad | Ingen, inbyggd i nätverket med globalt distribuerade Indexers | +| Frågor per månad | Begränsad till infra kapacitet | ~30,000,000 | +| Kostnad per fråga | $0 | $0.00004 | +| Infrastruktur | Centraliserad | Decentraliserad | +| Geografisk redundans | $1,200 i totala kostnader per extra nod | Inkluderat | +| Drifttid | Varierar | 99.9%+ | +| Total Månadskostnad | $11,000+ | $1,200 | \*inklusive kostnader för backup: $50-$100 per månad diff --git a/website/pages/sv/network/curating.mdx b/website/pages/sv/network/curating.mdx index 23a394b3a1a1..5d2091da795e 100644 --- a/website/pages/sv/network/curating.mdx +++ b/website/pages/sv/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. 
+Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Att signalera på en specifik version är särskilt användbart när en subgraf Att ha din signal automatiskt migrerad till den nyaste produktionsversionen kan vara värdefullt för att säkerställa att du fortsätter att ackumulera frågeavgifter. Varje gång du signalerar åläggs en kuratoravgift på 1%. Du kommer också att betala en kuratoravgift på 0,5% vid varje migration. Subgrafutvecklare uppmanas att inte publicera nya versioner för ofta - de måste betala en kuratoravgift på 0,5% på alla automatiskt migrerade kuratorandelar. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Risker 1. Frågemarknaden är i grunden ung på The Graph och det finns en risk att din %APY kan vara lägre än du förväntar dig på grund av tidiga marknadsmekanik. -2. 
Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. En subgraf kan misslyckas på grund av en bugg. En misslyckad subgraf genererar inte frågeavgifter. Som ett resultat måste du vänta tills utvecklaren rättar felet och distribuerar en ny version. - Om du prenumererar på den nyaste versionen av en subgraf kommer dina andelar automatiskt att migreras till den nya versionen. Detta kommer att medföra en kuratoravgift på 0,5%. @@ -65,7 +65,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th Att hitta högkvalitativa subgrafer är en komplex uppgift, men den kan närmas på många olika sätt. Som kurator vill du leta efter pålitliga subgrafer som genererar frågevolym. En pålitlig subgraf kan vara värdefull om den är komplett, korrekt och stöder en dApps datamässiga behov. En dåligt utformad subgraf kan behöva revideras eller publiceras på nytt och kan också misslyckas. Det är avgörande för kuratorer att granska en subgrafs arkitektur eller kod för att bedöma om en subgraf är värdefull. Som ett resultat: -- Kuratorer kan använda sin förståelse för nätverket för att försöka förutsäga hur en enskild subgraf kan generera en högre eller lägre frågevolym i framtiden +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. Vad kostar det att uppdatera en subgraf? @@ -78,50 +78,14 @@ Det föreslås att du inte uppdaterar dina subgrafer för ofta. Se frågan ovan ### 5. Kan jag sälja mina kuratorandelar? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. 
If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Bindningskurva 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Pris per andel](/img/price-per-share.png) - -Som ett resultat ökar priset linjärt, vilket innebär att det blir dyrare att köpa en andel över tiden. Här är ett exempel på vad vi menar, se bindningskurvan nedan: - -![Bindningskurva](/img/bonding-curve.png) - -Låt oss säga att vi har två kuratorer som präglar andelar för en subgraf: - -- Kurator A är den första att signalera på subgrafen. Genom att lägga till 120 000 GRT i kurvan kan de prägla 2000 andelar. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Eftersom båda kuratorerna har hälften av det totala antalet kuratorandelar skulle de få lika mycket kuratorersättning. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- Den återstående kuratorn skulle nu få all kuratorersättning för den subgrafen. Om de brände sina andelar för att ta ut GRT skulle de få 120 000 GRT. -- **TLDR:** GRT-värderingen av kuratorandelar bestäms av bindningskurvan och kan vara volatil. Det finns potential att ådra sig stora förluster. Att signalera tidigt innebär att du satsar mindre GRT för varje andel. Detta innebär i förlängningen att du tjänar mer kuratorersättning per GRT än senare kuratorer för samma subgraf. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. 
In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -I fallet med The Graph används [Bancors implementation av en bindningskurvformel](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA). - Fortfarande förvirrad? Kolla in vår videohandledning om kurering nedan: diff --git a/website/pages/sv/network/delegating.mdx b/website/pages/sv/network/delegating.mdx index 171c745fdd68..d5043964f8d4 100644 --- a/website/pages/sv/network/delegating.mdx +++ b/website/pages/sv/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegera --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Delegateringsguide -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,64 +34,87 @@ Här nedan listas huvudriskerna med att vara en Delegater i protokollet. Delegater kan inte "slashas" för dåligt beteende, men det finns en avgift för Delegater för att avskräcka dåligt beslutsfattande som kan skada nätverkets integritet. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. 
+ +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### Perioden för upphävande av delegering Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
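+
+Before delegating, it can also help to put rough numbers on the 0.5% delegation tax mentioned above. The sketch below is illustrative only – all figures are hypothetical, and the actual return depends on the Indexer's reward cuts and on network conditions.
+
+```typescript
+// Hypothetical example: how long until delegation rewards cover the 0.5% delegation tax?
+const delegatedGrt = 1_000;      // amount of GRT delegated
+const delegationTax = 0.005;     // 0.5% of the delegation is burned up front
+const assumedYearlyReturn = 0.1; // assumed 10% effective yearly return after the Indexer's cut
+
+const burnedGrt = delegatedGrt * delegationTax;                   // 5 GRT
+const rewardsPerDay = (delegatedGrt * assumedYearlyReturn) / 365; // rewards accrued per day
+const breakEvenDays = burnedGrt / rewardsPerDay;                  // ≈ 18 days with these numbers
+
+console.log(`~${breakEvenDays.toFixed(0)} days to earn back the ${burnedGrt} GRT delegation tax`);
+```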
    - ![Delegation upphävning](/img/Delegation-Unbonding.png) _Observera avgiften på 0,5% i Delegation UI, samt den 28 dagar - långa upphävningsperioden._ + ![Delegation upphävning](/img/Delegation-Unbonding.png) _Observera avgiften +på 0,5% i Delegation UI, samt den 28 dagar långa upphävningsperioden._
    ### Att välja en pålitlig Indexer med en rättvis belöningsutbetalning till Delegater -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
    
-  ![Indexing Edward Cut](/img/Indexing-Reward-Cut.png) *Den översta Indexet ger Delegater 90% av belöningarna. Den
-  mellersta ger Delegater 20%. Den nedersta ger Delegater ~83%.*
+  ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *Den översta Indexet ger
+Delegater 90% av belöningarna. Den mellersta ger Delegater 20%. Den nedersta
+ger Delegater ~83%.*
    
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant.
+- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects.
+
+As you can see, in order to choose the right Indexer, you must consider multiple things.

-As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
+- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently.
+- Many Indexers are very active in Discord and will be happy to answer your questions.
+- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.

-### Beräkning av Delegaters förväntade avkastning
+## Calculating Delegators' Expected Return

-A Delegator must consider a lot of factors when determining the return. These include:
+A Delegator must consider the following factors to determine a return:

-- En teknisk Delegater kan också titta på Indexer's förmåga att använda de Delegerade tokens som är tillgängliga för dem. Om en Indexer inte allokerar alla tillgängliga tokens tjänar de inte maximal vinst de kunde för sig själva eller sina Delegater.
-- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days.
+- Consider an Indexer's ability to use the Delegated tokens available to them.
+  - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators.
+- Pay attention to the first few days of delegating.
+  - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low.

### Att överväga frågebetalningsavgiften och indexeringsavgiften

-Som beskrivs i de ovanstående avsnitten bör du välja en Indexer som är öppen och ärlig om att sätta sina frågebetalningsavgifter och indexeringsavgifter. En Delegater bör också titta på Parametrarnas Kylningstid för att se hur mycket tidsskydd de har. Efter att detta är gjort är det ganska enkelt att beräkna den mängd belöningar som Delegaterna får.
Formeln är: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Delegering Bild 3](/img/Delegation-Reward-Formula.png) ### Att överväga Indexer's delegeringspool -En annan sak som en Delegater måste överväga är vilken proportion av Delegationspoolen de äger. Alla delegationsbelöningar delas jämnt, med en enkel omviktning av poolen som avgörs av det belopp som Delegaterna har deponerat i poolen. Det ger Delegaterna en andel av poolen: +Delegators should consider the proportion of the Delegation Pool they own. -![Dela formel](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Dela formel](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Att överväga delegeringskapaciteten -En annan sak att överväga är delegeringskapaciteten. För närvarande är Delegationsförhållandet inställt på 16. Det innebär att om en Indexer har satsat 1 000 000 GRT är deras Delegationskapacitet 16 000 000 GRT av Delegerade tokens som de kan använda i protokollet. Alla delegerade tokens över denna mängd kommer att utspäda alla Delegaternas belöningar. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -89,16 +122,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### Metamask "Väntande transaktion" bugg -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. 
+
+#### Exempel

-At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts.
+Let's say you attempt to delegate with an insufficient gas fee relative to the current prices.

-For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.
+- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will not be processed until the initial transaction is mined because transactions for an address must be processed in order.
+- In such cases, these transactions can be cancelled in MetaMask, but the transaction attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.

-A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.
+A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.

-## Videoguide för nätverks-UI
+## Video Guide

-This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI.
+This video guide reviews this page while interacting with the UI.

diff --git a/website/pages/sv/network/developing.mdx b/website/pages/sv/network/developing.mdx
index 7509edd5cffb..63aaa1643667 100644
--- a/website/pages/sv/network/developing.mdx
+++ b/website/pages/sv/network/developing.mdx
@@ -2,52 +2,88 @@ title: Utveckling
 ---

-Utvecklare utgör efterfrågesidan av The Graph-ekosystemet. Utvecklare bygger undergrafer och publicerar dem på The Graph Nätverk. Därefter frågar de levande undergrafer med GraphQL för att driva sina applikationer.
+To start coding right away, go to [Developer Quick Start](/quick-start/).
+
+## Översikt
+
+As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue.
+
+On The Graph, you can:
+
+1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing subgraphs.
+
+### What is GraphQL?
+
+- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
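+
+As a quick illustration of what such a query looks like, the sketch below sends a GraphQL query to a subgraph over HTTP. The endpoint shape follows the Graph Explorer gateway URL, and `<API_KEY>`, `<SUBGRAPH_ID>`, and the `tokens` entity are placeholders – the entities you can query depend entirely on the subgraph's own schema.
+
+```typescript
+// Minimal sketch of querying a subgraph with GraphQL over HTTP.
+const url = "https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID>";
+
+const query = `{
+  tokens(first: 5) {
+    id
+    symbol
+  }
+}`;
+
+async function main(): Promise<void> {
+  const response = await fetch(url, {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify({ query }),
+  });
+  const { data, errors } = await response.json();
+  if (errors) throw new Error(JSON.stringify(errors));
+  console.log(data);
+}
+
+main();
+```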
+ +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? + +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `subgraph.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). +- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). ## Subgrafens Livscykel -Undergrafer som distribueras till nätverket har en definierad livscykel. +Here is a general overview of a subgraph’s lifecycle: -### Bygg lokalt +![Livscykel för undergrafer](/img/subgraph-lifecycle.png) -Precis som med all subgrafutveckling börjar det med lokal utveckling och testning. Utvecklare kan använda samma lokala uppsättning oavsett om de bygger för The Graph Nätverk, den värdade tjänsten eller en lokal Graph Node, genom att använda `graph-cli` och `graph-ts` för att bygga sin subgraf. Utvecklare uppmuntras att använda verktyg som [Matchstick](https://github.com/LimeChain/matchstick) för enhetstestning för att förbättra robustheten hos sina subgrafer. +### Bygg lokalt -> Det finns vissa begränsningar på The Graf Nätverk, i termer av funktioner och nätverksstöd. Endast subgrafer på [stödda nätverk](/developing/supported-networks) kommer att tjäna indexbelöningar, och subgrafer som hämtar data från IPFS är heller inte kvalificerade. +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Deploy to Subgraph Studio -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. - -### Publicera till Nätverket +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -När utvecklaren är nöjd med sin subgraf kan de publicera den på The Graf Nätverk. Detta är en on-chain-åtgärd, som registrerar subgrafen så att den kan upptäckas av Indexers. Publicerade subgrafer har en motsvarande NFT, som sedan kan överföras enkelt. 
Den publicerade subgrafen har associerad metadata, som ger andra nätverksdeltagare användbar sammanhang och information. +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. -### Signal för Att Främja Indexering +### Publicera till Nätverket -Publicerade subgrafer kommer troligen inte att plockas upp av Indexers utan tillsats av signal. Signal är låst GRT som är associerat med en given subgraf, vilket indikerar för Indexers att en given subgraf kommer att få frågevolym och bidrar också till de indexbelöningar som är tillgängliga för att bearbeta den. Subgrafutvecklare lägger vanligtvis till signal i sin subgraf för att främja indexering. Tredje part Curators kan också signalera på en given subgraf om de anser att subgrafen sannolikt kommer att generera frågevolym. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Frågor & Applikationsutveckling +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -När en subgraf har bearbetats av Indexers och är tillgänglig för frågor kan utvecklare börja använda subgrafen i sina applikationer. Utvecklare frågar subgrafer via en gateway, som vidarebefordrar deras frågor till en Indexer som har bearbetat subgrafen och betalar frågeavgifter i GRT. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Uppdatering av Subgrafer +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### Frågor & Applikationsutveckling -När Subgrafutvecklaren är redo att uppdatera kan de initiera en transaktion för att peka sin subgraf till den nya versionen. 
Att uppdatera subgrafen migrerar all signal till den nya versionen (förutsatt att användaren som tillämpade signalen valde "auto-migrera"), vilket också medför en migrationsavgift. Denna signalmigration bör få Indexers att börja indexera den nya versionen av subgrafen, så den borde snart bli tillgänglig för frågor. +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Avveckling av Subgrafer +Learn more about [querying subgraphs](/querying/querying-the-graph/). -Vid någon punkt kan en utvecklare besluta att de inte längre behöver en publicerad subgraf. Vid den tidpunkten kan de avveckla subgrafen, vilket returnerar all signalerad GRT till Curators. +### Uppdatering av Subgrafer -### Olika Utvecklarroller +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Vissa utvecklare kommer att engagera sig i hela subgrafens livscykel på nätverket, publicera, fråga och iterera på sina egna subgrafer. Vissa kanske fokuserar på subgrafutveckling, bygger öppna API: er som andra kan bygga på. Vissa kan vara applikationsinriktade och fråga subgrafer som har distribuerats av andra. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Utvecklare och Nätverksekonomi +### Deprecating & Transferring Subgraphs -Utvecklare är en nyckelaktör i nätverket ekonomiskt sett, låser upp GRT för att främja indexering och viktigast av allt, frågar subgrafer, vilket är nätverkets primära värdeutbyte. Subgrafutvecklare bränner också GRT varje gång en subgraf uppdateras. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/sv/network/explorer.mdx b/website/pages/sv/network/explorer.mdx index 41315fc0ab51..1d8facb1b4a8 100644 --- a/website/pages/sv/network/explorer.mdx +++ b/website/pages/sv/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graf Utforskaren --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgraffar -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. 
Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. +After you just finish deploying and publishing your subgraph in Subgraph Studio, click on the "subgraphs tab” at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![Utforskaren Bild 1](/img/Subgraphs-Explorer-Landing.png) -När du klickar in på en subgraff kan du testa frågor i lekplatsen och använda nätverksinformation för att fatta informerade beslut. Du kommer också att kunna signalera GRT på din egen subgraff eller andra subgraffar för att göra indexerare medvetna om dess vikt och kvalitet. Detta är avgörande eftersom signalering på en subgraff uppmuntrar den att indexeras, vilket innebär att den kommer att synas på nätverket för att så småningom utföra frågor. +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![Utforskaren Bild 2](/img/Subgraph-Details.png) -På varje dedikerad sida för subgraff visas flera detaljer, inklusive: +On each subgraph’s dedicated page, you can do the following: - Signalera/Sluta signalera på subgraffar - Visa mer detaljer som diagram, aktuell distributions-ID och annan metadata @@ -31,26 +45,32 @@ På varje dedikerad sida för subgraff visas flera detaljer, inklusive: ## Deltagare -Inom den här fliken får du en översikt över alla personer som deltar i nätverksaktiviteter, såsom indexerare, delegater och kuratorer. Nedan går vi igenom vad varje flik innebär för dig. +This section provides a bird' s-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. Indexerare ![Utforskaren Bild 4](/img/Indexer-Pane.png) -Låt oss börja med indexerare. Indexerare är ryggraden i protokollet och de satsar på subgraffar, indexerar dem och serverar frågor till alla som konsumerar subgraffar. I indexerarens tabell kan du se indexerarens delegeringsparametrar, deras insats, hur mycket de har satsat på varje subgraff och hur mycket intäkter de har tjänat på frågeavgifter och indexeringsbelöningar. Här är några detaljer: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. -- Andel av frågeavgift - den % av frågeavgifterna som indexeraren behåller när de delar med delegater -- Effektiv belöningsandel - belöningsandelen för indexeringsbelöning som tillämpas på delegeringspoolen. Om den är negativ innebär det att indexeraren ger bort en del av sina belöningar. Om den är positiv innebär det att indexeraren behåller en del av sina belöningar -- Nedkylningsåterstående - den tid som återstår tills indexeraren kan ändra ovanstående delegeringsparametrar. 
Nedkylningsperioder ställs upp av indexerare när de uppdaterar sina delegeringsparametrar -- Ägd - Detta är indexerarens deponerade insats, som kan straffas för skadligt eller felaktigt beteende -- Delegerad - Insats från delegater som kan tilldelas av indexeraren, men som inte kan straffas -- Tilldelad - Insats som indexerare aktivt tilldelar till de subgraffar de indexerar -- Tillgänglig delegeringskapacitet - mängden delegerad insats som indexerare fortfarande kan ta emot innan de blir överdelegerade +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Maximal delegeringskapacitet - den maximala mängden delegerad insats som indexeraren produktivt kan acceptera. Överskjuten delegerad insats kan inte användas för tilldelningar eller beräkningar av belöningar. -- Frågeavgifter - detta är de totala avgifter som slutanvändare har betalat för frågor från en indexerare över tid +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Indexeringsbelöningar - detta är de totala indexeringsbelöningarna som indexeraren och deras delegater har tjänat över tid. Indexeringsbelöningar betalas genom GRT-utgivning. -Indexerare kan tjäna både frågeavgifter och indexeringsbelöningar. Funktionellt sker detta när nätverksdeltagare delegerar GRT till en indexerare. Detta gör att indexerare kan få frågeavgifter och belöningar beroende på deras indexeringsparametrar. Indexeringsparametrar ställs in genom att klicka på höger sida av tabellen eller genom att gå in på indexerarens profil och klicka på "Delegera"-knappen. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. För att lära dig mer om hur du blir indexerare kan du titta på [officiell dokumentation](/Nätverk/indexing) eller [The Graf Academy Indexer-guiden.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ För att lära dig mer om hur du blir indexerare kan du titta på [officiell dok ### 2. Kuratorer -Kuratorer analyserar subgraffar för att identifiera vilka subgraffar som har högst kvalitet. När en kurator har hittat en potentiellt attraktiv subgraff kan de kurera den genom att signalera på dess bindningskurva. På så sätt låter kuratorer indexerare veta vilka subgraffar som är av hög kvalitet och bör indexerad. 
+Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -Kuratorer kan vara samhällsmedlemmar, datakonsumenter eller till och med subgraffutvecklare som signalerar på sina egna subgraffar genom att deponera GRT-token i en bindningskurva. Genom att deponera GRT skapar kuratorer kuratorandelar av en subgraff. Som ett resultat är kuratorer berättigade att tjäna en del av frågeavgifterna som subgraffen de har signalerat på genererar. Bindningskurvan uppmuntrar kuratorer att kurera de högsta kvalitetsdatakällorna. Kuratortabellen i detta avsnitt låter dig se: +In the The Curator table listed below you can see: - Datumet då kuratorn började kurera - Antalet GRT som deponerades @@ -68,34 +92,36 @@ Kuratorer kan vara samhällsmedlemmar, datakonsumenter eller till och med subgra ![Utforskaren Bild 6](/img/Curation-Overview.png) -Om du vill lära dig mer om rollen som kurator kan du göra det genom att besöka följande länkar från [The Graph Academy](https://thegraph.academy/curators/) eller [officiell dokumentation.](/Nätverk/curating) +If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. Delegater -Delegater spelar en nyckelroll för att upprätthålla säkerheten och decentraliseringen av The Graph Nätverk. De deltar i nätverket genom att delegera (det vill säga "satsa") GRT-tokens till en eller flera indexerare. Utan delegater är det mindre sannolikt att indexerare tjänar betydande belöningar och avgifter. Därför försöker indexerare locka delegater genom att erbjuda dem en del av indexeringsbelöningarna och frågeavgifterna de tjänar. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. -Delegater väljer i sin tur Indexers baserat på ett antal olika variabler, såsom tidigare prestanda, belöningsräntor för indexering och andel av frågeavgifter. Rekommendation inom gemenskapen kan också spela en roll i detta! Det rekommenderas att ansluta med de indexers som valts via [The Graph's Discord](https://discord.gg/graphprotocol) eller [The Graph Forum](https://forum.thegraph.com/)! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! 
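For readers who prefer to compare these variables outside of Graph Explorer, a rough sketch of a query against The Graph Network subgraph is shown below. The entity and field names (`indexers`, `stakedTokens`, `delegatedTokens`, `queryFeeCut`, `indexingRewardCut`) are assumptions based on the public network subgraph schema and may differ from the deployed version:

```graphql
# Hypothetical sketch: list the largest Indexers with the parameters a
# Delegator typically compares. Field names are assumed and should be
# checked against the network subgraph schema before use.
{
  indexers(first: 10, orderBy: stakedTokens, orderDirection: desc) {
    id
    stakedTokens
    delegatedTokens
    queryFeeCut
    indexingRewardCut
  }
}
```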
![Utforskaren Bild 7](/img/Delegation-Overview.png) -Delegattabellen kommer att låta dig se aktiva delegater i samhället, samt metriker som: +In the Delegators table you can see the active Delegators in the community and important metrics: - Antal indexerare en delegat delegerar till - En delegats ursprungliga delegation - Belöningar de har ackumulerat men inte har dragit tillbaka från protokollet - De realiserade belöningarna de drog tillbaka från protokollet - Totalt belopp av GRT som de för närvarande har i protokollet -- Datumet då de senast delegerade +- The date they last delegated -Om du vill lära dig mer om hur du blir delegat, behöver du inte leta längre! Allt du behöver göra är att besöka [officiell dokumentation](/Nätverk/delegating) eller [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## Nätverk -I avsnittet Nätverk kommer du att se globala KPI:er samt möjligheten att växla till en per-epok-basis och analysera nätverksmetriker mer detaljerat. Dessa detaljer ger dig en uppfattning om hur nätverket presterar över tiden. +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. ### Översikt -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - Nuvarande totala nätverksinsats - Insatsen fördelad mellan indexerare och deras delegater @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - Protokollparametrar såsom kuratorbelöning, inflationstakt och mer - Nuvarande epokbelöningar och avgifter -Några viktiga detaljer som är värda att nämna: +A few key details to note: -- **Frågeavgifter representerar avgifterna som genereras av användarna**, och de kan krävas (eller inte) av indexerare efter en period på minst 7 epoker (se nedan) efter att deras tilldelningar till subgraffar har avslutats och den data de serverat har validerats av användarna. -- **Indexeringsbelöningar representerar mängden belöningar som indexerare har krävt från nätverksutgivningen under epoken.** Även om protokollutgivningen är fast, genereras belöningarna endast när indexerare stänger sina tilldelningar till subgraffar som de har indexerat. Därför varierar antalet belöningar per epok (det vill säga under vissa epoker kan indexerare sammanlagt stänga tilldelningar som har varit öppna i många dagar). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. 
during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Utforskaren Bild 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ I avsnittet Epoker kan du analysera på en per-epok-basis, metriker som: - Den aktiva epoken är den där indexerare för närvarande allokerar insats och samlar frågeavgifter - De avvecklande epokerna är de där statliga kanaler avvecklas. Detta innebär att indexerare är föremål för straff om användarna öppnar tvister mot dem. - De distribuerande epokerna är de epoker där statliga kanaler för epokerna avvecklas och indexerare kan kräva sina frågeavgiftsrabatter. - - De avslutade epokerna är de epoker som inte har några frågeavgiftsrabatter kvar att kräva av indexerare, och är därmed avslutade. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![Utforskaren Bild 9](/img/Epoch-Stats.png) ## Din användarprofil -Nu när vi har pratat om nätverksstatistik, låt oss gå vidare till din personliga profil. Din personliga profil är platsen där du kan se din nätverksaktivitet, oavsett hur du deltar i nätverket. Din kryptoplånbok kommer att fungera som din användarprofil, och med Användardashboarden kan du se: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Profilöversikt -Här kan du se de senaste åtgärder du har vidtagit. Detta är också där du hittar din profilinformation, beskrivning och webbplats (om du har lagt till en). +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![Utforskaren Bild 10](/img/Profile-Overview.png) ### Subgraffar-fliken -Om du klickar på Subgraffar-fliken ser du dina publicerade subgraffar. Detta inkluderar inte några subgraffar som distribuerats med CLI för teständamål - subgraffar kommer bara att visas när de publiceras på det decentraliserade nätverket. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Utforskaren Bild 11](/img/Subgraphs-Overview.png) ### Indexeringstabell -Om du klickar på Indexeringsfliken hittar du en tabell med alla aktiva och historiska tilldelningar till subgraffar, samt diagram som du kan analysera och se din tidigare prestanda som indexerare. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. I det här avsnittet hittar du också information om dina nettobelöningar som indexerare och nettovärdaravgifter. Du kommer att se följande metriker: @@ -158,7 +189,9 @@ I det här avsnittet hittar du också information om dina nettobelöningar som i ### Delegattabell -Delegater är viktiga för The Graph Nätverk. En delegat måste använda sin kunskap för att välja en indexerare som kommer att ge en hälsosam avkastning på belöningar. Här hittar du detaljer om dina aktiva och historiska delegationer, samt metriker för indexerare som du har delegerat till. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. 
+ +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. I den första halvan av sidan kan du se din delegatdiagram, liksom diagrammet för endast belöningar. Till vänster kan du se KPI:er som återspeglar dina aktuella delegationsmetriker. diff --git a/website/pages/sv/network/indexing.mdx b/website/pages/sv/network/indexing.mdx index 034d5742ef06..5c06db2caf12 100644 --- a/website/pages/sv/network/indexing.mdx +++ b/website/pages/sv/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Många av gemenskapens egentillverkade instrument inkluderar värden för väntande belöningar och de kan enkelt kontrolleras manuellt genom att följa dessa steg: -1. Fråga [mainnet-subgrafen](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) för att få ID:n för alla aktiva tilldelningar: +1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -113,11 +113,11 @@ Indexers kan skilja sig åt genom att tillämpa avancerade tekniker för att fat - **Stor** - Förberedd för att indexera alla för närvarande använda subgrafer och att ta emot förfrågningar för relaterad trafik. | Konfiguration | Postgres
    (CPU:er) | Postgres
    (minne i GB) | Postgres
    (disk i TB) | VM:er
    (CPU:er) | VM:er
    (minne i GB) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Liten | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Stor | 72 | 468 | 3,5 | 48 | 184 | +| ------------- |:----------------------------:|:--------------------------------:|:-------------------------------:|:-------------------------:|:-----------------------------:| +| Liten | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Stor | 72 | 468 | 3,5 | 48 | 184 | ### Vilka grundläggande säkerhetsåtgärder bör en Indexer vidta? @@ -149,20 +149,20 @@ Observera: För att stödja smidig skalning rekommenderas det att fråge- och in #### Graf Node -| Port | Syfte | Vägar | CLI-argument | Miljövariabel | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP-server
    (för subgraf-förfrågningar) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (för subgraf-prenumerationer) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (för hantering av distributioner) | / | --admin-port | - | -| 8030 | Subgrafindexeringsstatus-API | /graphql | --index-node-port | - | -| 8040 | Prometheus-metrar | /metrics | --metrics-port | - | +| Port | Syfte | Vägar | CLI-argument | Miljövariabel | +| ---- | ---------------------------------------------------------- | ---------------------------------------------------- | ----------------- | ------------- | +| 8000 | GraphQL HTTP-server
    (för subgraf-förfrågningar) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (för subgraf-prenumerationer) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (för hantering av distributioner) | / | --admin-port | - | +| 8030 | Subgrafindexeringsstatus-API | /graphql | --index-node-port | - | +| 8040 | Prometheus-metrar | /metrics | --metrics-port | - | #### Indexertjänst -| Port | Syfte | Vägar | CLI-argument | Miljövariabel | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP-server
    (för betalda subgraf-förfrågningar) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus-metrar | /metrics | --metrics-port | - | +| Port | Syfte | Vägar | CLI-argument | Miljövariabel | +| ---- | ------------------------------------------------------------------ | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP-server
    (för betalda subgraf-förfrågningar) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus-metrar | /metrics | --metrics-port | - | #### Indexeragent @@ -545,7 +545,7 @@ Det föreslagna verktyget för att interagera med **Indexer Management API** är - `graph indexer rules maybe [options] ` — Ange `decisionBasis` för en distribution till `rules`, så kommer Indexeragenten att använda indexeringsregler för att avgöra om den ska indexera den här distributionen. -- `graph indexer actions get [options] ` - Hämta en eller flera åtgärder med `all` eller lämna `action-id` tomt för att hämta alla åtgärder. Ett ytterligare argument `--status` kan användas för att skriva ut alla åtgärder med en viss status. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Köa allokationsåtgärd diff --git a/website/pages/sv/network/overview.mdx b/website/pages/sv/network/overview.mdx index c90c1d462939..a8af86bb9142 100644 --- a/website/pages/sv/network/overview.mdx +++ b/website/pages/sv/network/overview.mdx @@ -2,14 +2,20 @@ title: Nätverksöversikt --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Översikt +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Tokenekonomi](/img/Network-roles@2x.png) -För att säkerställa den ekonomiska säkerheten för The Graph Nätverk och integriteten hos den data som frågas, satsar deltagare och använder Graph Tokens ([GRT](/tokenomics)). GRT är en arbetsnyttighetstoken som är en ERC-20-token som används för att allokera resurser i nätverket. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/sv/new-chain-integration.mdx b/website/pages/sv/new-chain-integration.mdx index 8ebd913a3766..7902ed6f31b2 100644 --- a/website/pages/sv/new-chain-integration.mdx +++ b/website/pages/sv/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integrering av Nya Nätverk +title: New Chain Integration --- -Graf Node kan för närvarande indexera data från följande typer av blockkedjor: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC och [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via en [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via en [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via en [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -Om du är intresserad av någon av dessa blockkedjor är integrering en fråga om konfiguration och testning av Graf Node. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -Om blockkedjan är EVM-ekvivalent och klienten/noden exponerar den standardiserade EVM JSON-RPC API:n, bör Graf Node kunna indexera den nya blockkedjan. För mer information, se [Testa en EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Testa en EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Skillnad mellan EVM JSON-RPC och Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, i en JSON-RPC batch-begäran +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -Medan båda alternativen är lämpliga för subgrafer krävs en Firehose alltid för utvecklare som vill bygga med [Substreams](substreams/), som att bygga [Substreams-drivna subgrafer](cookbook/substreams-powered-subgraphs/). 
Dessutom möjliggör Firehose förbättrade indexeringstider jämfört med JSON-RPC. +### 2. Firehose Integration -Nya EVM-blockkedjeintegratörer kan också överväga den Firehose-baserade metoden med tanke på fördelarna med substreams och dess massivt parallella indexeringsegenskaper. Att stödja båda alternativen ger utvecklare möjlighet att välja mellan att bygga substreams eller subgrafer för den nya blockkedjan. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **OBS**: En Firehose-baserad integration för EVM-blockkedjor kommer fortfarande att kräva att Indexers kör blockkedjans arkiv-RPC-nod för att korrekt indexera subgrafer. Detta beror på att Firehosen inte kan tillhandahålla den smarta kontraktsstatus som normalt är åtkomlig via `eth_call` RPC-metoden. (Det är värt att påminna om att eth_calls inte är [en bra praxis för utvecklare](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Testa en EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -För att Graf Node ska kunna ta emot data från en EVM-blockkedja måste RPC-noden exponera följande EVM JSON-RPC-metoder: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. 
-- `eth_getLogs` -- `eth_call` \_(för historiska block, med EIP-1898 - kräver arkivnod): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, i en JSON-RPC batch-begäran -- _`trace_filter`_ _(valfritt krav för att Graf Node ska stödja anropshanterare)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graf Node-konfiguration +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Börja med att förbereda din lokala miljö** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graf Node-konfiguration + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Klona Graf Node](https://github.com/graphprotocol/graph-node) -2. Ändra [den här raden](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) för att inkludera det nya nätverksnamnet och den EVM JSON-RPC-kompatibla URL:n - > Byt inte namnet på env-var självt. Det måste förbli `ethereum` även om nätverksnamnet är annorlunda. -3. Kör en IPFS-nod eller använd den som används av The Graf: https://api.thegraph.com/ipfs/ -**Testa integrationen genom att lokalt distribuera en subgraf** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Skapa en enkel exempelsubgraf. Några alternativ är nedan: - 1. Den förpackade [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323)-smartkontrakt och subgraf är en bra startpunkt - 2. Starta en lokal subgraf från ett befintligt smart kontrakt eller en Solidity-utvecklingsmiljö [med hjälp av Hardhat med ett Graf-plugin](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Skapa din subgraf i Graf Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publicera din subgraf till Graf Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. 
It must remain `ethereum` even if the network name is different. -Graf Node bör synkronisera den distribuerade subgrafen om det inte finns några fel. Ge det tid att synkronisera, och skicka sedan några GraphQL-begäranden till API-slutpunkten som skrivs ut i loggarna. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integrera en ny Firehose-aktiverad blockkedja +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Skapa en enkel exempelsubgraf. Några alternativ är nedan: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graf Node bör synkronisera den distribuerade subgrafen om det inte finns några fel. Ge det tid att synkronisera, och skicka sedan några GraphQL-begäranden till API-slutpunkten som skrivs ut i loggarna. -Det är också möjligt att integrera en ny blockkedja med Firehose-metoden. Detta är för närvarande det bästa alternativet för icke-EVM-blockkedjor och ett krav för stöd för delströmmar. Ytterligare dokumentation fokuserar på hur Firehose fungerar, hur du lägger till Firehose-stöd för en ny blockkedja och integrerar den med Graf Node. Rekommenderade dokument för integratörer: +## Substreams-powered Subgraphs -1. [Allmänna dokument om Firehose](firehose/) -2. [Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrera Graf Node med en ny blockkedja via Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/sv/operating-graph-node.mdx b/website/pages/sv/operating-graph-node.mdx index 65b506e574dd..83c59421c17b 100644 --- a/website/pages/sv/operating-graph-node.mdx +++ b/website/pages/sv/operating-graph-node.mdx @@ -77,13 +77,13 @@ En komplett exempelkonfiguration för Kubernetes finns i [indexer repository](ht När Graph Node är igång exponerar den följande portar: -| Port | Syfte | Rutter | Argument för CLI | Miljö Variabel | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP-server
    (för frågor om undergrafer) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (för prenumerationer på undergrafer) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (för hantering av distributioner) | / | --admin-port | - | -| 8030 | Status för indexering av undergrafer API | /graphql | --index-node-port | - | -| 8040 | Prometheus mätvärden | /metrics | --metrics-port | - | +| Port | Syfte | Rutter | Argument för CLI | Miljö Variabel | +| ---- | ---------------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------- | +| 8000 | GraphQL HTTP-server
    (för frågor om undergrafer) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (för prenumerationer på undergrafer) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (för hantering av distributioner) | / | --admin-port | - | +| 8030 | Status för indexering av undergrafer API | /graphql | --index-node-port | - | +| 8040 | Prometheus mätvärden | /metrics | --metrics-port | - | > **Viktigt**: Var försiktig med att exponera portar offentligt - **administrationsportar** bör hållas säkra. Detta inkluderar JSON-RPC-slutpunkten för Graph Node. diff --git a/website/pages/sv/querying/distributed-systems.mdx b/website/pages/sv/querying/distributed-systems.mdx index 365340f65a1b..3143d4859eb8 100644 --- a/website/pages/sv/querying/distributed-systems.mdx +++ b/website/pages/sv/querying/distributed-systems.mdx @@ -84,8 +84,8 @@ Här kommer vi att använda argumentet `block: { hash: $blockHash }` för att bi /// Gets a list of domain names from a single block using pagination async function getDomainNames() { // Set a cap on the maximum number of items to pull. - let pages = 5 - const perPage = 1000 + let pages = 5; + const perPage = 1000; // The first query will get the first page of results and also get the block // hash so that the remainder of the queries are consistent with the first. @@ -100,34 +100,34 @@ async function getDomainNames() { hash } } - }` + }`; - let data = await graphql(listDomainsQuery, { perPage }) - let result = data.domains.map((d) => d.name) - let blockHash = data._meta.block.hash + let data = await graphql(listDomainsQuery, { perPage }); + let result = data.domains.map((d) => d.name); + let blockHash = data._meta.block.hash; - let query + let query; // Continue fetching additional pages until either we run into the limit of // 5 pages total (specified above) or we know we have reached the last page // because the page has fewer entities than a full page. while (data.domains.length == perPage && --pages) { - let lastID = data.domains[data.domains.length - 1].id + let lastID = data.domains[data.domains.length - 1].id; query = ` query ListDomains($perPage: Int!, $lastID: ID!, $blockHash: Bytes!) { domains(first: $perPage, where: { id_gt: $lastID }, block: { hash: $blockHash }) { name id } - }` + }`; - data = await graphql(query, { perPage, lastID, blockHash }) + data = await graphql(query, { perPage, lastID, blockHash }); // Accumulate domain names into the result for (domain of data.domains) { - result.push(domain.name) + result.push(domain.name); } } - return result + return result; } ``` diff --git a/website/pages/sv/querying/graphql-api.mdx b/website/pages/sv/querying/graphql-api.mdx index 32e9fbc90032..89b541abf5eb 100644 --- a/website/pages/sv/querying/graphql-api.mdx +++ b/website/pages/sv/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Förfrågningar +## What is GraphQL? -I din delgrafig schema definierar du typer som kallas `Entiteter`. För varje typ av `Entitet` kommer ett `entitet`- och `entiteter`-fält att genereras på toppnivån av `Query`-typen. Observera att `query` inte behöver inkluderas högst upp i `graphql`-förfrågan när du använder The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. 
For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Exempel @@ -21,7 +29,7 @@ Förfrågan efter en enda `Token` -entitet som är definierad i din schema: } ``` -> **Note:** Vid sökning efter en enskild enhet krävs fältet `id`, och det måste vara en sträng. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Fråga alla `Token`-enheter: @@ -36,7 +44,10 @@ Fråga alla `Token`-enheter: ### Sortering -När du frågar efter en samling kan parametern `orderBy` användas för att sortera efter ett specifikt attribut. Dessutom kan `orderDirection` användas för att ange sorteringsriktningen, `asc` för stigande eller `desc` för fallande. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### Exempel @@ -53,7 +64,7 @@ När du frågar efter en samling kan parametern `orderBy` användas för att sor Från och med Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) kan entiteter sorteras på basis av nästlade entiteter. -I följande exempel sorterar vi tokens efter namnet på deras ägare: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ I följande exempel sorterar vi tokens efter namnet på deras ägare: ### Paginering -När du frågar efter en samling kan parametern `first` användas för att paginera från början av samlingen. Det är värt att notera att standardsorteringsordningen är efter ID i stigande alfanumerisk ordning, inte efter skapelsetid. - -Vidare kan parametern `skip` användas för att hoppa över enheter och paginera. t.ex. `first:100` visar de första 100 enheterna och `first:100, skip:100` visar de nästa 100 enheterna. +When querying a collection, it's best to: -Frågor bör undvika att använda mycket stora `skip`-värden eftersom de i allmänhet fungerar dåligt. För att hämta ett stort antal objekt är det mycket bättre att bläddra igenom entiteter baserat på ett attribut som visas i det sista exemplet. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### Exempel med `first` @@ -106,7 +118,7 @@ Fråga 10 `Token`-enheter, förskjutna med 10 platser från början av samlingen #### Exempel med `first` och `id_ge` -Om en klient behöver hämta ett stort antal entiteter är det mycket mer effektivt att basera frågor på ett attribut och filtrera efter det attributet. En klient kan till exempel hämta ett stort antal tokens med hjälp av den här frågan: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. 
For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -Första gången skickas frågan med `lastID = ""`, och för efterföljande frågor sätts `lastID` till `id`-attributet för den sista entiteten i den föregående frågan. Detta tillvägagångssätt kommer att fungera betydligt bättre än att använda ökande `skip`-värden. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtrering -Du kan använda parametern `where` i dina frågor för att filtrera efter olika egenskaper. Du kan filtrera på flera värden inom parametern `where`. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Exempel med `where` @@ -155,7 +168,7 @@ Du kan använda suffix som `_gt`, `_lte` för värdejämförelse: #### Exempel på blockfiltrering -Du kan också filtrera entiteter efter `_change_block(number_gte: Int)` - detta filtrerar entiteter som uppdaterades i eller efter det angivna blocket. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. Detta kan vara användbart om du bara vill hämta enheter som har ändrats, till exempel sedan den senaste gången du pollade. Eller alternativt kan det vara användbart för att undersöka eller felsöka hur enheter förändras i din undergraf (om det kombineras med ett blockfilter kan du isolera endast enheter som ändrades i ett visst block). @@ -193,7 +206,7 @@ Från och med Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node ##### `OCH` Operator -I följande exempel filtrerar vi efter utmaningar med `utfall` `lyckades` och `nummer` större än eller lika med `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. ```graphql { @@ -208,7 +221,7 @@ I följande exempel filtrerar vi efter utmaningar med `utfall` `lyckades` och `n ``` > **Syntactic sugar:** Du kan förenkla ovanstående fråga genom att ta bort `and`-operatorn och istället skicka ett underuttryck separerat med kommatecken. -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ I följande exempel filtrerar vi efter utmaningar med `utfall` `lyckades` och `n ##### `OR` Operatör -I följande exempel filtrerar vi efter utmaningar med `outcome` `succeeded` eller `number` större än eller lika med `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) Du kan förfråga tillståndet för dina enheter inte bara för den senaste blocken, som är standard, utan också för en godtycklig block i det förflutna. Blocket vid vilket en förfrågan ska ske kan specifieras antingen med dess blocknummer eller dess blockhash genom att inkludera ett `block`-argument i toppnivåfälten för förfrågningar. 
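As a quick illustration, a query pinned to a specific historical block might look like the following sketch. It reuses the `Token` entity from the examples earlier on this page; the block number is arbitrary and only serves to show the `block` argument:

```graphql
# Hypothetical sketch: read Token entities as they existed at block 8,000,000.
# The same collection field also accepts { hash: "0x..." } to pin by block hash.
{
  tokens(block: { number: 8000000 }) {
    id
    owner
  }
}
```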
-Resultatet av en sådan förfrågan kommer inte att ändras över tid, det vill säga, att förfråga vid en viss tidigare block kommer att returnera samma resultat oavsett när det utförs, med undantag för att om du förfrågar vid ett block mycket nära huvudet av kedjan, kan resultatet ändras om det visar sig att blocket inte är på huvudkedjan och kedjan omorganiseras. När ett block kan anses vara slutgiltigt kommer resultatet av förfrågan inte att ändras. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Observera att den nuvarande implementationen fortfarande är föremål för vissa begränsningar som kan bryta mot dessa garantier. Implementeringen kan inte alltid avgöra om en given blockhash inte alls är på huvudkedjan eller om resultatet av en förfrågan med blockhash för ett block som ännu inte kan anses vara slutgiltigt kan påverkas av en samtidig omorganisering av block. De påverkar inte resultaten av förfrågningar med blockhash när blocket är slutgiltigt och känt att vara på huvudkedjan. [Detta problem](https://github.com/graphprotocol/graph-node/issues/1405) förklarar dessa begränsningar i detalj. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. #### Exempel @@ -322,12 +335,12 @@ Fulltextsökförfrågningar har ett obligatoriskt fält, `text`, för att tillha Fulltextsökoperatorer: -| Symbol | Operatör | Beskrivning | -| --- | --- | --- | -| `&` | `Och` | För att kombinera flera söktermer till ett filter för entiteter som inkluderar alla de angivna termerna | -| | | `Eller` | Förfrågningar med flera söktermer separerade av ellipsen kommer att returnera alla entiteter med en matchning från någon av de angivna termerna | -| `<->` | `Följs av` | Ange avståndet mellan två ord. | -| `:*` | `Prefix` | Använd prefixsöktermen för att hitta ord vars prefix matchar (2 tecken krävs.) | +| Symbol | Operatör | Beskrivning | +| ----------- | ---------- | ----------------------------------------------------------------------------------------------------------------------------------------------- | +| `&` | `Och` | För att kombinera flera söktermer till ett filter för entiteter som inkluderar alla de angivna termerna | +| | | `Eller` | Förfrågningar med flera söktermer separerade av ellipsen kommer att returnera alla entiteter med en matchning från någon av de angivna termerna | +| `<->` | `Följs av` | Ange avståndet mellan två ord. | +| `:*` | `Prefix` | Använd prefixsöktermen för att hitta ord vars prefix matchar (2 tecken krävs.) 
| #### Exempel @@ -376,11 +389,11 @@ Graph Node implementerar [specifikationsbaserad](https://spec.graphql.org/Octobe ## Schema -Schemat för din datakälla - det vill säga de entitetstyper, värden och relationer som är tillgängliga för frågor - definieras genom [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL-scheman definierar i allmänhet rottyper för `queries`, `subscriptions` och `mutations`. Grafen stöder endast `queries`. Rottypen `Query` för din subgraf genereras automatiskt från GraphQL-schemat som ingår i subgrafmanifestet. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Note:** Vårt API exponerar inte mutationer eftersom utvecklare förväntas utfärda transaktioner direkt mot den underliggande blockkedjan från sina applikationer. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### Entiteter diff --git a/website/pages/sv/querying/querying-best-practices.mdx b/website/pages/sv/querying/querying-best-practices.mdx index abe7e8a0aba4..d88b3dc16c5b 100644 --- a/website/pages/sv/querying/querying-best-practices.mdx +++ b/website/pages/sv/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: Bästa praxis för förfrågningar --- -The Graph tillhandahåller ett decentraliserat sätt att hämta data från blockkedjor. +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. -The Graph-nätverkets data exponeras genom ett GraphQL API, vilket gör det enklare att fråga data med GraphQL-språket. - -Den här sidan kommer att guida dig genom de grundläggande reglerna för GraphQL-språket och bästa praxis för GraphQL-frågor. +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQL är ett språk och en uppsättning konventioner som transporteras över Det innebär att du kan ställa en fråga till ett GraphQL API med hjälp av standard `fetch` (nativt eller via `@whatwg-node/fetch` eller `isomorphic-fetch`). -Men, som det anges i ["Frågehantering från en applikation"](/querying/querying-from-an-application), rekommenderar vi att du använder vår `graph-client` som stöder unika funktioner som: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - Hantering av subgrafer över olika blockkedjor: Frågehantering från flera subgrafer i en enda fråga - [Automatisk blockspårning](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -81,7 +79,7 @@ Men, som det anges i ["Frågehantering från en applikation"](/querying/querying Så här ställer du en fråga till The Graph med `graph-client`: ```tsx -import { execute } from '../.graphclient' +import { execute } from "../.graphclient"; const query = ` query GetToken($id: ID!) 
{ @@ -90,13 +88,13 @@ query GetToken($id: ID!) { owner } } -` +`; const variables = { id: '1' } async function main() { - const result = await execute(query, variables) + const result = await execute(query, variables); // `result` är fullständigt typad! - console.log(result) + console.log(result); } main() @@ -104,8 +102,6 @@ main() Fler GraphQL-klientalternativ behandlas i ["Querying from an Application"](/querying/querying-from-an-application). -Nu när vi har gått igenom de grundläggande reglerna för syntax för GraphQL-förfrågningar ska vi titta på bästa praxis för att skriva GraphQL-förfrågningar. - --- ## Bästa praxis @@ -115,15 +111,15 @@ Nu när vi har gått igenom de grundläggande reglerna för syntax för GraphQL- En vanlig (dålig) praxis är att dynamiskt bygga upp frågesträngar enligt följande: ```tsx -const id = params.id -const fields = ['id', 'owner'] +const id = params.id; +const fields = ["id", "owner"]; const query = ` query GetToken { token(id: ${id}) { - ${fields.join('\n')} + ${fields.join("\n")} } } -` +`; // Execute query... ``` @@ -138,9 +134,9 @@ Medan det tidigare avsnittet genererar en giltig GraphQL-fråga har den **många Av dessa skäl rekommenderas det alltid att skriva frågor som statiska strängar: ```tsx -import { execute } from 'your-favorite-graphql-client' +import { execute } from "your-favorite-graphql-client"; -const id = params.id +const id = params.id; const query = ` query GetToken($id: ID!) { token(id: $id) { @@ -148,7 +144,7 @@ query GetToken($id: ID!) { owner } } -` +`; const result = await execute(query, { variables: { @@ -164,16 +160,16 @@ Detta medför **många fördelar**: - **Variabler kan cachas** på serversidan - **Frågor kan statiskt analyseras av verktyg** (mer om detta i följande avsnitt) -**Observera: Hur man inkluderar fält villkorligt i statiska frågor** +### How to include fields conditionally in static queries -Ibland vill vi inkludera fältet `owner` endast under vissa villkor. +You might want to include the `owner` field only on a particular condition. -För detta kan vi utnyttja direktivet `@include(if:...)` på följande sätt: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx -import { execute } from 'your-favorite-graphql-client' +import { execute } from "your-favorite-graphql-client"; -const id = params.id +const id = params.id; const query = ` query GetToken($id: ID!, $includeOwner: Boolean) { token(id: $id) { @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Observera: Det motsatta direktivet är `@skip(if: ...)`. +> Observera: Det motsatta direktivet är `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL blev känd för sitt motto "Be om det du vill ha". Av den anledningen finns det ingen möjlighet i GraphQL att få alla tillgängliga fält utan att behöva lista dem individuellt. -När du frågar GraphQL API:er, tänk alltid på att endast fråga efter de fält som faktiskt kommer att användas. - -En vanlig orsak till överhämtning är samlingar av enheter. Som standard kommer frågor att hämta 100 enheter i en samling, vilket vanligtvis är mycket mer än vad som faktiskt kommer att användas, t.ex., för att visas för användaren. Därför bör frågor nästan alltid ange first explicit och se till att de bara hämtar så många enheter som de faktiskt behöver. Detta gäller inte bara för toppnivåsamlingar i en fråga, utan ännu mer för inbäddade samlingar av enheter. +- När du frågar GraphQL API:er, tänk alltid på att endast fråga efter de fält som faktiskt kommer att användas. 
+- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. Till exempel, i följande fråga: @@ -337,8 +332,8 @@ query { Sådana upprepade fält (`id`, `active`, `status`) medför många problem: -- svårare att läsa för mer omfattande frågor -- när du använder verktyg som genererar TypeScript-typer baserat på frågor (_mer om det i den sista avsnittet_), kommer `newDelegate` och `oldDelegate` att resultera i två olika inline-gränssnitt. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. En omstrukturerad version av frågan skulle vara följande: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -Att använda GraphQL `fragment` kommer att förbättra läsbarheten (särskilt i större skala) och leda till bättre generering av TypeScript-typer. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. När du använder verktyget för typsgenerering kommer den ovanstående frågan att generera en korrekt typ av `DelegateItemFragment` (_se sista avsnittet_). ### Dos and Don'ts för GraphQL Fragment -**Fragmentbas måste vara en typ** +### Fragmentbas måste vara en typ Ett fragment kan inte baseras på en oanvändbar typ, kort sagt, **på en typ som inte har fält**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` är en **skalär** (inbyggd "vanlig" typ) som inte kan användas som grund för ett fragment. -**Hur man sprider ett fragment** +#### Hur man sprider ett fragment Fragment är definierade på specifika typer och bör användas i enlighet med det i frågor. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { Det är inte möjligt att sprida ett fragment av typ `Vote` här. -**Definiera fragment som en atomisk affärsenhet för data** +#### Definiera fragment som en atomisk affärsenhet för data -GraphQL Fragment måste definieras baserat på deras användning. +GraphQL `Fragment`s must be defined based on their usage. För de flesta användningsfall är det tillräckligt att definiera ett fragment per typ (i fallet med upprepade fält eller typgenerering). -Här är en tumregel för användning av fragment: +Here is a rule of thumb for using fragments: -- när fält av samma typ upprepas i en fråga, gruppera dem i ett fragment -- när liknande men inte samma fält upprepas, skapa flera fragment, t.ex. +- When fields of the same type are repeated in a query, group them in a `Fragment`. 
+- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## De väsentliga verktygen +## The Essential Tools ### Webbaserade GraphQL-upptäckare @@ -473,11 +468,11 @@ Detta kommer att tillåta dig att **upptäcka fel utan ens att testa frågor** p [GraphQL VSCode-tillägget](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) är ett utmärkt komplement till din utvecklingsarbetsflöde för att få: -- syntaxmarkering -- autokompletteringsförslag -- validering mot schema -- snuttar -- gå till definition för fragment och inmatningstyper +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types Om du använder `graphql-eslint` är [ESLint VSCode-tillägget](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) ett måste för att visualisera fel och varningar korrekt infogade i din kod. @@ -485,9 +480,9 @@ Om du använder `graphql-eslint` är [ESLint VSCode-tillägget](https://marketpl [JS GraphQL-tillägget](https://plugins.jetbrains.com/plugin/8097-graphql/) kommer att förbättra din upplevelse av att arbeta med GraphQL genom att tillhandahålla: -- syntaxmarkering -- autokompletteringsförslag -- validering mot schema -- snuttar +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -Mer information om denna [WebStorm-artikel](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) som visar upp alla tilläggets huvudfunktioner. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. 
diff --git a/website/pages/sv/querying/querying-from-an-application.mdx b/website/pages/sv/querying/querying-from-an-application.mdx index 6e653e81b539..5e702ea21d7f 100644 --- a/website/pages/sv/querying/querying-from-an-application.mdx +++ b/website/pages/sv/querying/querying-from-an-application.mdx @@ -163,9 +163,9 @@ npm install @apollo/client graphql Sedan kan du göra en förfrågan till API:et med följande kod: ```javascript -import { ApolloClient, InMemoryCache, gql } from '@apollo/client' +import { ApolloClient, InMemoryCache, gql } from "@apollo/client"; -const APIURL = 'https://api.studio.thegraph.com/query///' +const APIURL = "https://api.studio.thegraph.com/query///"; const tokensQuery = ` query { @@ -176,20 +176,20 @@ const tokensQuery = ` metadataURI } } -` +`; const client = new ApolloClient({ uri: APIURL, cache: new InMemoryCache(), -}) +}); client .query({ query: gql(tokensQuery), }) - .then((data) => console.log('Subgraph data: ', data)) + .then((data) => console.log("Subgraph data: ", data)) .catch((err) => { - console.log('Error fetching data: ', err) + console.log("Error fetching data: ", err); }) ``` @@ -207,20 +207,20 @@ const tokensQuery = ` metadataURI } } -` +`; client .query({ query: gql(tokensQuery), variables: { first: 10, - orderBy: 'createdAtTimestamp', - orderDirection: 'desc', + orderBy: "createdAtTimestamp", + orderDirection: "desc", }, }) - .then((data) => console.log('Subgraph data: ', data)) + .then((data) => console.log("Subgraph data: ", data)) .catch((err) => { - console.log('Error fetching data: ', err) + console.log("Error fetching data: ", err); }) ``` @@ -244,9 +244,9 @@ npm install urql graphql Sedan kan du göra en förfrågan till API:et med följande kod: ```javascript -import { createClient } from 'urql' +import { createClient } from "urql"; -const APIURL = 'https://api.thegraph.com/subgraphs/name/username/subgraphname' +const APIURL = "https://api.thegraph.com/subgraphs/name/username/subgraphname"; const tokensQuery = ` query { @@ -257,11 +257,11 @@ const tokensQuery = ` metadataURI } } -` +`; const client = createClient({ url: APIURL, -}) +}); const data = await client.query(tokensQuery).toPromise() ``` diff --git a/website/pages/sv/quick-start.mdx b/website/pages/sv/quick-start.mdx index 2a6e084ed0fd..a7b1d6b5a037 100644 --- a/website/pages/sv/quick-start.mdx +++ b/website/pages/sv/quick-start.mdx @@ -2,24 +2,18 @@ title: Snabbstart --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Se till att din subgraf kommer att indexera data från ett [nätverk som stöds] (/developing/supported-networks). - -Den här guiden är skriven förutsatt att du har: +## Prerequisites for this guide - En kryptoplånbok -- En smart kontraktsadress på det nätverk du väljer - -## 1. Skapa en subgraf på Subgraph Studio - -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Installera Graph CLI +### 1. Installera Graph CLI -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. 
Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. Kör ett av följande kommandon på din lokala dator: @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -När du initierar din subgraf kommer CLI verktyget att be dig om följande information: +When you initialize your subgraph, the CLI will ask you for the following information: -- Protokoll: välj det protokoll som din subgraf ska indexera data från -- Subgragh slug: skapa ett namn för din subgraf. Din subgraf snigel är en identifierare för din subgraf. -- Katalog att skapa subgrafen i: välj din lokala katalog -- Ethereum nätverk (valfritt): du kan behöva ange vilket EVM kompatibelt nätverk din subgraf kommer att indexera data från -- Kontraktsadress: Leta upp den smarta kontraktsadress som du vill fråga data från -- ABI: Om ABI inte fylls i automatiskt måste du mata in det manuellt som en JSON fil -- Startblock: det föreslås att du matar in startblocket för att spara tid medan din subgraf indexerar blockkedjedata. Du kan hitta startblocket genom att hitta blocket där ditt kontrakt distribuerades. -- Kontraktsnamn: ange namnet på ditt kontrakt -- Indexera kontraktshändelser som entiteter: det föreslås att du ställer in detta till sant eftersom det automatiskt lägger till mappningar till din subgraf för varje emitterad händelse -- Lägg till ett annat kontrakt (valfritt): du kan lägga till ett annat kontrakt +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. 
+- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. Se följande skärmdump för ett exempel för vad du kan förvänta dig när du initierar din subgraf: ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -De tidigare kommandona skapar en ställnings undergraf som du kan använda som utgångspunkt för att bygga din undergraf. När du gör ändringar i subgrafen kommer du huvudsakligen att arbeta med tre filer: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -När din subgraf är skriven, kör följande kommandon: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. När din subgraf är skriven, kör följande kommandon: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- Autentisera och distribuera din subgraf. Implementeringsnyckeln finns på Subgraph sidan i Subgraph Studio. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. Testa din subgraf - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -Loggarna kommer att berätta om det finns några fel med din subgraf. 
Loggarna för en operativ subgraf kommer att se ut så här: - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -För att spara på gaskostnaderna kan du kurera din subgraf i samma transaktion som du publicerade den genom att välja den här knappen när du publicerar din subgraf till The Graphs decentraliserade nätverk: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. 
Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -Nu kan du fråga din subgraf genom att skicka GraphQL frågor till din subgrafs fråge URL, som du kan hitta genom att klicka på frågeknappen. +### 8. Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/sv/release-notes/assemblyscript-migration-guide.mdx b/website/pages/sv/release-notes/assemblyscript-migration-guide.mdx index 97c6bb95635a..afa4a7df4747 100644 --- a/website/pages/sv/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/sv/release-notes/assemblyscript-migration-guide.mdx @@ -91,17 +91,17 @@ maybeValue.aMethod(); Men i den nyare versionen, eftersom värdet är nullable, måste du kontrollera, så här: ```typescript -let maybeValue = load() +let maybeValue = load(); if (maybeValue) { - maybeValue.aMethod() // `maybeValue` is not null anymore + maybeValue.aMethod(); // `maybeValue` is not null anymore } ``` Eller gör så här: ```typescript -let maybeValue = load()! // bryts i runtime om värdet är null +let maybeValue = load()!; // bryts i runtime om värdet är null maybeValue.aMethod() ``` @@ -113,8 +113,8 @@ Om du är osäker på vilken du ska välja, rekommenderar vi alltid att använda Tidigare kunde du använda [variabelskuggning](https://en.wikipedia.org/wiki/Variable_shadowing) och kod som detta skulle fungera: ```typescript -let a = 10 -let b = 20 +let a = 10; +let b = 20; let a = a + b ``` @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - Du måste döpa om dina duplicerade variabler om du hade variabelskuggning. 
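For example, here is a minimal sketch of the rename fix, reusing the variables from the snippet above (the new name `sum` is purely illustrative):

```typescript
// Renaming avoids redeclaring the block-scoped variable `a` (error TS2451).
let a = 10;
let b = 20;
let sum = a + b; // previously `let a = a + b`, which no longer compiles
```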
- ### Jämförelser med nollvärden - När du gör uppgraderingen av din subgraf kan du ibland få fel som dessa: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - För att lösa problemet kan du helt enkelt ändra `if`-satsen till något i den här stilen: ```typescript @@ -158,8 +154,8 @@ Samma gäller om du använder != istället för ==. Det vanliga sättet att göra kasting tidigare var att bara använda nyckelordet `as`, som så här: ```typescript -let byteArray = new ByteArray(10) -let uint8Array = byteArray as Uint8Array // motsvarande: byteArray +let byteArray = new ByteArray(10); +let uint8Array = byteArray as Uint8Array; // motsvarande: byteArray ``` Detta fungerar dock endast i två scenarier: @@ -171,8 +167,8 @@ Exempel: ```typescript // primitive casting -let a: usize = 10 -let b: isize = 5 +let a: usize = 10; +let b: isize = 5; let c: usize = a + (b as usize) ``` @@ -180,7 +176,7 @@ let c: usize = a + (b as usize) // upcasting on class inheritance class Bytes extends Uint8Array {} -let bytes = new Bytes(2) +let bytes = new Bytes(2); // bytes // same as: bytes as Uint8Array ``` @@ -193,7 +189,7 @@ Det finns två scenarier där du kan vilja casta, men att använda `as`/`var` // downcasting om klassarv class Bytes extends Uint8Array {} -let uint8Array = new Uint8Array(2) +let uint8Array = new Uint8Array(2); // uint8Array // breaks in runtime :( ``` @@ -202,7 +198,7 @@ let uint8Array = new Uint8Array(2) class Bytes extends Uint8Array {} class ByteArray extends Uint8Array {} -let bytes = new Bytes(2) +let bytes = new Bytes(2); // bytes // breaks in runtime :( ``` @@ -212,8 +208,8 @@ I dessa fall kan du använda funktionen `changetype`: // downcasting om klassarv class Bytes extends Uint8Array {} -let uint8Array = new Uint8Array(2) -changetype(uint8Array) // works :) +let uint8Array = new Uint8Array(2); +changetype(uint8Array); // works :) ``` ```typescript @@ -221,18 +217,18 @@ changetype(uint8Array) // works :) class Bytes extends Uint8Array {} class ByteArray extends Uint8Array {} -let bytes = new Bytes(2) -changetype(bytes) // works :) +let bytes = new Bytes(2); +changetype(bytes); // works :) ``` Om du bara vill ta bort nullability kan du fortsätta använda `as`-operatorn (eller `variable`), men se till att du vet att värdet inte kan vara null, annars kommer det att bryta. ```typescript // ta bort ogiltighet -let previousBalance = AccountBalance.load(balanceId) // AccountBalance | null +let previousBalance = AccountBalance.load(balanceId); // AccountBalance | null if (previousBalance != null) { - return previousBalance as AccountBalance // safe remove null + return previousBalance as AccountBalance; // safe remove null } let newBalance = new AccountBalance(balanceId) @@ -252,18 +248,18 @@ Vi har också lagt till några fler statiska metoder i vissa typer för att unde För att använda [nullability check feature](https://www.assemblyscript.org/basics.html#nullability-checks) kan du använda antingen `if`-satser eller den ternära operatorn (`?` och `:`) så här: ```typescript -let something: string | null = 'data' +let something: string | null = "data"; -let somethingOrElse = something ? something : 'else' +let somethingOrElse = something ? 
something : "else"; // or -let somethingOrElse +let somethingOrElse; if (something) { - somethingOrElse = something + somethingOrElse = something; } else { - somethingOrElse = 'else' + somethingOrElse = "else"; } ``` @@ -274,10 +270,10 @@ class Container { data: string | null } -let container = new Container() -container.data = 'data' +let container = new Container(); +container.data = "data"; -let somethingOrElse: string = container.data ? container.data : 'else' // Kompilerar inte +let somethingOrElse: string = container.data ? container.data : "else"; // Kompilerar inte ``` Vilket ger detta fel: @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - För att åtgärda problemet kan du skapa en variabel för den egenskapen så att kompilatorn kan utföra den magiska nollbarhetskontrollen: ```typescript @@ -296,12 +291,12 @@ class Container { data: string | null } -let container = new Container() -container.data = 'data' +let container = new Container(); +container.data = "data"; -let data = container.data +let data = container.data; -let somethingOrElse: string = data ? data : 'else' // kompilerar helt okej :) +let somethingOrElse: string = data ? data : "else"; // kompilerar helt okej :) ``` ### Operatörsöverladdning med egenskapsaccess @@ -310,7 +305,7 @@ Om du försöker summera (till exempel) en nullable typ (från en property acces ```typescript class BigInt extends Uint8Array { - @operator('+') + @operator("+") plus(other: BigInt): BigInt { // ... } @@ -320,26 +315,26 @@ class Wrapper { public constructor(public n: BigInt | null) {} } -let x = BigInt.fromI32(2) -let y: BigInt | null = null +let x = BigInt.fromI32(2); +let y: BigInt | null = null; -x + y // ge kompileringsfel om ogiltighet +x + y; // ge kompileringsfel om ogiltighet -let wrapper = new Wrapper(y) +let wrapper = new Wrapper(y); -wrapper.n = wrapper.n + x // ger inte kompileringsfel som det borde +wrapper.n = wrapper.n + x; // ger inte kompileringsfel som det borde ``` Vi har öppnat en fråga om AssemblyScript-kompilatorn för detta, men om du gör den här typen av operationer i dina subgraf-mappningar bör du ändra dem så att de gör en null-kontroll innan den. 
```typescript -let wrapper = new Wrapper(y) +let wrapper = new Wrapper(y); if (!wrapper.n) { - wrapper.n = BigInt.fromI32(0) + wrapper.n = BigInt.fromI32(0); } -wrapper.n = wrapper.n + x // nu är `n` garanterat ett BigInt +wrapper.n = wrapper.n + x; // nu är `n` garanterat ett BigInt ``` ### Initialisering av värde @@ -347,17 +342,17 @@ wrapper.n = wrapper.n + x // nu är `n` garanterat ett BigInt Om du har någon kod som denna: ```typescript -var value: Type // null -value.x = 10 -value.y = 'content' +var value: Type; // null +value.x = 10; +value.y = "content" ``` Det kommer att kompilera men brytas vid körning, det händer eftersom värdet inte har initialiserats, så se till att din subgraf har initialiserat sina värden, så här: ```typescript -var value = new Type() // initialized -value.x = 10 -value.y = 'content' +var value = new Type(); // initialized +value.x = 10; +value.y = "content" ``` Även om du har nullable properties i en GraphQL-entitet, som denna: @@ -372,10 +367,10 @@ type Total @entity { Och du har en kod som liknar den här: ```typescript -let total = Total.load('latest') +let total = Total.load("latest"); if (total === null) { - total = new Total('latest') + total = new Total("latest") } total.amount = total.amount + BigInt.fromI32(1) @@ -384,11 +379,11 @@ total.amount = total.amount + BigInt.fromI32(1) Du måste se till att initialisera värdet `total.amount`, för om du försöker komma åt som i den sista raden för summan, kommer det att krascha. Så antingen initialiserar du det först: ```typescript -let total = Total.load('latest') +let total = Total.load("latest") if (total === null) { - total = new Total('latest') - total.amount = BigInt.fromI32(0) + total = new Total("latest") + total.amount = BigInt.fromI32(0); } total.tokens = total.tokens + BigInt.fromI32(1) @@ -404,10 +399,10 @@ type Total @entity { ``` ```typescript -let total = Total.load('latest') +let total = Total.load("latest"); if (total === null) { - total = new Total('latest') // initierar redan icke-nullställbara egenskaper + total = new Total("latest"); // initierar redan icke-nullställbara egenskaper } total.amount = total.amount + BigInt.fromI32(1) @@ -435,17 +430,17 @@ export class Something { // or export class Something { - value: Thing + value: Thing; constructor(value: Thing) { - this.value = value + this.value = value; } } // or export class Something { - value!: Thing + value!: Thing; } ``` @@ -454,9 +449,9 @@ export class Something { Klassen `Array` accepterar fortfarande ett tal för att initiera längden på listan, men du bör vara försiktig eftersom operationer som `.push` faktiskt ökar storleken istället för att lägga till i början, till exempel: ```typescript -let arr = new Array(5) // ["", "", "", "", ""] +let arr = new Array(5); // ["", "", "", "", ""] -arr.push('something') // ["", "", "", "", "", "something"] // size 6 :( +arr.push("something"); // ["", "", "", "", "", "something"] // size 6 :( ``` Beroende på vilka typer du använder, t.ex. 
nullable-typer, och hur du kommer åt dem, kan du stöta på ett runtime-fel som det här: @@ -468,17 +463,17 @@ ERRO Handler skipped due to execution failure, error: Mapping aborted at ~lib/ar För att faktiskt trycka i början bör du antingen initiera `Array` med storlek noll, så här: ```typescript -let arr = new Array(0) // [] +let arr = new Array(0); // [] -arr.push('something') // ["something"] +arr.push("something"); // ["something"] ``` Eller så bör du mutera den via index: ```typescript -let arr = new Array(5) // ["", "", "", "", ""] +let arr = new Array(5); // ["", "", "", "", ""] -arr[0] = 'something' // ["something", "", "", "", ""] +arr[0] = "something"; // ["something", "", "", "", ""] ``` ### GraphQL-schema diff --git a/website/pages/sv/sps/introduction.mdx b/website/pages/sv/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/sv/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. + +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/sv/sps/triggers-example.mdx b/website/pages/sv/sps/triggers-example.mdx new file mode 100644 index 000000000000..e6793a6665b8 --- /dev/null +++ b/website/pages/sv/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## Förutsättningar + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. 
Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+  const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)
+
+  for (let i = 0; i < input.data.length; i++) {
+    const event = input.data[i]
+
+    if (event.transfer != null) {
+      let entity_id: string = `${event.txnId}-${i}`
+      const entity = new MyTransfer(entity_id)
+      entity.amount = event.transfer!.instruction!.amount.toString()
+      entity.source = event.transfer!.accounts!.source
+      entity.designation = event.transfer!.accounts!.destination
+
+      if (event.transfer!.accounts!.signer!.single != null) {
+        entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+      } else if (event.transfer!.accounts!.signer!.multisig != null) {
+        entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+      }
+      entity.save()
+    }
+  }
+}
+```
+
+## Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+
+## Conclusion
+
+You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/pages/sv/sps/triggers.mdx b/website/pages/sv/sps/triggers.mdx
new file mode 100644
index 000000000000..ed19635d4768
--- /dev/null
+++ b/website/pages/sv/sps/triggers.mdx
@@ -0,0 +1,37 @@
+---
+title: Substreams Triggers
+---
+
+Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework.
+
+> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container.
+
+The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+  let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+  if (transactions.length == 0) {
+    log.info('No transactions found', [])
+    return
+  }
+
+  for (let i = 0; i < transactions.length; i++) {
+    // 2.
+    let transaction = transactions[i]
+
+    let entity = new Transaction(transaction.hash) // 3.
+    entity.from = transaction.from
+    entity.to = transaction.to
+    entity.save()
+  }
+}
+```
+
+Here's what you’re seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object.
+2.
Loops over the transactions.
+3. Creates a new subgraph entity for every transaction.
+
+To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example).
diff --git a/website/pages/sv/substreams.mdx b/website/pages/sv/substreams.mdx
index 9e605493cfc7..3412778ff41e 100644
--- a/website/pages/sv/substreams.mdx
+++ b/website/pages/sv/substreams.mdx
@@ -4,9 +4,11 @@ title: Underströmmar

![Substreams Logo](/img/substreams-logo.png)

-Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach.
+Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features:

-With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain.
+- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing.
+- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara.
+- **Multi-Sink Support**: Send your data to a subgraph, Postgres database, Clickhouse, or Mongo database.

## How Substreams Works in 4 Steps

@@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to

### Expand Your Knowledge

- Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams.
+
+### Substreams Registry
+
+A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks.
diff --git a/website/pages/sv/sunrise.mdx b/website/pages/sv/sunrise.mdx
index 32bf6c6d26d4..14d1444cf8cd 100644
--- a/website/pages/sv/sunrise.mdx
+++ b/website/pages/sv/sunrise.mdx
@@ -1,233 +1,79 @@
---
-title: Sunrise + Upgrading to The Graph Network FAQ
+title: Post-Sunrise + Upgrading to The Graph Network FAQ
---

-> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/).
+> Note: The Sunrise of Decentralized Data ended June 12th, 2024.

-## What is the Sunrise of Decentralized Data?
+## What was the Sunrise of Decentralized Data?

-The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network.
+The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node.
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. 
Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? 
-Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? 
- -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. 
The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/sv/supported-network-requirements.mdx b/website/pages/sv/supported-network-requirements.mdx index f7a4943afd1b..0eb3e96fa7a8 100644 --- a/website/pages/sv/supported-network-requirements.mdx +++ b/website/pages/sv/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Nätverk | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| Nätverk | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVMe preferred)<br />
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ |
diff --git a/website/pages/sv/tap.mdx b/website/pages/sv/tap.mdx
new file mode 100644
index 000000000000..cf5c279544fa
--- /dev/null
+++ b/website/pages/sv/tap.mdx
@@ -0,0 +1,197 @@
+---
+title: TAP Migration Guide
+---
+
+Learn about The Graph’s new payment system, the **Timeline Aggregation Protocol (TAP)**. This system provides fast, efficient microtransactions with minimized trust.
+
+## Översikt
+
+[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features:
+
+- Efficiently handles micropayments.
+- Adds a layer of consolidation to on-chain transactions and costs.
+- Allows Indexers control of receipts and payments, guaranteeing payment for queries.
+- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders.
+
+## Specifics
+
+TAP allows a sender to make multiple payments to a receiver as **TAP Receipts**, which are aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
+
+For each query, the gateway will send you a `signed receipt` that is stored in your database. Then, these receipts will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value.
+
+### RAV Details
+
+- A RAV is money that is waiting to be sent to the blockchain.
+
+- The `tap-agent` will keep sending aggregation requests to ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`.
+
+- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed.
+
+### Redeeming RAV
+
+As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process (a minimal launch sketch follows the list):
+
+1. An Indexer closes allocation.
+
+2. During the buffer period after the allocation closes, `tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`.
+
+3. `indexer-agent` takes all the last RAVs and sends redeem requests to the blockchain, which will update the value of `redeem_at`.
+
+4. During the finality period, `indexer-agent` monitors whether the blockchain has any reorganizations that revert the transaction.
+
+   - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`.
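The steps above run without manual intervention once both components are up and share the same database and configuration. As a minimal launch sketch (not the canonical setup — the container name, image tag, and config path below are assumptions based on the Migration Guide later on this page; adjust them to your environment), a single `tap-agent` instance could be started like this:

```bash
# Minimal sketch, assuming the tap-agent container image from the Migration Guide below,
# a shared TOML config file, and the recommended RUST_LOG setting. Adjust to your setup.
docker run -d \
  --name tap-agent \
  -v /path/to/config.toml:/opt/config.toml:ro \
  -e RUST_LOG=indexer_tap_agent=debug,info \
  ghcr.io/graphprotocol/indexer-tap-agent:1.0.0-rc.6 \
  --config /opt/config.toml
```

With `indexer-agent` running alongside it, receipt aggregation, RAV redemption, and finalization then proceed automatically as described in the list above.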
+
+## Blockchain Addresses
+
+### Contracts
+
+| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) |
+| ------------------- | -------------------------------------------- | -------------------------------------------- |
+| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` |
+| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` |
+| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` |
+
+### Gateway
+
+| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Arbitrum Mainnet) |
+| ---------- | --------------------------------------------- | --------------------------------------------- |
+| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` |
+| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
+| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
+
+### Krav
+
+In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query it, or host it yourself on your own `graph-node`.
+
+- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+
+> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+
+## Migration Guide
+
+### Software versions
+
+| Component | Version | Image Link |
+| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- |
+| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) |
+| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) |
+| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) |
+
+### Steps
+
+1. **Indexer Agent**
+
+   - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components).
+   - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs.
+
+2. **Indexer Service**
+
+   - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+   - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless.
+
+3. **TAP Agent**
+
+   - Run a _single_ instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+
+4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Noteringar: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/tr/about.mdx b/website/pages/tr/about.mdx index 5858793e7da6..5a116e7de131 100644 --- a/website/pages/tr/about.mdx +++ b/website/pages/tr/about.mdx @@ -2,46 +2,66 @@ title: Graph Hakkında --- -This page will explain what The Graph is and how you can get started. - ## What is The Graph? -The Graph is a decentralized protocol for indexing and querying blockchain data. The Graph makes it possible to query data that is difficult to query directly. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +### How The Graph Functions -**Indexing blockchain data is really, really hard.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). 
+- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## How The Graph Works +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. +- When creating a subgraph, you need to write a subgraph manifest. -Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) The flow follows these steps: -1. A dapp adds data to Ethereum through a transaction on a smart contract. -2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. -5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +1. A dapp adds data to Ethereum through a transaction on a smart contract. +2. The smart contract emits one or more events while processing the transaction. +3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. +4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. 
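To make step 5 concrete, here is a minimal sketch of such a query issued directly against a Graph Node GraphQL endpoint over HTTP. The port, subgraph name, and the `transfers` entity are illustrative assumptions — substitute your own deployment and the entities defined in your schema:

```bash
# Minimal sketch: query a locally running Graph Node over HTTP (assumed defaults: port 8000,
# a subgraph deployed under example/my-subgraph, and a schema that defines a `transfers` entity).
curl -X POST http://localhost:8000/subgraphs/name/example/my-subgraph \
  -H 'Content-Type: application/json' \
  -d '{ "query": "{ transfers(first: 5) { id from to } }" }'
```

A dapp would typically issue the same query through a GraphQL client library rather than `curl`, but the request and response shape are identical.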
## Next Steps -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/tr/arbitrum/arbitrum-faq.mdx b/website/pages/tr/arbitrum/arbitrum-faq.mdx index b839c3a0d1f3..7d6ec7913967 100644 --- a/website/pages/tr/arbitrum/arbitrum-faq.mdx +++ b/website/pages/tr/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum FAQ Arbitrum Faturalama SSS bölümüne geçmek istiyorsanız [buraya](#billing-on-arbitrum-faqs) tıklayın. -## Graph neden bir Katman2 Çözümü uyguluyor? +## Why did The Graph implement an L2 Solution? -Katman2'de Graph'ı ölçeklendirerek, ağ katılımcıları şunları bekleyebilir: +By scaling The Graph on L2, network participants can now benefit from: - Gas ücretlerinde 26 kata kadar tasarruf @@ -14,7 +14,7 @@ Katman2'de Graph'ı ölçeklendirerek, ağ katılımcıları şunları bekleyebi - Ethereum'dan aktarılmış güvenlik -Protokol akıllı sözleşmelerinin Katman2'ye ölçeklendirilmesi, ağ katılımcılarının gas ücretlerinde daha düşük bir maliyetle daha sık etkileşime girmesine olanak tanır. Örneğin, İndeksleyiciler daha fazla sayıda subgraph'ı daha sık indekslemek için tahsisleri açıp kapatabilir, geliştiriciler subgraphları daha kolay bir şekilde dağıtabilir ve güncelleyebilir, Delegatörler GRT'yi daha sık bir şekilde delege edebilir ve Küratörler daha önce gas nedeniyle sık sık gerçekleştirilemeyecek kadar maliyetli olduğu düşünülen, daha fazla sayıda subgraph'a sinyal ekleyebilir veya çıkarabilir. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. Graph topluluğu, geçen yıl [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) tartışmasının sonucuna göre Arbitrum ile çalışmaya karar verdi. @@ -41,27 +41,21 @@ Graph'ı Katman2'de kullanmanın avantajlarından yararlanmak için, zincirler a ## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? -Hemen yapılması gereken bir eylem yok, ancak ağ katılımcılarına Katman2'nin faydalarından yararlanmaları için Arbitrum'a geçmeye başlamaları önerilir. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. 
-Çekirdek geliştirici ekipleri, delegasyon, kürasyon ve subgraphları Arbitrum'a taşımayı önemli ölçüde kolaylaştıracak Katman2 transfer araçları oluşturmak için çalışıyor. Ağ katılımcıları, Katman2 aktarım araçlarının 2023 yazına kadar kullanıma sunulmasını bekleyebilirler. +All indexing rewards are now entirely on Arbitrum. -10 Nisan 2023 itibarıyla, tüm endeksleme ödüllerinin %5'i Arbitrum'da üretilmektedir. Ağ katılımı arttıkça ve Konsey onayladıkça, endeksleme ödülleri Ethereum'dan Arbitrum'a doğru yavaşça kayacaktır ve nihayetinde tamamen Arbitrum'a geçecektir. - -## Katman2'deki ağa katılmak istersem ne yapmalıyım? - -Lütfen Katman2'deki [ağı test etmeye](https://testnet.thegraph.com/explorer) yardımcı olun ve deneyiminizle ilgili [Discord](https://discord.gg/graphprotocol)'da geri bildirimde bulunun. - -## Ağı Katman2'ye ölçeklendirmekle ilgili herhangi bir risk var mı? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Güvenli ve sorunsuz bir geçiş sağlamak için her şey kapsamlı bir şekilde test edilmiş ve bir acil durum planı hazırlanmıştır. Ayrıntıları [burada](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20) bulabilirsiniz. -## Ethereum'daki mevcut subgraph'lar çalışmaya devam edecek mi? +## Are existing subgraphs on Ethereum working? -Evet, The Graph Ağı sözleşmeleri, daha sonraki bir tarihte tamamen Arbitrum'a taşınana kadar hem Ethereum hem de Arbitrum üzerinde paralel olarak çalışacaktır. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## GRT'nin Arbitrum'da dağıtılan yeni bir akıllı sözleşmesi olacak mı? +## Does GRT have a new smart contract deployed on Arbitrum? Evet, GRT'nin Arbitrum üzerinde ek bir [akıllı sözleşmesi](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7) bulunmaktadır. Ancak, Ethereum ana ağında bulunan [GRT sözleşmesi](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) çalışmaya devam edecektir. diff --git a/website/pages/tr/billing.mdx b/website/pages/tr/billing.mdx index b2faf34d0b49..405aab44b929 100644 --- a/website/pages/tr/billing.mdx +++ b/website/pages/tr/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Sayfanın sağ üst köşesindeki "Cüzdanı Bağla" düğmesine tıklayın. Cüzdan seçim sayfasına yönlendirileceksiniz. Cüzdanınızı seçin ve "Bağlan" a tıklayın. 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. 
Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ Binance'de ETH edinmekle alakalı daha fazla bilgiyi [buradan](https://www.binan ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. 
For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/tr/chain-integration-overview.mdx b/website/pages/tr/chain-integration-overview.mdx index ea08112b11fc..04b97b01b3fe 100644 --- a/website/pages/tr/chain-integration-overview.mdx +++ b/website/pages/tr/chain-integration-overview.mdx @@ -6,12 +6,12 @@ Blok zinciri ekiplerinin [Graph protokolüyle entegrasyon](https://forum.thegrap ## Aşama 1. Teknik Entegrasyon -- Ekipler, EVM tabanlı olmayan zincirler için Graph Düğüm entegrasyonu ve Firehose üzerinde çalışıyorr. [İşte nasıl olduğu](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Ekipler, protokol entegrasyon sürecini bir Forum başlığı oluşturarak başlatır [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (Yönetişim ve GIP'ler altındaki Yeni Veri Kaynakları alt kategorisi). Varsayılan Forum şablonunun kullanılması zorunludur. ## Aşama 2. Entegrasyon Doğrulaması -- Ekipler, entegrasyon sürecinin sorunsuz bir şekilde ilerlemesini sağlamak için çekirdek geliştiricilerle, Graph Vakfı ve [Subgraph Stüdyo](https://thegraph.com/studio/) gibi GUI'ler ve ağ geçidi operatörleri ile işbirliği yapmaktadır. Bu, entegre edilen zincirin JSON RPC veya Firehose uç noktaları gibi gerekli altyapının sağlanmasını içerir. Bu tür bir altyapıyı kendi kendine barındırmaktan kaçınmak isteyen ekipler, bunu yapmak için Graph'ın düğüm operatörleri (İndeksleyiciler) topluluğundan yararlanabilir ve Vakıf bu konuda yardımcı olabilir. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph İndeksleyicileri, entegrasyonu Graph'ın test ağında test eder. - Çekirdek geliştiriciler ve İndeksleyiciler kararlılığı, performansı ve veri belirleyiciliğini izler. @@ -38,7 +38,7 @@ Bu süreç Subgraph Veri Hizmeti ile ilgilidir ve yalnızca yeni Subgraph `Veri Bu, yalnızca Substreams destekli subgraphlar'da ödüllerin indekslenmesi için protokol desteğini etkileyecektir. Yeni Firehose uygulamasının, bu GIP'de Aşama 2 için özetlenen metodolojiyi izleyerek testnet üzerinde test edilmesi gerekecektir. 
Benzer şekilde, uygulamanın performanslı ve güvenilir olduğu varsayıldığı takdirde, [Özellik Destek Matrisi] (https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) üzerinde bir PR (`Substreams veri kaynakları` Subgraph Özelliği) ve ödüllerin indekslenmesi amacıyla protokol desteği için yeni bir GIP gerekecektir. PR ve GIP'yi herkes oluşturabilir; Vakıf, Konsey onayı konusunda yardımcı olacaktır. -### 3. Bu süreç ne kadar zaman alır? +### 3. How much time will the process of reaching full protocol support take? Ana ağa geçiş süresinin entegrasyon geliştirme süresine, ek araştırma gerekip gerekmediğine, test ve hata düzeltmelerine ve her zaman olduğu gibi topluluk geri bildirimi gerektiren yönetişim sürecinin zamanlamasına bağlı olarak değişmek kaydıyla birkaç hafta olması beklenmektedir. @@ -46,4 +46,4 @@ Ana ağa geçiş süresinin entegrasyon geliştirme süresine, ek araştırma ge ### 4. Öncelikler nasıl ele alınacak? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/tr/cookbook/arweave.mdx b/website/pages/tr/cookbook/arweave.mdx index c557711af6a6..8092218a8c85 100644 --- a/website/pages/tr/cookbook/arweave.mdx +++ b/website/pages/tr/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Arweave veri kaynakları iki tür işleyiciyi destekler: Olayları işlemek için işleyiciler [AssemblyScript](https://www.assemblyscript.org/) içinde yazılmıştır. -Arweave indeksleme, [AssemblyScript API](/developing/assemblyscript-api/)'sine Arweave'ye özgü veri tipleri ekler. +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/tr/cookbook/base-testnet.mdx b/website/pages/tr/cookbook/base-testnet.mdx index fa615597e6dc..56e2dbb5e9f9 100644 --- a/website/pages/tr/cookbook/base-testnet.mdx +++ b/website/pages/tr/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ Subgraph kısa adı, subgraph'ınız için bir tanımlayıcıdır. CLI aracı, s The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- Şema (schema.graphql) - GraphQL şeması, subgraph'tan hangi verileri almak istediğinizi tanımlar. - AssemblyScript Eşleştirmeleri (mapping.ts) - Bu, veri kaynaklarınızdaki verileri şemada tanımlanan varlıklara çeviren koddur. -If you want to index additional data, you will need extend the manifest, schema and mappings. 
+If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). diff --git a/website/pages/tr/cookbook/cosmos.mdx b/website/pages/tr/cookbook/cosmos.mdx index 3d4b5cadd624..0a1a8075d49f 100644 --- a/website/pages/tr/cookbook/cosmos.mdx +++ b/website/pages/tr/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and Olayları işlemek için işleyiciler [AssemblyScript](https://www.assemblyscript.org/) içinde yazılmıştır. -Cosmos indeksleme, Cosmos'a özgü veri türlerini [AssemblyScript API](/developing/assemblyscript-api/) ile tanıştırır. +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/tr/cookbook/grafting.mdx b/website/pages/tr/cookbook/grafting.mdx index b3e1f5a218c0..2857ded9d636 100644 --- a/website/pages/tr/cookbook/grafting.mdx +++ b/website/pages/tr/cookbook/grafting.mdx @@ -22,7 +22,7 @@ Daha fazla bilgi için kontrol edebilirsiniz: - [Graftlama](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -Bu eğitici içerikte, temel bir kullanım örneğini ele alacağız. Mevcut bir sözleşmeyi özdeş bir sözleşme ile değiştireceğiz (yeni bir adresle, ancak aynı kodla). Ardından, mevcut subgraph'ı yeni sözleşmeyi izleyen "base" subgraph'a graftlayacağız. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Ağa Yükseltme Durumunda Graftlamaya İlişkin Önemli Not @@ -30,7 +30,7 @@ Bu eğitici içerikte, temel bir kullanım örneğini ele alacağız. Mevcut bir ### Bu Neden Önemli? -Graftlama, bir subgraph'ı diğerine "graftlamanıza" ve geçmiş verileri mevcut subgraph'tan yeni bir sürüme etkili bir şekilde transfer etmenize olanak tanıyan güçlü bir özelliktir. Bu, verileri korumak ve indekslemede zaman kazanmak için etkili bir yol olsa da, graftlama, barındırılan bir ortamdan merkeziyersiz ağa taşınırken karmaşıklıklar ve potansiyel sorunlar ortaya çıkarabilir. Bir subgraph'ı Graph Ağı'ndan barındırılan hizmete veya Subgraph Stüdyo'ya geri graftlamak mümkün değildir. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### En İyi Uygulamalar @@ -80,7 +80,7 @@ dataSources: ``` - `Lock` veri kaynağı, sözleşmeyi derleyip dağıttığımızda alacağımız abi ve sözleşme adresidir -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - `mapping` bölümü, ilgili tetikleyicileri ve bu tetikleyicilere yanıt olarak çalıştırılması gereken fonksiyonları tanımlar. Bu durumda, `Withdrawal` olayının etkinliklerini gözlemliyoruz ve yayıldığında `handleWithdrawal` fonksiyonunu çağırıyoruz. ## Graftlama Manifest Tanımı @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. 
## Ek Kaynaklar -Graftlama konusunda daha fazla tecrübe edinmek istiyorsanız, işte popüler sözleşmeler için birkaç örnek: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/tr/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx b/website/pages/tr/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx index e85f3e13acef..b3eff9269cea 100644 --- a/website/pages/tr/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx +++ b/website/pages/tr/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx @@ -2,7 +2,7 @@ title: How to Secure API Keys Using Next.js Server Components --- -## Overview +## Genel Bakış We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). diff --git a/website/pages/tr/cookbook/near.mdx b/website/pages/tr/cookbook/near.mdx index 5c6c584a52a0..da38d0375eb2 100644 --- a/website/pages/tr/cookbook/near.mdx +++ b/website/pages/tr/cookbook/near.mdx @@ -37,7 +37,7 @@ Subgraph tanımının üç yönü vardır: **schema.graphql:** Subgraph'ınız için hangi verilerin depolandığını ve bunlara GraphQL aracılığıyla nasıl sorgu yapılacağını tanımlayan bir şema dosyası. NEAR subgraph gereksinimleri [mevcut belgelendirmede](/developing/creating-a-subgraph#the-graphql-schema) ele alınmıştır. -**AssemblyScript Eşleştirmeleri:** Olay verilerini şemanızda tanımlanan varlıklara çeviren [AssemblyScript kodu](/developing/assemblyscript-api). NEAR desteği, NEAR'a özgü veri tiplerini ve yeni JSON ayrıştırma fonksiyonelliğini tanıtır. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. Subgraph geliştirme sırasında iki temel komut bulunmaktadır: @@ -98,7 +98,7 @@ NEAR veri kaynakları iki tür işleyiciyi destekler: Olayları işlemek için işleyiciler [AssemblyScript](https://www.assemblyscript.org/) içinde yazılmıştır. -NEAR indeksleme,[AssemblyScript API](/developing/assemblyscript-api)'sine NEAR'a özgü veri tipleri ekler. +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ Bu türler blok & makbuz işleyicilerine aktarılır: - Blok işleyicileri bir `Block` alır - Makbuz işleyicileri bir `ReceiptWithOutcome` alır -Aksi takdirde, [AssemblyScript API](/developing/assemblyscript-api)'sinin geri kalanı eşleştirme yürütmesi sırasında NEAR subgraph geliştiricileri tarafından kullanılabilir. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -Buna yeni bir JSON ayrıştırma fonksiyonuda dahildir - NEAR'daki kayıtlar sıklıkla dizilmiş JSON'lar olarak yayılır. 
Yeni bir `json.fromString(...)` fonksiyonu, geliştiricilerin bu kayıtları kolayca işlemesine olanak sağlamak için [JSON API](/developing/assemblyscript-api#json-api)'nin bir parçası olarak mevcuttur. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## NEAR Subgraph'ını Dağıtma diff --git a/website/pages/tr/cookbook/subgraph-uncrashable.mdx b/website/pages/tr/cookbook/subgraph-uncrashable.mdx index 015a2720bb6a..fedae7827357 100644 --- a/website/pages/tr/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/tr/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Güvenli Subgraph Kod Oluşturucu - Framework ayrıca unsur değişkenleri grupları için özel, ancak güvenli ayarlayıcı fonksiyonları oluşturmanın bir yolunu (yapılandırma dosyası aracılığıyla) içerir. Bu sayede, kullanıcının eski bir graph unsurunu yüklemesi/kullanması ve ayrıca fonksiyonun gerektirdiği bir değişkeni kaydetmeyi veya ayarlamayı unutması imkansız hale gelir. -- Uyarı kayıtları, subgraph mantığında bir ihlal olduğunda veri doğruluğunu sağlamak amacıyla sorunu düzeltmek için kullanılabilecek kayıtlar olarak kaydedilir. Bu kayıtlar, Graph'ın barındırılan hizmetinde 'Kayıtlar' bölümünde görüntülenebilir. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable, Graph CLI codegen komutu kullanılarak isteğe bağlı bir bayrak olarak çalıştırılabilir. diff --git a/website/pages/tr/cookbook/upgrading-a-subgraph.mdx b/website/pages/tr/cookbook/upgrading-a-subgraph.mdx index 1b81a456b415..d2a404fd9447 100644 --- a/website/pages/tr/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/tr/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ You can update the metadata of your subgraphs without having to publish a new ve ## Graph Ağında Bir Subgraph'ın Kullanımdan Kaldırılması -Subgraph'ınızı kullanımdan kaldırmak ve Graph Ağı'ndan silmek için adımları izleyin [here](/managing/deprecating-a-subgraph). +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## Bir Subgraph'ı Sorgulama + Graph Ağında Faturalama diff --git a/website/pages/tr/deploying/multiple-networks.mdx b/website/pages/tr/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..dc2b8e533430 --- /dev/null +++ b/website/pages/tr/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## Deploying the subgraph to multiple networks + +In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... 
+ --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +This is what your networks config file should look like: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Now we can run one of the following commands: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Using subgraph.yaml template + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +and + +```json +{ + "network": "sepolia", + "address": "0xabc..." 
+}
+```
+
+Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`:
+
+```yaml
+# ...
+dataSources:
+  - kind: ethereum/contract
+    name: Gravity
+    network: mainnet
+    network: {{network}}
+    source:
+      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC'
+      address: '{{address}}'
+      abi: Gravity
+    mapping:
+      kind: ethereum/events
+```
+
+In order to generate a manifest for either network, you could add two additional commands to `package.json` along with a dependency on `mustache`:
+
+```json
+{
+  ...
+  "scripts": {
+    ...
+    "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml",
+    "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml"
+  },
+  "devDependencies": {
+    ...
+    "mustache": "^3.1.0"
+  }
+}
+```
+
+To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands:
+
+```sh
+# Mainnet:
+yarn prepare:mainnet && yarn deploy
+
+# Sepolia:
+yarn prepare:sepolia && yarn deploy
+```
+
+A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759).
+
+**Note:** This approach can also be applied to more complex situations where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs also need to be generated from templates.
+
+## Subgraph Studio subgraph archive policy
+
+A subgraph version in Studio is archived if and only if it meets the following criteria:
+
+- The version is not published to the network (or pending publish)
+- The version was created 45 or more days ago
+- The subgraph hasn't been queried in 30 days
+
+In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
+
+Every subgraph affected by this policy has an option to bring the version in question back.
+
+## Checking subgraph health
+
+If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
+
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql).
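+
+> Note: The snippet below is a minimal sketch, assuming a locally running Graph Node with its default ports. It simply POSTs a GraphQL document to the local status endpoint, mirroring the `index-node` curl example used elsewhere in these docs.
+
+```sh
+# Send a status query to a local Graph Node's index-node endpoint (default port 8030)
+curl -X POST -d '{ "query": "{ indexingStatuses { subgraph synced health } }" }' http://localhost:8030/graphql
+```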
Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/tr/developing/creating-a-subgraph.mdx b/website/pages/tr/developing/creating-a-subgraph.mdx index 387b4b84c389..0218b9d8452d 100644 --- a/website/pages/tr/developing/creating-a-subgraph.mdx +++ b/website/pages/tr/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: Subgraph Oluşturma --- -Subgraph, verileri bir blok zincirinden çıkarır, işler ve GraphQL aracılığıyla kolayca sorgulanabilmesi için depolar. +This detailed guide provides instructions to successfully create a subgraph. -![Subgraph Tanımlama](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -Subgraph tanımı birkaç dosyadan oluşmaktadır: +![Subgraph Tanımlama](/img/defining-a-subgraph.png) -- `subgraph.yaml`: Subgraph manifest'ini içeren bir YAML dosyası +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: Subgraph içinde depolanan verileri ve GraphQL üzerinden nasıl sorgulayacağınızı tanımlayan bir GraphQL şeması +## Buradan Başlayın -- `AssemblyScript Mappings`: Olay verilerinden şemanızda tanımlanan varlıklara çeviri yapan [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) kodu (örneğin bu öğretici içerikte `mapping.ts`) +### Graph CLI'ı Yükleyin -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. 
-## Graph CLI'ı Yükleyin +Yerel makinenizde aşağıdaki komutlardan birini çalıştırın: -Graph CLI, JavaScriptle yazılmıştır ve kullanmak için `yarn` veya `npm` kurmanız gerekir; aşağıdaki içerik yarn yüklediğinizi varsaymaktadır. +#### Using [npm](https://www.npmjs.com/) -`Yarn`'a sahip olduğunuzda, Graph CLI'yi çalıştırarak yükleyin +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Yarn ile kurulum:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**Npm ile kurulum:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## Mevcut Bir Sözleşmeden +### From an existing contract -Aşağıdaki komut, mevcut bir sözleşmenin tüm olaylarını indeksleyen bir subgraph oluşturur. Sözleşme ABI'sini Etherscan'dan almaya çalışır ve yerel bir dosya yolu istemeye geri döner. İsteğe bağlı argümanlardan herhangi biri eksikse, sizi etkileşimli bir formdan geçirir. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -``, Subgraph Studio'daki subgraph kimliğidir ve subgraph ayrıntıları sayfanızda bulunabilir. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## Örnek Bir Subgraph'dan +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -`Graph init`'in desteklediği ikinci mod, örnek bir subgraph'dan yeni bir proje oluşturmayı destekler. Aşağıdaki komut bunu yapar: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. 
+- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Mevcut Bir Subgraph'a Yeni veriKaynakları(dataSources) Ekleme +## Add new `dataSources` to an existing subgraph -`v0.31.0` 'dan itibaren, `graph-cli`, var olan bir subgraph'a `graph add` komutu aracılığıyla yeni veriKaynakları(dataSources) eklemeyi destekler. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
    [] @@ -78,22 +88,45 @@ Seçenekler: --network-file Ağ yapılandırma dosyası yolu (varsayılan: "./networks.json") ``` -`add` komutu, ABI'yi Etherscan'den getirecektir (`--abi` seçeneğiyle bir ABI yolu belirtilmedikçe) ve tıpkı `graph init` komutunun şemayı güncelleyerek ve eşleştirerek bir `dataSource` `--from-contract` oluşturması gibi yeni bir `dataSource` oluşturacaktır. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- `--merge-entities` seçeneği, geliştiricinin `entity` ve `event` ad çakışmalarını nasıl ele alacağını belirler: + + - `true` ise: yeni `dataSource` mevcut `eventHandlers` & `entities`'i kullanmalıdır. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- Sözleşme `adresi`, ilgili ağ için `networks.json`'a yazılacaktır. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -`--merge-entities` seçeneği, geliştiricinin `entity` ve `event` ad çakışmalarını nasıl ele alacağını belirler: +## Components of a subgraph -- `true` ise: yeni `dataSource` mevcut `eventHandlers` & `entities`'i kullanmalıdır. -- `false` ise: `${dataSourceName}{EventName}` ile yeni bir entity(varlık) & event handler(olay işleyicisi) oluşturulmalıdır. +### Subgraph Manifestosu -Sözleşme `adresi`, ilgili ağ için `networks.json`'a yazılacaktır. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **Not:** Etkileşimli cli kullanırken, `graph init` başarıyla çalıştırdıktan sonra yeni bir `dataSource` eklemeniz istenecektir. +The **subgraph definition** consists of the following files: -## Subgraph Manifestosu +- `subgraph.yaml`: Contains the subgraph manifest -Subgraph manifest'i `subgraph.yaml`, subgraph'ınız tarafından indekslenen akıllı sözleşmeleri, bu sözleşmelerdeki hangi olaylara dikkat edileceğini ve olay verilerinin Graph Node'un depoladığı ve sorgulamasına izin verdiği varlıklarla nasıl eşleneceğini tanımlar. Subgraph manifestlerinin tüm özelliklerini [burada](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md) bulabilirsiniz. +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -Örnek subgraph için `subgraph.yaml` şöyledir: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ Bir subgraph birden fazla akıllı sözleşmeden veri indeksleyebilir. `dataSour Bir bloktaki veri kaynağı için tetikleyiciler şu işlemlerle sıralanır: -1. Olay ve çağrı tetikleyicileri, öncelikle bloktaki işlem indeksine göre sıralanır. -2. Aynı işlemdeki olay ve çağrı tetikleyicileri, bir kurala göre sıralanır: önce olay tetikleyicileri, ardından çağrı tetikleyicileri olmak üzere her tür manifest'te tanımlandıkları sıraya göre sıralanır. -3. Blok tetikleyicileri, olay ve çağrı tetikleyicilerinden sonra manifest'te tanımlandıkları sırada göre çalıştırılır. +1. Olay ve çağrı tetikleyicileri, öncelikle bloktaki işlem indeksine göre sıralanır. +2. Aynı işlemdeki olay ve çağrı tetikleyicileri, bir kurala göre sıralanır: önce olay tetikleyicileri, ardından çağrı tetikleyicileri olmak üzere her tür manifest'te tanımlandıkları sıraya göre sıralanır. +3. Blok tetikleyicileri, olay ve çağrı tetikleyicilerinden sonra manifest'te tanımlandıkları sırada göre çalıştırılır. Bu sıralama kuralları değişebilir. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. 
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| Sürüm | Sürüm Notları | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | +| Sürüm | Sürüm Notları | +|:-----:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | | 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### ABI'leri Alma @@ -442,16 +475,16 @@ Bazı varlık türleri için `id`, iki diğer varlığın id'lerinden oluşturul GraphQL API'mizde aşağıdaki skalerleri destekliyoruz: -| Tür | Tanım | -| --- | --- | -| `Baytlar` | Byte dizisi, onaltılık bir dizgi olarak temsil edilir. Ethereum hash değerleri ve adresleri için yaygın olarak kullanılır. | -| `Dizgi(String)` | `string` değerleri için skaler. Null karakterleri desteklenmez ve otomatik olarak kaldırılır. | -| `Boolean` | `boolean` değerleri için skaler. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Büyük tamsayılar. Ethereum'un `uint32`, `int64`, `uint64`, ..., `uint256` türleri için kullanılır. Not: `int32`, `uint24` veya `int8` gibi `uint32`'nin altındaki her şey `i32`olarak temsil edilir. | -| `BigDecimal` | `BigDecimal` Yüksek hassasiyetli ondalık sayılar, bir anlamlı ve bir üsle temsil edilir. Üs aralığı -6143 ila +6144 arasındadır. 34 anlamlı rakama yuvarlanır. | -| `Timestamp` | It is an `i64` value in microseconds. 
Commonly used for `timestamp` fields for timeseries and aggregations. | +| Tür | Tanım | +| --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Baytlar` | Byte dizisi, onaltılık bir dizgi olarak temsil edilir. Ethereum hash değerleri ve adresleri için yaygın olarak kullanılır. | +| `Dizgi(String)` | `string` değerleri için skaler. Null karakterleri desteklenmez ve otomatik olarak kaldırılır. | +| `Boolean` | `boolean` değerleri için skaler. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Büyük tamsayılar. Ethereum'un `uint32`, `int64`, `uint64`, ..., `uint256` türleri için kullanılır. Not: `int32`, `uint24` veya `int8` gibi `uint32`'nin altındaki her şey `i32`olarak temsil edilir. | +| `BigDecimal` | `BigDecimal` Yüksek hassasiyetli ondalık sayılar, bir anlamlı ve bir üsle temsil edilir. Üs aralığı -6143 ila +6144 arasındadır. 34 anlamlı rakama yuvarlanır. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Numaralandırmalar @@ -593,7 +626,7 @@ query usersWithOrganizations { #### Şemaya notlar/yorumlar ekleme -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Not:** Yeni bir veri kaynağı, oluşturulduğu blok ve tüm takip eden bloklar için yalnızca çağrıları ve olayları işleyecektir, ancak önceki bloklarda bulunan geçmiş verileri işlemeyecektir. -> +> > Eğer önceki bloklar, yeni veri kaynağı için ilgili veri içeriyorsa, o veriyi indekslemek için sözleşmenin mevcut durumunu okuyarak ve yeni veri kaynağı oluşturulurken o zaman dilimindeki durumu temsil eden varlıklar oluşturarak yapmak en iyisidir. ### Veri Kaynağı Bağlamı @@ -930,7 +963,7 @@ dataSources: ``` > **Not:** Sözleşme oluşturma bloğu hızlı bir şekilde Etherscan'da aranabilir: -> +> > 1. Arama çubuğuna adresini girerek sözleşmeyi arayın. > 2. `Contract Creator` bölümünde oluşturma işlemi hash'ına tıklayın. > 3. İşlem detayları sayfasını yükleyin ve bu sözleşme için başlangıç bloğunu bulacaksınız. @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. 
``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### Dosyaları işlemek için yeni bir işleyici oluşturun -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). Dosyanın okunabilir bir dize olarak CID'sine `dataSource` aracılığıyla şu şekilde erişilebilir: diff --git a/website/pages/tr/developing/developer-faqs.mdx b/website/pages/tr/developing/developer-faqs.mdx index b3d712bb4de8..e657137a9a54 100644 --- a/website/pages/tr/developing/developer-faqs.mdx +++ b/website/pages/tr/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: Geliştirici SSS --- -## 1. Subgraph nedir? +This page summarizes some of the most common questions for developers building on The Graph. -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using the Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available to be queried by subgraph consumers. +## Subgraph Related -## 2. Subgraph'ımı silebilir miyim? +### 1. Subgraph nedir? -It is not possible to delete subgraphs once they are created. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Subgraph ismimi değiştirebilir miyim? +### 2. What is the first step to create a subgraph? -No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Subgraph'ımla ilişkili GitHub hesabını değiştirebilir miyim? +### 3. Can I still create a subgraph if my smart contracts don't have events? -No. 
Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. Akıllı sözleşmelerimin olayları yoksa yine de bir subgraph oluşturabilir miyim? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are by far the fastest way to retrieve useful data. +### 4. Subgraph'ımla ilişkili GitHub hesabını değiştirebilir miyim? -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended, as performance will be significantly slower. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. Birden fazla ağ için aynı isme sahip bir subgraph'ı dağıtmak mümkün mü? +### 5. How do I update a subgraph on mainnet? -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. Şablonların veri kaynaklarından farkı nedir? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) upfront you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? 
+ +Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates). -## 8. Yerel dağıtımlarım için graph-node'un en son sürümünü kullandığımdan nasıl emin olabilirim? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -You can run the following command: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. Subgraph eşleştirmelerimden bir sözleşme fonksiyonunu nasıl çağırabilirim veya genel bir durum değişkenine nasıl erişebilirim? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. İki sözleşme ile `graph-cli`den `graph init` kullanarak bir subgraph oluşturmak mümkün mü? Yoksa `graph init`'i çalıştırdıktan sonra `subgraph.yaml` dosyasına manuel olarak başka bir veri kaynağı mı eklemeliyim? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +You can run the following command: -## 11. Katkıda bulunmak veya bir GitHub sorunu eklemek istiyorum. Açık kaynak depolarını nerede bulabilirim? 
+```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. Olayları işlerken bir varlık için "otomatik oluşturulan" kimlikler oluşturmanın önerilen yolu nedir? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -## 13. Birden fazla sözleşmenin etkinliklerini gözlemlerken, olayların etkinliklerini gözlemlemek için sözleşme sırasını seçmek mümkün mü? +### 15. Can I delete my subgraph? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +You can find the list of the supported networks [here](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Yes. You can do this by importing `graph-ts` as per the example below: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Subgraph eşleştirmelerime ethers.js veya diğer JS kütüphanelerini aktarabilir miyim? - -Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +## Indexing & Querying Related -## 17. İndekslemeye hangi bloktan başlanacağını belirtmek mümkün mü? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. İndeksleme performansını artırmak için bazı ipuçları var mı? Subgraph'ımın senkronize edilmesi çok uzun zaman alıyor +### 20. What are some tips to increase the performance of indexing? 
My subgraph is taking a very long time to sync -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Subgraph üzerinde doğrudan sorgulama yaparak indekslediği en son blok numarasını belirlemenin bir yol var mı? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -102,19 +121,7 @@ Yes! Try the following command, substituting "organization/subgraphName" with th curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. Graph hangi ağları destekliyor? - -You can find the list of the supported networks [here](/developing/supported-networks). - -## 21. Bir subgraph'ı yeniden dağıtmadan başka bir hesaba veya uç noktaya çoğaltmak mümkün mü? - -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. - -## 22. Apollo Federation'ı graph-node üzerinde kullanmak mümkün mü? - -Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. - -## 23. Graph'ın sorgu başına kaç nesne döndürebileceğine dair bir sınır var mı? +### 22. Is there a limit to how many objects The Graph can return per query? By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: @@ -122,24 +129,19 @@ By default, query responses are limited to 100 items per collection. If you want someCollection(first: 1000, skip: ) { ... } ``` -## Dapp önyüzüm Graph'ı sorgulamak için kullanıyorsa, sorgu anahtarını önyüze doğrudan yazmam gerekiyor mu? Kullanıcılar için sorgu ücreti ödesek, kötü niyetli kullanıcılar sorgu ücretlerimizin çok yüksek olmasına neden olabilir mi? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 25. Where do I go to find my current subgraph on the hosted service? - -Sizin veya başkalarının barındırılan hizmete dağıttığı subgraphları bulmak için barındırılan hizmete gidin. [Burada](https://thegraph.com/hosted-service) bulabilirsiniz. - -## 26. Will the hosted service start charging query fees? - -Graph, barındırılan hizmet için asla ücret talep etmeyecektir. Graph merkeziyetsiz bir protokoldür ve merkezi bir hizmet için ücret almak Graph'in değerleriyle uyuşmamaktadır. 
Barındırılan hizmet, merkeziyetsiz ağa ulaşmaya yardımcı olmak için her zaman geçici bir adım olmuştur. Geliştiriciler, merkeziyetsiz ağa rahatça yükseltebilmek için yeterli süreye sahip olacaklardır. - -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/tr/developing/graph-ts/api.mdx b/website/pages/tr/developing/graph-ts/api.mdx index 96925980ae4b..fa5cfacf8b7a 100644 --- a/website/pages/tr/developing/graph-ts/api.mdx +++ b/website/pages/tr/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API'si --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -Bu sayfa subgraph eşleştirmelerini yazarken bullanılabilen yerleşik API'leri belgelemektedir. Hazır olarak iki çeşit API mevcuttur: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). 
Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## API Referansı @@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. -| Sürüm | Sürüm Notları | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | +| Sürüm | Sürüm Notları | +| :---: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | | 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Dahili Türler @@ -239,20 +241,22 @@ export function handleTransfer(event: TransferEvent): void { // İşlem hash'ını olay kimliği olarak kullanarak bir Transfer varlığı oluşturun let id = event.transaction.hash let transfer = new Transfer(id) - + // Olay parametrelerini kullanarak varlığın özelliklerini ayarlayın transfer.from = event.params.from transfer.to = event.params.to transfer.amount = event.params.amount - + // Varlığı depoya kaydedin transfer.save() -} + } ``` When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Çakışmaları önlemek için her varlık benzersiz bir kimliğe sahip olmalıdır. Genellikle olay parametreleri, kullanılabilecek benzersiz bir tanımlayıcı içerir. Not: Kimlik olarak işlem hash'ını kullanmak aynı işlemdeki başka hiçbir olayın bu hash'ı kullanarak kimlik olarak varlık oluşturmayacağını varsayar. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Depodan varlık yükleme @@ -268,15 +272,18 @@ if (transfer == null) { // Transfer varlığı önceki gibi kullanılır ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Bir blok içinde oluşturulan varlıkları arama As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -Store API, mevcut blokta oluşturulan veya güncellenen varlıkların alınmasını kolaylaştırır. Bunun için tipik bir durum, bir işleyicinin zincir üzerindeki bir etkinlikten bir İşlem oluşturması ve daha sonraki bir işleyicinin varsa bu işleme erişmek istemesidir. İşlemin mevcut olmadığı durumda, subgraph sadece varlığın mevcut olmadığını öğrenmek için veritabanına gitmek zorunda kalacaktır; eğer subgraph yazarı varlığın aynı blokta yaratılmış olması gerektiğini zaten biliyorsa, loadInBlock kullanmak bu veritabanı gidiş gelişini önler. Bazı subgraphlar için, bu kaçırılan aramalar indeksleme süresine önemli ölçüde katkıda bulunabilir. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. 
+ +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // veya ID nasıl oluşturulurmuşsa @@ -503,7 +510,9 @@ Subgraph parçası olan diğer tüm sözleşmelerde oluşturulan koddan içe akt #### Geri Dönen Çağrıları Yönetme -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Bir Geth veya Infura istemcisine bağlı bir Graph düğümünün tüm geri dönüşleri algılamayabileceğini unutmayın, bu durumda Parity istemcisine bağlı bir Graph düğümü kullanmanızı öneririz. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### ABI Kodlama/Çözme @@ -762,44 +771,44 @@ When the type of a value is certain, it can be converted to a [built-in type](#b ### Tip Dönüşümleri Referansı -| Source(s) | Destination | Conversion function | -| ----------------- | ----------------- | ---------------------------- | -| Address | Bytes | none | -| Address | String | s.toHexString() | -| BigDecimal | String | s.toString() | -| BigInt | BigDecimal | s.toBigDecimal() | -| BigInt | Dizgi (onaltılık) | s.toHexString() or s.toHex() | -| BigInt | String (unicode) | s.toString() | -| BigInt | i32 | s.toI32() | -| Boolean | Boolean | none | -| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | -| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | -| Bytes | Dizgi (onaltılık) | s.toHexString() or s.toHex() | -| Bytes | String (unicode) | s.toString() | -| Bytes | String (base58) | s.toBase58() | -| Bytes | i32 | s.toI32() | -| Bytes | u32 | s.toU32() | -| Bytes | JSON | json.fromBytes(s) | -| int8 | i32 | none | -| int32 | i32 | none | -| int32 | BigInt | BigInt.fromI32(s) | -| uint24 | i32 | none | -| int64 - int256 | BigInt | none | -| uint32 - uint256 | BigInt | none | -| JSON | boolean | s.toBool() | -| JSON | i64 | s.toI64() | -| JSON | u64 | s.toU64() | -| JSON | f64 | s.toF64() | -| JSON | BigInt | s.toBigInt() | -| JSON | string | s.toString() | -| JSON | Array | s.toArray() | -| JSON | Object | s.toObject() | -| String | Address | Address.fromString(s) | -| Bytes | Address | Address.fromBytes(s) | -| String | BigInt | BigInt.fromString(s) | -| String | BigDecimal | BigDecimal.fromString(s) | -| Dizgi (onaltılık) | Bytes | ByteArray.fromHexString(s) | -| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | +| Source(s) | Destination | Conversion function | +| -------------------- | -------------------- | ---------------------------- | +| Address | Bytes | none | +| Address | String | s.toHexString() | +| BigDecimal | String | 
s.toString() | +| BigInt | BigDecimal | s.toBigDecimal() | +| BigInt | Dizgi (onaltılık) | s.toHexString() or s.toHex() | +| BigInt | String (unicode) | s.toString() | +| BigInt | i32 | s.toI32() | +| Boolean | Boolean | none | +| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | +| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | +| Bytes | Dizgi (onaltılık) | s.toHexString() or s.toHex() | +| Bytes | String (unicode) | s.toString() | +| Bytes | String (base58) | s.toBase58() | +| Bytes | i32 | s.toI32() | +| Bytes | u32 | s.toU32() | +| Bytes | JSON | json.fromBytes(s) | +| int8 | i32 | none | +| int32 | i32 | none | +| int32 | BigInt | BigInt.fromI32(s) | +| uint24 | i32 | none | +| int64 - int256 | BigInt | none | +| uint32 - uint256 | BigInt | none | +| JSON | boolean | s.toBool() | +| JSON | i64 | s.toI64() | +| JSON | u64 | s.toU64() | +| JSON | f64 | s.toF64() | +| JSON | BigInt | s.toBigInt() | +| JSON | string | s.toString() | +| JSON | Array | s.toArray() | +| JSON | Object | s.toObject() | +| String | Address | Address.fromString(s) | +| Bytes | Address | Address.fromBytes(s) | +| String | BigInt | BigInt.fromString(s) | +| String | BigDecimal | BigDecimal.fromString(s) | +| Dizgi (onaltılık) | Bytes | ByteArray.fromHexString(s) | +| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) | ### Veri Kaynağı Meta Verileri diff --git a/website/pages/tr/developing/supported-networks.json b/website/pages/tr/developing/supported-networks.json index 6e5afdc4c92f..172107900351 100644 --- a/website/pages/tr/developing/supported-networks.json +++ b/website/pages/tr/developing/supported-networks.json @@ -2,7 +2,7 @@ "network": "Ağ", "cliName": "CLI Adı", "chainId": "Zincir Kimliği", - "hostedService": "Barındırılan Hizmet", + "hostedService": "Barındırılan hizmet", "subgraphStudio": "Subgraph Stüdyosu", "decentralizedNetwork": "Merkeziyetsiz Ağ", "integrationType": "Integration Type" diff --git a/website/pages/tr/developing/supported-networks.mdx b/website/pages/tr/developing/supported-networks.mdx index b2e82c63136e..9673a684405b 100644 --- a/website/pages/tr/developing/supported-networks.mdx +++ b/website/pages/tr/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - Merkeziyetsiz ağda hangi özelliklerin desteklendiğinin tam listesi için [bu sayfaya](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) göz atın. 
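To make the type-conversion reference above easier to apply, here is a minimal AssemblyScript sketch (not part of the original docs) that exercises a few of the listed conversions; the function name, addresses, and values are illustrative placeholders and assume a standard `graph-ts` setup.

```typescript
import { Address, BigDecimal, BigInt, ByteArray } from '@graphprotocol/graph-ts'

// Illustrative helper only; each conversion below corresponds to a row of the table above.
export function conversionExamples(): void {
  // String -> Address, then Address -> String (hex)
  let owner = Address.fromString('0x0000000000000000000000000000000000000001')
  let ownerHex = owner.toHexString()

  // i32 -> BigInt, then BigInt -> BigDecimal / String
  let amount = BigInt.fromI32(1000)
  let amountDecimal = amount.toBigDecimal()
  let amountString = amount.toString()

  // String -> BigInt / BigDecimal (e.g. when parsing values read from IPFS or JSON)
  let supply = BigInt.fromString('1000000000000000000')
  let price = BigDecimal.fromString('1.25')

  // Hex string -> ByteArray
  let data = ByteArray.fromHexString('0xdeadbeef')
}
```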
diff --git a/website/pages/tr/developing/unit-testing-framework.mdx b/website/pages/tr/developing/unit-testing-framework.mdx index 71a96521c2b3..328a29c5531b 100644 --- a/website/pages/tr/developing/unit-testing-framework.mdx +++ b/website/pages/tr/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ Tutulan kayıt çıktısı test çalışma süresini içerir. İşte buna bir ö > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -Bu, AssemblyScript tarafından desteklenmeyen `console.log`'u kullandığınız anlamına gelmektedir. Lütfen [Logging API](/developing/assemblyscript-api/#logging-api) kullanmayı düşünün +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) Argümanlardaki uyumsuzluk, `graph-ts` ve `matchstick-as` arasındaki uyumsuzluktan kaynaklanır. Bu gibi sorunları düzeltmenin en iyi yolu her şeyi en son yayınlanan sürüme güncellemektir. diff --git a/website/pages/tr/glossary.mdx b/website/pages/tr/glossary.mdx index 23b85c69e0c1..9986408bf126 100644 --- a/website/pages/tr/glossary.mdx +++ b/website/pages/tr/glossary.mdx @@ -10,11 +10,9 @@ title: Glossary - **Uç Nokta**: Bir subgraph'ı sorgulamak için kullanılabilecek bir URL'dir. Subgraph Stüdyo için test uç noktası `https://api.studio.thegraph.com/query///` ve Graph Gezgini uç noktası `https://gateway.thegraph.com/api//subgraphs/id/` şeklindedir. Graph Gezgini uç noktası, Graph'ın merkeziyetsiz ağındaki subgraphları sorgulamak için kullanılır. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **Indexers**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. 
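Relating to the unit-testing note above that `console.log` is not supported by AssemblyScript, the following is a minimal, hypothetical helper showing the `graph-ts` Logging API instead; the function name and message are placeholders, not part of the original docs.

```typescript
import { log } from '@graphprotocol/graph-ts'

// Hypothetical helper usable in a mapping or a Matchstick test.
// The {} placeholders in the message are filled positionally from the string array.
export function logTransfer(from: string, to: string, amount: string): void {
  log.info('Transfer of {} from {} to {}', [amount, from, to])
}
```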
@@ -24,17 +22,17 @@ title: Glossary - **İndeksleyicinin Kendi Stake'i**: İndeksleyicilerin merkeziyetsiz ağa katılmak için stake ettikleri GRT miktarıdır. Minimum 100.000 GRT'dir ve üst sınır yoktur. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegatörler**: GRT'ye sahip olan ve GRT'lerini indeksleyicilere stake eden ağ katılımcıları. Bu, indeksleyicilerin ağdaki subgraph'lerde mevcut paylarını artırmalarına olanak tanır. Buna karşılık, delegatörler, indeksleyicilerin subgraph'leri işlemek için aldıkları indeksleme ödüllerinin bir kısmını alırlar. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegasyon Vergisi**: GRT'yi indeksleyicilere stake ettiklerinde delegatörler tarafından ödenen %0,5'lik bir ücret. Ücreti ödemek için kullanılan GRT yakılır. -- **Küratörler**: Yüksek kaliteli subgraph'leri belirleyen ve bunları küratörlük paylaşımları karşılığında "düzenleyen" (yani üzerlerinde GRT sinyali veren) ağ katılımcılarıdır. İndeksleyiciler bir subgraph'te sorgulama ücreti talep ettiğinde, o subgraph'in küratörlerine %10 dağıtılır. İndeksleyiciler, bir subgraph'teki sinyalle orantılı indeksleme ödülleri kazanır. Sinyal verilen GRT miktarı ile bir subgraph'i indeksleyen indeksleyicilerin sayısı arasında bir korelasyon görüyoruz. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Kurasyon Vergisi**: Küratörler tarafından subgraph'lerde GRT sinyali verildiğinde ödenen %1'lik bir ücrettir. Ücreti ödemek için kullanılan GRT yakılır. -- **Subgraph Tüketicisi**: Bir subgraph'ği sorgulayan herhangi bir uygulama veya kullanıcı. +- **Data Consumer**: Any application or user that queries a subgraph. - **Subgraph Geliştiricisi**: Graph'in merkeziyetsiz ağına bir subgraph inşa eden ve dağıtan bir geliştirici. @@ -46,11 +44,11 @@ title: Glossary 1. **Aktif**: Bir tahsis, zincir üzerinde oluşturulduğunda aktif kabul edilir. Buna tahsis açma denir ve ağa, indeksleyicinin belirli bir subgraph için sorguları aktif olarak indekslediğini ve sunduğunu gösterir. Aktif tahsisler, subgraph'teki sinyal ve tahsis edilen GRT miktarı ile orantılı olarak indeksleme ödülleri tahakkuk ettirir. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. 
If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **Subgraph Stüdyo**: Subgraph'ler oluşturmak, deploy etmek ve yayınlamak için güçlü bir merkeziyetsiz uygulamadır. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: Glossary - **GRT**: Graph'in çalışma yardımcı programı belirtecidir. GRT, ağ katılımcılarına ağa katkıda bulunmaları için ekonomik teşvikler sağlar. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node, subgraph'leri indeksleyen ve elde edilen verileri bir GraphQL API aracılığıyla sorgulanabilir hale getiren bileşendir. 
Bu nedenle, indeksleyici yığınının merkezinde yer alır ve Graph node'unun doğru çalışması, başarılı bir indeksleyici olabilmek için çok önemlidir. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **İndeksleyici Aracı**: İndeksleyici aracı, indeksleyici yığının bir parçasıdır. Ağa kaydolma, Graph node'larına subgraph deploy sürecini ve tahsisleri yönetme dahil olmak üzere indeksleyicinin zincir üzerindeki etkileşimlerini kolaylaştırır. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **Graph Tüketicileri**: Merkeziyetsiz bir şekilde GraphQL tabanlı merkeziyetsiz uygulamalar inşa etmeye yönelik bir kitaplık. @@ -78,10 +76,6 @@ title: Glossary - **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. -- **Bir subgraph'ı Graph Ağı'na _yükseltme_**: Bir subgraph'ı barındırılan hizmetten Graph Ağı'na taşıma işlemi. - -- **Bir subgraph'ın _güncellenmesi_**: Subgraph manifestosunda, şemasında veya eşleştirmelerinde yapılan güncellemelerle yeni bir subgraph sürümü yayınlama işlemi. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. diff --git a/website/pages/tr/index.json b/website/pages/tr/index.json index cfb9b4cd237a..ef3649332398 100644 --- a/website/pages/tr/index.json +++ b/website/pages/tr/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "Subgraph Oluştur", "description": "Subgraph'ler oluşturmak için Studio'yu kullanın" - }, - "migrateFromHostedService": { - "title": "Upgrade from the hosted service", - "description": "Upgrading subgraphs to The Graph Network" } }, "networkRoles": { @@ -60,16 +56,12 @@ "graphExplorer": { "title": "Graph Gezgini", "description": "Subgraph'leri keşfedin ve protokolle etkileşime girin" - }, - "hostedService": { - "title": "Barındırılan Hizmet", - "description": "Create and explore subgraphs on the hosted service" } } }, "supportedNetworks": { "title": "Desteklenen Ağlar", - "description": "The Graph supports the following networks.", - "footer": "For more details, see the {0} page." + "description": "The Graph aşağıdaki ağları destekler.", + "footer": "Daha fazla detay için {0} sayfasına bakın." 
  }
}
diff --git a/website/pages/tr/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/tr/managing/transfer-and-deprecate-a-subgraph.mdx
new file mode 100644
index 000000000000..c13bb886739c
--- /dev/null
+++ b/website/pages/tr/managing/transfer-and-deprecate-a-subgraph.mdx
@@ -0,0 +1,65 @@
+---
+title: Transfer and Deprecate a Subgraph
+---
+
+## Bir subgraph'ın sahipliğini devretme
+
+Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
+
+**Please note the following:**
+
+- Whoever owns the NFT controls the subgraph.
+- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network.
+- You can easily move control of a subgraph to a multi-sig.
+- A community member can create a subgraph on behalf of a DAO.
+
+### View your subgraph as an NFT
+
+To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
+
+```
+https://opensea.io/your-wallet-address
+```
+
+Or a wallet explorer like **Rainbow.me**:
+
+```
+https://rainbow.me/your-wallet-address
+```
+
+### Step-by-Step
+
+To transfer ownership of a subgraph, do the following:
+
+1. Use the UI built into Subgraph Studio:
+
+   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png)
+
+2. Choose the address that you would like to transfer the subgraph to:
+
+   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png)
+
+Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea:
+
+![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png)
+
+## Deprecating a subgraph
+
+Although you cannot delete a subgraph, you can deprecate it on Graph Explorer.
+
+### Step-by-Step
+
+To deprecate your subgraph, do the following:
+
+1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract).
+2. Call `deprecateSubgraph` with your `SubgraphID` as your argument.
+3. Your subgraph will no longer appear in searches on Graph Explorer.
+
+**Please note the following:**
+
+- The owner's wallet should call the `deprecateSubgraph` function.
+- Curators will not be able to signal on the subgraph anymore.
+- Curators that already signaled on the subgraph can withdraw their signal at an average share price.
+- Deprecated subgraphs will show an error message.
+
+> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively.
diff --git a/website/pages/tr/mips-faqs.mdx b/website/pages/tr/mips-faqs.mdx
index da1e9c76231c..ce938d06fe19 100644
--- a/website/pages/tr/mips-faqs.mdx
+++ b/website/pages/tr/mips-faqs.mdx
@@ -6,10 +6,6 @@ title: MIPs FAQs

> Not: MIPs programı Mayıs 2023 itibariyle kapanmıştır. Katılan tüm İndeksleyicilere teşekkür ederiz!

-Graph ekosistemine katılmak için heyecan verici bir zaman! Yaniv Tal, Graph Day 2022](https://thegraph.com/graph-day/2022/) sırasında Graph ekosisteminin uzun yıllardır üzerinde çalıştığı bir an olan [barındırılan hizmetin kullanımdan kaldırılacağını](https://thegraph.com/blog/sunsetting-hosted-service/) duyurdu.
- -Barındırılan hizmetin kullanımdan kaldırılması ve tüm faaliyetlerinin merkeziyetsiz ağa taşınmasını desteklemek için Graph Vakfı [Geçiş Altyapısı Sağlayıcıları (crwd)lbracketdwrcMIPs programını] (https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program) duyurdu. - MIPs programı, Ethereum ana ağının dışındaki zincirleri indekslemek ve Graph protokolü'nün merkeziyetsiz ağı çok zincirli bir altyapı katmanına genişletmesine yardımcı olmak için kaynaklarla İndeksleyicilere desteklemeyi amaçlayan bir teşvik programıdır. MIPs programı, GRT arzının %0,75'inin (75 milyon GRT), %0,5'ini ağın önyüklenmesine katkıda bulunan İndeksleyicileri ödüllendirmek ve %0,25'ini çok zincirli subgraphler kullanan subgraph geliştiricileri için Ağ Hibelerine tahsis etmiştir. diff --git a/website/pages/tr/network/benefits.mdx b/website/pages/tr/network/benefits.mdx index 779c806b5d23..00c6122d1b7a 100644 --- a/website/pages/tr/network/benefits.mdx +++ b/website/pages/tr/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Cost Comparison | Self Hosted | Graph Ağı | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $0+ | $0 per month | -| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | -| Cost per query | $0 | $0 | -| Altyapı | Centralized | Decentralized | -| Geographic redundancy | $750+ per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $750+ | $0 | +| Cost Comparison | Self Hosted | Graph Ağı | +|:----------------------------:|:---------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $0+ | $0 per month | +| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | +| Cost per query | $0 | $0 | +| Altyapı | Centralized | Decentralized | +| Geographic redundancy | $750+ per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Cost Comparison | Self Hosted | Graph Ağı | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $500 per month | $120 per month | -| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~3,000,000 | -| Cost per query | $0 | $0.00004 | -| Altyapı | Centralized | Decentralized | -| Engineering expense | $200 per hour | Included | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $1,650+ | $120 | +| Cost Comparison | Self Hosted | Graph Ağı | +|:----------------------------:|:------------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $500 per month | $120 per month | +| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~3,000,000 | +| Cost per query | $0 | $0.00004 | +| Altyapı | 
Centralized | Decentralized | +| Engineering expense | $200 per hour | Included | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Cost Comparison | Self Hosted | Graph Ağı | -| :-: | :-: | :-: | -| Monthly server cost\* | $1100 per month, per node | $0 | -| Query costs | $4000 | $1,200 per month | -| Number of nodes needed | 10 | Not applicable | -| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~30,000,000 | -| Cost per query | $0 | $0.00004 | -| Altyapı | Centralized | Decentralized | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $11,000+ | $1,200 | +| Cost Comparison | Self Hosted | Graph Ağı | +|:----------------------------:|:-------------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $1100 per month, per node | $0 | +| Query costs | $4000 | $1,200 per month | +| Number of nodes needed | 10 | Not applicable | +| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~30,000,000 | +| Cost per query | $0 | $0.00004 | +| Altyapı | Centralized | Decentralized | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $11,000+ | $1,200 | \*including costs for backup: $50-$100 per month diff --git a/website/pages/tr/network/curating.mdx b/website/pages/tr/network/curating.mdx index 4dd69fee41a8..f8f753b38f87 100644 --- a/website/pages/tr/network/curating.mdx +++ b/website/pages/tr/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. 
Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Signaling on a specific version is especially useful when one subgraph is used b Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Risks 1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. A subgraph can fail due to a bug. 
A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. @@ -78,50 +78,14 @@ Subgraph'ınızı çok sık güncellememeniz önerilir. Daha fazla ayrıntı iç ### 5. Can I sell my curation shares? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Bonding Curve 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Price per shares](/img/price-per-share.png) - -As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. 
Here’s an example of what we mean, see the bonding curve below: - -![Bonding curve](/img/bonding-curve.png) - -Consider we have two curators that mint shares for a subgraph: - -- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. -- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signaling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. - Still confused? Check out our Curation video guide below: diff --git a/website/pages/tr/network/delegating.mdx b/website/pages/tr/network/delegating.mdx index 81824234e072..b96d844ef08b 100644 --- a/website/pages/tr/network/delegating.mdx +++ b/website/pages/tr/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegating --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Delegator Guide -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. 
+ +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,15 +34,19 @@ Listed below are the main risks of being a Delegator in the protocol. Delegators cannot be slashed for bad behavior, but there is a tax on Delegators to disincentivize poor decision-making that could harm the integrity of the network. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### The delegation unbonding period Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
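As a rough sketch of the break-even calculation suggested above: the 0.5% delegation tax comes from the protocol, while the daily reward rate used here is an assumed placeholder, not a documented figure, so treat the result as illustrative only.

```typescript
// Illustrative only: estimates how long it takes to earn back the 0.5% delegation tax.
const DELEGATION_TAX = 0.005 // 0.5% of the delegated GRT is burned on delegation

function daysToRecoverTax(delegatedGrt: number, assumedDailyRewardRate: number): number {
  const burned = delegatedGrt * DELEGATION_TAX   // e.g. 1,000 GRT -> 5 GRT burned
  const effectiveStake = delegatedGrt - burned   // GRT actually delegated
  const dailyRewards = effectiveStake * assumedDailyRewardRate
  return burned / dailyRewards
}

// Delegating 1,000 GRT at an assumed 0.02% daily reward rate:
// 5 GRT burned, ~0.199 GRT/day in rewards, so roughly 25 days to break even.
// The 28-day unbonding period applies on top of this if you later undelegate.
console.log(daysToRecoverTax(1000, 0.0002))
```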
    ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day @@ -41,47 +55,65 @@ Another thing to consider is how to choose an Indexer wisely. If you choose an I ### Choosing a trustworthy Indexer with a fair reward payout for Delegators -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
    ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top Indexer is giving Delegators 90% of the rewards. The middle one is giving Delegators 20%. The bottom one is giving Delegators ~83%.*
    -- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant. +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. + +As you can see, in order to choose the right Indexer, you must consider multiple things. -As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. +- It is highly recommend that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently. +- Many Indexers are very active in Discord and will be happy to answer your questions. +- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. -### Calculating Delegators expected return +## Calculating Delegators Expected Return -A Delegator must consider a lot of factors when determining the return. These include: +A Delegator must consider the following factors to determine a return: -- A technical Delegator can also look at the Indexer's ability to use the Delegated tokens available to them. If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. -- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Consider an Indexer's ability to use the Delegated tokens available to them. + - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Pay attention to the first few days of delegating. + - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low. ### Considering the query fee cut and indexing fee cut -As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the Delegators are getting. 
The formula is: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Delegation Image 3](/img/Delegation-Reward-Formula.png) ### Considering the Indexer's delegation pool -Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the Delegator a share of the pool: +Delegators should consider the proportion of the Delegation Pool they own. -![Share formula](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Share formula](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Considering the delegation capacity -Another thing to consider is the delegation capacity. Currently, the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -89,16 +121,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### MetaMask "Pending Transaction" Bug -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. 
+ +#### Örnek -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. -For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will only be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## Video guide for the network UI +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/tr/network/developing.mdx b/website/pages/tr/network/developing.mdx index 46b76f6632b6..01e441a12ac6 100644 --- a/website/pages/tr/network/developing.mdx +++ b/website/pages/tr/network/developing.mdx @@ -2,52 +2,88 @@ title: Geliştirme --- -Developers are the demand side of The Graph ecosystem. Developers build subgraphs and publish them to The Graph Network. Then, they query live subgraphs with GraphQL in order to power their applications. +To start coding right away, go to [Developer Quick Start](/quick-start/). + +## Genel Bakış + +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. + +On The Graph, you can: + +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. + +### What is GraphQL? + +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. 
+- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
+- Deploy, publish and signal your subgraphs within The Graph Network.
+
+## Subgraph Specifics
+
+### What are subgraphs?
+
+A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
+
+A subgraph primarily consists of the following files:
+
+- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest).
+- `schema.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema).
+- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates event data into the entities defined in your schema.
+
+Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/).

 ## Subgraph Lifecycle

-Subgraphs deployed to the network have a defined lifecycle.
+Here is a general overview of a subgraph’s lifecycle:

-### Yerel olarak geliştirme
+![Subgraph Lifecycle](/img/subgraph-lifecycle.png)

-As with all subgraph development, it starts with local development and testing. Developers can use the same local setup whether they are building for The Graph Network, the hosted service or a local Graph Node, leveraging `graph-cli` and `graph-ts` to build their subgraph. Developers are encouraged to use tools such as [Matchstick](https://github.com/LimeChain/matchstick) for unit testing to improve the robustness of their subgraphs.
+### Yerel olarak geliştirme

-> There are certain constraints on The Graph Network, in terms of feature and network support. Only subgraphs on [supported networks](/developing/supported-networks) will earn indexing rewards, and subgraphs which fetch data from IPFS are also not eligible.
+Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs.

 ### Deploy to Subgraph Studio

-Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected.
-
-### Ağda Yayınlama
+Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:

-When the developer is happy with their subgraph, they can publish it to The Graph Network. This is an on-chain action, which registers the subgraph so that it is discoverable by Indexers. Published subgraphs have a corresponding NFT, which is then easily transferable. The published subgraph has associated metadata, which provides other network participants with useful context and information.
+- Use its staging environment to index the deployed subgraph and make it available for review.
+- Verify that your subgraph doesn't have any indexing errors and works as expected. -### İndekslemeyi Teşvik Eden Sinyal +### Ağda Yayınlama -Published subgraphs are unlikely to be picked up by Indexers without the addition of signal. Signal is locked GRT associated with a given subgraph, which indicates to Indexers that a given subgraph will receive query volume, and also contributes to the indexing rewards available for processing it. Subgraph developers will generally add signal to their subgraph, in order to encourage indexing. Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Sorgulama & Uygulama Geliştirme +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -Once a subgraph has been processed by Indexers and is available for querying, developers can start to use the subgraph in their applications. Developers query subgraphs via a gateway, which forwards their queries to an Indexer who has processed the subgraph, paying query fees in GRT. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Subgraphları Güncelleme +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### Sorgulama & Uygulama Geliştirme -Subgraph Geliştiricisi yükseltmeye hazır olduğunda, subgraphlarını yeni sürüme yönlendirmek için bir işlem başlatabilir. Subgraph'ın güncellenmesi, her sinyali yeni sürüme geçirir (sinyali uygulayan kullanıcının "otomatik geçiş" seçeneğini seçtiğini varsayarsak) ve bu da bir geçiş kesintisine neden olur. Bu sinyal geçişi, İndeksleyicilerin subgraph'ı yeni sürümünü indekslemeye başlamasını sağlamalıdır, böylece yakında sorgulama için kullanılabilir hale gelecektir. 
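+For example, once a subgraph is indexed, a dapp can fetch entities from it with a single GraphQL query. Below is a minimal sketch; the `tokens` entity and its fields are illustrative and depend on the subgraph's own schema:
+
+```graphql
+{
+  tokens(first: 5, orderBy: id) {
+    id
+    owner
+  }
+}
+```
+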
+Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Subgraphları Kullanımdan Kaldırma +Learn more about [querying subgraphs](/querying/querying-the-graph/). -At some point a developer may decide that they no longer need a published subgraph. At that point they may deprecate the subgraph, which returns any signalled GRT to the Curators. +### Subgraphları Güncelleme -### Çeşitli Geliştirici Rolleri +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Some developers will engage with the full subgraph lifecycle on the network, publishing, querying and iterating on their own subgraphs. Some may be focused on subgraph development, building open APIs which others can build on. Some may be application focused, querying subgraphs deployed by others. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Geliştiriciler ve Ağ Ekonomisi +### Deprecating & Transferring Subgraphs -Geliştiriciler ağda önemli bir ekonomik aktördür, indekslemeyi teşvik amacıyla GRT kilitlerler ve en önemlisi ağın birincil değer değişimi olan subgraphları sorgularlar. Subgraph geliştiricileri ayrıca bir subgraph güncellendiğinde GRT yakarlar. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/tr/network/explorer.mdx b/website/pages/tr/network/explorer.mdx index 852af644a69a..d13d254ff6d3 100644 --- a/website/pages/tr/network/explorer.mdx +++ b/website/pages/tr/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Gezgini --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgraph'ler -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. 
+After you finish deploying and publishing your subgraph in Subgraph Studio, click on the "Subgraphs" tab at the top of the navigation bar to access the following:
+
+- Your own finished subgraphs
+- Subgraphs published by others
+- The exact subgraph you want (based on the date created, signal amount, or name).

 ![Gezgin Gürüntüsü 1](/img/Subgraphs-Explorer-Landing.png)

-Bir subgraph tıkladığınızda, test alanında sorguları test edebilecek ve bilinçli kararlar vermek için ağ ayrıntılarından yararlanabileceksiniz. Ayrıca, kendi subgraph'ınız veya başkalarının subgraphlar'ında GRT sinyali vererek indeksleyicilerin bunun önemi ve kalitesinden haberdar olmasını sağlayabileceksiniz. Bu oldukça önemlidir, çünkü bir subgraph'ta sinyal vermek, pnun indekslenmesini teşvik eder, bu da onların nihayetinde sorguları sunmak için ağda görünmeleri anlamına gelir.
+When you click into a subgraph, you will be able to do the following:
+
+- Test queries in the playground and leverage network details to make informed decisions.
+- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality.
+- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.

 ![Gezgin Gürüntüsü 2](/img/Subgraph-Details.png)

-Her bir subgraph'ın özel sayfasında, çeşitli ayrıntılar ortaya çıkmaktadır. Bunlar şunları içerir:
+On each subgraph’s dedicated page, you can do the following:

 - Subgraphlar üzerinde sinyal/sinyalsizlik
 - Grafikler, mevcut dağıtım kimliği ve diğer üst veri gibi daha fazla ayrıntı görüntüleme
@@ -31,26 +45,32 @@ Her bir subgraph'ın özel sayfasında, çeşitli ayrıntılar ortaya çıkmakta

 ## Katılımcılar

-Bu kısımda İndeksleyiciler, Delegatörler ve Küratörler gibi ağ faaliyetlerine katılan tüm kişilerin kuş bakışı bir görüş elde edeceksiniz. Aşağıda, her bir kısmın sizin için ne anlama geldiğini derinlemesine inceleyeceğiz.
+This section provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.

 ### 1. İndeksleyiciler

 ![Gezgin Gürüntüsü 4](/img/Indexer-Pane.png)

-İndeksleyiciler ile başlayalım. İndeksleyiciler protokolün bel kemiğidir, subgraphlar'a stake eden, indeksleyen ve subgraphlar'ı kullanan herkese sorgu sunan kişilerdir. İndeksleyiciler tablosunda, bir İndeksleyicinin temsilci parametrelerini, hisselerini, her bir subgraph'a ne kadar stake ettiklerini, sorgu ücretlerini ve indeksleme ödüllerinden ne kadar gelir elde ettiklerini görebileceksiniz. Derinlemesine incelemeler aşağıda:
+Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.

-- Sorgu Ücreti Kesintisi - İndeksleyici'nin Delegatörlerle bölüşürken tuttuğu sorgu ücreti indirimlerinin %'si
-- Efektif Ödül Kesintisi - delegasyon havuzuna uygulanan indeksleme ödülü kesintisi. Eğer negatifse, indeksleyicinin ödüllerinin bir kısmını verdiği anlamına gelir. Pozitifse, İndeksleyicinin ödüllerinin bir kısmını elinde tuttuğu anlamına gelir
-- Kalan Bekleme Süresi - İndeksleyici'nin yukarıdaki delegatör parametrelerini değiştirebilmesi için kalan süre.
Bekleme süreleri, İndeksleyiciler tarafından delegatör parametrelerini güncellediklerinde ayarlanır -- Depozito - Bu, İndeksleyici'nin kötü niyetli veya yanlış davranışı sonucunda kesilebilecek yatırılmış payıdır -- Delege edilmiş - İndeksleyici tarafından tahsis edilebilen ancak kesilemeyen Delegatörler'in payları -- Tahsis edilmiş - İndeksleyiciler'in indeksledikleri subgraphlar'a aktif olarak ayırdıkları paydır -- Mevcut Delegasyon Kapasitesi - İndekslendiricilerin aşırı delege edilmiş duruma gelmeden önce alabilecekleri delege edilebilecek pay miktarı +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Maksimum Delegasyon Kapasitesi - Endekserin verimli bir şekilde kabul edebileceği en yüksek delegasyon miktarıdır. Fazla delege edilmiş pay, tahsisler veya ödül hesaplamaları için kullanılamaz. -- Sorgu Ücretleri - bu, son kullanıcıların bir İndeksleyiciden gelen sorgular için tüm zaman içinde ödediği toplam ücretlerdir +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - İndeksleyici Ödülleri - bu, İndeksleyici ve Delegatörler tarafından tüm zaman boyunca kazanılan toplam indeksleyici ödülleridir. İndeksleyici ödülleri GRT ihracı yoluyla ödenir. -İndeksleyiciler hem sorgu ücretleri hem de indeksleme ödülleri kazanabilir. İşlevsel olarak bu, ağ katılımcıları GRT'yi bir İndeksleyiciye delege ettiğinde gerçekleşir. Bu, İndeksleyicilerin İndeksleyici parametrelerine bağlı olarak sorgu ücretleri ve ödüller almasına olanak sağlar. İndeksleme parametreleri, tablonun sağ tarafına tıklanarak veya bir İndeksleyicinin profiline girip "Delegate" düğmesine tıklanarak ayarlanır. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. Nasıl İndeksleyici olunacağı hakkında daha fazla bilgi edinmek için [resmi dökümantasyona](/network/indexing) veya [Graph Akademi İndeksleyici kılavuzlarına](https://thegraph.academy/delegators/choosing-indexers/) göz atabilirsiniz. @@ -58,9 +78,13 @@ Nasıl İndeksleyici olunacağı hakkında daha fazla bilgi edinmek için [resmi ### 2. Küratörler -Küratörler, hangi subgraphlar'ın en yüksek kalitede olduğunu belirlemek için subgraphlar'ı analiz eder. 
Bir Küratör potansiyel olarak cazip bir subgraph bulduğunda, bağlanma eğrisi üzerinde sinyal vererek onu kürate edebilir. Küratörler bunu yaparak, İndeksleyicilere hangi subgraphlar'ın yüksek kaliteli olduğunu ve indekslenmesi gerektiğini bildirir.
+Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+
+- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
+  - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+  - The bonding curve incentivizes Curators to curate the highest quality data sources.

-Küratörler topluluk üyeleri, veri kullanıcıları ve hatta GRT tokenlerini bir bağlanma eğrisine yatırarak kendi subgraphlar'ı hakkında sinyal veren subgraph geliştiricileri olabilir. Küratörler GRT yatırarak bir subgraph'ın kürasyon paylarını basarlar. Sonuç olarak Küratörler, sinyal verdikleri subgraph'ın ürettiği sorgu ücretlerinin bir kısmını almaya hak kazanırlar. Bağlanma eğrisi, Küratörleri en yüksek kaliteli veri kaynaklarının küratörlüğünü yapmaya teşvik eder. Bu bölümdeki Küratör tablosu şunları görmenizi sağlayacaktır:
+In the Curator table listed below, you can see:

 - Küratör'ün küratörlüğe başladığı tarih
 - Yatırılan GRT sayısı
@@ -68,34 +92,36 @@ Küratörler topluluk üyeleri, veri kullanıcıları ve hatta GRT tokenlerini b

 ![Gezgin Gürüntüsü 6](/img/Curation-Overview.png)

-Küratör rolü hakkında daha fazla bilgi edinmek istiyorsanız, bunu [Graph Akademi](https://thegraph.academy/curators/) 'nin aşağıdaki bağlantılarını veya [resmi dökümantasyonunu](/network/curating) ziyaret ederek yapabilirsiniz.
+If you want to learn more about the Curator role, you can do so by visiting the [official documentation](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/).

 ### 3. Delegatörler

-Delegatörler, Graph Ağı'nın güvenliğinin ve merkeziyetsizliğinin korunmasında kilit bir rol oynar. GRT tokenlerini bir veya birden fazla indeksleyiciye delege ederek (yani "stake ederek") ağa katılırlar. TDelegatörler olmadan, İndeksleyicilerin önemli ödüller ve ücretler kazanma olasılığı daha düşüktür. Bu nedenle İndeksleyiciler, kazandıkları indeksleme ödüllerinin ve sorgu ücretlerinin bir kısmını Delegatörlere sunarak onları kendilerine delege etmeye teşvik etmeye çalışırlar.
+Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple Indexers.

-Delegatörler ise İndeksleyicileri sırasıyla geçmiş performans, indeksleme ödül oranları ve sorgu ücreti kesintileri gibi bir dizi farklı faktöre göre seçerler. Topluluk içinde sahip oldukları itibar da bu konuda bir faktör olabilir! Seçilen indeksleyicilerle [Graph Discord sunucusu](https://discord.gg/graphprotocol) veya [Graph Forum](https://forum.thegraph.com/)'u üzerinden bağlantı kurmanız önerilir!
+- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees.
+- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
+- Reputation within the community can also be a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)!

 ![Gezgin Gürüntüsü 7](/img/Delegation-Overview.png)

-Delegatörler tablosu, topluluktaki aktif Delegatörleri ve aşağıdaki gibi metrikleri görmenizi sağlayacaktır:
+In the Delegators table, you can see the active Delegators in the community and important metrics:

 - Bir Delegatör'ün delege ettiği İndeksleyici sayısı
 - Bir Delegatör'ün orijinal delegasyonu
 - Biriktirdikleri ancak protokolden çekmedikleri ödüller
 - Protokolden çekildiler gerçekleşmiş ödüller
 - Şu anda protokolde bulunan sahip oldukları toplam GRT miktarı
-- En son delegasyon aldıkları tarih
+- The date they last delegated

-Nasıl Delegatör olunacağı hakkında daha fazla bilgi edinmek istiyorsanız, başka yere bakmanıza gerek yok! Tek yapmanız gereken [resmi dökümantasyona](/network/delegating) veya [Graph Akademi](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers)'ye bakmak.
+If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers).

 ## Ağ

-Ağ bölümünde, küresel APG'lerin yanı sıra her bir dönem bazına geçme ve ağ metriklerini daha ayrıntılı olarak analiz etme olanaklarını göreceksiniz. Bu ayrıntılar size ağın zaman içinde nasıl performans gösterdiğine dair bir fikir verecektir.
+In this section, you can see global KPIs, switch to a per-epoch view, and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time.

-### Overview
+### Genel Bakış

-The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like:
+The overview section has all of the current network metrics as well as some cumulative metrics over time:

 - Mevcut toplam ağ payı
 - İndeksleyiciler ve Delegatörler arasındaki pay paylaşımı
@@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat
 - Kürasyon ödülü, enflasyon oranı ve daha fazlası gibi protokol parametreleri
 - Mevcut dönem ödülleri ve ücretleri

-Bahsetmeye değer birkaç önemli ayrıntı:
+A few key details to note:

-- **Sorgu ücretleri kullanıcılar tarafından üretilen ücretleri temsil eder** ve subgraphlara yönelik tahsisleri kapatıldıktan ve sundukları veriler tüketiciler tarafından doğrulandıktan sonra en az 7 dönemlik bir sürenin ardından (aşağıya bakınız) İndeksleyiciler tarafından talep edilebilir (veya edilemez).
-**İndeksleme ödülleri, İndeksleyicilerin dönem boyunca ağ ihracından talep ettikleri ödül miktarını temsil eder.** Protokol ihracı sabit olmasına rağmen, ödüller yalnızca İndeksleyiciler indeksledikleri subgraphlara yönelik tahsislerini kapattıklarında basılır. Bu nedenle, her dönem başına ödül sayısı değişir (yani, bazı dönemler boyunca, İndeksleyiciler günlerce açık olan tahsisatları toplu olarak kapatmış olabilir).
+- **Query fees represent the fees generated by the consumers**.
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Gezgin Gürüntüsü 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ Dönemler bölümünde, aşağıdaki gibi metrikleri dönem bazında analiz edeb - Aktif dönem, İndeksleyicilerin halihazırda pay tahsis ettiği ve sorgu ücretlerini topladığı dönemdir - Uzlaşma dönemleri, bildirim kanallarının uzlaştırıldığı dönemlerdir. Bu, kullanıcıların kendilerine karşı itirazda bulunması halinde İndeksleyicilerin kesintiye maruz kalacağı anlamına gelir. - Dağıtım dönemleri, bildirim kanallarının dönemler için yerleştiği ve İndeksleyicilerin sorgu ücreti iadelerini talep edebildiği dönemlerdir. - - Sonlandırılmış dönemler, İndeksleyiciler tarafından talep edilecek sorgu ücreti iadesi kalmamış, dolayısıyla sonlandırılmış dönemlerdir. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![Gezgin Gürüntüsü 9](/img/Epoch-Stats.png) ## Kullanıcı Profiliniz -Ağ istatistiklerinden bahsettiğimize göre, şimdi kişisel profilinize geçelim. Kişisel profiliniz, ağa nasıl katılıyor olursanız olun, ağ etkinliğinizi görebileceğiniz yerdir. Kripto cüzdanınız kullanıcı profiliniz olarak işlev görecek ve Kullanıcı Panosu ile şunları görebileceksiniz: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Profile Genel Bakış -Burası yaptığınız tüm mevcut eylemleri görebileceğiniz yerdir. Ayrıca profil bilgilerinizi, açıklamanızı ve web sitenizi de (eğer eklediyseniz) burada bulabilirsiniz. +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![Gezgin Gürüntüsü 10](/img/Profile-Overview.png) ### Subgraphlar Sekmesi -Subgraphlar sekmesine tıklarsanız, yayınlanmış subgraphlar'ı göreceksiniz. Bu, test amacıyla CLI ile dağıtılan herhangi bir subgraph'ı içermeyecektir - subgraphlar yalnızca merkeziyetsiz ağda yayınlandıklarında görünecektir. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Gezgin Gürüntüsü 11](/img/Subgraphs-Overview.png) ### İndeksleme Sekmesi -İndeksleme sekmesine tıklarsanız, subgraplar'a yönelik tüm aktif ve geçmiş tahsisleri içeren bir tablonun yanı sıra bir İndeksleyici olarak geçmiş performansınızı analiz edebileceğiniz ve görebileceğiniz grafikler bulacaksınız. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. 
Bu bölümde ayrıca net İndeksleyici ödülleriniz ve sorgu ücretlerinizle ilgili ayrıntılar da yer alacaktır. Aşağıdaki metrikleri göreceksiniz: @@ -158,7 +189,9 @@ Bu bölümde ayrıca net İndeksleyici ödülleriniz ve sorgu ücretlerinizle il ### Delegasyon Sekmesi -Delegatörler Graph Ağı için önem arz etmektedir. Bir Delegatör, sağlıklı ödül getirisi sağlayacak bir İndeksleyici seçmek için bildiklerini kullanmalıdır. Burada, aktif ve geçmiş delegasyonlarınızın ayrıntılarını ve delege ettiğiniz İndeksleyicilerin metriklerini bulabilirsiniz. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. Sayfanın ilk yarısında, delegasyon grafiğinizin yanı sıra yalnızca ödül grafiğini de görebilirsiniz. Sol tarafta, mevcut delegasyon metriklerinizi yansıtan APG'leri görebilirsiniz. diff --git a/website/pages/tr/network/indexing.mdx b/website/pages/tr/network/indexing.mdx index 1c6318d64907..a25dd472865b 100644 --- a/website/pages/tr/network/indexing.mdx +++ b/website/pages/tr/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: +1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -112,12 +112,12 @@ Indexers may differentiate themselves by applying advanced techniques for making - **Orta** - 100 subgraph ve saniyede 200-500 isteği destekleyen Üretim İndeksleyici. - **Yüksek** - Şu anda kullanılan tüm subgraphları indekslemek ve ilgili trafik için istekleri sunmak için hazırlanmıştır. -| Kurulum | Postgres
    (CPU'lar) | Postgres
    (GB cinsinden bellek) | Postgres
    (TB cinsinden disk) | VM'ler
    (CPU'lar) | VM'ler
    (GB cinsinden bellek) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Düşük | 4 | 8 | 1 | 4 | 16 | -| Standart | 8 | 30 | 1 | 12 | 48 | -| Orta | 16 | 64 | 2 | 32 | 64 | -| Yüksek | 72 | 468 | 3.5 | 48 | 184 | +| Kurulum | Postgres
    (CPU'lar) | Postgres
    (GB cinsinden bellek) | Postgres
    (TB cinsinden disk) | VM'ler
    (CPU'lar) | VM'ler
    (GB cinsinden bellek) | +| -------- |:-----------------------------:|:-----------------------------------------:|:---------------------------------------:|:---------------------------:|:---------------------------------------:| +| Düşük | 4 | 8 | 1 | 4 | 16 | +| Standart | 8 | 30 | 1 | 12 | 48 | +| Orta | 16 | 64 | 2 | 32 | 64 | +| Yüksek | 72 | 468 | 3.5 | 48 | 184 | ### Bir İndeksleyicinin alması gereken bazı temel güvenlik önlemleri nelerdir? @@ -149,20 +149,20 @@ Not: Çevik ölçeklendirmeyi desteklemek için, sorgulama ve indeksleme endişe #### Graph Node -| Port | Amaç | Rotalar | CLI Argümanı | Ortam Değişkeni | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP sunucusu
    ( subgraph sorguları için) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    ( subgraph abonelikleri için) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (dağıtımları yönetmek için) | / | --admin-port | - | -| 8030 | Subgraph indeksleme durum API'si | /graphql | --index-node-port | - | -| 8040 | Prometheus metrikleri | /metrics | --metrics-port | - | +| Port | Amaç | Rotalar | CLI Argümanı | Ortam Değişkeni | +| ---- | ----------------------------------------------------------- | ---------------------------------------------------- | ----------------- | --------------- | +| 8000 | GraphQL HTTP sunucusu
    ( subgraph sorguları için) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    ( subgraph abonelikleri için) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (dağıtımları yönetmek için) | / | --admin-port | - | +| 8030 | Subgraph indeksleme durum API'si | /graphql | --index-node-port | - | +| 8040 | Prometheus metrikleri | /metrics | --metrics-port | - | #### İndeksleyici Hizmeti -| Port | Amaç | Rotalar | CLI Argümanı | Ortam Değişkeni | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP sunucusu
    (ücretli subgraph sorguları için) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrikleri | /metrics | --metrics-port | - | +| Port | Amaç | Rotalar | CLI Argümanı | Ortam Değişkeni | +| ---- | ------------------------------------------------------------------ | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP sunucusu
    (ücretli subgraph sorguları için) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrikleri | /metrics | --metrics-port | - | #### İndeksleyici Aracı @@ -545,7 +545,7 @@ graph indexer status - `graph indexer rules maybe [options] ` — Bir dağıtım için `decisionBasis` öğesini `rules` olarak ayarlayın, böylece İndeksleyici aracısı bu dağıtımı indeksleyip indekslemeyeceğine karar vermek için indeksleme kurallarını kullanacaktır. -- `graph indexer actions get [options] ` - `all` kullanarak bir veya daha fazla eylemi getirin veya tüm eylemleri almak için `action-id`'yi boş bırakın. Belirli bir durumdaki tüm eylemleri yazdırmak için `--status` ek argümanı kullanılabilir. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Kuyruk tahsis eylemi diff --git a/website/pages/tr/network/overview.mdx b/website/pages/tr/network/overview.mdx index 16214028dbc9..0779d9a6cb00 100644 --- a/website/pages/tr/network/overview.mdx +++ b/website/pages/tr/network/overview.mdx @@ -2,14 +2,20 @@ title: Network Overview --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Overview +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Token Economics](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20 used to allocate resources in the network. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/tr/new-chain-integration.mdx b/website/pages/tr/new-chain-integration.mdx index f9e7086bc07f..9f2bc45f75cd 100644 --- a/website/pages/tr/new-chain-integration.mdx +++ b/website/pages/tr/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Yeni Ağların Entegrasyonu +title: New Chain Integration --- -Graph Düğümü şu anda aşağıdaki zincir türlerinden verileri indeksleyebilir: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, EVM JSON-RPC ve [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) aracılığıyla -- NEAR, [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) aracılığıyla -- Cosmos, [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) aracılığıyla -- Arweave, [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) aracılığıyla +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -Bu zincirlerden herhangi biriyle ilgileniyorsanız, entegrasyon Graph Düğümü yapılandırması ve testinden ibarettir. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -Blok zinciri EVM eşdeğeri ise ve istemci/düğüm standart EVM JSON-RPC API'sini sunuyorsa, Graph Düğümü yeni zinciri indeksleyebilmelidir. Daha fazla bilgi için [EVM JSON-RPC'yi test etme](new-chain-integration#testing-an-evm-json-rpc) bölümüne bakın. +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### EVM JSON-RPC'yi test etme -EVM tabanlı olmayan zincirler için, Graph Düğümü blok zinciri verilerini gRPC ve bilinen tip tanımları aracılığıyla alması zorunludur. Bu, [StreamingFast] (https://www.streamingfast.io/) tarafından geliştirilen ve dosya tabanlı ve akış öncelikli bir yaklaşım kullanarak yüksek ölçeklenebilir bir indeksleme blok zinciri çözümü sağlayan yeni bir teknoloji olan [Firehose](firehose/) aracılığıyla yapılabilir. [StreamingFast team](mailto:integrations@streamingfast.io/) geliştirme konusunda yardıma ihtiyacınız varsa StreamingFast ekibine ulaşın. 
+For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## EVM JSON-RPC ve Firehose arasındaki fark +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, bir JSON-RPC toplu talebinde +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -Her ikisi de subgraphlar için uygun olsa da, [Substreams destekli subgraphlar](cookbook/substreams-powered-subgraphs/) oluşturmak gibi [Substreams](substreams/) ile oluşturmak isteyen geliştiriciler için her zaman bir Firehose gereklidir. Ayrıca Firehose, JSON-RPC ile karşılaştırıldığında daha iyi indeksleme hızları sağlar. +### 2. Firehose Integration -Yeni EVM zinciri entegre edicileri, substreams faydaları ve devasa paralelleştirilmiş indeksleme kabiliyetleri göz önüne alındığında Firehose tabanlı yaklaşımı da düşünebilirler. Her ikisinin de desteklenmesi, geliştiricilerin yeni zincir için substreams veya subgraphlar oluşturma arasında seçim yapmasına olanak tanır. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **NOT**: EVM zincirleri için Firehose tabanlı bir entegrasyon, subgraphları düzgün bir şekilde indekslemek için İndeksleyicilerin zincirin arşiv RPC düğümünü çalıştırmasını gerektirecektir. Bunun nedeni, Firehose'un \`eth_call' RPC metodu tarafından erişilebilen akıllı sözleşme durumunu sağlayamamasıdır. (eth_calls'ların [geliştiriciler için iyi bir uygulama olmadığını](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/) hatırlatmakta fayda var) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. 
-## EVM JSON-RPC'yi test etme +#### Specific Firehose Instrumentation for EVM (`geth`) chains -Graph Düğümü'nün bir EVM zincirinden veri alabilmesi için RPC düğümünün aşağıdaki EVM JSON RPC yöntemlerini sunması gerekir: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. -- `eth_getLogs` -- `eth_call` \_(geçmiş bloklar için EIP-1898 ile - arşiv düğümü gerektirir): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, bir JSON-RPC toplu talebinde -- _`trace_filter`_ _(Graph Düğümü'nün çağrı işleyicilerini desteklemesi için opsiyonel olarak gereklidir)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graph Node Configuration +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Yerel ortamınızı hazırlayarak başlayın** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graph Node Configuration + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Graph Düğümü'nü Klonlayın](https://github.com/graphprotocol/graph-node) -2. [Bu satırı](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) yeni ağ adını ve EVM JSON RPC uyumlu URL'yi içerecek şekilde değiştirin - > env var adının kendisini değiştirmeyin. Ağ adı farklı olsa bile `ethereum` olarak kalmalıdır. -3. Bir IPFS düğümü çalıştırın veya Graph tarafından kullanılanı kullanın: https://api.thegraph.com/ipfs/ -**Bir subgraph'ı yerel olarak dağıtarak entegrasyonu test edin** +2. 
Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Basit bir örnek subgraph oluşturun. Bazı seçenekler aşağıdadır: - 1. Önceden paketlenmiş [Gravitar] (https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) akıllı sözleşmesi ve subgraph'ı iyi bir başlangıç noktasıdır - 2. [Bir Graph eklentisi ile Hardhat kullanarak](https://github.com/graphprotocol/hardhat-graph) mevcut herhangi bir akıllı sözleşmeden veya solidity geliştirme ortamından yerel bir subgraph'ı önyükleyin -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Graph Düğümü'nde subgraph'ınızı oluşturun: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Subgraph'ınızı Graph Düğümü'nde yayınlayın: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -Herhangi bir hata olmadığı takdirde Graph Düğü'mü dağıtılan subgraph'ı senkronize ediyor olmalıdır. Senkronizasyon için zaman tanıyın, ardından kayıtlarla yazdırılan API uç noktasına bazı GraphQL sorguları gönderin. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Firehose özellikli yeni bir zincirin entegrasyonu +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Basit bir örnek subgraph oluşturun. Bazı seçenekler aşağıdadır: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Herhangi bir hata olmadığı takdirde Graph Düğü'mü dağıtılan subgraph'ı senkronize ediyor olmalıdır. Senkronizasyon için zaman tanıyın, ardından kayıtlarla yazdırılan API uç noktasına bazı GraphQL sorguları gönderin. -Yeni bir zincirin entegrasyonu, Firehose yaklaşımını kullanarak da mümkündür. Bu, şu anda EVM dışı zincirler için en iyi seçenektir ve substreams desteği için bir gerekliliktir. Ek dokümantasyon, Firehose'un nasıl çalıştığına, yeni bir zincir için Firehose desteği eklemeyi ve onun Graph Düğümü ile entegrasyonunu içerir. Entegre ediciler için önerilen dokümanlar: +## Substreams-powered Subgraphs -1. [Firehose ile ilgili genel dokümanlar](firehose/) -2. [Yeni bir zincir için Firehose desteği ekleme](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Firehose aracılığıyla Graph Düğümü'nün yeni bir zincirle entegrasyonu](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. 
decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/tr/operating-graph-node.mdx b/website/pages/tr/operating-graph-node.mdx index 7a9039808b81..7112df5b91c5 100644 --- a/website/pages/tr/operating-graph-node.mdx +++ b/website/pages/tr/operating-graph-node.mdx @@ -77,13 +77,13 @@ Tam Kubernetes örnek yapılandırması [indeksleyici Github deposunda](https:// Graph Düğümü çalışırken aşağıdaki portları açar: -| Port | Amaç | Rotalar | CLI Argümanı | Ortam Değişkeni | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP sunucusu
    ( subgraph sorguları için) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    ( subgraph abonelikleri için) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (dağıtımları yönetmek için) | / | --admin-port | - | -| 8030 | Subgraph indeksleme durum API'si | /graphql | --index-node-port | - | -| 8040 | Prometheus metrikleri | /metrics | --metrics-port | - | +| Port | Amaç | Rotalar | CLI Argümanı | Ortam Değişkeni | +| ---- | ----------------------------------------------------------- | ---------------------------------------------------- | ----------------- | --------------- | +| 8000 | GraphQL HTTP sunucusu
    ( subgraph sorguları için) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    ( subgraph abonelikleri için) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (dağıtımları yönetmek için) | / | --admin-port | - | +| 8030 | Subgraph indeksleme durum API'si | /graphql | --index-node-port | - | +| 8040 | Prometheus metrikleri | /metrics | --metrics-port | - | > **Önemli**: Bağlantı noktalarını herkese açık olarak açarken dikkatli olun - **yönetim portları** kilitli tutulmalıdır. Bu, Graph Düğümü JSON-RPC uç noktasını içerir. diff --git a/website/pages/tr/querying/graphql-api.mdx b/website/pages/tr/querying/graphql-api.mdx index ba6149cbdd94..54a450425b25 100644 --- a/website/pages/tr/querying/graphql-api.mdx +++ b/website/pages/tr/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Queries +## What is GraphQL? -In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Örnekler @@ -21,7 +29,7 @@ Query for a single `Token` entity defined in your schema: } ``` -> **Not:** Tek bir varlık için sorgulama yaparken `id` alanı zorunludur ve bir dize olmalıdır. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Tüm `Token` varlıklarını sorgulayın: @@ -36,7 +44,10 @@ Tüm `Token` varlıklarını sorgulayın: ### Sıralama -Bir koleksiyonu sorgularken, belirli bir niteliğe göre sıralamak için `orderBy` parametresi kullanılabilir. Ayrıca, sıralama yönünü belirtmek için `orderDirection` kullanılabilir; artan için `asc` veya azalan için `desc`. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### Örnek @@ -53,7 +64,7 @@ Bir koleksiyonu sorgularken, belirli bir niteliğe göre sıralamak için `order Graph Düğümü [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0)'dan itibaren varlıklar iç içe geçmiş varlıklar bazında sıralanabilir. -Aşağıdaki örnekte, tokenleri sahiplerinin adına göre sıralıyoruz: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ Aşağıdaki örnekte, tokenleri sahiplerinin adına göre sıralıyoruz: ### Sayfalandırma -Bir koleksiyonu sorgularken, koleksiyonun başından itibaren sayfalama yapmak için `first` parametresi kullanılabilir. Varsayılan sıralama düzeninin oluşturma zamanına göre değil, artan alfanümerik düzende ID'ye göre olduğunu belirtmekte fayda var. - -Ayrıca, `skip` parametresi varlıkları atlamak ve sayfalandırmak için kullanılabilir. örn. `first:100` ilk 100 varlığı gösterir ve `first:100, skip:100` sonraki 100 varlığı gösterir. 
+When querying a collection, it's best to:

-Sorgular genellikle kötü performans gösterdiğinden çok büyük `skip` değerleri kullanmaktan kaçınmalıdır. Çok sayıda öğeyi almak için, son örnekte gösterildiği gibi bir özniteliğe dayalı olarak varlıklar arasında sayfa açmak çok daha idealdir.
+- Use the `first` parameter to paginate from the beginning of the collection.
+  - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time.
+- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities.
+- Avoid using large `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example.

#### `first`'ün kullanımına örnek

@@ -106,7 +118,7 @@ Bir koleksiyonun ortasındaki varlık gruplarını sorgulamak için `skip` param

#### `first` ve `id_ge`'nin kullanımına örnek

-Bir istemcinin çok sayıda varlığı alması gerekiyorsa, sorguları bir niteliğe dayandırmak ve bu niteliğe göre filtrelemek çok daha performanslıdır. Örneğin, bir istemci bu sorguyu kullanarak çok sayıda token alabilir:
+If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query:

```graphql
query manyTokens($lastID: String) {
@@ -117,11 +129,12 @@ query manyTokens($lastID: String) {
 }
 ```

-İlk seferinde, sorguyu `lastID = ""` ile gönderecek ve sonraki istekler için `lastID`'yi önceki istekteki son varlığın `id` niteliğine ayarlayacaktır. Bu yaklaşım, artan `skip` değerleri kullanmaktan önemli ölçüde daha iyi performans gösterecektir.
+The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values.

### Filtreleme

-Sorgularınızda `where` parametresini kullanarak farklı özellikler için filtreleme yapabilirsiniz. `where` parametresi içerisinde birden fazla değer için filtreleme yapabilirsiniz.
+- You can use the `where` parameter in your queries to filter for different properties.
+- You can filter on multiple values within the `where` parameter.

#### `where`'in kullanımına örnek

@@ -155,7 +168,7 @@ Değer karşılaştırması için `_gt`, `_lte` gibi son ekler kullanabilirsiniz

#### Blok filtreleme için örnek

-Varlıkları `_change_block(number_gte: Int)` ile de filtreleyebilirsiniz. Bu, belirtilen blok içinde veya sonrasında güncellenen varlıkları filtreler.
+You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.

Örneğin bu, son yoklamanızdan bu yana yalnızca değişen varlıkları almak istiyorsanız yararlı olabilir. Ya da alternatif olarak, subgraph'ınızda varlıkların nasıl değiştiğini araştırmak veya hata ayıklamak için yararlı olabilir (bir blok filtresiyle birleştirilirse, yalnızca belirli bir blokta değişen varlıkları izole edebilirsiniz).

@@ -193,7 +206,7 @@ Graph Düğümü'nün [`v0.30.0`](https://github.com/graphprotocol/graph-node/re

##### `AND` Operator

-Aşağıdaki örnekte, `outcome` değeri `succeeded` olan ve `number` değeri `100`'den büyük veya buna eşit olan zorlukları filtreliyoruz.
+The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. ```graphql { @@ -208,7 +221,7 @@ Aşağıdaki örnekte, `outcome` değeri `succeeded` olan ve `number` değeri `1 ``` > **Syntactic sugar:** Yukarıdaki sorguyu, virgülle ayrılmış bir alt ifade geçirerek, `and` operatörünü kaldırarak basitleştirebilirsiniz. -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ Aşağıdaki örnekte, `outcome` değeri `succeeded` olan ve `number` değeri `1 ##### `OR` Operator -Aşağıdaki örnekte, `outcome` değeri `succeeded` olan veya `number` değeri `100` yada daha büyük olan zorlukları filtreliyoruz. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) Varlıklarınızın durumunu yalnızca varsayılan olan en son blok için değil, aynı zamanda geçmişteki rastgele bir blok için de sorgulayabilirsiniz. Bir sorgunun gerçekleşmesi gereken blok, sorguların üst düzey alanlarına bir `block` bağımsız değişkeni eklenerek blok numarası veya blok karması ile belirtilebilir. -Böyle bir sorgunun sonucu zaman içinde değişmeyecektir, yani belirli bir geçmiş blokta sorgu yapmak, ne zaman yürütülürse yürütülsün aynı sonucu verecektir, ancak zincirin başına çok yakın bir blokta sorgu yaptığınız zaman, bu bloğun ana zincirde olmadığı ortaya çıkarsa ve zincir yeniden düzenlenirse sonuç değişebilir. Bir blok nihai olarak kabul edildiğinde takdirde, sorgunun sonucu değişmeyecektir. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Lütfen şunu göz önünde bulundurun ki mevcut uygulama hala belirli sınırlamalara tabidir ve bu garantiyi ihlal edebilir. Uygulama her zaman verilen bir blok hash değerinin ana zincirde olup olmadığını ya da henüz kesinleştirilmemiş bir blok için blok hash değeri ile yapılan sorgunun, sorgu ile eş zamanlı olarak gerçekleşen bir blok yeniden düzenlemesi tarafından etkilenebileceğini bilemeyebilir. Ancak bu durum, blok kesinleştirildiğinde ve ana zincirde bulunduğu biliniyorsa, blok hash değeri ile yapılan sorguların sonuçlarını etkilemez. [Bu sorun](https://github.com/graphprotocol/graph-node/issues/1405), bu sınırlamaların ayrıntılarını detaylı bir şekilde açıklamaktadır. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. 
#### Örnek @@ -322,12 +335,12 @@ Tam metin arama sorgularının kullanması gereken bir zorunlu alanı vardır, b Tam metin arama operatörleri: -| Symbol | Operator | Tanım | -| --- | --- | --- | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | +| Symbol | Operator | Tanım | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `Follow by` | Specify the distance between two words. | +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | #### Örnekler @@ -376,11 +389,11 @@ Graph Düğümü, [graphql-js referans uygulamasını](https://github.com/graphq ## Schema -Veri kaynağınızın şeması, sorgulamak için kullanılabilen varlık tipleri, değerler ve ilişkiler [GraphQL Arayüz Tanımlama Dili (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System) aracılığıyla tanımlanır. +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL şemaları genellikle `queries`, `subscriptions` ve `mutations` için root tipleri tanımlar. Graph yalnızca `queries` destekler. Subgraph'ınız için root `Query` tipi, subgraph bildiriminize dahil edilen GraphQL şemasından otomatik olarak oluşturulur. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Not:** API'miz mutations tipini açığa çıkarmaz çünkü geliştiricilerin uygulamalarından doğrudan temelindeki blok zincire karşı işlemleri gerçekleştirmeleri beklenir. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### Varlıklar diff --git a/website/pages/tr/querying/querying-best-practices.mdx b/website/pages/tr/querying/querying-best-practices.mdx index 3878ac485e8f..c82e59db93a1 100644 --- a/website/pages/tr/querying/querying-best-practices.mdx +++ b/website/pages/tr/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: Querying Best Practices --- -The Graph provides a decentralized way to query data from blockchains. +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. -The Graph network's data is exposed through a GraphQL API, making it easier to query data with the GraphQL language. 
- -This page will guide you through the essential GraphQL language rules and GraphQL queries best practices. +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQL, HTTP aracılığıyla aktarım yapan bir dil ve kurallar bütünüdür. Bu, bir GraphQL API'sini standart `fetch` kullanarak (yerel olarak yada `@whatwg-node/fetch` veya `isomorphic-fetch`) sorgulayabileceğiniz anlamına gelir. -Ancak, ["Bir Uygulamadan Sorgulama"](/querying/querying-from-an-application) bölümünde belirtildiği gibi, aşağıdaki gibi benzersiz özellikleri destekleyen `graph-client`'ımızı kullanmanızı öneririz: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() Daha fazla GraphQL istemci alternatifi ["Bir Uygulamadan Sorgulama"](/querying/querying-from-an-application) bölümünde ele alınmıştır. -GraphQL sorguları sözdiziminin temel kurallarını ele aldığımıza göre, şimdi GraphQL sorgusu yazmanın en iyi uygulamalarına bakalım. - --- ## En İyi Uygulamalar @@ -164,11 +160,11 @@ Bunu yapmak **birçok avantajı** beraberinde getirir: - **Variables can be cached** at server-level - **Queries can be statically analyzed by tools** (more on this in the following sections) -**Not: Alanları statik sorgulara koşullu olarak dahil etme** +### How to include fields conditionally in static queries -`owner` alanını yalnızca belirli bir koşulda dahil etmek isteyebiliriz. +You might want to include the `owner` field only on a particular condition. -Bunun için `@include(if:...)` direktifinden aşağıdaki şekilde yararlanabiliriz: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Not: Zıt direktif `@skip(if: ...)` şeklindedir. +> Not: Zıt direktif `@skip(if: ...)` şeklindedir. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL, "Ne istiyorsanız isteyin" sloganıyla ünlenmiştir. Bu nedenle, GraphQL'de mevcut tüm alanları tek tek listelemeden almanın bir yolu yoktur. -GraphQL API'leri sorgularken, her zaman sadece gerçekten kullanılacak alanları sorgulamayı düşünmelisiniz. - -Aşırı alma'nın(over-fetching) yaygın bir nedeni varlık koleksiyonlarıdır. Varsayılan olarak, sorgular bir koleksiyondaki 100 varlığı getirecektir, bu da genellikle kullanıcıya göstermek için gerçekte kullanılacak olandan çok daha fazladır. Bu nedenle sorgular neredeyse her zaman ilk olarak açıkça ayarlanmalı ve yalnızca gerçekten ihtiyaç duydukları kadar varlık getirdiklerinden emin olmalıdırlar. Bu sadece bir sorgudaki üst düzey koleksiyonlar için değil, aynı zamanda iç içe geçmiş varlık koleksiyonları için de geçerlidir. +- GraphQL API'leri sorgularken, her zaman sadece gerçekten kullanılacak alanları sorgulamayı düşünmelisiniz. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. 
Örneğin, aşağıdaki sorguda: @@ -337,8 +332,8 @@ query { Bu tür tekrarlanan alanlar (`id`, `active`, `status`) birçok sorunu beraberinde getirir: -- harder to read for more extensive queries -- when using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. Sorgunun yeniden yapılandırılmış bir versiyonu aşağıdaki gibi olacaktır: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -GraphQL `fragment` kullanımı okunabilirliği artıracak (özellikle ölçeklendirmede) ve aynı zamanda daha iyi TypeScript tipleri üretilmesini sağlayacaktır. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. Tip oluşturma aracı kullanıldığında, yukarıdaki sorgu uygun bir `DelegateItemFragment` tipi oluşturacaktır (_son "Tools" bölümüne göz atın_). ### GraphQL Fragment do's and don'ts -**Fragment tabanı bir tip olmalıdır** +### Fragment tabanı bir tip olmalıdır Bir Fragment uygulanabilir olmayan bir tipe, kısacası **alanları olmayan bir tipe** dayandırılamaz: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` bir **skalerdir** (yerel "plain" tip) ve bir parçanın tabanı olarak kullanılamaz. -**Fragment Nasıl Yayılır** +#### Fragment Nasıl Yayılır Fragmentler belirli tiplerde tanımlanmıştır ve sorgularda buna göre kullanılmalıdır. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { `Vote` tipi bir parçayı buraya yaymak mümkün değildir. -**Fragment'ı atomik bir veri iş birimi olarak tanımlayın** +#### Fragment'ı atomik bir veri iş birimi olarak tanımlayın -GraphQL Fragment kullanımlarına göre tanımlanmalıdır. +GraphQL `Fragment`s must be defined based on their usage. Çoğu kullanım durumu için, tip başına bir parça tanımlamak (tekrarlanan alan kullanımı veya tip üretimi durumunda) yeterlidir. -İşte Fragment kullanımı için temel bir kural: +Here is a rule of thumb for using fragments: -- when fields of the same type are repeated in a query, group them in a Fragment -- when similar but not the same fields are repeated, create multiple fragments, ex: +- When fields of the same type are repeated in a query, group them in a `Fragment`. +- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## The essential tools +## The Essential Tools ### GraphQL web-based explorers @@ -473,11 +468,11 @@ Bu, sorguları öğrenme ortamında **test etmeden** veya üretimde çalıştır GraphQL [VSCode uzantısı](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql), geliştirme iş akışınız için mükemmel bir eklentidir: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets -- go to definition for fragments and input types +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types Eğer `graphql-eslint` kullanıyorsanız, [ESLint VSCode uzantısı](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) kodunuzdaki hataları ve uyarıları doğru bir şekilde görselleştirmek için olmazsa olmazdır. 
@@ -485,9 +480,9 @@ Eğer `graphql-eslint` kullanıyorsanız, [ESLint VSCode uzantısı](https://mar [JS GraphQL eklentisi](https://plugins.jetbrains.com/plugin/8097-graphql/), GraphQL ile çalışırken deneyiminizi önemli ölçüde arttırcaktır: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -Eklentinin tüm ana özelliklerini gösteren bu [WebStorm makalesinde](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) daha fazla bilgi bulabilirsiniz. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/tr/quick-start.mdx b/website/pages/tr/quick-start.mdx index 6b5178a057f3..86778416e18d 100644 --- a/website/pages/tr/quick-start.mdx +++ b/website/pages/tr/quick-start.mdx @@ -2,24 +2,18 @@ title: Hızlı Başlangıç --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Subgraph'ınızın [desteklenen bir ağdan](/developing/supported-networks) gelen verileri indeksleyeceğinden emin olun. - -Bu rehber, aşağıdakilere sahip olduğunuzu varsayar: +## Prerequisites for this guide - Bir kripto cüzdanı -- Seçtiğiniz ağ üzerinde bir akıllı sözleşme adresi - -## 1. Subgraph Stüdyo'da bir subgraph oluşturun - -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Graph CLI'yi yükleyin +### The Graph CLI'ı Yükleyin -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. Yerel makinenizde aşağıdaki komutlardan birini çalıştırın: @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). 
+ +### 3. Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -Subgraph'ınızı başlattığınızda, CLI aracı sizden aşağıdaki bilgileri isteyecektir: +When you initialize your subgraph, the CLI will ask you for the following information: -- Protokol: Subgraph'ınızın veri indeksleyeceği protokolü seçin -- Subgraph slug: Subgraph'ınız için bir ad oluşturun. Subgraph slug'ınız subgraph'ınız için bir tanımlayıcıdır. -- Subgraph'ınızın oluşturulacağı dizin: yerel dizininizi seçin -- Ethereum ağı (opsiyonel): Subgraph'ınızın hangi EVM uyumlu ağdan veri indeksleyeceğini belirtmeniz gerekebilir -- Sözleşme adresi: Veri sorgulamak istediğiniz akıllı sözleşme adresini bulun -- ABI: ABI otomatik olarak doldurulmuyorsa, JSON dosyası haline manuel olarak girmeniz gerekecektir -- Başlangıç Bloğu: Subgraph'ınız blok zinciri verilerini indekslerken zaman kazanmak için başlangıç bloğunu girmeniz önerilir. Başlangıç bloğunu, sözleşmenizin dağıtıldığı bloğu bularak bulabilirsiniz. -- Sözleşme Adı: Sözleşmenizin adını girin -- Sözleşme olaylarını varlıklar olarak indeksleyin: Yayılan her olay için subgraph'ınıza otomatik olarak eşlemeler ekleyeceğinden bunu true olarak ayarlamanız önerilir -- Başka bir sözleşme ekle (opsiyonel): Başka bir sözleşme ekleyebilirsiniz +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. Subgraph'ınızı başlatırken neyle karşılaşacağınıza dair bir örnek için aşağıdaki ekran görüntüsüne bakın: ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -Önceki komutlar, subgraph'ınızı oluşturmak için bir başlangıç noktası olarak kullanabileceğiniz bir subgraph iskeletini oluşturur. Subgraph'ta değişiklik yaparken, temel olarak üç dosya ile çalışacaksınız: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. 
-- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -Subgraph'ınız yazıldıktan sonra aşağıdaki komutları çalıştırın: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. Subgraph'ınız yazıldıktan sonra aşağıdaki komutları çalıştırın: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- Subgraph'ınızı doğrulayın ve dağıtın. Dağıtım anahtarı Subgraph Stüdyo'daki Subgraph sayfasında bulunabilir. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. Subgraph'ınızı Test Edin - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -Kayıtlar, subgraph'ınızla ilgili herhangi bir hata olup olmadığını size söyleyecektir. Çalışan bir subgraph'ın kayıtları aşağıdaki gibi görünecektir: - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. 
Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -Gas maliyetlerinden tasarruf etmek için, subgraph'ınızı Graph'ın merkeziyetsiz ağında yayınlarken bu düğmeyi seçerek subgraph'ınızı yayınladığınız işlemle aynı işlemde kürate edebilirsiniz: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -Şimdi, subgraph'ınızın Sorgu URL'sine GraphQL sorguları göndererek onu sorgulayabilirsiniz; bu URL'yi sorgu düğmesine tıklayarak bulabilirsiniz. +### 8. 
Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/tr/release-notes/assemblyscript-migration-guide.mdx b/website/pages/tr/release-notes/assemblyscript-migration-guide.mdx index a8bb2e376807..fd305a2ca624 100644 --- a/website/pages/tr/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/tr/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - You'll need to rename your duplicate variables if you had variable shadowing. - ### Null Comparisons - By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - To solve you can simply change the `if` statement to something like this: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript diff --git a/website/pages/tr/sps/introduction.mdx b/website/pages/tr/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/tr/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. 
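+
+For illustration, here is a rough sketch of what the Entity Changes approach can look like on the Substreams side. It assumes the `substreams-entity-change` crate and a hypothetical `Transfers` Protobuf message emitted by an upstream module; the module, message, and field names are placeholders rather than part of any published package, so consult the Substreams documentation linked above for the exact API:
+
+```rust
+// Illustrative sketch only: `Transfers` and its fields are hypothetical
+// placeholders for whatever Protobuf message your upstream module emits.
+use substreams::errors::Error;
+use substreams_entity_change::pb::entity::EntityChanges;
+use substreams_entity_change::tables::Tables;
+
+use crate::pb::example::v1::Transfers; // hypothetical generated Protobuf type
+
+#[substreams::handlers::map]
+fn graph_out(transfers: Transfers) -> Result<EntityChanges, Error> {
+    let mut tables = Tables::new();
+
+    // Emit one "Transfer" row per decoded transfer; graph-node turns these
+    // entity changes directly into subgraph entities, so no AssemblyScript
+    // mapping logic is needed for this path.
+    for transfer in transfers.transfers {
+        tables
+            .create_row("Transfer", transfer.id)
+            .set("amount", transfer.amount)
+            .set("to", transfer.to);
+    }
+
+    Ok(tables.to_entity_changes())
+}
+```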
+ +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/tr/sps/triggers-example.mdx b/website/pages/tr/sps/triggers-example.mdx new file mode 100644 index 000000000000..d71f558acfe2 --- /dev/null +++ b/website/pages/tr/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## Ön Koşullar + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into subgraph entities:

```ts
import { Protobuf } from 'as-proto/assembly'
import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
import { MyTransfer } from '../generated/schema'

export function handleTriggers(bytes: Uint8Array): void {
  const input: protoEvents = Protobuf.decode<protoEvents>(bytes, protoEvents.decode)

  for (let i = 0; i < input.data.length; i++) {
    const event = input.data[i]

    if (event.transfer != null) {
      let entity_id: string = `${event.txnId}-${i}`
      const entity = new MyTransfer(entity_id)
      entity.amount = event.transfer!.instruction!.amount.toString()
      entity.source = event.transfer!.accounts!.source
      entity.designation = event.transfer!.accounts!.destination

      if (event.transfer!.accounts!.signer!.single != null) {
        entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
      } else if (event.transfer!.accounts!.signer!.multisig != null) {
        entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
      }
      entity.save()
    }
  }
}
```

## Step 5: Generate Protobuf Files

To generate Protobuf objects in AssemblyScript, run the following command:

```bash
npm run protogen
```

This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.

## Conclusion

You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.

For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/pages/tr/sps/triggers.mdx b/website/pages/tr/sps/triggers.mdx
new file mode 100644
index 000000000000..ed19635d4768
--- /dev/null
+++ b/website/pages/tr/sps/triggers.mdx
@@ -0,0 +1,37 @@
+---
+title: Substreams Triggers
+---
+
+Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework.
+
+> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container.
+
+The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+  let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+  if (transactions.length == 0) {
+    log.info('No transactions found', [])
+    return
+  }
+
+  for (let i = 0; i < transactions.length; i++) {
+    // 2.
+    let transaction = transactions[i]
+
+    let entity = new Transaction(transaction.hash) // 3.
+    entity.from = transaction.from
+    entity.to = transaction.to
+    entity.save()
+  }
+}
+```
+
+Here's what you’re seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object
+2.
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/tr/substreams.mdx b/website/pages/tr/substreams.mdx index aa52656677c6..529ae72d4677 100644 --- a/website/pages/tr/substreams.mdx +++ b/website/pages/tr/substreams.mdx @@ -4,9 +4,11 @@ title: Substreams ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## Substreams Nasıl Çalışır - 4 Adımda @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Bilgi Dağarcığınızı Genişletin - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/tr/sunrise.mdx b/website/pages/tr/sunrise.mdx index 1e128fe103a0..e40c0b6f7914 100644 --- a/website/pages/tr/sunrise.mdx +++ b/website/pages/tr/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -Bu plan, yeni yayınlanan subgraphlar üzerinde sorgular sunmak için bir yükseltme İndeksleyicisi ve yeni blok zinciri ağlarını Graph'a entegre etme yeteneği de dahil olmak üzere Graph ekosistemindeki önceki birçok önceki gelişmeyi içermektedir. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. 
Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
-
-The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/)
-### How can I get started querying subgraphs on The Graph Network?
-You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph).
+The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/).

## About the Upgrade Indexer

-### What is the upgrade Indexer?
-
-The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed.
-
-The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published.
+> The upgrade Indexer is currently active.

-### What chains does the upgrade Indexer support?
+The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed.

-The upgrade Indexer supports chains that were previously only available on the hosted service.
+### What does the upgrade Indexer do?

-Desteklenen zincirlerin kapsamlı bir listesini inceleyin [here](/developing/supported-networks/).
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published.
+- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/).
+- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them.

### Yükseltme İndeksleyicisini neden Edge & Node çalıştırıyor?

-Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs.
-
-All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council.
+Edge & Node historically maintained the hosted service and, as a result, already has synced data for hosted service subgraphs.

### What does the upgrade indexer mean for existing Indexers?
-Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -Yükseltme İndeksleyicisi ayrıca İndeksleyici topluluğuna Graph Ağı'ndaki subgraphlar ve yeni zincirler konusunda potansiyel talep hakkında bilgi sunmaktadır. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### Bu Delegatörler için ne anlama gelmektedir? -Yükseltme İndeksleyicisi, Delegatörler için büyük bir fırsat sunmaktadır. Daha fazla subgraph barındırılan hizmetten Graph Ağı'na yükseltildikçe, Delegatörler artan ağ etkinliğinden faydalanmaya devam edecektir. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Yükseltme İndeksleyicisi, mevcut İndeksleyicilerle ödüller için rekabet edecek mi? +### Did the upgrade Indexer compete with existing Indexers for rewards? -Hayır, yükseltme İndeksleyicisi yalnızca subgraph başına minimum miktarı tahsis edecek ve indeksleme ödüllerini toplamayacaktır. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### Bu durum subgraph geliştiricilerini nasıl etkileyecek? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### Bu, veri tüketicilerine nasıl fayda sağlar? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### Yükseltme İndeksleyicisi sorguları nasıl fiyatlandıracak? 
- -Yükseltme İndeksleyicisi, sorgu ücreti pazarını etkilememek adına sorguları piyasa fiyatına göre fiyatlandıracaktır. - -### Yükseltme İndeksleyicisi'nin bir subgraph'ı desteklemeyi durdurması için kriterler nelerdir? - -Yükseltme İndeksleyicisi, bir subgraph'a, en az 3 diğer İndeksleyici tarafından sağlanan tutarlı sorgularla yeterli ve başarılı bir şekilde hizmet verilene kadar hizmet verecektir. - -Ayrıca, yükseltme İndeksleyicisi, bir subhraph son 30 günde sorgulanmamış ise desteğini durduracaktır. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Kendi altyapımı çalıştırmam gerekiyor mu? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Subgraph'ınız yeterli kürasyon sinyaline ulaştığında ve diğer İndeksleyiciler tarafından desteklemeye başladığında, yükseltme İndeksleyicisi kademeli olarak azalacak ve diğer İndeksleyicilerin indeksleme ödüllerini ve sorgu ücretlerini toplama fırsatı tanıyacaktır. - -### Kendi indeksleme altyapımı barındırmalı mıyım? - -Kendi projeniz için altyapıyı çalıştırmak, Graph Ağı'nı kullanmaya kıyasla [önemli ölçüde daha fazla kaynak](/network/benefits/) gerektirir. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -Bununla birlikte, hala bir [Graph Düğümü](https://github.com/graphprotocol/graph-node) çalıştırmakla ilgileniyorsanız, subgraph'ınızda ve diğerlerinde veri sunarak indeksleme ödülleri ve sorgu ücretleri kazanmak için Graph Ağı'na [İndeksleyici olarak](https://thegraph.com/blog/how-to-become-indexer/) katılmayı düşünün. - -### Merkezi bir indeksleme sağlayıcısı kullanmalı mıyım? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. 
The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -İşte Graph'ın merkezi barındırmaya göre avantajlarının ayrıntılı bir açıklaması: +### How does the upgrade Indexer price queries? -- **Dayanıklılık ve Yedeklilik**: Merkeziyetsiz sistemler, dağıtık yapıları nedeniyle doğal olarak daha dayanıklı ve esnektir. Veriler tek bir sunucuda veya konumda depolanmaz. Bunun yerine, dünyanın dört bir yanındaki yüzlerce bağımsız İndeksleyici tarafından sunulur. Bu, bir düğümün arızalanması durumunda veri kaybı veya hizmet kesintisi riskini azaltır ve olağanüstü çalışma süreleri (%99,99) sağlar. +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Hizmet Kalitesi**: Etkileyici çalışma süresine ek olarak, Graph Ağı yaklaşık 106 ms medyan sorgu hızı (gecikme) ve barındırılan alternatiflere kıyasla daha yüksek sorgu başarı oranlarına sahiptir. Daha fazla bilgi için [bu bloğa göz atın] (https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Merkezi olmayan yapısı, güvenliği ve şeffaflığı nedeniyle blok zincir ağını seçtiğiniz gibi, aynı şekilde Graph Ağı'nı tercih etmek bu ilkelerin bir devamı niteliğindedir. Veri altyapınızı bu değerlerle uyumlu hale getirerek bütünlük, dayanıklılık ve güvene dayalı bir geliştirme ortamını sağlarsınız. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/tr/supported-network-requirements.mdx b/website/pages/tr/supported-network-requirements.mdx index 5329c6b9dad2..416333a6e291 100644 --- a/website/pages/tr/supported-network-requirements.mdx +++ b/website/pages/tr/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Ağ | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| Ağ | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVMe preferred)<br />
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ |
diff --git a/website/pages/tr/tap.mdx b/website/pages/tr/tap.mdx
new file mode 100644
index 000000000000..16ce75ff72f2
--- /dev/null
+++ b/website/pages/tr/tap.mdx
@@ -0,0 +1,197 @@
+---
+title: TAP Migration Guide
+---
+
+Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust.
+
+## Genel Bakış
+
+[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features:
+
+- Efficiently handles micropayments.
+- Adds a layer of consolidation to on-chain transactions and costs.
+- Allows Indexers to control receipts and payments, guaranteeing payment for queries.
+- Enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders.
+
+## Specifics
+
+TAP allows a sender to make multiple payments to a receiver as **TAP Receipts**, which are then aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
+
+For each query, the gateway will send you a `signed receipt` that is stored in your database. These receipts will then be aggregated by `tap-agent` through an aggregation request, after which you’ll receive a RAV. You can update a RAV by sending it with newer receipts, which will generate a new RAV with an increased value.
+
+### RAV Details
+
+- A RAV is money that is waiting to be sent to the blockchain.
+
+- `tap-agent` will continue to send aggregation requests to ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`.
+
+- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed.
+
+### Redeeming RAV
+
+As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process:
+
+1. An Indexer closes an allocation.
+
+2. During the `` period, `tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`.
+
+3. `indexer-agent` takes all the last RAVs and sends redeem requests to the blockchain, which will update the value of `redeem_at`.
+
+4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction.
+
+   - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. <br />
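+
+Below is a minimal, conceptual sketch of the receipt-to-RAV relationship described above. It is for illustration only: the type names and fields are hypothetical and do not mirror the actual `tap_core` or `tap-agent` APIs, and real receipts and RAVs are signed messages that the components above produce and verify for you.
+
+```typescript
+// Hypothetical shapes, for illustration only — not the real TAP data structures.
+interface Receipt {
+  allocationId: string
+  value: bigint // query fee for a single query
+}
+
+interface Rav {
+  allocationId: string
+  valueAggregate: bigint // total value of all receipts aggregated so far
+}
+
+// Aggregating newer receipts into an existing RAV only ever increases its value.
+function aggregate(previous: Rav | null, receipts: Receipt[]): Rav | null {
+  if (receipts.length === 0) return previous
+  const added = receipts.reduce((sum, r) => sum + r.value, 0n)
+  return {
+    allocationId: receipts[0].allocationId,
+    valueAggregate: (previous?.valueAggregate ?? 0n) + added,
+  }
+}
+```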
+
+## Blockchain Addresses
+
+### Contracts
+
+| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) |
+| ------------------- | -------------------------------------------- | -------------------------------------------- |
+| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` |
+| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` |
+| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` |
+
+### Gateway
+
+| Component | Edge and Node Mainnet (Arbitrum Mainnet) | Edge and Node Testnet (Arbitrum Sepolia) |
+| ---------- | --------------------------------------------- | --------------------------------------------- |
+| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` |
+| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
+| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
+
+### Gereksinimler
+
+In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query it or host it yourself on your `graph-node`.
+
+- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+
+> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+
+## Migration Guide
+
+### Software versions
+
+| Component | Sürüm | Image Link |
+| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- |
+| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) |
+| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) |
+| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) |
+
+### Steps
+
+1. **Indexer Agent**
+
+   - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components).
+   - Pass the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs.
+
+2. **Indexer Service**
+
+   - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+   - Like the older version, you can easily scale Indexer Service horizontally. It is still stateless.
+
+3. **TAP Agent**
+
+   - Run a _single_ instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+
+4. <br />
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Notlar: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/uk/about.mdx b/website/pages/uk/about.mdx index 0ee69ee29eca..ebc3f6d6f360 100644 --- a/website/pages/uk/about.mdx +++ b/website/pages/uk/about.mdx @@ -2,46 +2,66 @@ title: Про The Graph --- -На цій сторінці ви дізнаєтесь, що таке The Graph і як ви можете розпочати працювати з ним. - ## Що таке The Graph? -The Graph - це децентралізований протокол для індексації та запитів щодо даних блокчейну. The Graph дозволяє робити запити про дані, які важко отримати безпосередньо. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -Проєкти зі складними смартконтрактами, такі як [Uniswap](https://uniswap.org/), також НФТ проєкти, до прикладу - [Bored Ape Yacht Club](https://boredapeyachtclub.com/) зберігають свої дані на блокчейні Ethereum. Це робить практично неможливим зчитування будь-чого, окрім основних даних безпосередньо з блокчейну. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -Ви також можете створити власний сервер, обробляти транзакції на ньому, зберігати їх у базі даних і створити на ньому кінцеву точку API для запиту даних. Проте, цей варіант дуже [ресурсозатратний](/network/benefits/), потребує регулярного обслуговування, являє собою єдину точку збою, а також порушує важливі особливості безпеки, що необхідні для децентралізації. +### How The Graph Functions -**Індексування даних блокчейну це дуже важке заняття.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. 
Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## Як працює The Graph +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graph вивчає що і як індексувати в даних Ethereum за допомогою описів до підграфів, які називаються підграф маніфестами. Цей опис допомагає визначити смартконтракти, які представляють певний інтерес для підграфа, також події, що відбулись в цих смартконтрактах, на які варто звернути увагу, а також допомагає зіставити дані про ці події з тими даними, які The Graph буде зберігати у своїй базі даних. +- When creating a subgraph, you need to write a subgraph manifest. -Після того як ви написали `підграф маніфест`, використовуйте the Graph CLI, щоб зберегти значення в IPFS і вказати індексатору почати тестування даних для цього підграфа. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -На цій діаграмі більш детально показано потік даних, що стосуються транзакцій в Ethereum, одразу після розгортання підграф маніфесту: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![Малюнок, що пояснює, як The Graph використовує Graph Node для обслуговування запитів до споживачів даних](/img/graph-dataflow.png) Цей потік даних проходить такі етапи: -1. Додаток відправляє дані в мережу Ethereum через транзакцію в смартконтракті. -2. Під час обробки транзакції смартконтракт видає одну або декілька різних подій. -3. Graph Node постійно сканує Ethereum на наявність нових блоків і даних для вашого підграфа, які вони можуть містити. -4. Graph Node знаходить події на Ethereum для вашого підграфа в цих блоках і запускає надані вами mapping handlers. Mapping - це модуль WASM, який створює або оновлює структуру даних, що зберігаються у Graph Node у відповідь на події на Ethereum. -5. Додаток запитує Graph Node про дані, проіндексовані в блокчейні, використовуючи [кінцеву точку GraphQL](https://graphql.org/learn/). The Graph Node, і собі, переводить запити GraphQL в запити до свого базового сховища даних, щоб отримати ці дані, використовуючи можливості індексації сховища. Dapp відображає ці дані в величезному інтерфейсі для кінцевих користувачів, який вони використовують для створення нових транзакцій на Ethereum. Цикл повторюється. +1. Додаток відправляє дані в мережу Ethereum через транзакцію в смартконтракті. +2. Під час обробки транзакції смартконтракт видає одну або декілька різних подій. +3. Graph Node постійно сканує Ethereum на наявність нових блоків і даних для вашого підграфа, які вони можуть містити. +4. Graph Node знаходить події на Ethereum для вашого підграфа в цих блоках і запускає надані вами mapping handlers. Mapping - це модуль WASM, який створює або оновлює структуру даних, що зберігаються у Graph Node у відповідь на події на Ethereum. +5. Додаток запитує Graph Node про дані, проіндексовані в блокчейні, використовуючи [кінцеву точку GraphQL](https://graphql.org/learn/). The Graph Node, і собі, переводить запити GraphQL в запити до свого базового сховища даних, щоб отримати ці дані, використовуючи можливості індексації сховища. 
Dapp відображає ці дані в величезному інтерфейсі для кінцевих користувачів, який вони використовують для створення нових транзакцій на Ethereum. Цикл повторюється. ## Наступні кроки -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/uk/arbitrum/arbitrum-faq.mdx b/website/pages/uk/arbitrum/arbitrum-faq.mdx index 16104f2abf6b..560d2bf46563 100644 --- a/website/pages/uk/arbitrum/arbitrum-faq.mdx +++ b/website/pages/uk/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Найбільш поширені запитання по Arbitrum Click [here](#billing-on-arbitrum-faqs) if you would like to skip to the Arbitrum Billing FAQs. -## Чому The Graph використовує рішення на L2? +## Why did The Graph implement an L2 Solution? -Внаслідок переходу The Graph на L2, користувачі мережі можуть очікувати: +By scaling The Graph on L2, network participants can now benefit from: - Upwards of 26x savings on gas fees @@ -14,7 +14,7 @@ Click [here](#billing-on-arbitrum-faqs) if you would like to skip to the Arbitru - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers could open and close allocations to index a greater number of subgraphs with greater frequency, developers could deploy and update subgraphs with greater ease, Delegators could delegate GRT with increased frequency, and Curators could add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -41,27 +41,21 @@ Once you have GRT on Arbitrum, you can add it to your billing balance. ## Якщо я розробник підграфів, споживач даних, Індексатор, Куратор або Делегат, що мені потрібно робити зараз? -There is no immediate action required, however, network participants are encouraged to begin moving to Arbitrum to take advantage of the benefits of L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. 
-Core developer teams are working to create L2 transfer tools that will make it significantly easier to move delegation, curation, and subgraphs to Arbitrum. Network participants can expect L2 transfer tools to be available by summer of 2023. +All indexing rewards are now entirely on Arbitrum. -As of April 10th, 2023, 5% of all indexing rewards are being minted on Arbitrum. As network participation increases, and as the Council approves it, indexing rewards will gradually shift from Ethereum to Arbitrum, eventually moving entirely to Arbitrum. - -## Якщо я хочу взяти участь у мережі на L2, що мені потрібно зробити? - -Please help [test the network](https://testnet.thegraph.com/explorer) on L2 and report feedback about your experience in [Discord](https://discord.gg/graphprotocol). - -## Чи є якісь ризики, пов'язані з переходом мережі на L2? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Чи продовжать працювати вже наявні підграфи на Ethrereum? +## Are existing subgraphs on Ethereum working? -Так, звичайно, контракти в мережі The Graph будуть працювати одночасно на Ethereum та Arbitrum до моменту повного переходу на Arbitrum, який заплановано пізніше. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## Чи розгорнутий на Arbitrum новий смартконтракт для GRT? +## Does GRT have a new smart contract deployed on Arbitrum? Yes, GRT has an additional [smart contract on Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). However, the Ethereum mainnet [GRT contract](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) will remain operational. diff --git a/website/pages/uk/billing.mdx b/website/pages/uk/billing.mdx index 47467e108558..b6e73127d865 100644 --- a/website/pages/uk/billing.mdx +++ b/website/pages/uk/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Натисніть на кнопку "Connect Wallet" у правому верхньому куті сторінки. Ви будете перенаправлені на сторінку вибору гаманця. Виберіть той, який вам підходить, і натисніть кнопку "Connect". 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. 
Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. 
For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/uk/chain-integration-overview.mdx b/website/pages/uk/chain-integration-overview.mdx index 2fe6c2580909..a142f3f817f9 100644 --- a/website/pages/uk/chain-integration-overview.mdx +++ b/website/pages/uk/chain-integration-overview.mdx @@ -6,12 +6,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 1. Technical Integration -- Teams work on a Graph Node integration and Firehose for non-EVM based chains. [Here's how](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. ## Stage 2. Integration Validation -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON RPC or Firehose endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph Indexers test the integration on The Graph's testnet. - Core developers and Indexers monitor stability, performance, and data determinism. @@ -38,7 +38,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. 
Anyone can create the PR and GIP; the Foundation would help with Council approval. -### 3. How much time will this process take? +### 3. How much time will the process of reaching full protocol support take? The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. @@ -46,4 +46,4 @@ Protocol support for indexing rewards depends on the stakeholders' bandwidth to ### 4. How will priorities be handled? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/uk/cookbook/arweave.mdx b/website/pages/uk/cookbook/arweave.mdx index f0284b6d5cf1..c7159f6e7bb2 100644 --- a/website/pages/uk/cookbook/arweave.mdx +++ b/website/pages/uk/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Schema definition describes the structure of the resulting subgraph database and Обробники для виконання подій написані на мові [AssemblyScript](https://www.assemblyscript.org/). -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/uk/cookbook/base-testnet.mdx b/website/pages/uk/cookbook/base-testnet.mdx index 33d4dc7876af..a7657cd1a3cd 100644 --- a/website/pages/uk/cookbook/base-testnet.mdx +++ b/website/pages/uk/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ Your subgraph slug is an identifier for your subgraph. The CLI tool will walk yo The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- Схема (schema.graphql) - схема The GraphQL визначає, які дані ви хочете отримати з підграфа. - AssemblyScript Mappings (mapping.ts) - Це код, який транслює дані з ваших джерел даних до елементів, визначених у схемі. -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). 
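+
+As a rough sketch of how those three files fit together, a mapping handler for a hypothetical `Transfer` event might look like the following. The event, entity, and import names are placeholders that `graph codegen` would generate from your own ABI and schema; they are not part of the Base template itself:
+
+```typescript
+// src/mapping.ts — illustrative only; adjust names to your contract and schema
+import { Transfer as TransferEvent } from '../generated/MyContract/MyContract'
+import { Transfer } from '../generated/schema'
+
+export function handleTransfer(event: TransferEvent): void {
+  // Use the transaction hash + log index as a unique entity id
+  let id = event.transaction.hash.toHexString() + '-' + event.logIndex.toString()
+  let entity = new Transfer(id)
+  entity.from = event.params.from
+  entity.to = event.params.to
+  entity.value = event.params.value
+  entity.blockNumber = event.block.number
+  entity.save()
+}
+```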
diff --git a/website/pages/uk/cookbook/cosmos.mdx b/website/pages/uk/cookbook/cosmos.mdx index 8a191b4f9914..838cc1efec74 100644 --- a/website/pages/uk/cookbook/cosmos.mdx +++ b/website/pages/uk/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and Обробники для виконання подій написані на мові [AssemblyScript](https://www.assemblyscript.org/). -Індексація Cosmos вводить специфічні для Cosmos типи даних до [AssemblyScript API](/developing/assemblyscript-api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/uk/cookbook/grafting.mdx b/website/pages/uk/cookbook/grafting.mdx index 46b93c05d891..4e8339df3ee3 100644 --- a/website/pages/uk/cookbook/grafting.mdx +++ b/website/pages/uk/cookbook/grafting.mdx @@ -22,7 +22,7 @@ title: Замініть контракт та збережіть його іст - [Grafting](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -У цьому матеріалі ми розглянемо базовий випадок використання. Ми замінимо наявний контракт на ідентичний (з новою адресою, але тим самим кодом). +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Важливе зауваження щодо графтингу при оновленні в мережі @@ -30,7 +30,7 @@ title: Замініть контракт та збережіть його іст ### Чому це так важливо? -Grafting - це потужна функція, яка дозволяє " накладати" один підграф на інший, ефективно переносячи історичні дані з наявного підграфа в нову версію. Хоча це ефективний спосіб зберегти дані та заощадити час на індексацію, але при перенесенні з хостингу в децентралізовану мережу можуть виникнути складнощі та потенційні проблеми. Неможливо трансплантувати підграф з The Graph Network назад до хостингового сервісу або Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Найкращі практики @@ -80,7 +80,7 @@ dataSources: ``` - Джерелом даних `Lock` є адреса abi та адреса контракту, яку ми отримаємо під час компіляції та розгортання контракту -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - Розділ `mapping` визначає тригери, що нас цікавлять, і функції, які мають бути запущені у відповідь на ці тригери. У цьому випадку ми очікуємо на `Withdrawal` і після цього викликаємо функцію `handleWithdrawal` коли вона з'являється. ## Визначення Grafting Manifest @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. 
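+
+To confirm that the graft actually carried the historical data over, you can query the new deployment once it has synced past the graft block. The snippet below is only a hedged sketch: it assumes the tutorial's handler stores `Withdrawal` entities, and the endpoint is a placeholder for your own subgraph query URL.
+
+```typescript
+// check-graft.ts — quick sanity check that pre-graft history is present (Node 18+ for fetch)
+const QUERY_URL = '<your subgraph query endpoint>'
+
+async function main(): Promise<void> {
+  const response = await fetch(QUERY_URL, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({ query: '{ withdrawals(first: 5) { id } }' }),
+  })
+  const { data } = await response.json()
+  // Entities indexed before the graft block should already be returned,
+  // even though the new deployment never indexed those blocks itself.
+  console.log(data.withdrawals)
+}
+
+main().catch(console.error)
+```
+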
## Додаткові матеріали -Якщо ви хочете отримати більше досвіду роботи зі процесом графтингу, ось кілька прикладів популярних контрактів: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/uk/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx b/website/pages/uk/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx index 86af97bcd350..e37d83acbe78 100644 --- a/website/pages/uk/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx +++ b/website/pages/uk/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx @@ -2,7 +2,7 @@ title: How to Secure API Keys Using Next.js Server Components --- -## Короткий огляд +## Overview We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key). diff --git a/website/pages/uk/cookbook/near.mdx b/website/pages/uk/cookbook/near.mdx index d1e79f20540a..5e01cd954498 100644 --- a/website/pages/uk/cookbook/near.mdx +++ b/website/pages/uk/cookbook/near.mdx @@ -37,7 +37,7 @@ There are three aspects of subgraph definition: **schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph#the-graphql-schema). -**AssemblyScript Mappings:** [AssemblyScript code](/developing/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. During subgraph development there are two key commands: @@ -98,7 +98,7 @@ Schema definition describes the structure of the resulting subgraph database and Обробники для виконання подій написані на мові [AssemblyScript](https://www.assemblyscript.org/). -NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/assemblyscript-api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/developing/assemblyscript-api) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. 
A new `json.fromString(...)` function is available as part of the [JSON API](/developing/assemblyscript-api#json-api) to allow developers to easily process these logs. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph diff --git a/website/pages/uk/cookbook/subgraph-uncrashable.mdx b/website/pages/uk/cookbook/subgraph-uncrashable.mdx index 989310a3f9a0..0cc91a0fa2c3 100644 --- a/website/pages/uk/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/uk/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Safe Subgraph Code Generator - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. These logs can be viewed in the The Graph's hosted service under the 'Logs' section. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. diff --git a/website/pages/uk/cookbook/upgrading-a-subgraph.mdx b/website/pages/uk/cookbook/upgrading-a-subgraph.mdx index 5502b16d9288..a546f02c0800 100644 --- a/website/pages/uk/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/uk/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ Make sure **Update Subgraph Details in Explorer** is checked and click on **Save ## Deprecating a Subgraph on The Graph Network -Follow the steps [here](/managing/deprecating-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## Querying a Subgraph + Billing on The Graph Network diff --git a/website/pages/uk/deploying/multiple-networks.mdx b/website/pages/uk/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..dc2b8e533430 --- /dev/null +++ b/website/pages/uk/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## Deploying the subgraph to multiple networks + +In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... 
+ --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +This is what your networks config file should look like: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Now we can run one of the following commands: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Using subgraph.yaml template + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +and + +```json +{ + "network": "sepolia", + "address": "0xabc..." 
+} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... + "mustache": "^3.1.0" + } +} +``` + +To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Sepolia: +yarn prepare:sepolia && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. + +## Subgraph Studio subgraph archive policy + +A subgraph version in Studio is archived if and only if it meets the following criteria: + +- The version is not published to the network (or pending publish) +- The version was created 45 or more days ago +- The subgraph hasn't been queried in 30 days + +In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. + +Every subgraph affected with this policy has an option to bring the version in question back. + +## Checking subgraph health + +If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). 
Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/uk/developing/creating-a-subgraph.mdx b/website/pages/uk/developing/creating-a-subgraph.mdx index bacce9064883..cb780426b392 100644 --- a/website/pages/uk/developing/creating-a-subgraph.mdx +++ b/website/pages/uk/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: Creating a Subgraph --- -A subgraph extracts data from a blockchain, processing it and storing it so that it can be easily queried via GraphQL. +This detailed guide provides instructions to successfully create a subgraph. -![Defining a Subgraph](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -The subgraph definition consists of a few files: +![Defining a Subgraph](/img/defining-a-subgraph.png) -- `subgraph.yaml`: a YAML file containing the subgraph manifest +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: a GraphQL schema that defines what data is stored for your subgraph, and how to query it via GraphQL +## Getting Started -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. `mapping.ts` in this tutorial) +### Install the Graph CLI -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. 
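After running one of the install commands below, a quick sanity check can confirm the toolchain is in place; the printed versions will differ from the illustrative ones shown here.

```bash
# Verify prerequisites and the installed CLI (version numbers are illustrative)
node --version    # e.g. v20.x
npm --version     # or: yarn --version
graph --version   # the installed @graphprotocol/graph-cli release
```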
-## Install the Graph CLI +На вашому локальному комп'ютері запустіть одну з наведених нижче команд: -The Graph CLI is written in JavaScript, and you will need to install either `yarn` or `npm` to use it; it is assumed that you have yarn in what follows. +#### Using [npm](https://www.npmjs.com/) -Once you have `yarn`, install the Graph CLI by running +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Install with yarn:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## From An Existing Contract +### From an existing contract -The following command creates a subgraph that indexes all events of an existing contract. It attempts to fetch the contract ABI from Etherscan and falls back to requesting a local file path. If any of the optional arguments are missing, it takes you through an interactive form. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -The `` is the ID of your subgraph in Subgraph Studio, it can be found on your subgraph details page. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## From An Example Subgraph +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. 
+- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Add New dataSources To An Existing Subgraph +## Add new `dataSources` to an existing subgraph -Since `v0.31.0` the `graph-cli` supports adding new dataSources to an existing subgraph through the `graph add` command. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
    [] @@ -78,22 +88,45 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -The `add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option), and will create a new `dataSource` in the same way that `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: + + - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- The contract `address` will be written to the `networks.json` for the relevant network. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: +## Components of a subgraph -- If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. -- If `false`: a new entity & event handler should be created with `${dataSourceName}{EventName}`. +### The Subgraph Manifest -The contract `address` will be written to the `networks.json` for the relevant network. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **Note:** When using the interactive cli, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. +The **subgraph definition** consists of the following files: -## The Subgraph Manifest +- `subgraph.yaml`: Contains the subgraph manifest -The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -For the example subgraph, `subgraph.yaml` is: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ A single subgraph can index data from multiple smart contracts. Add an entry for The triggers for a data source within a block are ordered using the following process: -1. Event and call triggers are first ordered by transaction index within the block. -2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. -3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. +1. Event and call triggers are first ordered by transaction index within the block. +2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. +3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. These ordering rules are subject to change. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. 
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| Version | Release notes | +|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### Getting The ABIs @@ -442,16 +475,16 @@ For some entity types the `id` is constructed from the id's of two other entitie We support the following scalars in our GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. 
| -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| Type | Description | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enums @@ -593,7 +626,7 @@ This more elaborate way of storing many-to-many relationships will result in les #### Adding comments to the schema -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. -> +> > If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. ### Data Source Context @@ -930,7 +963,7 @@ dataSources: ``` > **Note:** The contract creation block can be quickly looked up on Etherscan: -> +> > 1. Search for the contract by entering its address in the search bar. > 2. Click on the creation transaction hash in the `Contract Creator` section. > 3. Load the transaction details page where you'll find the start block for that contract. @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. 
`"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. ``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1388,7 +1398,7 @@ File data sources are a new subgraph functionality for accessing off-chain data > This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data. -### Короткий огляд +### Overview Rather than fetching files "in line" during handler execution, this introduces templates which can be spawned as new data sources for a given file identifier. These new data sources fetch the files, retrying if they are unsuccessful, running a dedicated handler when the file is found. @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### Create a new handler to process files -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). The CID of the file as a readable string can be accessed via the `dataSource` as follows: diff --git a/website/pages/uk/developing/developer-faqs.mdx b/website/pages/uk/developing/developer-faqs.mdx index 0ace9cbee870..8ef6f36f1ecd 100644 --- a/website/pages/uk/developing/developer-faqs.mdx +++ b/website/pages/uk/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: FAQ для розробників --- -## 1. What is a subgraph? +This page summarizes some of the most common questions for developers building on The Graph. -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using the Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available to be queried by subgraph consumers. +## Subgraph Related -## 2. Can I delete my subgraph? +### 1. What is a subgraph? -It is not possible to delete subgraphs once they are created. +A subgraph is a custom API built on blockchain data. 
Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Can I change my subgraph name? +### 2. What is the first step to create a subgraph? -No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Can I change the GitHub account associated with my subgraph? +### 3. Can I still create a subgraph if my smart contracts don't have events? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. Am I still able to create a subgraph if my smart contracts don't have events? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are by far the fastest way to retrieve useful data. +### 4. Can I change the GitHub account associated with my subgraph? -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended, as performance will be significantly slower. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. How do I update a subgraph on mainnet? -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. How are templates different from data sources? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Templates allow you to create data sources on the fly, while your subgraph is indexing. 
It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) upfront you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates). -## 8. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -You can run the following command: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. How do I call a contract function or access a public state variable from my subgraph mappings? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). 
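As a rough illustration of accessing smart contract state from a mapping, the sketch below assumes `graph codegen` has produced a binding for a dataSource named `ERC20`; the import path, class, event, and function names are placeholders, not part of any specific subgraph.

```typescript
// Sketch only: assumes a generated `ERC20` binding with a `Transfer` event
// and a `totalSupply()` view function.
import { ERC20, Transfer as TransferEvent } from '../generated/ERC20/ERC20'

export function handleTransfer(event: TransferEvent): void {
  // Bind the generated contract class to the address that emitted the event
  let contract = ERC20.bind(event.address)

  // `try_` variants return a wrapper instead of aborting the mapping if the call reverts
  let supply = contract.try_totalSupply()
  if (!supply.reverted) {
    // supply.value is a BigInt that can be copied onto an entity field
  }
}
```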
+When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +You can run the following command: -## 11. I want to contribute or add a GitHub issue. Where can I find the open source repositories? +```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -## 13. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 15. Can I delete my subgraph? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +You can find the list of the supported networks [here](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Yes. You can do this by importing `graph-ts` as per the example below: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Can I import ethers.js or other JS libraries into my subgraph mappings? - -Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +## Indexing & Querying Related -## 17. Is it possible to specify what block to start indexing on? +### 19. 
Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -102,19 +121,7 @@ Yes! Try the following command, substituting "organization/subgraphName" with th curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. What networks are supported by The Graph? - -You can find the list of the supported networks [here](/developing/supported-networks). - -## 21. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? - -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. - -## 22. Is this possible to use Apollo Federation on top of graph-node? - -Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. - -## 23. Is there a limit to how many objects The Graph can return per query? +### 22. Is there a limit to how many objects The Graph can return per query? By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: @@ -122,24 +129,19 @@ By default, query responses are limited to 100 items per collection. If you want someCollection(first: 1000, skip: ) { ... } ``` -## 24. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? 
Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 25. Where do I go to find my current subgraph on the hosted service? - -Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? - -The Graph will never charge for the hosted service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The hosted service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. - -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/uk/developing/graph-ts/api.mdx b/website/pages/uk/developing/graph-ts/api.mdx index 46442dfa941e..8fc1f4b48b61 100644 --- a/website/pages/uk/developing/graph-ts/api.mdx +++ b/website/pages/uk/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). 
-This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## API Reference @@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. -| Version | Release notes | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Release notes | +| :-----: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Loading entities from the store @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Looking up entities created withing a block As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a Transaction from some on-chain event, and a later handler wants to access this transaction if it exists. In the case where the transaction does not exist, the subgraph will have to go to the database just to find out that the entity does not exist; if the subgraph author already knows that the entity must have been created in the same block, using loadInBlock avoids this database roundtrip. For some subgraphs, these missed lookups can contribute significantly to the indexing time. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. 
+- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Any other contract that is part of the subgraph can be imported from the generat #### Handling Reverted Calls -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Encoding/Decoding ABI diff --git a/website/pages/uk/developing/supported-networks.mdx b/website/pages/uk/developing/supported-networks.mdx index b1654dee17b2..6a1805deefee 100644 --- a/website/pages/uk/developing/supported-networks.mdx +++ b/website/pages/uk/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/uk/developing/unit-testing-framework.mdx b/website/pages/uk/developing/unit-testing-framework.mdx index f826a5ccb209..308135181ccb 100644 --- a/website/pages/uk/developing/unit-testing-framework.mdx +++ b/website/pages/uk/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ The log output includes the test run duration. Here's an example: > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -This means you have used `console.log` in your code, which is not supported by AssemblyScript. 
Please consider using the [Logging API](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version. diff --git a/website/pages/uk/glossary.mdx b/website/pages/uk/glossary.mdx index 392cc6b55f27..0025fb36342f 100644 --- a/website/pages/uk/glossary.mdx +++ b/website/pages/uk/glossary.mdx @@ -10,11 +10,9 @@ title: Глосарій - **Кінцева точка**: URL-адреса, яку можна використовувати для запиту підграфа. Кінцевою точкою тестування для Subgraph Studio є `https://api.studio.thegraph.com/query///`, а кінцевою точкою для Graph Explorer є `https://gateway.thegraph.com/api//subgraphs/id/`. Кінцева точка Graph Explorer використовується для запиту підграфів у децентралізованій мережі The Graph. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **Індексатори**: Користувачі мережі, які запускають ноди індексації для індексування даних з блокчейнів та обслуговування запитів до GraphQL. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Винагорода індексаторам в GRT складається з двох компонентів: певна комісія за запити (query fee rebates) та винагорода за індексацію (indexing rewards). @@ -24,17 +22,17 @@ title: Глосарій - **Indexer's Self Stake**: Сума токенів GRT, яку Індексатори стейкають, щоб брати участь у децентралізації мережі. Мінімальна сума становить 100 000 GRT, без верхнього ліміту. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. 
It supports numerous blockchains that were previously only available on the hosted service. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Делегати**: Користувачі мережі, які володіють токеном GRT та делегують його Індексаторам. Це дозволяє індексаторам збільшити кількість застейканих токенів на власних підграфах всередині мережі. Натомість делегати отримують частину винагороди за індексування, яку індексатори отримують за свою роботу. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegation Tax**: Комісія у розмірі 0.5% сплачується делегатами, коли вони делегують власні GRT індексаторам. GRT, який використовувався для сплати цієї комісії, спалюється. -- **Куратори**: Користувачі мережі, які ідентифікують якісні підграфи та "курують" їх (тобто подають на них сигнал за допомогою власних GRT токенів) в обмін на винагороди за кураторство. Коли індексатори отримують плату за запит до підграфа, 10% розподіляється між Кураторами цього підграфа. Індексатори отримують винагороду за індексацію пропорційно кількості сигналів на підграфі. Ми бачимо кореляцію між кількістю GRT, які були подані в якості сигналу та кількістю індексаторів, що індексують підграф. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: Комісія у розмірі 1%, яку сплачують куратори, коли подають сигнал в токенах GRT на підграфи. Відповідно GRT, що використовується для сплати цієї комісії, спалюється. -- **Користувач підграфа**: Будь-яка програма або користувач, який робить запит до підграфа. +- **Data Consumer**: Any application or user that queries a subgraph. - **Розробник підграфа**: Розробник, який створює та розгортає підграф у децентралізованій мережі The Graph. @@ -46,11 +44,11 @@ title: Глосарій 1. **Активний**: Розподіл вважається активним, коли він створюється всередині мережі. Це називається відкриттям розподілу і вказує мережі на те, що індексатор активно індексує та обслуговує запити для конкретного підграфа. При активному розподілі нараховується винагорода за індексацію пропорційно до кількості сигналів на підграфі та суми розподілених GRT токенів. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. 
An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **Subgraph Studio**: Потужний додаток для створення, розгортання та публікації підграфів. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: Глосарій - **GRT**: Функціональний токен екосистеми The Graph. GRT надає економічні заохочення учасникам мережі за їх внесок у її розвиток. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node - це компонент, який індексує підграфи та робить отримані дані доступними для запитів через GraphQL API. Загалом, він є центральним елементом стека індексатора, і правильна робота Graph Node має вирішальне значення для успішної роботи індексатора. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. 
-- **Агент індексації**: Агент індексації (Indexer agent) є частиною стека індексаторів. Він полегшує взаємодію Індексатора всередині мережі, включаючи реєстрацію, управління розгортанням підграфів у Graph Node та управління розподілом. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **Клієнт The Graph**: Бібліотека для децентралізованого створення додатків на основі GraphQL. @@ -78,10 +76,6 @@ title: Глосарій - **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. -- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network. - -- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. diff --git a/website/pages/uk/index.json b/website/pages/uk/index.json index 14650a885c28..3d97cf38ab78 100644 --- a/website/pages/uk/index.json +++ b/website/pages/uk/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "Створення субграфа", "description": "Використання студії для створення субграфів" - }, - "migrateFromHostedService": { - "title": "Upgrade from the hosted service", - "description": "Upgrading subgraphs to The Graph Network" } }, "networkRoles": { diff --git a/website/pages/uk/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/uk/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..6bdd183f72d5 --- /dev/null +++ b/website/pages/uk/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## Transferring ownership of a subgraph + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. + +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-addres +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2. 
Choose the address that you would like to transfer the subgraph to: + + ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) + +Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: + +![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) + +## Deprecating a subgraph + +Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. + +### Step-by-Step + +To deprecate your subgraph, do the following: + +1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). +2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. +3. Your subgraph will no longer appear in searches on Graph Explorer. + +**Please note the following:** + +- The owner's wallet should call the `deprecateSubgraph` function. +- Curators will not be able to signal on the subgraph anymore. +- Curators that already signaled on the subgraph can withdraw their signal at an average share price. +- Deprecated subgraphs will show an error message. + +> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/uk/mips-faqs.mdx b/website/pages/uk/mips-faqs.mdx index 7f39862c23ec..fdfbe08cf5c7 100644 --- a/website/pages/uk/mips-faqs.mdx +++ b/website/pages/uk/mips-faqs.mdx @@ -6,10 +6,6 @@ title: MIPs FAQs > Примітка: програма MIPs закрита з травня 2023 року. Дякуємо всім індексаторам, які взяли участь! -Це чудовий час для того, щоб взяти участь в екосистемі The graph. Протягом [Graph Day 2022] (https://thegraph.com/graph-day/2022/) Yaniv Tal анонсував [sunsetting of the hosted service](https://thegraph.com/blog/sunsetting-hosted-service/), момент, для якого екосистема The Graph працювала протягом багатьох років. - -Щоб підтримати завершення роботи хостингового сервісу та перенесення всієї активності в децентралізовану мережу, The Graph Foundation оголосив про [Migration Infrastructure Providers (crwd)lbracketdwrcMIPs program] (https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program). - Програма MIPs - це оплачувана програма для Індексаторів, яка надає їм необхідні ресурси для індексації різних мереж, за межами мережі Ethereum і допомагає протоколу The Graph розширити децентралізовану мережу до рівня мультичейн інфраструктури. На програму MIPs виділено 0.75% від загальної кількості токенів GRT (75 мільйонів GRT), з яких 0.5% буде використано для нагороди Індексаторів, які роблять свій вклад на бутстрап мережі та 0.25% зарезервовані під Network Grants для розробників підграфів, які використовують мультичейн підграфи. 
diff --git a/website/pages/uk/network/benefits.mdx b/website/pages/uk/network/benefits.mdx index 2ddce9526ffe..5c4ddf308daf 100644 --- a/website/pages/uk/network/benefits.mdx +++ b/website/pages/uk/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Порівняння вартості послуг | Самостійний хостинг | Graph мережа | -| :-: | :-: | :-: | -| Щомісячна плата за сервер\* | $350 на місяць | $0 | -| Вартість запитів | $0+ | $0 per month | -| Час технічного обслуговування | $400 на місяць | Немає, вбудовані в мережу з глобально розподіленими індексаторами | -| Кількість запитів за місяць | Обмежується інфраструктурними можливостями | 100,000 (Free Plan) | -| Вартість одного запиту | $0 | $0 | -| Інфраструктура | Централізована | Децентралізована | -| Географічне резервування | $750+ за кожну додаткову ноду | Включено | -| Час безвідмовної роботи | Варіюється | 99.9%+ | -| Загальна сума щомісячних витрат | $750+ | $0 | +| Порівняння вартості послуг | Самостійний хостинг | Graph мережа | +|:-----------------------------------------:|:------------------------------------------:|:-----------------------------------------------------------------:| +| Щомісячна плата за сервер\* | $350 на місяць | $0 | +| Вартість запитів | $0+ | $0 per month | +| Час технічного обслуговування | $400 на місяць | Немає, вбудовані в мережу з глобально розподіленими індексаторами | +| Кількість запитів за місяць | Обмежується інфраструктурними можливостями | 100,000 (Free Plan) | +| Вартість одного запиту | $0 | $0 | +| Інфраструктура | Централізована | Децентралізована | +| Географічне резервування | $750+ за кожну додаткову ноду | Включено | +| Час безвідмовної роботи | Варіюється | 99.9%+ | +| Загальна сума щомісячних витрат | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Порівняння вартості послуг | Самостійний хостинг | Graph мережа | -| :-: | :-: | :-: | -| Щомісячна плата за сервер\* | $350 на місяць | $0 | -| Вартість запитів | $500 на місяць | $120 per month | -| Час технічного обслуговування | $800 на місяць | Немає, вбудовані в мережу з глобально розподіленими індексаторами | -| Кількість запитів за місяць | Обмежується інфраструктурними можливостями | ~3,000,000 | -| Вартість одного запиту | $0 | $0.00004 | -| Інфраструктура | Централізована | Децентралізована | -| Інженерно-технічні витрати | $200 на годину | Включено | -| Географічне резервування | $1,200 загальних витрат на кожну додаткову ноду | Включено | -| Час безвідмовної роботи | Варіюється | 99.9%+ | -| Загальна сума щомісячних витрат | $1,650+ | $120 | +| Порівняння вартості послуг | Самостійний хостинг | Graph мережа | +|:-----------------------------------------:|:-----------------------------------------------:|:-----------------------------------------------------------------:| +| Щомісячна плата за сервер\* | $350 на місяць | $0 | +| Вартість запитів | $500 на місяць | $120 per month | +| Час технічного обслуговування | $800 на місяць | Немає, вбудовані в мережу з глобально розподіленими індексаторами | +| Кількість запитів за місяць | Обмежується інфраструктурними можливостями | ~3,000,000 | +| Вартість одного запиту | $0 | $0.00004 | +| Інфраструктура | Централізована | Децентралізована | +| Інженерно-технічні витрати | $200 на годину | Включено | +| Географічне резервування | $1,200 загальних витрат на кожну додаткову ноду | Включено | +| Час безвідмовної роботи | Варіюється | 99.9%+ | +| Загальна 
сума щомісячних витрат | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Порівняння вартості послуг | Самостійний хостинг | Graph мережа | -| :-: | :-: | :-: | -| Щомісячна плата за сервер\* | $1100 на місяць, за одну ноду | $0 | -| Вартість запитів | $4000 | $1,200 per month | -| Кількість необхідних нод | 10 | Не стосується | -| Час технічного обслуговування | $6,000 і більше на місяць | Немає, вбудовані в мережу з глобально розподіленими індексаторами | -| Кількість запитів за місяць | Обмежується інфраструктурними можливостями | ~30,000,000 | -| Вартість одного запиту | $0 | $0.00004 | -| Інфраструктура | Централізована | Децентралізована | -| Географічне резервування | $1,200 загальних витрат на кожну додаткову ноду | Включено | -| Час безвідмовної роботи | Варіюється | 99.9%+ | -| Загальна сума щомісячних витрат | $11,000+ | $1,200 | +| Порівняння вартості послуг | Самостійний хостинг | Graph мережа | +|:-----------------------------------------:|:-----------------------------------------------:|:-----------------------------------------------------------------:| +| Щомісячна плата за сервер\* | $1100 на місяць, за одну ноду | $0 | +| Вартість запитів | $4000 | $1,200 per month | +| Кількість необхідних нод | 10 | Не стосується | +| Час технічного обслуговування | $6,000 і більше на місяць | Немає, вбудовані в мережу з глобально розподіленими індексаторами | +| Кількість запитів за місяць | Обмежується інфраструктурними можливостями | ~30,000,000 | +| Вартість одного запиту | $0 | $0.00004 | +| Інфраструктура | Централізована | Децентралізована | +| Географічне резервування | $1,200 загальних витрат на кожну додаткову ноду | Включено | +| Час безвідмовної роботи | Варіюється | 99.9%+ | +| Загальна сума щомісячних витрат | $11,000+ | $1,200 | \*включаючи витрати на резервне копіювання: $50-$100 на місяць diff --git a/website/pages/uk/network/curating.mdx b/website/pages/uk/network/curating.mdx index eae0e3463a43..82552ced3f3c 100644 --- a/website/pages/uk/network/curating.mdx +++ b/website/pages/uk/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. 
Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Within the Curator tab in Graph Explorer, curators will be able to signal and un Автоматичне переміщення вашого сигналу на найновішу версію може бути корисним для того, щоб ви продовжували нараховувати комісію за запити. Кожного разу, коли ви здійснюєте кураторську роботу, стягується плата за в розмірі 1%. Ви також сплачуєте 0,5% за кураторство, за кожну міграцію. Розробникам підграфів не рекомендується часто публікувати нові версії - вони повинні сплачувати 0.5% кураторам за всі автоматично переміщені частки кураторів. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Ризики 1. Ринок запитів за своєю суттю молодий в Graph, і існує ризик того, що ваш %APY може бути нижчим, ніж ви очікуєте, через динаміку ринку, що зароджується. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. 
For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. Підграф може не працювати через різноманітні помилки (баги). Підграф, що не працює не стягує комісію за запити. В результаті вам доведеться почекати, поки розробник виправить усі помилки й випустить нову версію. - Якщо ви підключені до найновішої версії підграфу, ваші частки будуть автоматично перенесені до цієї нової версії. При цьому буде стягуватися податок на в розмірі 0,5%. @@ -65,7 +65,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th Пошук найякісніших підграфів є складним завданням, але до нього можна підійти різними способами. Як куратор, ви хочете шукати надійні підграфи, які сприяють збільшенню обсягу запитів. Надійний підграф може бути цінним, якщо він є повноцінним, чітким і відповідає потребам dApp в інформації. Погано організований підграф може потребувати перегляду або повторної редакції, а також може в кінцевому підсумку виявитися неефективним. Для кураторів дуже важливо переглянути інфраструктуру або код підграфа, щоб оцінити, чи є він цінним. Як результат: -- Куратори можуть використовувати своє бачення мережі, щоб спробувати передбачити, як окремий підграф може генерувати більший або менший обсяг запитів у майбутньому +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. What’s the cost of updating a subgraph? @@ -78,50 +78,14 @@ It’s suggested that you don’t update your subgraphs too frequently. See the ### 5. Чи можу я продати свої частки куратора? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. 
- -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Крива зв'язування 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Ціна за частку](/img/price-per-share.png) - -В результаті, ціна зростає лінійно, що означає, що з часом придбати одну частку буде все дорожче і дорожче. Ось приклад того, що ми маємо на увазі, див. криву зв'язування нижче: - -![Крива зв'язування](/img/bonding-curve.png) - -Уявімо, що у нас є два куратори, які мінтять частки для підграфа: - -- Куратор А першим подає сигнал на підграфа. Додавши до кривої 120 000 GRT, вони можуть змінтити 2000 штук. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Оскільки обидва куратори володіють половиною загальної кількості кураторських часток, вони отримають рівну суму винагороди за кураторство. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- Куратор, що залишився, тепер отримає всю кураторську винагороду за цей підграф. Якби вони продали свої частки, щоб вивести GRT, вони б отримали лише 120 000 GRT. -- **TLDR:** Оцінка кількості GRT кураторських часток визначається кривою зв'язування і може бути волатильною. Існує потенціал для зазнання значних втрат. Сигналізація на ранній стадії означає, що ви вкладаєте менше GRT на кожну частку. Це означає, що ви заробляєте більше винагороди, як куратор за кожний GRT токен, ніж пізніші куратори за один і той самий підграф. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -У випадку з Graph, застосовується, [впроваджена Bancor](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA), формула кривої зв'язування. - Все ще спантеличені? 
Перегляньте нашу відео інструкцію нижче: diff --git a/website/pages/uk/network/delegating.mdx b/website/pages/uk/network/delegating.mdx index be101087c591..185816122650 100644 --- a/website/pages/uk/network/delegating.mdx +++ b/website/pages/uk/network/delegating.mdx @@ -2,13 +2,23 @@ title: Делегування --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Гайд для делегатів -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,64 +34,84 @@ There are three sections in this guide: Делегати не можуть бути виключені за погану поведінку, але існує певний штраф для них, щоб позбавити стимулу щодо прийняття поганих рішень, які можуть зашкодити цілісності мережі. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### Період розблокування делегації Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. 
If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
    - ![Delegation unbonding](/img/Delegation-Unbonding.png) _Зверніть увагу на комісію в розмірі 0,5% в Інтерфейсі для - делегацій, а також на 28-денний період розблокування._ + ![Delegation unbonding](/img/Delegation-Unbonding.png) _Зверніть увагу на комісію в розмірі 0,5% в Інтерфейсі для делегацій, а також на 28-денний період розблокування._
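As a back-of-the-envelope illustration of the earlier suggestion to work out when the 0.5% delegation tax is earned back, here is a small sketch. The effective annual reward rate below is an assumed example value only; actual returns vary per Indexer and over time.

```typescript
// Rough sketch: days needed to earn back the 0.5% delegation tax.
// The 12% effective annual reward rate is an assumed example, not a protocol value.
const delegatedGRT = 1_000
const delegationTax = delegatedGRT * 0.005 // 5 GRT burned when delegating 1,000 GRT
const assumedAnnualRewardRate = 0.12 // assumption: varies per Indexer and network conditions
const dailyRewards = (delegatedGRT * assumedAnnualRewardRate) / 365
const daysToBreakEven = delegationTax / dailyRewards
console.log(`~${daysToBreakEven.toFixed(1)} days to recover the tax`) // ≈ 15.2 days
```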
    ### Вибір надійного індексатора зі справедливою винагородою для делегатів -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
    - ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *Найкращий індексатор віддає делегатам 90% від суми винагороди. - Середній - 20%. Найменший індексатор дає ~83%. * + ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *Найкращий індексатор віддає делегатам 90% від суми винагороди. Середній - 20%. Найменший індексатор дає ~83%. *
    -- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant. +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. + +As you can see, in order to choose the right Indexer, you must consider multiple things. -As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. +- It is highly recommend that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently. +- Many Indexers are very active in Discord and will be happy to answer your questions. +- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. -### Розрахунок очікуваного прибутку +## Calculating Delegators Expected Return -A Delegator must consider a lot of factors when determining the return. These include: +A Delegator must consider the following factors to determine a return: -- Технічно підкований делегат також може подивитися на ефективність індексатора використовувати делеговані токени, які йому доступні. Якщо індексатор не використовує всі наявні токени, він не отримує максимального прибутку, який міг би отримати сам або його делегати. -- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Consider an Indexer's ability to use the Delegated tokens available to them. + - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Pay attention to the first few days of delegating. + - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low. ### Розглянемо питання про отримання частини винагороди за індексацію та за запити -Як описано в попередніх підрозділах, ви повинні обрати індексатора, який є відкритим і чесним у призначенні комісій за запити та за індексацію. Делегат також повинен звернути увагу на час перезарядження параметрів, щоб побачити, який часовий буфер він має. Після цього досить просто розрахувати суму винагороди, яку отримують делегати. 
Формула наступна: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Delegation Image 3](/img/Delegation-Reward-Formula.png) ### Розглянемо пул делегацій індексатора -Ще один момент, який повинен враховувати делегат, — це те, якою часткою пулу делегацій він володіє. Всі винагороди за делегацію розподіляються рівномірно, шляхом простого перерозподілу пулу, що визначається сумою, яку делегат вніс до пулу. Це дає делегату певну частку пулу: +Delegators should consider the proportion of the Delegation Pool they own. -![Share formula](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Share formula](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Розглянемо ліміт делегування -Інша річ, яку слід враховувати, — це ліміт делегування. Зараз коефіцієнт делегування встановлений на рівні 16. Це означає, що якщо індексатор застейкав 1 000 000 GRT, його ліміт делегації становить 16 000 000 GRT делегованих токенів, які він може використовувати в протоколі. Будь-які делеговані токени, що перевищують цю суму, розмивають всі винагороди делегата. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -89,16 +119,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### Баг в гаманці MetaMask "Pending Transaction" -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. 
+ +#### Example -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. -For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will only be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## Відеоінструкція, по взаємодії з інтерфейсом мережі Graph +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/uk/network/developing.mdx b/website/pages/uk/network/developing.mdx index 867bcb966f11..336d82eb7552 100644 --- a/website/pages/uk/network/developing.mdx +++ b/website/pages/uk/network/developing.mdx @@ -2,52 +2,88 @@ title: Розробка --- -Розробники — це джерело попиту в екосистемі Graph. Розробники створюють підграфи та розміщують їх в мережі Graph. Потім вони налаштовують коректну роботу підграфів по роботі з запитами, з допомогою GraphQL, аби забезпечити роботу своїх додатків. +To start coding right away, go to [Developer Quick Start](/quick-start/). + +## Overview + +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. + +On The Graph, you can: + +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. + +### What is GraphQL? + +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. 
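To make this concrete, below is a minimal sketch of what a subgraph query looks like when written in TypeScript. The `tokens` entity and its `id`/`owner` fields are hypothetical placeholders; a real query must match the schema of the subgraph being queried.

```typescript
// A GraphQL query is a plain string describing the entities and fields you want.
// `tokens`, `id`, and `owner` are hypothetical — they must match the subgraph's schema.
const query = `
  {
    tokens(first: 5, orderBy: id) {
      id
      owner
    }
  }
`

// The response mirrors the shape of the query.
interface TokensResult {
  tokens: { id: string; owner: string }[]
}
```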
+ +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? + +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `subgraph.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). +- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). ## Subgraph Lifecycle -Запущені в мережі підграфи мають визначений життєвий цикл. +Here is a general overview of a subgraph’s lifecycle: -### Build locally +![Subgraph Lifecycle](/img/subgraph-lifecycle.png) -Як і будь-яка інша розробка підграфів, вона починається з локальної розробки та тестування. Розробники можуть використовувати ті ж самі локальні налаштування, незалежно від того, чи створюють вони для мережі Graph, хостинговий сервіс або локальну Graph ноду, використовуючи при цьому `graph-cli` та `graph-ts` для створення свого підграфа. Розробникам рекомендується використовувати такі інструменти, як [Matchstick](https://github.com/LimeChain/matchstick) для модульного тестування, аби підвищити надійність своїх підграфів. +### Build locally -> Існують певні обмеження для мережі The Graph з точки зору можливостей і підтримки різних мереж. Тільки підграфи у [мережах, які підтримуються](/developing/supported-networks) отримають винагороду за індексацію, а підграфи, які отримують дані з IPFS, не мають права на отримання винагороди. +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Deploy to Subgraph Studio -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. - -### Publish to the Network +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -Коли розробник повністю задоволений своїм підграфом, він може розмістити його в The Graph Network. Ця дія відбувається в основній мережі, де проходить реєстрація підграфа таким чином, щоб його могли знайти індексатори. Опубліковані підграфи мають відповідний NFT, який потім легко передається. 
Опублікований підграф має пов'язані з ним метадані, які надають іншим учасникам мережі потрібний довідковий матеріал та інформацію. +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. -### Signal to Encourage Indexing +### Publish to the Network -Опубліковані підграфи навряд чи будуть підтримуватись індексаторами без наявності сигналу. Сигнал — це заблоковані GRT, приналежні до даного підграфа, які вказують індексаторам на те, що даний підграф буде мати змогу опрацьовувати великий обсяг запитів, а також сприяти отриманню винагород за індексацію, які можна отримати за обробку запитів. Розробники субграфів, як правило, додають сигнал до свого підграфа, щоб заохочувати проведення індексації. Сторонні куратори також можуть подавати сигнал на певний підграф, якщо вони вважають, що такий підграф може сприяти зростанню обсягу запитів в майбутньому. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Querying & Application Development +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -Після того, як підграф підтримується індексаторами та доступний для запитів, розробники можуть починати використовувати підграф у своїх додатках. Розробники запитують підграфи через шлюз, який перенаправляє їхні запити до індексатора, який підтримує підграф, оплачуючи збір за запити в GRT. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Updating Subgraphs +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### Querying & Application Development -Once the Subgraph Developer is ready to update, they can initiate a transaction to point their subgraph at the new version. 
Updating the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Deprecating Subgraphs +Learn more about [querying subgraphs](/querying/querying-the-graph/). -У певний момент розробник може вирішити, що йому більше не потрібен опублікований підграф. У цей момент він може видалити підграф, який повертає будь-яку суму токенів GRT кураторам, які використували їх для подачі сигналу. +### Updating Subgraphs -### Diverse Developer Roles +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Деякі розробники будуть брати участь у повному життєвому циклі субграфів у мережі, публікуючи, запитуючи та ітеруючи свої власні підграфи. Деякі з них можуть зосередитися на розробці субграфів, створюючи відкриті API, на яких можуть базуватися інші. Деякі можуть бути орієнтовані на додатки, запитуючи підграфи, розміщені іншими. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Developers and Network Economics +### Deprecating & Transferring Subgraphs -Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is updated. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/uk/network/explorer.mdx b/website/pages/uk/network/explorer.mdx index 9d8fa4ed8a8a..1246158a97f3 100644 --- a/website/pages/uk/network/explorer.mdx +++ b/website/pages/uk/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Підграфи -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. 
Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. +Once you finish deploying and publishing your subgraph in Subgraph Studio, click on the “Subgraphs” tab at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -Коли ви натискаєте на підграфа, ви зможете тестувати подачу запитів в дослідницькому просторі та зможете використовувати отримувані дані для прийняття обґрунтованих рішень. Ви також можете подати сигнал за допомогою GRT токенів на свій власний підграф або на підграфи інших, щоб індексатори знали про його цінність і якість певного підграфа. Це дуже важливо, тому що сигналізація підграфа стимулює його індексацію, а це означає, що він з'явиться в мережі, щоб в кінцевому підсумку обслуговувати запити. +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make Indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![Explorer Image 2](/img/Subgraph-Details.png) -На спеціальній сторінці присвяченій кожному підграфу, висвітлюється декілька важливих відомостей. До них відносяться: +On each subgraph’s dedicated page, you can do the following: - Наявність/відсутність сигналу на підграфах - Перегляд додаткових відомостей, таких як діаграми, поточний ID розгортання та інші ключові параметри @@ -31,26 +45,32 @@ First things first, if you just finished deploying and publishing your subgraph ## Учасники -На цій вкладці ви зможете побачити з висоти пташиного польоту всіх користувачів, які беруть участь у діяльності мережі, таких як індексатори, делегати та куратори. Нижче ми детально розглянемо, що кожна вкладка означає для вас. +This section provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. Індексатори ![Explorer Image 4](/img/Indexer-Pane.png) -Почнемо з індексаторів. Індексатори є основою протоколу, оскільки саме вони стейкають на підграфи, індексують їх і обслуговують запити для всіх, хто використовує підграфи. У таблиці "Індексатори" ви зможете побачити параметри делегування індексаторів, їх стейк, скільки вони застейкали на кожен підграф і скільки вони отримали доходу від комісій за запити та винагороду за індексацію. Детальніше про це нижче: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. -- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators -- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards.
If it’s positive, it means that the Indexer is keeping some of their rewards -- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking on the right-hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. To learn more about how to become an Indexer, you can take a look at the [official documentation](/network/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ To learn more about how to become an Indexer, you can take a look at the [offici ### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. 
In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see: +In the Curator table listed below, you can see: - The date the Curator started curating - The number of GRT that was deposited ![Explorer Image 6](/img/Curation-Overview.png) -If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/network/curating) +If you want to learn more about the Curator role, you can do so by visiting the [official documentation](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. Delegators -Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn. +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple Indexers. -Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process.
It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! ![Explorer Image 7](/img/Delegation-Overview.png) -The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: +In the Delegators table, you can see the active Delegators in the community and important metrics: - The number of Indexers a Delegator is delegating towards - A Delegator’s original delegation - The rewards they have accumulated but have not withdrawn from the protocol - The realized rewards they withdrew from the protocol - Total amount of GRT they have currently in the protocol -- The date they last delegated at +- The date they last delegated -If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## Network -In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +In this section, you can see global KPIs, switch to a per-epoch basis, and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. -### Короткий огляд +### Overview -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section includes all the current network metrics as well as some cumulative metrics over time: - The current total network stake - The stake split between the Indexers and their Delegators @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - Protocol parameters such as curation reward, inflation rate, and more - Current epoch rewards and fees -A few key details that are worth mentioning: +A few key details to note: -- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
+ +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ In the Epochs section, you can analyze on a per-epoch basis, metrics such as: - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. - - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![Explorer Image 9](/img/Epoch-Stats.png) ## Your User Profile -Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Profile Overview -This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). +In this section, you can view the following: + +- Any current actions you've taken. +- Your profile information, description, and website (if you added one). ![Explorer Image 10](/img/Profile-Overview.png) ### Subgraphs Tab -If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: @@ -158,7 +189,9 @@ This section will also include details about your net Indexer rewards and net qu ### Delegating Tab -Delegators are important to the Graph Network.
A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. diff --git a/website/pages/uk/network/indexing.mdx b/website/pages/uk/network/indexing.mdx index 56860303c428..c17d4f1573bd 100644 --- a/website/pages/uk/network/indexing.mdx +++ b/website/pages/uk/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Багато інформаційних панелей, створених спільнотою, містять очікувані значення винагород, і їх можна легко перевірити вручну, виконавши ці кроки: -1. Надішліть запит на [підграф в основній мережі](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet), щоб отримати ідентифікатори всіх активних розподілів: +1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql запит indexerAllocations { @@ -113,11 +113,11 @@ Query fees are collected by the gateway and distributed to indexers according to - **Large** - підготовлений для індексації всіх підграфів, що використовуються наразі, і обслуговування запитів на відповідний трафік. | Налаштування | Postgres
    (CPU) | Postgres
    (пам'ять в GB) | Postgres
    (диск у ТБ) | VMs
    (Центральні CPU) | VMs
    (пам'ять у ГБ) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| ------------ |:-------------------------:|:----------------------------------:|:-------------------------------:|:-------------------------------:|:-----------------------------:| +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | ### Яких основних заходів безпеки повинен дотримуватися індексатор? @@ -149,26 +149,26 @@ Query fees are collected by the gateway and distributed to indexers according to #### Graph Node -| Порт | Призначення | Розташування | Аргумент CLI | Перемінна оточення | -| --- | --- | --- | --- | --- | -| 8000 | HTTP-сервер GraphQL
    (для запитів до підграфів) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-порт | - | -| 8001 | GraphQL WS
    (для підписок на підграфи) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (для керування розгортаннями) | / | --admin-port | - | -| 8030 | API стану індексації підграфів | /graphql | --index-node-port | - | -| 8040 | Метрики Prometheus | /metrics | --metrics-port | - | +| Порт | Призначення | Розташування | Аргумент CLI | Перемінна оточення | +| ---- | --------------------------------------------------------- | ---------------------------------------------------- | ----------------- | ------------------ | +| 8000 | HTTP-сервер GraphQL
    (для запитів до підграфів) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-порт | - | +| 8001 | GraphQL WS
    (для підписок на підграфи) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (для керування розгортаннями) | / | --admin-port | - | +| 8030 | API стану індексації підграфів | /graphql | --index-node-port | - | +| 8040 | Метрики Prometheus | /metrics | --metrics-port | - | #### Служба індексації -| Порт | Призначення | Розташування | Аргумент CLI | Перемінна оточення | -| --- | --- | --- | --- | --- | -| 7600 | HTTP-сервер GraphQL
    (для платних запитів до підграфів) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Метрики Prometheus | /metrics | --metrics-port | - | +| Порт | Призначення | Розташування | Аргумент CLI | Перемінна оточення | +| ---- | ----------------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | HTTP-сервер GraphQL
    (для платних запитів до підграфів) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Метрики Prometheus | /metrics | --metrics-port | - | #### Агент індексації -| Порт | Призначення | Розташування | Аргумент CLI | Перемінна оточення | -| --- | --- | --- | --- | --- | -| 8000 | API для керування індексатором | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| Порт | Призначення | Розташування | Аргумент CLI | Перемінна оточення | +| ---- | ------------------------------ | ------------ | ------------------------- | --------------------------------------- | +| 8000 | API для керування індексатором | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | ### Налаштування серверної інфраструктури з використанням Terraform на Google Cloud @@ -545,7 +545,7 @@ graph indexer status - `graph indexer rules maybe [options] ` — установіть `decisionBasis` для розгортання на `rules`, щоб агент індексатора використовував правила індексування, щоб вирішити, чи індексувати це розгортання. -- `graph indexer actions get [options] ` - отримання однієї або декількох дій за допомогою `all` або можливість залишити `action-id` пустим, щоб отримати всі дії. Додатковий аргумент `--status` можна використовувати для виведення всіх дій певного статусу. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` – розподіл черги diff --git a/website/pages/uk/network/overview.mdx b/website/pages/uk/network/overview.mdx index 1a9f3f00fc43..60c044309f09 100644 --- a/website/pages/uk/network/overview.mdx +++ b/website/pages/uk/network/overview.mdx @@ -2,14 +2,20 @@ title: Загальний огляд мережі --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Короткий огляд +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Економіка токенів](/img/Network-roles@2x.png) -Для забезпечення економічної безпеки The Graph Network і цілісності даних, що запитуються, учасники стейкають і використовують Graph токени ([GRT](/tokenomics)). GRT - це функціональний (utility) токен, який існує в мережі ERC-20 та використовується для розподілу ресурсів в мережі. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. 
-Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. diff --git a/website/pages/uk/new-chain-integration.mdx b/website/pages/uk/new-chain-integration.mdx index 35b2bc7c8b4a..534d6701efdb 100644 --- a/website/pages/uk/new-chain-integration.mdx +++ b/website/pages/uk/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integrating New Networks +title: New Chain Integration --- -Graph Node can currently index data from the following chain types: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool that opens a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are two integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via a [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via a [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via an [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based on Firehose with native `graph-node` support, allowing for parallelized transforms. -If you are interested in any of those chains, integration is a matter of Graph Node configuration and testing. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. For more information, refer to [Testing an EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Testing an EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development.
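Before wiring a chain into Graph Node, it can help to probe the RPC endpoint directly. The following is a minimal TypeScript sketch under the assumption that a node is reachable at `http://localhost:8545` (a placeholder URL); it only exercises a couple of standard methods, and the full list of methods Graph Node requires is given below.

```typescript
// Minimal sketch: send raw JSON-RPC requests to an EVM node as a sanity check.
// RPC_URL is a placeholder; point it at the node you want to test.
const RPC_URL = "http://localhost:8545";

async function rpcCall(method: string, params: unknown[] = []): Promise<unknown> {
  const response = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const { result, error } = await response.json();
  if (error) throw new Error(`${method} failed: ${error.message}`);
  return result;
}

async function main(): Promise<void> {
  // Two of the methods Graph Node relies on; see the full list below.
  console.log("net_version:", await rpcCall("net_version"));
  console.log("latest block:", await rpcCall("eth_getBlockByNumber", ["latest", false]));
}

main().catch(console.error);
```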
+ +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Difference between EVM JSON-RPC & Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -While the two are suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](substreams/), like building [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). In addition, Firehose allows for improved indexing speeds when compared to JSON-RPC. +### 2. Firehose Integration -New EVM chain integrators may also consider the Firehose-based approach, given the benefits of substreams and its massive parallelized indexing capabilities. Supporting both allows developers to choose between building substreams or subgraphs for the new chain. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces polling API calls with a stream of data, using a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **NOTE**: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that eth_calls are [not a good practice for developers](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Testing an EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON RPC methods: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast to build a high-throughput and rich transaction tracing system.
The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. -- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(optionally required for Graph Node to support call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graph Node Configuration +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Start by preparing your local environment** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graph Node Configuration + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON RPC compliant URL - > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ -**Test the integration by locally deploying a subgraph** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Create a simple example subgraph. Some options are below: - 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point - 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) -3. 
Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integrating a new Firehose-enabled chain +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Create a simple example subgraph. Some options are below: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. -Integrating a new chain is also possible using the Firehose approach. This is currently the best option for non-EVM chains and a requirement for substreams support. Additional documentation focuses on how Firehose works, adding Firehose support for a new chain and integrating it with Graph Node. Recommended docs for integrators: +## Substreams-powered Subgraphs -1. [General docs on Firehose](firehose/) -2. [Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrating Graph Node with a new chain via Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/uk/operating-graph-node.mdx b/website/pages/uk/operating-graph-node.mdx index 30a9ee532653..cd8118b275b7 100644 --- a/website/pages/uk/operating-graph-node.mdx +++ b/website/pages/uk/operating-graph-node.mdx @@ -77,13 +77,13 @@ A complete Kubernetes example configuration can be found in the [indexer reposit When it is running Graph Node exposes the following ports: -| Порт | Призначення | Розташування | Аргумент CLI | Перемінна оточення | -| --- | --- | --- | --- | --- | -| 8000 | HTTP-сервер GraphQL
    (для запитів до підграфів) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-порт | - | -| 8001 | GraphQL WS
    (для підписок на підграфи) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (для керування розгортаннями) | / | --admin-port | - | -| 8030 | API стану індексації підграфів | /graphql | --index-node-port | - | -| 8040 | Метрики Prometheus | /metrics | --metrics-port | - | +| Порт | Призначення | Розташування | Аргумент CLI | Перемінна оточення | +| ---- | --------------------------------------------------------- | ---------------------------------------------------- | ----------------- | ------------------ | +| 8000 | HTTP-сервер GraphQL
    (для запитів до підграфів) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-порт | - | +| 8001 | GraphQL WS
    (для підписок на підграфи) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (для керування розгортаннями) | / | --admin-port | - | +| 8030 | API стану індексації підграфів | /graphql | --index-node-port | - | +| 8040 | Метрики Prometheus | /metrics | --metrics-port | - | > **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. diff --git a/website/pages/uk/querying/graphql-api.mdx b/website/pages/uk/querying/graphql-api.mdx index 2bbc71b5bb9c..d8671e53a77c 100644 --- a/website/pages/uk/querying/graphql-api.mdx +++ b/website/pages/uk/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Queries +## What is GraphQL? -In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Examples @@ -21,7 +29,7 @@ Query for a single `Token` entity defined in your schema: } ``` -> **Note:** When querying for a single entity, the `id` field is required, and it must be a string. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Query all `Token` entities: @@ -36,7 +44,10 @@ Query all `Token` entities: ### Sorting -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### Example @@ -53,7 +64,7 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. -In the following example, we sort the tokens by the name of their owner: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ In the following example, we sort the tokens by the name of their owner: ### Pagination -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. - -Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. 
+When querying a collection, it's best to: -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### Example using `first` @@ -106,7 +118,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect #### Example using `first` and `id_ge` -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtering -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Example using `where` @@ -155,7 +168,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### Example for block filtering -You can also filter entities by the `_change_block(number_gte: Int)` - this filters entities which were updated in or after the specified block. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). @@ -193,7 +206,7 @@ As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/release ##### `AND` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. 
```graphql
{
@@ -208,7 +221,7 @@ In the following example, we are filtering for challenges with `outcome` `succee
 ```

 > **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas.
->
+>
 > ```graphql
 > {
 >   challenges(where: { number_gte: 100, outcome: "succeeded" }) {
@@ -223,7 +236,7 @@ In the following example, we are filtering for challenges with `outcome` `succee

 ##### `OR` Operator

-In the following example, we are filtering for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.
+The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`.

```graphql
{
@@ -278,9 +291,9 @@ _change_block(number_gte: Int)

 You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries.

-The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.
+The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change.

-Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
+> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation cannot always tell that a given block hash is not on the main chain at all, or whether the result of a query by block hash for a block that is not yet considered final might be influenced by a block reorganization running concurrently with the query. These limitations do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail.
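+For illustration, a sketch of a query pinned to a specific block hash could look like the following (the hash value is only a placeholder, and the `Token` fields are reused from the earlier examples; adjust both to your own subgraph):
+
+```graphql
+{
+  tokens(block: { hash: "0x5a31c8a1bd0d0b6e7f1e5c4a9d8e7f6a5b4c3d2e1f0a9b8c7d6e5f4a3b2c1d0e" }) {
+    id
+    owner
+  }
+}
+```
+
+Replacing `hash` with `number` pins the same query to a block height instead.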
#### Example

@@ -322,12 +335,12 @@ Fulltext search queries have one required field, `text`, for supplying search te

 Fulltext search operators:

-| Symbol | Operator | Description |
-| --- | --- | --- |
-| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms |
-| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
-| `<->` | `Follow by` | Specify the distance between two words. |
-| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) |
+| Symbol | Operator | Description |
+| ------ | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ |
+| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms |
+| &#124; | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
+| `<->` | `Follow by` | Specify the distance between two words. |
+| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) |

 #### Examples

@@ -376,11 +389,11 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021

 ## Schema

-The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
+The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, is defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).

-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest.
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).

-> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
+> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.

 ### Entities

diff --git a/website/pages/uk/querying/querying-best-practices.mdx b/website/pages/uk/querying/querying-best-practices.mdx
index d8fafba633eb..dc80f29703ba 100644
--- a/website/pages/uk/querying/querying-best-practices.mdx
+++ b/website/pages/uk/querying/querying-best-practices.mdx
@@ -2,11 +2,9 @@ title: Найкращі практики виконання запитів
 ---

-The Graph забезпечує децентралізований спосіб запиту даних з блокчейнів.
+The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language.

-Дані мережі The Graph відображаються через GraphQL API, що полегшує запити даних за допомогою мови програмування GraphQL.
-
-Ця сторінка допоможе вам ознайомитися з основними правилами мови GraphQL та найкращими практиками виконання запитів в GraphQL.
+Learn the essential GraphQL language rules and GraphQL querying best practices.

 ---

@@ -71,7 +69,7 @@ GraphQL - це мова програмування і набір механіз

 Це означає, що ви можете запитувати API GraphQL, використовуючи стандартні команди `fetch` (безпосередньо або через `@whatwg-node/fetch` or `isomorphic-fetch`).

-Проте, як зазначено в ["Querying from an Application"](/querying/querying-from-an-application), ми рекомендуємо використовувати `graph-client`, який підтримує такі унікальні функції, як:
+However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommended to use `graph-client` which supports unique features such as:

- Робота з кросс-чейн підграфами: Отримання інформації з декількох підграфів за один запит
- [Автоматичне відстежування блоків](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
@@ -104,8 +102,6 @@ main()

 Інші альтернативні GraphQL клієнти описані в ["Querying from an Application"](/querying/querying-from-an-application).

-Тепер, коли ми розглянули основні правила синтаксису запитів GraphQL, розгляньмо найкращі практики написання запитів в GraphQL.
-
 ---

 ## Найкращі практики

@@ -164,11 +160,11 @@ const result = await execute(query, {

- **Змінні можуть бути кешовані** на рівні сервера
- **Запити можна статично аналізувати за допомогою інструментів** (більше про це в наступних розділах)

-**Примітка: Як умовно додавати поля в статичні запити**
+### How to include fields conditionally in static queries

-Можливо, ми захочемо додати поле `owner` лише за певних умов.
+You might want to include the `owner` field only under a particular condition.

-Для цього ми можемо використовувати директиву `@include(if:...)` наступним чином:
+For this, you can leverage the `@include(if:...)` directive as follows:

```tsx
import { execute } from 'your-favorite-graphql-client'
@@ -191,7 +187,7 @@ const result = await execute(query, {
})
```

-Примітка: Протилежною директивою є `@skip(if: ...)`.
+> Примітка: Протилежною директивою є `@skip(if: ...)`.

 ### Ask for what you want

@@ -199,9 +195,8 @@ GraphQL став відомим завдяки своєму слогану "Ask

 З цієї причини в GraphQL не існує способу отримати всі доступні поля без необхідності виведення кожного з них окремо.

-Запитуючи API GraphQL, завжди запитуйте тільки ті поля, які дійсно будуть використовуватися.
-
-Поширеною причиною надмірної вибірки є колекції об'єктів. За замовчуванням запити отримують 100 об'єктів з колекції, що зазвичай набагато більше, ніж буде використано, наприклад, для демонстрації користувачеві. Тому в запитах майже завжди слід явно встановлювати перше значення, і переконатися, що вони отримують стільки об'єктів, скільки їм насправді потрібно. Це стосується не лише колекцій верхнього рівня в запиті, але й навіть більше - відкладених колекцій об'єктів.
+- Запитуючи API GraphQL, завжди запитуйте тільки ті поля, які дійсно будуть використовуватися.
+- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities.
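+As a rough sketch of that advice, an explicit `first` value can be set on both the top-level collection and any nested one (the `tokens` and `transactions` fields here are illustrative placeholders rather than part of a specific schema):
+
+```graphql
+{
+  tokens(first: 10) {
+    id
+    owner
+    # limit nested collections explicitly instead of relying on the default of 100
+    transactions(first: 5) {
+      id
+    }
+  }
+}
+```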
Наприклад, у наступному запиті:
@@ -337,8 +332,8 @@ query {

 Такі поля, що повторюються (`id`, `active`, `status`) створюють багато проблем:

-- важче читається для більш розгорнутих запитів
-- при використанні інструментів, які генерують типи TypeScript на основі запитів (_детальніше про це в останньому розділі_), `newDelegate` і `oldDelegate` призводять до появи двох різних вбудованих інтерфейсів.
+- More extensive queries become harder to read.
+- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces.

 Рефакторизована версія запиту виглядатиме наступним чином:

@@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder {
 }
 ```

-Використання GraphQL `fragment` покращить зручність читання (особливо в масштабі), а також призведе до кращої генерації типів TypeScript.
+Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript type generation.

 При використанні інструменту генерації типів, вищенаведений запит згенерує правильний `DelegateItemFragment` тип (_див. попередній розділ "Tools"_).

 ### Фрагмент GraphQL, що можна і що не можна робити

-**База фрагменту повинна бути типом**
+#### База фрагменту повинна бути типом

 Фрагмент не може ґрунтуватися на незастосовному типі, тобто на типі, що не має полів, тобто **на типі, що не має полів**:

@@ -382,7 +377,7 @@ fragment MyFragment on BigInt {

 `BigInt` - це **скалярний** (нативний "звичайний" тип), який не можна використовувати як основу фрагмента.

-**Як розповсюдити Фрагмент**
+#### Як розповсюдити Фрагмент

 Фрагменти визначені для певних типів і повинні використовуватися в запитах відповідно до цього.

@@ -411,16 +406,16 @@ fragment VoteItem on Vote {

 Неможливо розповсюдити фрагмент типу `Vote` тут.

-**Визначте фрагмент як атомну бізнес-одиницю даних**
+#### Визначте фрагмент як атомну бізнес-одиницю даних

-Фрагменти GraphQL повинні бути заданими на основі їх використання.
+GraphQL `Fragment`s must be defined based on their usage.

 Для більшості випадків використання достатньо визначити один фрагмент для кожного типу (у випадку повторного використання полів або генерації типів).

-Ось практичне правило використання Фрагмента:
+Here is a rule of thumb for using fragments:

-- коли поля одного типу повторюються в запиті, згрупуйте їх у Фрагмент
-- коли повторюються схожі, але не однакові поля, створіть кілька фрагментів, наприклад:
+- When fields of the same type are repeated in a query, group them in a `Fragment`.
+- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## Необхідні інструменти +## The Essential Tools ### GraphQL web-based explorers @@ -473,11 +468,11 @@ If you are looking for a more flexible way to debug/test your queries, other sim [GraphQL VSCode розширення](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) є чудовим доповненням до вашого процесу розробки, щоб отримати: -- виділення синтаксису -- автозаповнення пропозицій -- валідацію за схемою -- фрагменти -- перехід до визначення для фрагментів і типів вхідних даних +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types Якщо ви використовуєте `graphql-eslint`, [ESLint VSCode розширення](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint), що вкрай необхідне для правильної візуалізації помилок та попереджень, закладених у вашому коді. @@ -485,9 +480,9 @@ If you are looking for a more flexible way to debug/test your queries, other sim [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) значно покращить ваш досвід роботи з GraphQL, надавши: -- виділення синтаксису -- автозаповнення пропозицій -- валідацію за схемою -- фрагменти +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -Більше інформації можна знайти тут [у статті від WebStorm](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/), де продемонстровано всі основні функції плагіна. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/uk/quick-start.mdx b/website/pages/uk/quick-start.mdx index e1c67fc854ce..a410b99c0dfd 100644 --- a/website/pages/uk/quick-start.mdx +++ b/website/pages/uk/quick-start.mdx @@ -2,24 +2,18 @@ title: Швидкий старт --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Ensure that your subgraph will be indexing data from a [supported network](/developing/supported-networks). - -Цей покроковий посібник написаний з урахуванням того, що у вас уже є: +## Prerequisites for this guide - Криптогаманець -- Адреса смартконтракту в мережі, яку ви обрали - -## 1. Створення підграфа в Subgraph Studio - -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Встановлення Graph CLI +### 1. Install the Graph CLI -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. 
На вашому локальному комп'ютері запустіть одну з наведених нижче команд: @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -Коли ви ініціалізуєте ваш підграф, CLI інструмент запитає вас про таку інформацію: +When you initialize your subgraph, the CLI will ask you for the following information: -- Протокол: виберіть протокол, з якого ваш підграф буде індексувати дані -- Підграф мітка: створіть ім'я для вашого підграфа. Ваша підграф мітка є ідентифікатором для вашого підграфа. -- Директорія для створення підграфа в ній: оберіть вашу локальну директорію -- Мережа Ethereum (необов'язково): можливо, вам потрібно буде вказати, з якої EVM-сумісної мережі ваш підграф буде індексувати дані -- Адреса контракту: Вкажіть адресу смарт-контракту, з якого ви хочете запитувати дані -- ABI: Якщо ABI не заповнюється автоматично, вам потрібно буде ввести його вручну у вигляді JSON-файлу -- Стартовий блок: рекомендується вказати стартовий блок, щоб заощадити час, поки ваш підграф індексує дані з блокчейну. Ви можете знайти стартовий блок, знайшовши блок, де був розгорнутий ваш контракт. -- Назва контракту: введіть назву вашого контракту -- Індексація подій контракту у якості елементів: рекомендується встановити значення true, оскільки це автоматично додасть відповідність вашого підграфа для кожної виданої події -- Додання ще одного контракту (необов'язково): ви можете додати ще один контракт +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. 
Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. На наступному скриншоті ви можете побачити, чого варто очікувати при ініціалізації вашого підграфа: ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -Попередні команди створюють так званий "скелет" підграфа, який ви можете використовувати як відправну точку для побудови вашого підграфа. При внесенні змін до підграфа ви будете працювати переважно з трьома файлами: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -Як тільки ваш підграф буде написаний, виконайте наступні команди: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. Як тільки ваш підграф буде написаний, виконайте наступні команди: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- Автентифікуйте та розгорніть ваш підграф. Ключ для розгортання можна знайти на сторінці підграфа у Subgraph Studio. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. Тестування вашого підграфа - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -Журнали покажуть вам, чи є якісь помилки у вашому підграфі. Журнал робочого підграфа матиме такий вигляд: - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. 
In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -Щоб заощадити на витратах на газ, ви можете надіслати сигнал на власний підграф у тій самій транзакції, в якій ви його опублікували, вибравши цю функцію під час публікації підграфа в децентралізованій мережі The Graph: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. 
+ +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -Тепер ви можете запитувати ваш підграф, надсилаючи GraphQL-запити на URL-адресу запиту вашого підграфа, яку ви можете знайти, натиснувши на кнопку запиту. +### 8. Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/uk/release-notes/assemblyscript-migration-guide.mdx b/website/pages/uk/release-notes/assemblyscript-migration-guide.mdx index 85f6903a6c69..17224699570d 100644 --- a/website/pages/uk/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/uk/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - You'll need to rename your duplicate variables if you had variable shadowing. - ### Null Comparisons - By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - To solve you can simply change the `if` statement to something like this: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript diff --git a/website/pages/uk/sps/introduction.mdx b/website/pages/uk/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/uk/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. 
+ +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. + +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/uk/sps/triggers-example.mdx b/website/pages/uk/sps/triggers-example.mdx new file mode 100644 index 000000000000..8e4f96eba14a --- /dev/null +++ b/website/pages/uk/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## Prerequisites + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. 
Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID:
+
+```yaml
+specVersion: v0.1.0
+package:
+  name: my_project_sol
+  version: v0.1.0
+
+imports: # Pass your spkg of interest
+  solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg
+
+modules:
+  - name: map_spl_transfers
+    use: solana:map_block # Select corresponding modules available within your spkg
+    initialBlock: 260000082
+
+  - name: map_transactions_by_programid
+    use: solana:solana:transactions_by_programid_without_votes
+
+network: solana-mainnet-beta
+
+params: # Modify the param fields to meet your needs
+  # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA
+  map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE
+```
+
+## Step 2: Generate the Subgraph Manifest
+
+Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container:
+
+```bash
+substreams codegen subgraph
+```
+
+You will generate a `subgraph.yaml` manifest which imports the Substreams package as a data source:
+
+```yaml
+---
+dataSources:
+  - kind: substreams
+    name: my_project_sol
+    network: solana-mainnet-beta
+    source:
+      package:
+        moduleName: map_spl_transfers # Module defined in the substreams.yaml
+        file: ./my-project-sol-v0.1.0.spkg
+    mapping:
+      apiVersion: 0.0.7
+      kind: substreams/graph-entities
+      file: ./src/mappings.ts
+      handler: handleTriggers
+```
+
+## Step 3: Define Entities in `schema.graphql`
+
+Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example:
+
+```graphql
+type MyTransfer @entity {
+  id: ID!
+  amount: String!
+  source: String!
+  designation: String!
+  signers: [String!]!
+}
+```
+
+This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`.
+
+## Step 4: Handle Substreams Data in `mappings.ts`
+
+With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into subgraph entities:

+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+  const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)
+
+  for (let i = 0; i < input.data.length; i++) {
+    const event = input.data[i]
+
+    if (event.transfer != null) {
+      let entity_id: string = `${event.txnId}-${i}`
+      const entity = new MyTransfer(entity_id)
+      entity.amount = event.transfer!.instruction!.amount.toString()
+      entity.source = event.transfer!.accounts!.source
+      entity.designation = event.transfer!.accounts!.destination
+
+      if (event.transfer!.accounts!.signer!.single != null) {
+        entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+      } else if (event.transfer!.accounts!.signer!.multisig != null) {
+        entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+      }
+      entity.save()
+    }
+  }
+}
+```
+
+## Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+
+## Conclusion
+
+You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/pages/uk/sps/triggers.mdx b/website/pages/uk/sps/triggers.mdx
new file mode 100644
index 000000000000..ed19635d4768
--- /dev/null
+++ b/website/pages/uk/sps/triggers.mdx
@@ -0,0 +1,37 @@
+---
+title: Substreams Triggers
+---
+
+Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework.
+
+> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container.
+
+The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+  let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+  if (transactions.length == 0) {
+    log.info('No transactions found', [])
+    return
+  }
+
+  for (let i = 0; i < transactions.length; i++) {
+    // 2.
+    let transaction = transactions[i]
+
+    let entity = new Transaction(transaction.hash) // 3.
+    entity.from = transaction.from
+    entity.to = transaction.to
+    entity.save()
+  }
+}
+```
+
+Here's what you’re seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object can then be used like any other AssemblyScript object
+2. 
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/uk/substreams.mdx b/website/pages/uk/substreams.mdx index 710e110012cc..a838a6924e2f 100644 --- a/website/pages/uk/substreams.mdx +++ b/website/pages/uk/substreams.mdx @@ -4,9 +4,11 @@ title: Substreams ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## How Substreams Works in 4 Steps @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/uk/sunrise.mdx b/website/pages/uk/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/uk/sunrise.mdx +++ b/website/pages/uk/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. 
Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? 
-Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? 
- -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. 
The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/uk/supported-network-requirements.mdx b/website/pages/uk/supported-network-requirements.mdx index df15ef48d762..9662552e4e6a 100644 --- a/website/pages/uk/supported-network-requirements.mdx +++ b/website/pages/uk/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Network | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| Network | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVMe preferred)
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/uk/tap.mdx b/website/pages/uk/tap.mdx new file mode 100644 index 000000000000..872ad6231e9c --- /dev/null +++ b/website/pages/uk/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## Overview + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
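
To make the receipt-to-RAV relationship described above concrete, here is a purely illustrative sketch — the GRT values and field names below are invented for illustration and are not part of the actual TAP data model. Each signed receipt carries a fee value, `tap-agent` aggregates the pending receipts into a RAV, and aggregating again with newer receipts produces a RAV with a larger value:

```yaml
# Illustrative only — invented numbers and field names, not the actual TAP schema.
pending_receipts: [0.001, 0.002, 0.003] # GRT value of each signed query receipt stored in your database
rav_v1: 0.006                           # tap-agent aggregates the pending receipts into a RAV
newer_receipts: [0.004]                 # receipts that arrive after rav_v1 was issued
rav_v2: 0.010                           # rav_v1 sent together with the newer receipts yields a RAV with increased value
```

As described in the redeeming steps above, it is the RAV marked `last` for a closed allocation that `indexer-agent` ultimately sends to the blockchain for redemption.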
+ +## Blockchain Addresses + +### Contracts + +| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| ------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | +| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | +| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | + +### Gateway + +| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| ---------- | --------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | + +### Requirements + +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. + +- [Graph TAP Aribtrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) + +> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployement. As a result, you have to index it manually. + +## Migration Guide + +### Software versions + +| Component | Version | Image Link | +| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- | +| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) | +| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) | +| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) | + +### Steps + +1. **Indexer Agent** + + - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). + - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + +2. **Indexer Service** + + - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless. + +3. **TAP Agent** + + - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + +4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Notes: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/ur/about.mdx b/website/pages/ur/about.mdx index 128488977220..9d53ba42d773 100644 --- a/website/pages/ur/about.mdx +++ b/website/pages/ur/about.mdx @@ -2,46 +2,66 @@ title: گراف کے بارے میں --- -یہ صفحہ وضاحت کرے گا کہ گراف کیا ہے اور آپ کیسے شروع کر سکتے ہیں. - ## گراف کیا ہے؟ -گراف بلاکچین ڈیٹا کی انڈیکسنگ اور کیوری کے لیے ایک ڈیسینٹرالائزڈ پروٹوکول ہے۔ گراف ڈیٹا سے کیوری کرنا ممکن بناتا ہے جس سے براہ راست کیوری کرنا مشکل ہے. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -پیچیدہ سمارٹ کنٹریکٹس والے پروجیکٹس جیسے [Uniswap](https://uniswap.org/) اور NFTs کے اقدامات جیسا کہ [بورڈ ایپ یاٹ کلب](https://boredapeyachtclub.com/) ایتھیریم بلاکچین پر ڈیٹا ذخیزہ کرتے ہیں, جس سے بلاکچین سے براہ راست بنیادی ڈیٹا کے علاوہ کچھ بھی پڑھنا کافی مشکل ہوجاتا ہے. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -آپ اپنا سرور بھی بنا سکتے ہیں، وہاں لین دین پر کارروائی کر سکتے ہیں، انہیں ڈیٹا بیس میں محفوظ کر سکتے ہیں، اور ڈیٹا سے کیوری کرنے کے لیے ان سب کے اوپر ایک API اینڈ پوائنٹ بنا سکتے ہیں۔ تاہم، یہ آپشن [وسائل کی گہرائی](/network/benefits/) ہے، دیکھ بھال کی ضرورت ہے، ناکامی کا ایک نقطہ پیش کرتا ہے، اور ڈیسینٹرالائزیشن کے لیے ضروری حفاظتی خصوصیات کو توڑ دیتا ہے. +### How The Graph Functions -**بلاکچین ڈیٹا کو انڈیکس کرنا بہت، بہت مشکل ہے.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). 
+- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## گراف کیسے کام کرتا ہے +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -گراف سیکھتا ہے کہ سب گراف کی تفصیل کی بنیاد پر ایتھیریم کے ڈیٹا کو کیا اور کیسے انڈیکس کرنا ہے, سب گراف مینی فیسٹ کے نام سے جانا جاتا ہے. سب گراف کی تفصیل سب گراف کے لیے دلچسپی کے سمارٹ کنٹریکٹس کی وضاحت کرتی ہے, ان کنٹریکٹس کے واقعات جن پر توجہ دینے کی ضرورت ہے, اور ایونٹ کے ڈیٹا کو اس ڈیٹا میں میپ کرنے کا طریقہ جو گراف اپنے ڈیٹا بیس میں سٹور کرے گا. +- When creating a subgraph, you need to write a subgraph manifest. -ایک بار جب آپ `سب گراف مینی فیسٹ` لکھ لیتے ہیں،آپ IPFS میں تعریف کو ذخیرہ کرنے کے لیے گراف CLI کا استعمال کرتے ہیں اور انڈیکسر سے کہتے ہیں کہ اس سب گراف کے لیے ڈیٹا انڈیکس کرنا شروع کرے. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -یہ خاکہ ڈیٹا کے بہاؤ کے بارے میں مزید تفصیل دیتا ہے ایک بار جب سب گراف مینی فیسٹ تعین ہو چکا ہو, ایتھیریم ٹرانزیکشنز سے نمٹتے ہوئے: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![ایک گرافک یہ بتاتا ہے کہ گراف کس طرح ڈیٹا صارفین کو کیوریز پیش کرنے کے لیے گراف نوڈ کا استعمال کرتا ہے](/img/graph-dataflow.png) بہاؤ ان مراحل کی پیروی کرتا ہے: -1. ایک ڈیپ سمارٹ کنٹریکٹ پر ٹرانزیکشن کے ذریعے سے ایتھیریم میں ڈیٹا کا اضافہ کرتی ہے. -2. سمارٹ کنٹریکٹ ٹرانزیکشن پر کارروائی کے دوران ایک یا ایک سے زیادہ واقعات کا اخراج کرتا ہے. -3. گراف نوڈ ایتھیریم کو نئے بلاکس اور آپ کے سب گراف کے ڈیٹا کے لیے مسلسل سکین کرتا ہے. -4. گراف نوڈ ان بلاکس میں آپ کے سب گراف کے لیے ایتھریم ایونٹس تلاش کرتا ہے اور آپ کے فراہم کردہ میپنگ ہینڈلرز کو چلاتا ہے. میپنگ ایک WASM ماڈیول ہے جو ڈیٹا ہستیوں کو تخلیق یا اپ ڈیٹ کرتا ہے جو ایتھیریم ایونٹس کے جواب میں گراف نوڈ ذخیرہ کرتا ہے. -5. ڈیپ بلاکچین سے انڈیکس کردہ ڈیٹا کے لیے گراف نوڈ کو کیوری کرتی ہے, نوڈ کے [GraphQL اینڈ پوائنٹ](https://graphql.org/learn/) کا استعمال کرتے ہوئے. گراف نوڈ بدلے میں اس ڈیٹا کو حاصل کرنے کے لیے GraphQL کی کیوریز کو اپنے بنیادی ڈیٹا اسٹور کی کیوریز میں تبدیل کرتا ہے, سٹور کی انڈیکسنگ کی صلاحیتوں کا استعمال کرتے ہوئے. ڈیسینٹرلائزڈ ایپلیکیشن اس ڈیٹا کو صارفین کے لیے ایک بھرپور UI میں دکھاتی ہے, جسے وہ ایتھیریم پر نئی ٹرانزیکشنز جاری کرنے کے لیے استعمال کرتے ہیں. یہ سلسلہ دہرایا جاتا ہے. +1. ایک ڈیپ سمارٹ کنٹریکٹ پر ٹرانزیکشن کے ذریعے سے ایتھیریم میں ڈیٹا کا اضافہ کرتی ہے. +2. سمارٹ کنٹریکٹ ٹرانزیکشن پر کارروائی کے دوران ایک یا ایک سے زیادہ واقعات کا اخراج کرتا ہے. +3. گراف نوڈ ایتھیریم کو نئے بلاکس اور آپ کے سب گراف کے ڈیٹا کے لیے مسلسل سکین کرتا ہے. +4. گراف نوڈ ان بلاکس میں آپ کے سب گراف کے لیے ایتھریم ایونٹس تلاش کرتا ہے اور آپ کے فراہم کردہ میپنگ ہینڈلرز کو چلاتا ہے. میپنگ ایک WASM ماڈیول ہے جو ڈیٹا ہستیوں کو تخلیق یا اپ ڈیٹ کرتا ہے جو ایتھیریم ایونٹس کے جواب میں گراف نوڈ ذخیرہ کرتا ہے. +5. ڈیپ بلاکچین سے انڈیکس کردہ ڈیٹا کے لیے گراف نوڈ کو کیوری کرتی ہے, نوڈ کے [GraphQL اینڈ پوائنٹ](https://graphql.org/learn/) کا استعمال کرتے ہوئے. گراف نوڈ بدلے میں اس ڈیٹا کو حاصل کرنے کے لیے GraphQL کی کیوریز کو اپنے بنیادی ڈیٹا اسٹور کی کیوریز میں تبدیل کرتا ہے, سٹور کی انڈیکسنگ کی صلاحیتوں کا استعمال کرتے ہوئے. ڈیسینٹرلائزڈ ایپلیکیشن اس ڈیٹا کو صارفین کے لیے ایک بھرپور UI میں دکھاتی ہے, جسے وہ ایتھیریم پر نئی ٹرانزیکشنز جاری کرنے کے لیے استعمال کرتے ہیں. 
یہ سلسلہ دہرایا جاتا ہے. ## اگلے مراحل -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/ur/arbitrum/arbitrum-faq.mdx b/website/pages/ur/arbitrum/arbitrum-faq.mdx index 4e46325061e3..7e3277e62140 100644 --- a/website/pages/ur/arbitrum/arbitrum-faq.mdx +++ b/website/pages/ur/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: آربٹرم اکثر پوچھے گئے سوالات [یہاں](#billing-on-arbitrum-faqs) کلک کریں اگر آپ Arbitrum بلنگ FAQs پر جانا چاہتے ہیں۔ -## گراف ایک L2 حل کیوں نافذ کر رہا ہے؟ +## Why did The Graph implement an L2 Solution? -L2 پر گراف کو سکیل کرنے سے، نیٹ ورک کے شرکاء توقع کر سکتے ہیں: +By scaling The Graph on L2, network participants can now benefit from: - گیس فیس پر 26 گنا زیادہ کی بچت @@ -14,7 +14,7 @@ L2 پر گراف کو سکیل کرنے سے، نیٹ ورک کے شرکاء ت - سیکیورٹی ایتھیریم سے وراثت میں ملی -پروٹوکول نیٹ ورک کے شرکاء کو گیس فیس میں کم قیمت پر زیادہ کثرت سے بات چیت کرنے کی اجازت دیتا ہے۔ یہ انڈیکسرز کو زیادہ تعداد میں سب گرافس کو انڈیکس کرنے کے قابل بناتا ہے، ڈویلپرز کو سب گرافس کو زیادہ آسانی کے ساتھ تعینات کرنے اور اپ ڈیٹ کرنے کی اجازت دیتا ہے، ڈیلیگیٹرز کو زیادہ سے زیادہ تعداد کے ساتھ GRT کو ڈیلیگیٹ کرنے کے قابل بناتا ہے، اور کیوریٹرز کو سب گراف کی ایک بڑی تعداد میں سگنل شامل کرنے کی صلاحیت فراہم کرتا ہے. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. گراف کمیونٹی نے [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) بحث کے نتائج کے بعد گزشتہ سال Arbitrum کے ساتھ آگے بڑھنے کا فیصلہ کیا۔ @@ -41,27 +41,21 @@ L2 پر گراف استعمال کرنے کا فائدہ اٹھانے کے لی ## بطور سب گراف ڈویلپر، ڈیٹا کنزیومر، انڈیکسر، کیوریٹر، یا ڈیلیگیٹر، مجھے اب کیا کرنے کی ضرورت ہے؟ -فوری طور پر کسی کارروائی کی ضرورت نہیں ہے، نیٹ ورک کے شرکاء کی حوصلہ افزائی کی جاتی ہے کہ وہ L2 کے فوائد سے فائدہ اٹھانے کے لیے Arbitrum میں جانا شروع کریں۔ +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -کور ڈویلپر ٹیمیں مائیگریشن مددگار بنانے کے لیے کام کر رہی ہیں جو ڈیلیگیشن، کیوریشن، اور سب گراف کو Arbitrum میں منتقل کرنے میں نمایاں طور پر آسان بنائے گی۔ نیٹ ورک کے شرکاء اپریل 2023 میں مائیگریشن کے مددگاروں کے دستیاب ہونے کی توقع کر سکتے ہیں. +All indexing rewards are now entirely on Arbitrum. 
-10 اپریل 2023 تک، تمام انڈیکسنگ کے انعامات کا 5% آربٹرم پر دیا جا رہا ہے۔ جیسے جیسے نیٹ ورک کی شرکت میں اضافہ ہوتا ہے، اور جیسے ہی کونسل اسے منظور کرتی ہے، انڈیکسنگ کے انعامات آہستہ آہستہ ایتھریم سے آربٹرم میں منتقل ہوتے جائیں گے، بالآخر مکمل طور پر آربٹرم میں منتقل ہو جائیں گے. - -## اگر میں L2 پر نیٹ ورک میں حصہ لینا چاہتا ہوں تو مجھے کیا کرنا چاہیے؟ - -براہ کرم L2 پر [نیٹ ورک](https://testnet.thegraph.com/explorer) کی جانچ کرنے میں مدد کریں اور [Discord](https://discord.gg/graphprotocol) میں اپنے تجربے کے بارے میں تاثرات کی اطلاع دیں. - -## کیا نیٹ ورک کو L2 کرنے سے متعلق کوئی خطرہ ہے؟ +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). ہر چیز کی اچھی طرح جانچ کی گئی ہے، اور ایک محفوظ اور ہموار منتقلی کو یقینی بنانے کے لیے ایک ہنگامی منصوبہ تیار کیا گیا ہے۔ تفصیلات دیکھی جا سکتی ہیں [یہاں](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## کیا ایتھیریم پر موجودہ سب گراف کام کرتے رہیں گے؟ +## Are existing subgraphs on Ethereum working? -جی ہاں، گراف نیٹ ورک کے کنٹریکٹس ایتھریم اور آربٹرم دونوں پر متوازی طور پر کام کریں گے جب تک کہ بعد کی تاریخ میں مکمل طور پر آربٹرم میں منتقل نہ ہو جائیں. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## کیا GRT کے پاس آربٹرم پر ایک نیا سمارٹ کنٹریکٹ تعینات ہوگا؟ +## Does GRT have a new smart contract deployed on Arbitrum? ہاں، GRT کے پاس ایک اضافی [Arbitrum پر سمارٹ کنٹریکٹ] (https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7)۔ تاہم، ایتھیریم مین نیٹ [GRT کنٹریکٹ] (https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) کا کام جاری رہے گا۔ diff --git a/website/pages/ur/billing.mdx b/website/pages/ur/billing.mdx index cc192cfe227b..95164e556c2c 100644 --- a/website/pages/ur/billing.mdx +++ b/website/pages/ur/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. صفحہ کے اوپری دائیں کونے میں "کنیکٹ والیٹ" بٹن پر کلک کریں۔ آپ کو والیٹ کے انتخاب کے صفحہ پر بھیج دیا جائے گا۔ اپنا والیٹ منتخب کریں اور "کنیکٹ" پر کلک کریں. 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. 
Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ This will be a step by step guide for purchasing ETH on Coinbase. ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. 
The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/ur/chain-integration-overview.mdx b/website/pages/ur/chain-integration-overview.mdx index f09de51959b9..43b53094048a 100644 --- a/website/pages/ur/chain-integration-overview.mdx +++ b/website/pages/ur/chain-integration-overview.mdx @@ -6,12 +6,13 @@ title: چین انٹیگریشن کے عمل کا جائزہ ## مرحلہ 1. تکنیکی انٹیگریشن -- ٹیمیں غیر ای وی ایم پر مبنی چینز کے لیے گراف نوڈ انٹیگریشن اور فائر ہوز پر کام کرتی ہیں۔ [یہ طریقہ ہے](/new-chain-integration/). -- ٹیمیں فورم تھریڈ بنا کر پروٹوکول انٹیگریشن کا عمل شروع کرتی ہیں [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71)(گورننس اور GIPs کے تحت نئے ڈیٹا ذرائع ذیلی زمرہ) ۔ پہلے سے طے شدہ فورم ٹیمپلیٹ کا استعمال لازمی ہے. +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. +- ٹیمیں فورم تھریڈ بنا کر پروٹوکول انٹیگریشن کا عمل شروع کرتی ہیں [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71)(گورننس اور GIPs کے تحت نئے ڈیٹا ذرائع ذیلی زمرہ) + ۔ پہلے سے طے شدہ فورم ٹیمپلیٹ کا استعمال لازمی ہے. ## مرحلہ 2۔ انٹیگریشن کی توثیق -- ٹیمیں ہموار انٹیگریشن کے عمل کو یقینی بنانے کے لیے بنیادی ڈویلپرز، گراف فاؤنڈیشن اور GUIs اور نیٹ ورک گیٹ ویز کے آپریٹرز، جیسے کہ [Subgraph Studio](https://thegraph.com/studio/) کے ساتھ تعاون کرتی ہیں۔ اس میں ضروری بیک اینڈ انفراسٹرکچر فراہم کرنا شامل ہے، جیسے انٹیگریٹنگ چین کے JSON RPC یا فائر ہوز اینڈ پوائنٹس۔ ایسی ٹیمیں جو اس طرح کے بنیادی ڈھانچے کی خود میزبانی سے گریز کرنا چاہتی ہیں وہ ایسا کرنے کے لیے گراف کی کمیونٹی آف نوڈ آپریٹرز (انڈیکسرز) سے فائدہ اٹھا سکتی ہیں، جس میں فاؤنڈیشن مدد کر سکتی ہے. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - گراف انڈیکسرز گراف کے ٹیسٹ نیٹ پر انٹیگریشن کی جانچ کرتے ہیں. - کور ڈویلپرز اور انڈیکسرز استحکام، کارکردگی، اور ڈیٹا کے تعین کی نگرانی کرتے ہیں. @@ -38,7 +39,7 @@ title: چین انٹیگریشن کے عمل کا جائزہ یہ صرف سب سٹریمزسے چلنے والے سب گرافس پر انڈیکسنگ کے انعامات کے لیے پروٹوکول سپورٹ کو متاثر کرے گا۔ اس GIP میں اسٹیج 2 کے لیے بیان کردہ طریقہ کار کے بعد، نئے فائر ہوز کے نفاذ کو ٹیسٹ نیٹ پر جانچ کی ضرورت ہوگی۔ اسی طرح، یہ فرض کرتے ہوئے کہ نفاذ پرفارمنس اور قابل اعتماد ہے، [فیچر سپورٹ میٹرکس](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) پر ایک PR کی ضرورت ہوگی ( 'سب سٹریمز ڈیٹا سورسز' سب گراف فیچر)، نیز انڈیکسنگ انعامات کے لیے پروٹوکول سپورٹ کے لیے ایک نیا GIP۔ کوئی بھی PR اور GIP بنا سکتا ہے۔ فاؤنڈیشن کونسل کی منظوری میں مدد کرے گی. -### 3. اس عمل میں کتنا وقت لگے گا؟ +### 3. How much time will the process of reaching full protocol support take? 
مین نیٹ کرنے کا وقت کئی ہفتوں کا متوقع ہے، انٹیگریشن کی ترقی کے وقت کی بنیاد پر مختلف ہوتا ہے، چاہے اضافی تحقیق کی ضرورت ہو، جانچ اور بگ فکسز، اور ہمیشہ کی طرح، گورننس کے عمل کا وقت جس کے لیے کمیونٹی فیڈ بیک کی ضرورت ہوتی ہے. @@ -46,4 +47,4 @@ title: چین انٹیگریشن کے عمل کا جائزہ ### 4. ترجیحات کو کس طرح سنبھالا جائے گا؟ -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/ur/cookbook/arweave.mdx b/website/pages/ur/cookbook/arweave.mdx index dc613dd1022a..e987c29eecdd 100644 --- a/website/pages/ur/cookbook/arweave.mdx +++ b/website/pages/ur/cookbook/arweave.mdx @@ -105,7 +105,7 @@ dataSources: پروسیسنگ ایونٹس کے ہینڈلرز [اسمبلی اسکرپٹ](https://www.assemblyscript.org/) میں لکھے گئے ہیں. -آرویو انڈیکسنگ آرویو سے متعلق مخصوص ڈیٹا کی اقسام کو [اسمبلی اسکرپٹ API](/developing/assemblyscript-api/) سے متعارف کراتی ہے. +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/ur/cookbook/base-testnet.mdx b/website/pages/ur/cookbook/base-testnet.mdx index c68ed62c589c..f72dffffcbdf 100644 --- a/website/pages/ur/cookbook/base-testnet.mdx +++ b/website/pages/ur/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ graph init --studio The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- سکیما (schema.graphql) - GraphQL سکیما اس بات کی وضاحت کرتا ہے کہ آپ سب گراف سے کون سا ڈیٹا حاصل کرنا چاہتے ہیں. - اسمبلی اسکرپٹ میپنگ (mapping.ts) - یہ وہ کوڈ ہے جو آپ کے ڈیٹا سورس سے ڈیٹا کو اسکیما میں بیان کردہ اداروں میں ترجمہ کرتا ہے. -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). diff --git a/website/pages/ur/cookbook/cosmos.mdx b/website/pages/ur/cookbook/cosmos.mdx index 8199dfc8f966..1c94bed6c837 100644 --- a/website/pages/ur/cookbook/cosmos.mdx +++ b/website/pages/ur/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and پروسیسنگ ایونٹس کے ہینڈلرز [اسمبلی اسکرپٹ](https://www.assemblyscript.org/) میں لکھے گئے ہیں. -کوزموس انڈیکسنگ [اسمبلی اسکرپٹ API](/developing/assemblyscript-api/) میں کوزموس-مخصوص ڈیٹا کی اقسام کو متعارف کراتی ہے. 
+Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { @@ -165,7 +165,7 @@ class Any { ہر ہینڈلر کی قسم اس کے اپنے ڈیٹا ڈھانچے کے ساتھ آتی ہے جو میپنگ فنکشن کی دلیل کے طور پر پاس کی جاتی ہے. -- بلاک ہینڈلرز کو `Block` قسم موصول ہوتی ہے. +- بلاک ہینڈلرز کو ` Block ` قسم موصول ہوتی ہے. - ایونٹ ہینڈلرز کو `EventData` قسم موصول ہوتی ہے. - ٹرانزیکشن ہینڈلرز کو `TransactionData` قسم موصول ہوتی ہے. - میسج ہینڈلرز کو `MessageData` قسم موصول ہوتی ہے. diff --git a/website/pages/ur/cookbook/grafting.mdx b/website/pages/ur/cookbook/grafting.mdx index 53278d2240ec..c204eee68424 100644 --- a/website/pages/ur/cookbook/grafting.mdx +++ b/website/pages/ur/cookbook/grafting.mdx @@ -22,7 +22,7 @@ title: ایک کنٹریکٹ کو تبدیل کریں اور اس کی تاری - [گرافٹنگ](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -اس ٹیوٹوریل میں، ہم استعمال کے ایک بنیادی کیس کا احاطہ کریں گے۔ ہم موجودہ معاہدے کو ایک جیسے کنٹریکٹ سے بدل دیں گے (ایک نئے پتہ کے ساتھ، لیکن ایک ہی کوڈ کے ساتھ)۔ اس کے بعد، موجودہ سب گراف کو "بیس" سب گراف پر گرافٹ کریں جو نئے کنٹریکٹ کو ٹریک کرتا ہے. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## نیٹ ورک میں اپ گریڈ کرتے وقت گرافٹنگ پر اہم نوٹ @@ -30,7 +30,7 @@ title: ایک کنٹریکٹ کو تبدیل کریں اور اس کی تاری ### یہ کیوں اہم ہے؟ -گرافٹنگ ایک طاقتور خصوصیت ہے جو آپ کو ایک سب گراف کو دوسرے پر "گرافٹ" کرنے کی اجازت دیتی ہے، مؤثر طریقے سے تاریخی ڈیٹا کو موجودہ سب گراف سے نئے ورژن میں منتقل کرتی ہے۔ اگرچہ یہ ڈیٹا کو محفوظ رکھنے اور انڈیکسنگ پر وقت بچانے کا ایک مؤثر طریقہ ہے، لیکن گرافٹنگ کسی میزبان ماحول سے ڈیسنٹرالا ئزڈ نیٹ ورک کی طرف ہجرت کرتے وقت پیچیدگیوں اور ممکنہ مسائل کو پیش کر سکتی ہے۔ گراف نیٹ ورک سے سب گراف کو ہوسٹڈ سروس یا سب گراف اسٹوڈیو میں واپس کرنا ممکن نہیں ہے. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### بہترین طریقے @@ -80,7 +80,7 @@ dataSources: ``` - `Lock` ڈیٹا کا ذریعہ abi اور کنٹریکٹ ایڈریس ہے جب ہم کنٹریکٹ کو مرتب اور تعینات کریں گے -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - `mapping` سیکشن دلچسپی کے محرکات اور ان افعال کی وضاحت کرتا ہے جنہیں ان محرکات کے جواب میں چلایا جانا چاہیے۔ اس صورت میں، ہم `Withdrawal` ایونٹ کو سن رہے ہیں اور جب یہ خارج ہوتا ہے تو `handleWithdrawal` فنکشن کو کال کر رہے ہیں. ## گرافٹنگ مینی فیسٹ کی تعریف @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. 
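
As a recap of the graft manifest definition covered above, the grafting setup boils down to two additions to the new subgraph's `subgraph.yaml`. The sketch below uses a placeholder deployment ID and block number — substitute the deployment ID of your existing "base" subgraph and the block at which you want the graft to start:

```yaml
# Added to the new subgraph's subgraph.yaml (placeholder values shown)
features:
  - grafting                  # feature declaration required for grafting
graft:
  base: QmBaseDeploymentId    # placeholder: deployment ID of the existing subgraph whose data is copied
  block: 1234567              # placeholder: historical data from the base subgraph is copied up to this block
```

Indexing of the new subgraph then continues from the graft block onward, so only the post-graft history needs to be synced.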
## اضافی وسائل -اگر آپ گرافٹنگ کے ساتھ مزید تجربہ چاہتے ہیں، تو یہاں مقبول کنٹریکٹس کے لیے چند مثالیں ہیں: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/ur/cookbook/near.mdx b/website/pages/ur/cookbook/near.mdx index 460a6307dbf7..eae8d8ab04eb 100644 --- a/website/pages/ur/cookbook/near.mdx +++ b/website/pages/ur/cookbook/near.mdx @@ -37,7 +37,7 @@ NEAR سب گراف ڈیولپمنٹ کے لیے `graph-cli` اوپر والے و ** schema.graphql:** ایک اسکیما فائل جو اس بات کی وضاحت کرتی ہے کہ آپ کے سب گراف کے لیے کون سا ڈیٹا محفوظ کیا جاتا ہے، اور GraphQL کے ذریعے اس سے کیوری کیسے کیا جائے۔ NEAR سب گراف کے تقاضوں کا احاطہ [موجودہ دستاویزات](/developing/creating-a-subgraph#the-graphql-schema) سے ہوتا ہے. -**اسمبلی اسکرپٹ میپنگس:** [اسمبلی اسکرپٹ کوڈ](/developing/assemblyscript-api) جو ایونٹ کے ڈیٹا سے آپ کے اسکیما میں بیان کردہ ہستیوں میں ترجمہ کرتا ہے۔ NEAR سپورٹ NEAR-مخصوص ڈیٹا کی اقسام اور نئی JSON پارسنگ فعالیت کو متعارف کراتی ہے. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. سب گراف کی ترقی کے دوران دو اہم کمانڈز ہیں: @@ -70,7 +70,7 @@ dataSources: file: ./src/mapping.ts # link to the file with the Assemblyscript mappings ``` -- NEAR سب گراف ڈیٹا ماخذ کا ایک نیا `kind` متعارف کراتے ہیں (`near`) +- NEAR سب گراف ڈیٹا ماخذ کا ایک نیا ` kind ` متعارف کراتے ہیں (`near`) - The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet` - NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account. - NEAR ڈیٹا کے ذرائع ایک متبادل اختیاری `source.accounts` فیلڈ متعارف کراتے ہیں، جس میں اختیاری لاحقے اور سابقے ہوتے ہیں۔ کم از کم سابقہ ​​یا لاحقہ متعین ہونا ضروری ہے، وہ بالترتیب اقدار کی فہرست کے ساتھ شروع یا ختم ہونے والے کسی بھی اکاؤنٹ سے مماثل ہوں گے۔ نیچے دی گئی مثال مماثل ہوگی: `[app|good].*[morning.near|morning.testnet]`۔ اگر صرف سابقوں یا لاحقوں کی فہرست ضروری ہو تو دوسری فیلڈ کو چھوڑا جا سکتا ہے. @@ -98,7 +98,7 @@ accounts: پروسیسنگ ایونٹس کے ہینڈلرز [اسمبلی اسکرپٹ](https://www.assemblyscript.org/) میں لکھے گئے ہیں. -NEAR انڈیکسنگ NEAR-مخصوص ڈیٹا کی اقسام کو [اسمبلی اسکرپٹ API](/developing/assemblyscript-api) میں متعارف کراتی ہے. +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ class ReceiptWithOutcome { - بلاک ہینڈلرز کو ایک `Block` ملے گا - ریسیپٹ ہینڈلرز کو ایک `ReceiptWithOutcome` ملے گا -بصورت دیگر، باقی [اسمبلی اسکرپٹ API](/developing/assemblyscript-api) نقشہ سازی کے عمل کے دوران NEAR سب گراف ڈویلپرز کے لیے دستیاب ہے. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. 
-اس میں ایک نیا JSON پارسنگ فنکشن شامل ہے - NEAR پر لاگز کثرت سے سٹرنگیفائڈ JSONs کے طور پر خارج ہوتے ہیں۔ ڈویلپرز کو اجازت دینے کے لیے ایک نیا `json.fromString(...)` فنکشن [JSON API](/developing/assemblyscript-api#json-api) کے حصے کے طور پر دستیاب ہے۔ آسانی سے ان لاگز پر کارروائی کرنے کے لیے. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## NEAR سب گراف کی تعیناتی @@ -254,7 +254,7 @@ NEAR سپورٹ بیٹا میں ہے، جس کا مطلب ہے کہ API میں ### کیا رسید ہینڈلرز اکاؤنٹس اور ان کے سب اکاؤنٹس کو متحرک کریں گے؟ -اگر ایک `account` متعین کیا گیا ہے، تو وہ صرف صحیح اکاؤنٹ کے نام سے مماثل ہوگا۔ مثال کے طور پر، اکاؤنٹس اور سب اکاؤنٹس سے ملنے کے لیے مخصوص `prefixes` اور ` suffixes` کے ساتھ، `accounts` فیلڈ کی وضاحت کرکے سب اکاؤنٹس کو ملانا ممکن ہے۔ درج ذیل تمام `mintbase1.near` سب اکاؤنٹس سے مماثل ہوں گے: +اگر ایک ` account ` متعین کیا گیا ہے، تو وہ صرف صحیح اکاؤنٹ کے نام سے مماثل ہوگا۔ مثال کے طور پر، اکاؤنٹس اور سب اکاؤنٹس سے ملنے کے لیے مخصوص ` prefixes ` اور ` suffixes` کے ساتھ، `accounts` فیلڈ کی وضاحت کرکے سب اکاؤنٹس کو ملانا ممکن ہے۔ درج ذیل تمام `mintbase1.near` سب اکاؤنٹس سے مماثل ہوں گے: ```yaml accounts: diff --git a/website/pages/ur/cookbook/subgraph-uncrashable.mdx b/website/pages/ur/cookbook/subgraph-uncrashable.mdx index 0de9d6fa3a55..3bea4e892699 100644 --- a/website/pages/ur/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/ur/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: محفوظ سب گراف کوڈ جنریٹر - فریم ورک میں ہستی متغیرات کے گروپس کے لیے حسب ضرورت، لیکن محفوظ، سیٹر فنکشنز بنانے کا ایک طریقہ (کنفگ فائل کے ذریعے) بھی شامل ہے۔ اس طرح صارف کے لیے کسی باسی گراف ہستی کو لوڈ/استعمال کرنا ناممکن ہے اور فنکشن کے لیے مطلوبہ متغیر کو محفوظ کرنا یا سیٹ کرنا بھولنا بھی ناممکن ہے. -- انتباہی لاگز کو لاگز کے طور پر ریکارڈ کیا جاتا ہے جو اس بات کی نشاندہی کرتے ہیں کہ ڈیٹا کی درستگی کو یقینی بنانے کے لیے مسئلے کو پیچ کرنے میں مدد کے لیے سب گراف کی منطق کی خلاف ورزی کہاں ہے۔ یہ لاگز 'لاگز' سیکشن کے تحت گراف کی میزبانی کی ہوسٹڈ میں دیکھے جا سکتے ہیں. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. گراف CLI کوڈجن کمانڈ کا استعمال کرتے ہوئے سب گراف ان کریش ایبل کو اختیاری پرچم کے طور پر چلایا جا سکتا ہے. diff --git a/website/pages/ur/cookbook/upgrading-a-subgraph.mdx b/website/pages/ur/cookbook/upgrading-a-subgraph.mdx index 0a3a917f5b62..803ebb10fd41 100644 --- a/website/pages/ur/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/ur/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ You can update the metadata of your subgraphs without having to publish a new ve ## گراف نیٹ ورک پر سب گراف کو فرسودہ کرنا -اپنے سب گراف کو فرسودہ کرنے اور اسے گراف نیٹ ورک سے ہٹانے کے لیے [یہاں](/managing/deprecating-a-subgraph) کے مراحل پر عمل کریں. +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. 
## سب گراف کا کیوری کرنا + گراف نیٹ ورک پر بلنگ diff --git a/website/pages/ur/deploying/multiple-networks.mdx b/website/pages/ur/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..5e4fddd79b4f --- /dev/null +++ b/website/pages/ur/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## سب گراف کو متعدد نیٹ ورکس پر تعینات کرنا + +کچھ معاملات میں، آپ ایک ہی سب گراف کو متعدد نیٹ ورکس پر اس کے تمام کوڈ کی نقل کیے بغیر تعینات کرنا چاہیں گے۔ اس کے ساتھ آنے والا بنیادی چیلنج یہ ہے کہ ان نیٹ ورکس پر کنٹریکٹ ایڈریس مختلف ہیں. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +آپ کے نیٹ ورکس کی تشکیل فائل کو اس طرح نظر آنا چاہئے: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +اب ہم زیل میں دی گئ کمانڈز میں سے ایک چلا سکتے ہیں: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' 
+      abi: Gravity
+    mapping:
+      kind: ethereum/events
+```
+
+Now you are ready to `yarn deploy`.
+
+> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option:
+
+```sh
+# Using default networks.json file
+yarn deploy --network sepolia
+
+# Using custom named file
+yarn deploy --network sepolia --network-file path/to/config
+```
+
+### Subgraph.yaml ٹیمپلیٹ استعمال کرنا
+
+One way to parameterize aspects like contract addresses with older `graph-cli` versions is to generate parts of the manifest with a templating system such as [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/).
+
+To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network:
+
+```json
+{
+  "network": "mainnet",
+  "address": "0x123..."
+}
+```
+
+اور
+
+```json
+{
+  "network": "sepolia",
+  "address": "0xabc..."
+}
+```
+
+Along with that, you would substitute the network name and addresses in the manifest with the variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`:
+
+```yaml
+# ...
+dataSources:
+  - kind: ethereum/contract
+    name: Gravity
+    network: mainnet
+    network: {{network}}
+    source:
+      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC'
+      address: '{{address}}'
+      abi: Gravity
+    mapping:
+      kind: ethereum/events
+```
+
+In order to generate a manifest for either network, you could add two additional commands to `package.json` along with a dependency on `mustache`:
+
+```json
+{
+  ...
+  "scripts": {
+    ...
+    "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml",
+    "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml"
+  },
+  "devDependencies": {
+    ...
+    "mustache": "^3.1.0"
+  }
+}
+```
+
+To deploy this subgraph for mainnet or Sepolia, you would now simply run one of the two following commands:
+
+```sh
+# Mainnet:
+yarn prepare:mainnet && yarn deploy
+
+# Sepolia:
+yarn prepare:sepolia && yarn deploy
+```
+
+A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759).
+
+**Note:** This approach can also be applied to more complex situations where more than just contract addresses and network names need to be substituted, or where mappings and ABIs are also generated from templates.
+
+## سب گراف سٹوڈیو سب گراف آرکائیو پالیسی
+
+A subgraph version in Studio is archived if and only if it meets the following criteria:
+
+- The version is not published to the network (or pending publish)
+- The version was created 45 or more days ago
+- The subgraph hasn't been queried in 30 days
+
+In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
+
+اس پالیسی سے متاثر ہونے والے ہر سب گراف کے پاس زیر بحث ورژن کو واپس لانے کا اختیار ہے.
+ +## سب گراف کی صحت کی جانچ کرنا + +اگر ایک سب گراف کامیابی کے ساتھ مطابقت پذیر ہوتا ہے، تو یہ ایک اچھی علامت ہے کہ یہ ہمیشہ کے لیے اچھی طرح چلتا رہے گا۔ تاہم، نیٹ ورک پر نئے محرکات آپ کے سب گراف کو بغیر جانچ کی خرابی کی حالت کو نشانہ بنا سکتے ہیں یا کارکردگی کے مسائل یا نوڈ آپریٹرز کے ساتھ مسائل کی وجہ سے یہ پیچھے پڑنا شروع کر سکتا ہے. + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/ur/developing/creating-a-subgraph.mdx b/website/pages/ur/developing/creating-a-subgraph.mdx index 402d3df868ea..84840f2c33b9 100644 --- a/website/pages/ur/developing/creating-a-subgraph.mdx +++ b/website/pages/ur/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: سب گراف بنانا --- -ایک سب گراف بلاکچین سے ڈیٹا نکالتا ہے, اس پر کارروائی کرتا ہے اور اسے ذخیرہ کرتا ہے تاکہ GraphQL کے ذریعے آسانی سے کیوری کیا جا سکے. +This detailed guide provides instructions to successfully create a subgraph. -![سب گراف کی تعریف](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -سب گراف کی تعریف چند فائلوں پر مشتمل ہے: +![سب گراف کی تعریف](/img/defining-a-subgraph.png) -- `subgraph.yaml`: سب گراف مینی فیسٹ پر مشتمل ایک YAML فائل ہے +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: ایک GraphQL اسکیما جو اس بات کی وضاحت کرتا ہے کہ آپ کے سب گراف کے لیے کون سا ڈیٹا محفوظ ہے، اور GraphQL کے ذریعے اسے کیوری کیسے کیا جائے +## شروع ہوا چاہتا ہے -- `AssemblyScript Mappings`: [اسمبلی اسکرپٹ](https://github.com/AssemblyScript/assemblyscript) کوڈ جو ایونٹ کے ڈیٹا سے آپ کے اسکیما کی ہستیوں میں تبدیل کرتا ہے (جیسے `mapping.ts` اس ٹیوٹوریل میں) +### گراف CLI انسٹال کریں -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). 
+To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## گراف CLI انسٹال کریں +اپنی مقامی مشین پر، درج زیل کمانڈز میں سے ایک کو رن کریں: -گراف CLI کو جاوا سکرپٹ میں لکھا کیا ہے, اور آپ کو اسے استعمال کرنے کے لیے یا تو `yarn` یا `npm` انسٹال کرنے کی ضرورت ہوگی; یہ فرض کیا جاتا ہے کہ آپ کے پاس مندرجہ ذیل میں سے yarn ہے. +#### Using [npm](https://www.npmjs.com/) -ایک بار جب آپ کے پاس `yarn` آجائے تو، یہ چلا کر Graph CLI انسٹال کریں +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**yarn کے ساتھ انسٹال کریں:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**npm کے ساتھ انسٹال کریں:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## ایک موجودہ کنٹریکٹ سے +### From an existing contract -مندرجہ ذیل کمانڈ ایک سب گراف بناتا ہے جو موجودہ کنٹریکٹ کے تمام ایوینٹس کو انڈیکس کرتا ہے. یہ ایتھر سکین سے کنٹریکٹ ABI حاصل کرنے کی کوشش کرتا ہے اور مقامی فائل پاتھ کی درخواست کرنے پر واپس آتا ہے. اگر اختیاری انتخابات میں سے کوئی غائب ہے، تو یہ آپ کو ایک انٹرایکٹو فارم پر لے جاتا ہے. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -`` سب گراف سٹوڈیو میں آپ کے سب گراف کی ID ہے, یہ آپ کے سب گراف کی تفصیلات کے صفحہ پر پائی جا سکتی ہے. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## ایک مثال کے سب گراف سے +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -دوسرا موڈ `graph init` سپورٹ کرتا ہے مثال کے سب گراف سے ایک نیا پروجیکٹ بنا رہا ہے. 
درج ذیل کمانڈ یہ کرتی ہے: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## موجودہ سب گراف میں نئے ڈیٹا سورسز شامل کریں +## Add new `dataSources` to an existing subgraph -`v0.31.0` سے اب تک `graph add`, `graph-cli` کمانڈ کے ذریعے موجودہ سب گراف میں نئے ڈیٹا سورسز کو شامل کرنے کی حمایت کرتا ہے. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
    [] @@ -78,22 +88,45 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -`add` کمانڈ ایتھر سکین سے ABI لے آئے گی (جب تک کہ `--abi` آپشن کے ساتھ ABI کا پاتھ متعین نہ کیا جائے)، اور ایک نیا `dataSource` بنائے گا۔ اسی طرح جس طرح `graph init` کمانڈ ایک `dataSource` `--from-contract` سے تخلیق کرتی ہے، اس کے مطابق اسکیما اور میپنگس کو اپ ڈیٹ کرتی ہے. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- `--merge-entities` کا اپشن اس بات کی نشاندہی کرتا ہے کہ ڈیولپر کس طرح `entity` اور `event` نام کے تنازعات سے نمٹنا چاہے گا: + + - اگر `true`: نئے `data source` کو موجودہ `eventHandlers` اور `entities` کا استعمال کرنا چاہیے. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- کنٹریکٹ `address` متعلقہ نیٹ ورک کے لیے `networks.json` پر لکھا جائے گا. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -`--merge-entities` کا اپشن اس بات کی نشاندہی کرتا ہے کہ ڈیولپر کس طرح `entity` اور `event` نام کے تنازعات سے نمٹنا چاہے گا: +## Components of a subgraph -- اگر `true`: نئے `data source` کو موجودہ `eventHandlers` اور `entities` کا استعمال کرنا چاہیے. -- اگر `false`: ایک نئی اینٹیٹی اور ایونٹ ہینڈلر کو `${dataSourceName}{EventName}` کے ساتھ بنایا جانا چاہیے. +### سب گراف مینی فیسٹ -کنٹریکٹ `address` متعلقہ نیٹ ورک کے لیے `networks.json` پر لکھا جائے گا. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **نوٹ:** انٹرایکٹو cli کا استعمال کرتے وقت، `graph init` کو کامیابی سے چلانے کے بعد، آپ کو ایک نیا `dataSource` شامل کرنے کا کہا جائے گا. +The **subgraph definition** consists of the following files: -## سب گراف مینی فیسٹ +- `subgraph.yaml`: Contains the subgraph manifest -سب گراف مینی فیسٹ `subgraph.yaml` آپ کے سب گراف کے انڈیکس کردہ سمارٹ کنٹریکٹ کی وضاحت کرتا ہے, ان کنٹریکٹس میں سے کن ایوینٹس پر توجہ دی جائے, اور ایونٹ کے ڈیٹا کو ان ہستیوں کے ساتھ میپ کرنے کا طریقہ جو گراف نوڈ ذخیرہ کرتا ہے اور کیوری کرنے کی اجازت دیتا ہےاور ایونٹ کے ڈیٹا کو ان ہستیوں کے ساتھ میپ کرنے کا طریقہ جو گراف نوڈ ذخیرہ کرتا ہے اور کیوری کرنے کی اجازت دیتا ہے. سب گراف مینی فیسٹ کے لیے مکمل تفصیلات [یہاں](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md) مل سکتی ہیں. +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -مثال کے سب گراف کے لیے، `subgraph.yaml` یہ ہے: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
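To give a feel for the mapping component listed above, a handler from the Gravatar example could look roughly like the sketch below; the generated import paths depend on your own `graph codegen` output and data source name:

```typescript
import { NewGravatar } from '../generated/Gravity/Gravity'
import { Gravatar } from '../generated/schema'

export function handleNewGravatar(event: NewGravatar): void {
  // Each entity needs a unique id; the Gravatar id emitted with the event works here
  let gravatar = new Gravatar(event.params.id.toHex())
  gravatar.owner = event.params.owner
  gravatar.displayName = event.params.displayName
  gravatar.imageUrl = event.params.imageUrl
  gravatar.save()
}
```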
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -162,7 +195,7 @@ dataSources: - `dataSources.source.endBlock`: بلاک کا اختیاری نمبر جس پر ڈیٹا سورس انڈیکس کرنا روکتا ہے، اس بلاک سمیت۔ کم از کم مخصوص ورژن درکار ہے: `0.0.9`۔ -- `dataSources.context`: کلیدی ویلیو کے جوڑے جو سب گراف میپنگ میں استعمال کیے جاسکتے ہیں۔ مختلف قسم کے ڈیٹا کو سپورٹ کرتا ہے جیسے `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`، `List`، اور `BigInt`۔ ہر متغیر کو اپنی `type` اور `data` کی وضاحت کرنے کی ضرورت ہے۔ یہ سیاق و سباق کے متغیرات پھر میپنگ فائلوں میں قابل رسائی ہوتے ہیں، جو سب گراف کی ترقی کے لیے مزید قابل ترتیب اختیارات پیش کرتے ہیں۔ +- `dataSources.context`: کلیدی ویلیو کے جوڑے جو سب گراف میپنگ میں استعمال کیے جاسکتے ہیں۔ مختلف قسم کے ڈیٹا کو سپورٹ کرتا ہے جیسے `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, ` Bytes `، ` List `، اور `BigInt`۔ ہر متغیر کو اپنی ` type ` اور ` data ` کی وضاحت کرنے کی ضرورت ہے۔ یہ سیاق و سباق کے متغیرات پھر میپنگ فائلوں میں قابل رسائی ہوتے ہیں، جو سب گراف کی ترقی کے لیے مزید قابل ترتیب اختیارات پیش کرتے ہیں۔ - `dataSources.mapping.entities`: وہ اینٹیٹیز جنہیں ڈیٹا سورس اسٹور کو لکھتا ہے۔ schema.graphql فائل میں ہر اینٹیٹی کے لیے اسکیما کی وضاحت کی گئی ہے. @@ -180,9 +213,9 @@ dataSources: بلاک کے اندر ڈیٹا سورس کے لیے محرکات درج ذیل عمل کا استعمال کرتے ہوئے ترتیب دیے گئے ہیں: -1. ایونٹ اور کال ٹریگرز کو پہلے بلاک کے اندر ٹرانزیکشن انڈیکس سے ترتیب دیا جاتا ہے. -2. ایک ہی ٹرانزیکشن کے اندر ایونٹ اور کال ٹرگرز کو روایت کا استعمال کرتے ہوئے ترتیب دیا جاتا ہے: پہلے ایونٹ ٹرگرز پھر کال ٹرگرز، ہر قسم اس ترتیب کا احترام کرتی ہے جس کی وضاحت مینی فیسٹ میں کی گئی ہے. -3. بلاک ٹریگرز ایونٹ اور کال ٹریگرز کے بعد چلائے جاتے ہیں، اس ترتیب میں جس کی وضاحت مینی فیسٹ میں کی گئی ہے. +1. ایونٹ اور کال ٹریگرز کو پہلے بلاک کے اندر ٹرانزیکشن انڈیکس سے ترتیب دیا جاتا ہے. +2. ایک ہی ٹرانزیکشن کے اندر ایونٹ اور کال ٹرگرز کو روایت کا استعمال کرتے ہوئے ترتیب دیا جاتا ہے: پہلے ایونٹ ٹرگرز پھر کال ٹرگرز، ہر قسم اس ترتیب کا احترام کرتی ہے جس کی وضاحت مینی فیسٹ میں کی گئی ہے. +3. بلاک ٹریگرز ایونٹ اور کال ٹریگرز کے بعد چلائے جاتے ہیں، اس ترتیب میں جس کی وضاحت مینی فیسٹ میں کی گئی ہے. ترتیب دینے کے یہ اصول تبدیل کیے جا سکتے ہیں. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. 
Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| ورزن | جاری کردہ نوٹس | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | +| ورزن | جاری کردہ نوٹس | +|:-----:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | | 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### ABIs حاصل کرنا @@ -442,16 +475,16 @@ type GravatarDeclined @entity { ہم اپنے GraphQL API میں درج ذیل اسکیلرز کی حمایت کرتے ہیں: -| قسم | تفصیل | -| --- | --- | -| `Bytes` | Byte array، ایک ہیکساڈیسیمل سٹرنگ کے طور پر پیش کیا جاتا ہے. عام طور پر Ethereum hashes اور ایڈریسیس کے لیے استعمال ہوتا ہے. | -| `String` | `string` ویلیوز کے لیے اسکیلر. 
خالی حروف تعاون یافتہ نہیں ہیں اور خود بخود ہٹا دیے جاتے ہیں. | -| `Boolean` | `Boolean` ویلیوز کے لیے اسکیلر. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | ایک 8-بائٹ دستخط شدہ عدد، جسے 64-بٹ دستخط شدہ عدد بھی کہا جاتا ہے، -9,223,372,036,854,775,808 سے لے کر 9,223,372,036,854,775,807 تک کی ویلیوز کو ذخیرہ کرسکتا ہے۔ ایتھیریم سے `i64` کی نمائندگی کرنے کے لیے اسے استعمال کرنے کو ترجیح دیں۔ | -| `BigInt` | بڑے integers۔ Ethereum کی `uint32`، `int64`، `uint64`، ..., `uint256` اقسام کے لیے استعمال کیا جاتا ہے. نوٹ: `uint32` کے نیچے ہر چیز، جیسے `int32`، `uint24` یا `int8` کو `i32` کے طور پر دکھایا گیا ہے. | -| `BigDecimal` | `BigDecimal` اعلی درستگی والے اعشاریہ ایک significand اور ایک exponent کے طور پر پیش کیا جاتے ہہیں. Exponent رینج −6143 سے +6144 تک ہے۔ 34 سگنیفیکینڈ ہندسوں پر rounded کیا گیا۔. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| قسم | تفصیل | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `Bytes` | Byte array، ایک ہیکساڈیسیمل سٹرنگ کے طور پر پیش کیا جاتا ہے. عام طور پر Ethereum hashes اور ایڈریسیس کے لیے استعمال ہوتا ہے. | +| `String` | `string` ویلیوز کے لیے اسکیلر. خالی حروف تعاون یافتہ نہیں ہیں اور خود بخود ہٹا دیے جاتے ہیں. | +| `Boolean` | `Boolean` ویلیوز کے لیے اسکیلر. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | ایک 8-بائٹ دستخط شدہ عدد، جسے 64-بٹ دستخط شدہ عدد بھی کہا جاتا ہے، -9,223,372,036,854,775,808 سے لے کر 9,223,372,036,854,775,807 تک کی ویلیوز کو ذخیرہ کرسکتا ہے۔ ایتھیریم سے `i64` کی نمائندگی کرنے کے لیے اسے استعمال کرنے کو ترجیح دیں۔ | +| `BigInt` | بڑے integers۔ Ethereum کی `uint32`، `int64`، `uint64`، ..., `uint256` اقسام کے لیے استعمال کیا جاتا ہے. نوٹ: `uint32` کے نیچے ہر چیز، جیسے `int32`، `uint24` یا `int8` کو `i32` کے طور پر دکھایا گیا ہے. | +| `BigDecimal` | `BigDecimal` اعلی درستگی والے اعشاریہ ایک significand اور ایک exponent کے طور پر پیش کیا جاتے ہہیں. Exponent رینج −6143 سے +6144 تک ہے۔ 34 سگنیفیکینڈ ہندسوں پر rounded کیا گیا۔. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enums @@ -593,7 +626,7 @@ Many-to-many تعلقات کو ذخیرہ کرنے کے اس زیادہ وسیع #### اسکیما میں کامینٹس شامل کرنا -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **نوٹ:** ایک نیا ڈیٹا سورس صرف اس بلاک کے لیے کالز اور ایونٹس پر کارروائی کرے گا جس میں اسے بنایا گیا تھا اور تمام مندرجہ ذیل بلاکس، لیکن تاریخی ڈیٹا، یعنی ڈیٹا پر کارروائی نہیں کرے گا جو پہلے سے بلاکس میں موجود ہے. -> +> > اگر پہلے والے بلاکس میں نئے ڈیٹا سورس سے متعلقہ ڈیٹا ہوتا ہے، تو یہ بہترین ہے کہ کنٹریکٹ کی موجودہ حالت کو پڑھ کر اور ڈیٹا کا نیا سورس بننے کے وقت اس سٹیٹ کی نمائندگی کرنے والی اینٹیٹیز بنا کر اس ڈیٹا کو انڈیکس کریں. ### ڈیٹا سورس سیاق و سباق @@ -930,7 +963,7 @@ dataSources: ``` > **نوٹ:** کنٹریکٹ تخلیق والے بلاک کو ایتھر سکین پر تیزی سے دیکھا جا سکتا ہے: -> +> > 1. 
سرچ بار میں اس کا ایڈریس درج کرکے کنٹریکٹ کو تلاش کریں. > 2. `Contract Creator` سیکشن میں تخلیق ٹرانزیکشن ہیش پر کلک کریں. > 3. ٹرانزیکشن کی تفصیلات کا صفحہ لوڈ کریں جہاں آپ کو اس کنٹریکٹ کے لیے اسٹارٹ بلاک ملے گا. @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. ``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### فائلوں پر کارروائی کرنے کے لیے ایک نیا ہینڈلر بنائیں -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). پڑھنے کے قابل سٹرنگ کے طور پر فائل کی CID تک `dataSource` کے ذریعے اس طرح رسائی حاصل کی جا سکتی ہے: diff --git a/website/pages/ur/developing/developer-faqs.mdx b/website/pages/ur/developing/developer-faqs.mdx index 5070304b0f43..e60f4b2a7072 100644 --- a/website/pages/ur/developing/developer-faqs.mdx +++ b/website/pages/ur/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: ڈویلپر کے اکثر پوچھے گئے سوالات --- -## سب گراف کیا ہے؟ +This page summarizes some of the most common questions for developers building on The Graph. -سب گراف ایک کسٹم API ہے جو بلاکچین ڈیٹا پر بنا ہے. سب گرافس کا GraphQL کی کیوری لینگویج کا استعمال ہوتے ہوۓ کیوری ہوتا ہے اور گراف CLI کا استعمال ہوتے ہوۓ گراف نوڈ پر تعینات ہوتے ہیں. گراف کے ڈیسینٹرالائزڈ نیٹ ورک پر تعینات اور شائع ہونے کے بعد، انڈیکسرز سب گراف پر کارروائی کرتے ہیں اور انہیں سب گراف صارفین کے لیے کیوری کرنے کے لیے دستیاب کرتے ہیں. +## Subgraph Related -## 2. 
کیا میں اپنا سب گراف ختم کر سکتا ہوں +### سب گراف کیا ہے؟ -ایک بار سب گرافس بن جائیں تو ان کو ختم کرنا ممکن نہیں ہے. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. کیا میں اپنے سب گراف کا نام تبدیل کر سکتا ہوں؟ +### 2. What is the first step to create a subgraph? -نہیں. ایک بار سب گراف بن جاۓ، اس کا نام بدل نہیں سکتا. اپنا سب گراف بنانے سے پہلے اس کے بارے میں احتیاط سے سوچنا یقینی بنائیں تاکہ یہ دوسرے ڈی ایپس کے ذریعے آسانی سے تلاش اور شناخت کے قابل ہو. +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. کیا میں گٹ ہب کا اکاونٹ بدل سکتا ہوں جو میرے سب گراف کے ساتھ وابستہ ہے؟ +### 3. Can I still create a subgraph if my smart contracts don't have events? -نہیں. ایک بار سب گراف بن جاۓ، متعلقہ گٹ ہب اکاؤنٹ کو تبدیل نہیں کیا جا سکتا. اپنا سب گراف بنانے سے پہلے اس کے بارے میں احتیاط سے سوچنا یقینی بنائیں. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. کیا میں اب بھی ایک سب گراف بنانے کے قابل ہوں اگر میرے سمارٹ کنٹریکٹ میں ایونٹس نہ ہوں؟ +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -یہ انتہائی سفارش کی جاتی ہے کہ آپ اپنے سمارٹ کنٹریکٹس کو ایونٹس کے لیے تشکیل دیں اس ڈیٹا سے وابستہ جس سے آپ کیوری کرنے میں دلچسپی رکھتے ہیں۔ سب گراف میں ایونٹ ہینڈلرز کنٹریکٹ کے واقعات سے متحرک ہوتے ہیں اور مفید ڈیٹا کو بازیافت کرنے کا اب تک کا تیز ترین طریقہ ہے. +### 4. کیا میں گٹ ہب کا اکاونٹ بدل سکتا ہوں جو میرے سب گراف کے ساتھ وابستہ ہے؟ -اگر آپ جن کنٹریکٹس کے ساتھ کام کر رہے ہیں ان میں ایونٹس شامل نہیں ہیں، تو آپ کا سب گراف انڈیکسنگ کو متحرک کرنے کے لیے کال اور بلاک ہینڈلرز کا استعمال کر سکتا ہے۔ اگرچہ اس کی سفارش نہیں کی جاتی ہے، کیونکہ کارکردگی کافی سست ہوگی. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. کیا متعدد نیٹ ورکس کے لیے ایک نام کے ساتھ ایک سب گراف تعینات کرنا ممکن ہے؟ +### 5. How do I update a subgraph on mainnet? -آپ کو متعدد نیٹ ورکس کے لیے الگ الگ ناموں کی ضرورت ہوگی۔ اگرچہ آپ کے پاس ایک ہی نام کے تحت مختلف سب گراف نہیں ہوسکتے ہیں، متعدد نیٹ ورکس کے لیے ایک کوڈ بیس رکھنے کے آسان طریقے ہیں۔ ہماری دستاویزات میں اس کے بارے میں مزید معلومات حاصل کریں: [سب گراف کو دوبارہ تعینات کرنا](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. ٹیمپلیٹس ڈیٹا سورسز سے کیسے مختلف ہے؟ +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? 
-ٹیمپلیٹس آپ کو پرواز پر ڈیٹا کے ذرائع بنانے کی اجازت دیتے ہیں، جب کہ آپ کا سب گراف انڈیکس کر رہا ہوتا ہے۔ ایسا ہو سکتا ہے کہ آپ کا کنٹریکٹ نئے کنٹریکٹس کو جنم دے گا کیونکہ لوگ اس کے ساتھ تعامل کرتے ہیں، اور چونکہ آپ ان کنٹریکٹس (ABI، ایونٹس وغیرہ) کی شکل کو پہلے ہی جانتے ہیں، آپ اس بات کی وضاحت کر سکتے ہیں کہ آپ انہیں ٹیمپلیٹ میں کیسے ترتیب دینا چاہتے ہیں اور وہ کب آپ کا سب گراف کنٹریکٹ ایڈریس فراہم کرکے ایک متحرک ڈیٹا سورس بنائے گا. +آپ کو سب گراف کو دوبارہ تعینات کرنا ہوگا، لیکن اگر سب گراف ID (IPFS ہیش) تبدیل نہیں ہوتا ہے، تو اسے شروع سے مطابقت پذیر نہیں ہونا پڑے گا. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +سب گراف کے اندر، ایونٹس کو ہمیشہ اسی ترتیب سے پروسیس کیا جاتا ہے جس ترتیب سے وہ بلاکس میں ظاہر ہوتے ہیں، قطع نظر اس کے کہ یہ متعدد کنٹریکٹس میں ہے یا نہیں. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. [ڈیٹا سورس ٹیمپلیٹس](/developing/creating-a-subgraph#data-source-templates) پر "ڈیٹا سورس ٹیمپلیٹ کو تیز کرنا" سیکشن دیکھیں. -## 8. میں یہ کیسے یقینی بنا سکتا ہوں کہ میں اپنی مقامی تعیناتیوں کے لیے گراف نوڈ کا تازہ ترین ورژن استعمال کر رہا ہوں؟ +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -آپ زیل میں دی گئ کمانڈ چلا سکتے ہیں: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**نوٹ:** docker / docker-compose ہمیشہ استعمال کرے گا جو بھی گراف نوڈ ورژن آپ نے پہلی بار چلاتے وقت کھینچا تھا، لہذا یہ یقینی بنانے کے لیے یہ کرنا ضروری ہے کہ آپ گراف نوڈ کے تازہ ترین ورژن کے ساتھ اپ ٹو ڈیٹ ہیں. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. میں کنٹریکٹ فنکشن کو کیسے کال کر سکتا ہوں یا اپنے سب گراف میپنگ سے پبلک سٹیٹ متغیر تک کیسے رسائ حاصل کر سکتا ہوں؟ +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). 
+When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. کیا دو کنٹریکٹس کے ساتھ `graph init` سے `graph` کا استعمال کرتے ہوئے سب گراف ترتیب دینا ممکن ہے؟ یا مجھے `graph init` چلانے کے بعد دستی طور پر `subgraph.yaml` میں دوسرا ڈیٹا سورس شامل کرنا چاہیے؟ +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +آپ زیل میں دی گئ کمانڈ چلا سکتے ہیں: -## 11. میں تعاون کرنا چاہتا ہوں یا گٹ ہب اشو شامل کرنا چاہتا ہوں۔ مجھے اوپن سورس ریپوزٹریز کہاں سے مل سکتی ہیں؟ +```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. ایونٹس کو سنبھالتے وقت کسی ہستی کے لیے "خود کار طریقے سے تیار کردہ" آئی ڈیز بنانے کا تجویز کردہ طریقہ کیا ہے؟ +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? اگر ایونٹ کے دوران صرف ایک ہستی بنائی جاتی ہے اور اگر اس سے بہتر کوئی چیز دستیاب نہیں ہے، تو ٹرانزیکشن ہیش + لاگ انڈیکس منفرد ہوگا۔ آپ اسے بائٹس میں تبدیل کرکے اور پھر اسے `crypto.keccak256` کے ذریعے پائپ کر کے مبہم کر سکتے ہیں لیکن یہ اسے مزید منفرد نہیں بنائے گا. -## 13. متعدد کنٹریکٹس کو سنتے وقت، کیا ایونٹس کو سننے کے لیے کنٹریکٹ آرڈر کو منتخب کرنا ممکن ہے؟ +### 15. Can I delete my subgraph? -سب گراف کے اندر، ایونٹس کو ہمیشہ اسی ترتیب سے پروسیس کیا جاتا ہے جس ترتیب سے وہ بلاکس میں ظاہر ہوتے ہیں، قطع نظر اس کے کہ یہ متعدد کنٹریکٹس میں ہے یا نہیں. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +آپ کو تعاون یافتہ نیٹ ورکس کی فہرست [یہاں](/developing/supported-networks) مل سکتی ہے. + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? جی ہاں. آپ ذیل کی مثال کے مطابق `graph-ts` درآمد کر کے ایسا کر سکتے ہیں: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. کیا میں ethers.js یا دوسری JS لائبریریز کو اپنے سب گراف میپنگس میں درآمد کر سکتا ہوں? - -فی الحال نہیں، جیسا کہ میپنگ اسمبلی اسکرپٹ میں لکھی جاتی ہے۔ اس کا ایک ممکنہ متبادل حل یہ ہے کہ خام ڈیٹا کو اداروں میں اسٹور کیا جائے اور وہ منطق انجام دی جائے جس کے لیے کلائنٹ پر JS لائبریریوں کی ضرورت ہو. +## Indexing & Querying Related -## 17. 
کیا یہ وضاحت کرنا ممکن ہے کہ کس بلاک پر انڈیکسنگ شروع کرنی ہے؟ +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. کیا انڈیکسنگ کی کارکردگی کو بڑھانے کے لیے کچھ نکات ہیں؟ میرا سب گراف مطابقت پزیر ہونے میں بہت زیادہ وقت لے رہا ہے +### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync -ہاں، آپ کو اس بلاک سے انڈیکسنگ شروع کرنے کے لیے اختیاری اسٹارٹ بلاک کی خصوصیت پر ایک نظر ڈالنی چاہیے جس میں کنٹریکٹ تعینات کیا گیا تھا: [اسٹارٹ بلاکس](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. کیا سب گراف سے براہ رست کیوری کرنے کا کوئ طریقہ ہے تاکہ اس نے جو تازہ ترین بلاک نمبر ترتیب دیا ہو اس کا تعین کیا جا سکے؟ +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? جی ہاں! مندرجہ ذیل کمانڈ کو آزمائیں، "تنظیم/سب گراف نام" کو اس کے تحت شائع ہونے والی تنظیم کے ساتھ تبدیل کرتے ہوئے اور آپ کے سب گراف کا نام: @@ -102,44 +121,27 @@ Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the n curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. گراف کن نیٹ ورکس کو سپورٹ کرتا ہے؟ - -آپ کو تعاون یافتہ نیٹ ورکس کی فہرست [یہاں](/developing/supported-networks) مل سکتی ہے. - -## 21. کیا یہ ممکن ہے کہ سب گراف کو دوسرے اکاؤنٹ یا اینڈ پوائنٹ پر دوبارہ تعینات کیے بغیر نقل کیا جائے؟ - -آپ کو سب گراف کو دوبارہ تعینات کرنا ہوگا، لیکن اگر سب گراف ID (IPFS ہیش) تبدیل نہیں ہوتا ہے، تو اسے شروع سے مطابقت پذیر نہیں ہونا پڑے گا. - -## 22. کیا گراف نوڈ کے اوپر اپالو فیڈریشن کا استعمال کرنا ممکن ہے؟ +### 22. Is there a limit to how many objects The Graph can return per query? -فیڈریشن کو ابھی تک سپورٹ نہیں کیا گیا ہے، حالانکہ ہم مستقبل میں اس کی حمایت کرنا چاہتے ہیں۔ اس وقت، آپ جو کچھ کر سکتے ہیں وہ ہے اسکیما سٹیچنگ کا استعمال، یا تو کلائنٹ پر یا پراکسی سروس کے ذریعے. - -## 23. کیا اس بات کی کوئ حد ہے کہ گراف فی کیوری کتنے آبجیکٹ واپس کر سکتا ہے؟ - -پہلے سے طے شدہ طور پر، سوالات کے جوابات فی مجموعہ 100 آئٹمز تک محدود ہیں۔ اگر آپ مزید وصول کرنا چاہتے ہیں، تو آپ فی مجموعہ 1000 آئٹمز تک جا سکتے ہیں اور اس سے آگے، آپ اس کے ساتھ صفحہ بندی کر سکتے ہیں: +By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -## 24. 
اگر میرا ڈیپ فرنٹ اینڈ کیوری کرنے کے لیے گراف کا استعمال کرتا ہے، تو مجھے اپنی کیوری کی کلید براہ راست فرنٹ اینڈ کیوری کی ضرورت ہے؟ کیا اگر ہم صارفین کے لیے کیوری کی فیس ادا کرتے ہیں - کیا بدنیتی پر مبنی صارفین ہماری کیوری کی فیس کو بہت زیادہ کرنے کا سبب بنیں گے؟ - -فی الحال، ڈیپ کے لیے تجویز کردہ طریقہ یہ ہے کہ کلید کو فرنٹ اینڈ میں شامل کیا جائے اور اسے اختتامی صارفین کے سامنے لایا جائے۔ اس نے کہا، آپ اس کلید کو میزبان نام تک محدود کر سکتے ہیں، جیسے _yourdapp.io_ اور سب گراف۔ گیٹ وے فی الحال ایج اور نوڈ گیٹ وے کی ذمہ داری کا حصہ بدسلوکی پر نظر رکھنا اور بدسلوکی والے کلائنٹس سے ٹریفک کو روکنا ہے. - -## 25. Where do I go to find my current subgraph on the hosted service? - -سب گراف تلاش کرنے کے لیے ہوسٹڈ سروس کی طرف جائیں جو آپ یا دوسروں نے ہوسٹڈ سروس میں تعینات کیے ہیں۔ آپ اسے [یہاں](https://thegraph.com/hosted-service) تلاش کر سکتے ہیں۔ - -## 26. Will the hosted service start charging query fees? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -گراف ہوسٹڈ سروس کے لیے کبھی بھی پیسے نہیں لے گا۔ گراف ایک ڈیسینٹرالائزڈ پروٹوکول ہے، اور سینٹرالائزڈ سروسز کے لیے پیسے لینا گراف کی اقدار کے ساتھ موافق نہیں ہے۔ ہوسٹڈ سروس ہمیشہ سے ہی ایک عارضی قدم ہوتا ہے تاکہ ڈیسینترالائزڈ نیٹ ورک تک پہنچنے میں مدد کی جا سکے۔ ڈویلپرز کے پاس ڈیسینٹرالائزڈ نیٹ ورک میں اپ گریڈ کرنے کے لیے کافی وقت ہوگا کیونکہ وہ آرام دہ ہیں۔ +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. 
+- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/ur/developing/graph-ts/api.mdx b/website/pages/ur/developing/graph-ts/api.mdx index 4229c68d4bfe..a8d423b77148 100644 --- a/website/pages/ur/developing/graph-ts/api.mdx +++ b/website/pages/ur/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: اسمبلی اسکرپٹ API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -یہ صفحہ دستاویز کرتا ہے کہ سب گراف میپنگ لکھتے وقت کیا بلٹ ان APIs استعمال کیا جا سکتا ہے۔ دو قسم کے APIs باکس سے باہر دستیاب ہیں: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). ## API حوالہ @@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs: The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph. -| ورزن | جاری کردہ نوٹس | -| :-: | --- | -| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | +| ورزن | جاری کردہ نوٹس | +| :---: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | | 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### بلٹ ان اقسام @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -دیگر ہستیوں کے ساتھ ٹکراؤ سے بچنے کے لیے ہر ایک کے پاس ایک منفرد ID ہونا ضروری ہے۔ ایونٹ کے پیرامیٹرز میں ایک منفرد شناخت کنندہ شامل کرنا کافی عام ہے جسے استعمال کیا جا سکتا ہے۔ نوٹ: ٹرانزیکشن ہیش کو ID کے طور پر استعمال کرنے سے یہ فرض ہوتا ہے کہ ایک ہی ٹرانزیکشن میں کوئی اور ایونٹ اس ہیش کے ساتھ ID کے طور پر نہیں بنتا ہے. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### اسٹور سے ہستیوں کو لوڈ کرنا @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### بلاک کے ساتھ تخلیق کردہ ہستیوں کو تلاش کرنا As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -اسٹور API ان ہستیوں کی بازیافت میں سہولت فراہم کرتا ہے جو موجودہ بلاک میں تخلیق یا اپ ڈیٹ کی گئی تھیں۔ اس کے لیے ایک عام صورت حال یہ ہے کہ ایک ہینڈلر کسی آن چین ایونٹ سے ٹرانزیکشن بناتا ہے، اور بعد کا ہینڈلر اس ٹرانزیکشن تک رسائی حاصل کرنا چاہتا ہے اگر یہ موجود ہو۔ ایسی صورت میں جہاں ٹرانزیکشن موجود نہیں ہے، سب گراف کو صرف یہ جاننے کے لیے ڈیٹا بیس میں جانا پڑے گا کہ ہستی موجود نہیں ہے۔ اگر سب گراف مصنف پہلے ہی جانتا ہے کہ ہستی کو اسی بلاک میں بنایا گیا ہو گا، تو loadInBlock کا استعمال اس ڈیٹا بیس راؤنڈ ٹرپ سے گریز کرتا ہے۔ کچھ سب گرافس کے لیے، یہ کھوئی ہوئی تلاشیں انڈیکسنگ کے وقت میں اہم کردار ادا کر سکتی ہیں. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some subgraphs, these missed lookups can contribute significantly to the indexing time. 
```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ As long as the `ERC20Contract` on Ethereum has a public read-only function calle #### واپس آنے والی کالوں کو ہینڈل کرنا -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -نوٹ کریں کہ گیتھ یا انفورا کلائنٹ سے منسلک گراف نوڈ تمام ریورٹس کا پتہ نہیں لگا سکتا، اگر آپ اس پر بھروسہ کرتے ہیں تو ہم پیراٹی کلائنٹ سے منسلک گراف نوڈ استعمال کرنے کی تجویز کرتے ہیں. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### انکوڈنگ/ڈی کوڈنگ ABI diff --git a/website/pages/ur/developing/substreams-powered-subgraphs-faq.mdx b/website/pages/ur/developing/substreams-powered-subgraphs-faq.mdx index d00028321854..ff87c267315c 100644 --- a/website/pages/ur/developing/substreams-powered-subgraphs-faq.mdx +++ b/website/pages/ur/developing/substreams-powered-subgraphs-faq.mdx @@ -44,7 +44,8 @@ Substreams-powered subgraphs combine all the benefits of Substreams with the que [StreamingFast](https://www.streamingfast.io/) کے ذریعے تیار کردہ، Firehose ایک بلاکچین ڈیٹا نکالنے کی پرت ہے جسے شروع سے بلاکچینز کی مکمل تاریخ کو اس رفتار سے پروسیس کرنے کے لیے ڈیزائن کیا گیا ہے جو پہلے نظر نہیں آتی تھیں۔ فائلوں پر مبنی اور سٹریمنگ فرسٹ اپروچ فراہم کرنا، یہ سٹریمنگ فاسٹ کے اوپن سورس ٹیکنالوجیز کے سوٹ کا بنیادی جزو اور سب اسٹریمز کی بنیاد ہے. -Firehose کے بارے میں مزید جاننے کے لیے[documentation] (https://firehose.streamingfast.io/) پر جائیں. +Firehose کے بارے میں مزید جاننے کے لیے[documentation] +(https://firehose.streamingfast.io/) پر جائیں. ## Firehose کے کیا فوائد ہیں؟ diff --git a/website/pages/ur/developing/supported-networks.json b/website/pages/ur/developing/supported-networks.json index 49b9ac4de457..1db47456c314 100644 --- a/website/pages/ur/developing/supported-networks.json +++ b/website/pages/ur/developing/supported-networks.json @@ -2,7 +2,7 @@ "network": "نیٹ ورک", "cliName": "CLI Name", "chainId": "Chain ID", - "hostedService": "ہوسٹڈ سروس", + "hostedService": "Hosted Service", "subgraphStudio": "سب گراف سٹوڈیو", "decentralizedNetwork": "Decentralized Network", "integrationType": "Integration Type" diff --git a/website/pages/ur/developing/supported-networks.mdx b/website/pages/ur/developing/supported-networks.mdx index cb315e14f536..bfa66b78c21b 100644 --- a/website/pages/ur/developing/supported-networks.mdx +++ b/website/pages/ur/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). 
⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/ur/developing/unit-testing-framework.mdx b/website/pages/ur/developing/unit-testing-framework.mdx index ddfd9122612e..e4e7f851f0cb 100644 --- a/website/pages/ur/developing/unit-testing-framework.mdx +++ b/website/pages/ur/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ Global test coverage: 22.2% (2/9 handlers). > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -اس کا مطلب ہے کہ آپ نے اپنے کوڈ میں `console.log` استعمال کیا ہے، جو اسمبلی اسکرپٹ سے تعاون یافتہ نہیں ہے۔ براہ کرم [لاگنگ API](/developing/assemblyscript-api/#logging-api) استعمال کرنے پر غور کریں +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) دلائل میں عدم مماثلت `graph-ts` اور `matchstick-as` میں عدم مماثلت کی وجہ سے ہوتی ہے۔ اس طرح کے مسائل کو حل کرنے کا بہترین طریقہ یہ ہے کہ ہر چیز کو تازہ ترین جاری کردہ ورژن میں اپ ڈیٹ کیا جائے. diff --git a/website/pages/ur/glossary.mdx b/website/pages/ur/glossary.mdx index 637afbec2617..c7737cc60abb 100644 --- a/website/pages/ur/glossary.mdx +++ b/website/pages/ur/glossary.mdx @@ -10,11 +10,9 @@ title: لغت - **اینڈ پوائنٹ**: ایک URL جسے سب گراف سے کیوری کرنے کے لیے استعمال کیا جا سکتا ہے۔ سب گراف سٹوڈیو کے لیے ٹیسٹنگ اینڈ پوائنٹ ہے `https://api.studio.thegraph.com/query///` اور گراف ایکسپلورر اینڈ پوائنٹ ہے `https: //gateway.thegraph.com/api//subgraphs/id/`۔ گراف ایکسپلورر اینڈ پوائنٹ کا استعمال گراف کے ڈیسینٹرالائزڈ نیٹ ورک پر سب گراف کے بارے میں کیوری کرنے کے لیے کیا جاتا ہے. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. 
+- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **انڈیکسرز**: نیٹ ورک کے شرکاء جو بلاکچینز سے ڈیٹا کو انڈیکس کرنے کے لیے انڈیکسنگ نوڈس چلاتے ہیں اور GraphQL کی کیوریز پیش کرتے ہیں. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **انڈیکسر کے آمدنی کے سلسلے** انڈیکسرز کو GRT میں دو اجزاء کے ساتھ انعام دیا جاتا ہے: کیوری کی فیس میں چھوٹ اور انڈیکسنگ کے انعامات. @@ -24,17 +22,17 @@ title: لغت - **انڈیکسر سیلف سٹیک**: GRT کی وہ مقدار جو انڈیکسرز ڈیسینٹرالائزڈ نیٹ ورک میں حصہ لینے کے لیے لگاتے ہیں۔ کم از کم 100,000 GRT ہے، اور کوئی اوپری حد نہیں ہے. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **ڈیلیگیٹرز**: نیٹ ورک کے شرکاء جو GRT کے مالک ہیں اور اپنی GRT انڈیکسرز کو تفویض کرتے ہیں۔ یہ انڈیکسرز کو نیٹ ورک پر سب گراف میں اپنا حصہ بڑھانے کی اجازت دیتا ہے۔ بدلے میں، ڈیلیگیٹرز کو انڈیکسنگ کے انعامات کا ایک حصہ ملتا ہے جو انڈیکسرز سب گراف پر کارروائی کرنے کے لیے وصول کرتے ہیں. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **ڈیلیگیشن ٹیکس**: ڈیلیگیٹرز کی طرف سے 0.5% فیس ادا کی جاتی ہے جب وہ انڈیکسرز کو GRT تفویض کرتے ہیں۔ فیس کی ادائیگی کے لیے استعمال ہونے والی GRT کو جلا دیا جاتا ہے. -- **کیوریٹرز**: نیٹ ورک کے شرکاء جو اعلیٰ معیار کے سب گراف کی شناخت کرتے ہیں، اور کیوریشن شیئرز کے بدلے انہیں "کیوریٹ" کرتے ہیں (یعنی ان پر GRT کا اشارہ دیتے ہیں)۔ جب انڈیکسرز کسی سب گراف پر کیوری کی فیس کا دعویٰ کرتے ہیں، تو 10% اس سب گراف کے کیوریٹرز میں تقسیم کیا جاتا ہے۔ انڈیکسرز سب گراف پر سگنل کے متناسب انڈیکسنگ کے انعامات حاصل کرتے ہیں۔ ہم GRT سگنل کی مقدار اور سب گراف کو ترتیب دینے والے انڈیکسرز کی تعداد کے درمیان باہمی تعلق دیکھتے ہیں. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **کیوریشن ٹیکس**: کیوریٹرز کے ذریعہ 1% فیس ادا کی جاتی ہے جب وہ سب گرافس پر GRT کا اشارہ دیتے ہیں۔ فیس کی ادائیگی کے لیے استعمال ہونے والی GRT کو جلا دیا جاتا ہے. -- **سب گراف کنزیومر**: کوئی بھی ایپلیکیشن یا صارف جو سب گراف سے کیوری کرتا ہے. +- **Data Consumer**: Any application or user that queries a subgraph. 
- **سب گراف ڈویلپر**: ایک ڈویلپر جو گراف کے ڈیسینٹرالائزڈ نیٹ ورک پر سب گراف بناتا اور تعینات کرتا ہے. @@ -46,11 +44,11 @@ title: لغت 1. **فعال**: ایک مختص کو فعال سمجھا جاتا ہے جب اسے آن چین بنایا جاتا ہے۔ اسے ایلوکیشن کھولنا کہا جاتا ہے، اور یہ نیٹ ورک کی طرف اشارہ کرتا ہے کہ انڈیکسر کسی خاص سب گراف کے لیے فعال طور پر انڈیکس کر رہا ہے اور کیوریز پیش کر رہا ہے۔ فعال مختصات سب گراف پر سگنل کے متناسب انڈیکسنگ انعامات اور مختص کردہ GRT کی رقم جمع کرتی ہیں. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **سب گراف اسٹوڈیو**: سب گراف کی تعمیر، تعیناتی اور اشاعت کے لیے ایک طاقتور ڈیپ. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: لغت - **GRT**: گراف کے کام کا یوٹیلیٹی ٹوکن۔ GRT نیٹ ورک میں حصہ ڈالنے کے لیے نیٹ ورک کے شرکاء کو اقتصادی مراعات فراہم کرتا ہے. 
-- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **گراف نوڈ**: گراف نوڈ وہ جزو ہے جو سب گراف کو انڈیکس کرتا ہے، اور نتیجے میں ڈیٹا کو GraphQL API کے ذریعے کیوری کے لیے دستیاب کرتا ہے۔ اس طرح یہ انڈیکسر اسٹیک میں مرکزی حیثیت رکھتا ہے، اور ایک کامیاب انڈیکسر چلانے کے لیے گراف نوڈ کا درست آپریشن بہت ضروری ہے. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **انڈیکسر ایجنٹ**: انڈیکسر ایجنٹ انڈیکسر اسٹیک کا حصہ ہے۔ یہ انڈیکسر کے آن چین تعاملات کو سہولت فراہم کرتا ہے، بشمول نیٹ ورک پر رجسٹر کرنا، اس کے گراف نوڈز پر سب گراف کی تعیناتیوں کا انتظام کرنا، اور مختص کا انتظام کرنا. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **گراف کلائنٹ**: ڈیسینٹرالائزڈ طریقے سے GraphQL پر مبنی ڈیپ بنانے کے لیے ایک لائبریری. @@ -78,10 +76,6 @@ title: لغت - **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. -- **گراف نیٹ ورک پر سب گراف کو _اپ گریڈ_ کرنا**: ہوسٹڈ سروس سے گراف نیٹ ورک پر سب گراف منتقل کرنے کا عمل۔ - -- **سب گراف کو _اپ ڈیٹ_ کرنا**: سب گراف کے مینی فیسٹ، سکیما، یا میپنگز میں اپ ڈیٹس کے ساتھ ایک نیا سب گراف ورزن جاری کرنے کا عمل۔ +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. 
diff --git a/website/pages/ur/index.json b/website/pages/ur/index.json index 520b5b31db7e..9fbd9c4b80fe 100644 --- a/website/pages/ur/index.json +++ b/website/pages/ur/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "سب گراف بنائیں", "description": "سب گراف بنانے کے لیے سٹوڈیو کا استعمال کریں" - }, - "migrateFromHostedService": { - "title": "ہوسٹڈ سروس سے اپ گریڈ کریں", - "description": "گراف نیٹ ورک میں سب گراف کو اپ گریڈ کرنا" } }, "networkRoles": { diff --git a/website/pages/ur/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/ur/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..9f62685c3a31 --- /dev/null +++ b/website/pages/ur/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## سب گراف کی ملکیت منتقل کرنا + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. + +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-addres +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2. Choose the address that you would like to transfer the subgraph to: + + ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) + +Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: + +![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) + +## Deprecating a subgraph + +Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. + +### Step-by-Step + +To deprecate your subgraph, do the following: + +1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). +2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. +3. Your subgraph will no longer appear in searches on Graph Explorer. + +**Please note the following:** + +- The owner's wallet should call the `deprecateSubgraph` function. +- کیوریٹرز اب سب گراف پر سگنل نہیں دے سکیں گے. +- Curators that already signaled on the subgraph can withdraw their signal at an average share price. +- Deprecated subgraphs will show an error message. + +> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/ur/mips-faqs.mdx b/website/pages/ur/mips-faqs.mdx index e59e86551e55..a17f6b7d64ee 100644 --- a/website/pages/ur/mips-faqs.mdx +++ b/website/pages/ur/mips-faqs.mdx @@ -6,10 +6,6 @@ title: MIPs کے اکثر پوچھے گئے سوالات > نوٹ: MIPs پروگرام مئی 2023 سے بند ہے۔ حصہ لینے والے تمام انڈیکسرز کا شکریہ! -گراف کا ایکو سسٹم میں حصہ لینے کا یہ ایک دلچسپ وقت ہے! 
[گراف ڈے 2022](https://thegraph.com/graph-day/2022/) کے دوران Yaniv Tal نے اعلان کیا کہ [ہوسٹڈ سروس کے غروب آفتاب](https://thegraph.com/blog/sunsetting-hosted-service/)ایک لمحہ جس کی گراف کا ایکو سسٹم کئی سالوں سے کام کر رہا ہے. - -ہوسٹڈ سروس کے غروب ہونے اور اس کی تمام سرگرمیوں کی ڈیسنٹرالا ئزڈ نیٹ ورک میں منتقلی کی حمایت کرنے کے لیے، گراف فاؤنڈیشن نے [مائیگریشن انفراسٹرکچر پرووائیڈرز (MIPs) پروگرام](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program) کا اعلان کیا ہے. - MIPs پروگرام انڈیکسر کے لیے ایک ترغیب دینے والا پروگرام ہے جو انہیں ایتھیریم مین نیٹ سے آگے انڈیکس چینز کے لیے وسائل کے ساتھ مدد فراہم کرتا ہے اور گراف پروٹوکول کو ڈیسنٹرالا ئزڈ نیٹ ورک کو ایک ملٹی چین انفراسٹرکچر پرت میں پھیلانے میں مدد کرتا ہے. MIPs پروگرام نے GRT سپلائی (75M GRT) کا 0.75% مختص کیا ہے، 0.5% انڈیکسرز کو انعام دینے کے لیے جو نیٹ ورک کو بوٹسٹریپ کرنے میں حصہ ڈالتے ہیں اور 0.25% نیٹ ورک گرانٹس کے لیے مختص کیے گئے ہیں جو ملٹی چین سب گراف استعمال کرنے والے سب گراف ڈویلپرز کے لیے ہیں. diff --git a/website/pages/ur/network/benefits.mdx b/website/pages/ur/network/benefits.mdx index 4bea83bb29a0..9509ac463b4c 100644 --- a/website/pages/ur/network/benefits.mdx +++ b/website/pages/ur/network/benefits.mdx @@ -27,49 +27,49 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| اخراجات کا موازنہ | خود میزبان | The Graph Network | -| :-: | :-: | :-: | -| ماہانہ سرور کی قیمت/\* | $350 فی مہینہ | $0 | -| استفسار کے اخراجات | $0+ | $0 per month | -| انجینئرنگ کا وقت | $400 فی مہینہ | کوئی بھی نہیں، عالمی سطح پر تقسیم شدہ انڈیکسرز کے ساتھ نیٹ ورک میں بنایا گیا ہے | -| فی مہینہ سوالات | بنیادی صلاحیتوں تک محدود | 100,000 (Free Plan) | -| قیمت فی سوال | $0 | $0 | -| بنیادی ڈھانچہ | سینٹرلائزڈ | ڈیسینٹرلائزڈ | -| جغرافیائی فالتو پن | $750+ فی اضافی نوڈ | شامل | -| اپ ٹائم | اتار چڑھاو | 99.9%+ | -| کل ماہانہ اخراجات | $750+ | $0 | +| اخراجات کا موازنہ | خود میزبان | The Graph Network | +|:----------------------------:|:---------------------------------------:|:-------------------------------------------------------------------------------:| +| ماہانہ سرور کی قیمت/* | $350 فی مہینہ | $0 | +| استفسار کے اخراجات | $0+ | $0 per month | +| انجینئرنگ کا وقت | $400 فی مہینہ | کوئی بھی نہیں، عالمی سطح پر تقسیم شدہ انڈیکسرز کے ساتھ نیٹ ورک میں بنایا گیا ہے | +| فی مہینہ سوالات | بنیادی صلاحیتوں تک محدود | 100,000 (Free Plan) | +| قیمت فی سوال | $0 | $0 | +| بنیادی ڈھانچہ | سینٹرلائزڈ | ڈیسینٹرلائزڈ | +| جغرافیائی فالتو پن | $750+ فی اضافی نوڈ | شامل | +| اپ ٹائم | اتار چڑھاو | 99.9%+ | +| کل ماہانہ اخراجات | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| اخراجات کا موازنہ | خود میزبان | The Graph Network | -| :-: | :-: | :-: | -| ماہانہ سرور کی قیمت/\* | $350 فی مہینہ | $0 | -| استفسار کے اخراجات | $500 فی مہینہ | $120 per month | -| انجینئرنگ کا وقت | $800 فی مہینہ | کوئی بھی نہیں، عالمی سطح پر تقسیم شدہ انڈیکسرز کے ساتھ نیٹ ورک میں بنایا گیا ہے | -| فی مہینہ سوالات | بنیادی صلاحیتوں تک محدود | ~3,000,000 | -| قیمت فی سوال | $0 | $0.00004 | -| بنیادی ڈھانچہ | سینٹرلائزڈ | ڈیسینٹرلائزڈ | -| انجینئرنگ کے اخراجات | $200 فی گھنٹہ | شامل | -| جغرافیائی فالتو پن | فی اضافی نوڈ کل اخراجات میں $1,200 | شامل | -| اپ ٹائم | اتار چڑھاو | 99.9%+ | -| کل ماہانہ اخراجات | $1,650+ | $120 | +| اخراجات کا موازنہ | خود میزبان | The Graph Network | +|:----------------------------:|:------------------------------------------:|:-------------------------------------------------------------------------------:| +| ماہانہ سرور کی 
قیمت/* | $350 فی مہینہ | $0 | +| استفسار کے اخراجات | $500 فی مہینہ | $120 per month | +| انجینئرنگ کا وقت | $800 فی مہینہ | کوئی بھی نہیں، عالمی سطح پر تقسیم شدہ انڈیکسرز کے ساتھ نیٹ ورک میں بنایا گیا ہے | +| فی مہینہ سوالات | بنیادی صلاحیتوں تک محدود | ~3,000,000 | +| قیمت فی سوال | $0 | $0.00004 | +| بنیادی ڈھانچہ | سینٹرلائزڈ | ڈیسینٹرلائزڈ | +| انجینئرنگ کے اخراجات | $200 فی گھنٹہ | شامل | +| جغرافیائی فالتو پن | فی اضافی نوڈ کل اخراجات میں $1,200 | شامل | +| اپ ٹائم | اتار چڑھاو | 99.9%+ | +| کل ماہانہ اخراجات | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| اخراجات کا موازنہ | سیلف ہوسٹڈ | The Graph Network | -| :-: | :-: | :-: | -| ماہانہ سرور کی قیمت/\* | $1100 فی مہینہ، فی نوڈ | $0 | -| استفسار کے اخراجات | $4000 | $1,200 per month | -| نوڈس کی تعداد درکار ہے | 10 | قابل اطلاق نہیں | -| انجینئرنگ کا وقت | $6,000 یا اس سے زیادہ فی مہینہ | کوئی بھی نہیں، عالمی سطح پر تقسیم شدہ انڈیکسرز کے ساتھ نیٹ ورک میں بنایا گیا ہے | -| فی مہینہ سوالات | بنیادی صلاحیتوں تک محدود | ~30,000,000 | -| قیمت فی سوال | $0 | $0.00004 | -| بنیادی ڈھانچہ | سینٹرلائزڈ | ڈیسینٹرلائزڈ | -| جغرافیائی فالتو پن | فی اضافی نوڈ کل اخراجات میں $1,200 | شامل | -| اپ ٹائم | اتار چڑھاو | 99.9%+ | -| کل ماہانہ اخراجات | $11,000+ | $1,200 | - -/\*بیک اپ کے اخراجات سمیت: $50-$100 فی مہینہ +| اخراجات کا موازنہ | سیلف ہوسٹڈ | The Graph Network | +|:----------------------------:|:-------------------------------------------:|:-------------------------------------------------------------------------------:| +| ماہانہ سرور کی قیمت/* | $1100 فی مہینہ، فی نوڈ | $0 | +| استفسار کے اخراجات | $4000 | $1,200 per month | +| نوڈس کی تعداد درکار ہے | 10 | قابل اطلاق نہیں | +| انجینئرنگ کا وقت | $6,000 یا اس سے زیادہ فی مہینہ | کوئی بھی نہیں، عالمی سطح پر تقسیم شدہ انڈیکسرز کے ساتھ نیٹ ورک میں بنایا گیا ہے | +| فی مہینہ سوالات | بنیادی صلاحیتوں تک محدود | ~30,000,000 | +| قیمت فی سوال | $0 | $0.00004 | +| بنیادی ڈھانچہ | سینٹرلائزڈ | ڈیسینٹرلائزڈ | +| جغرافیائی فالتو پن | فی اضافی نوڈ کل اخراجات میں $1,200 | شامل | +| اپ ٹائم | اتار چڑھاو | 99.9%+ | +| کل ماہانہ اخراجات | $11,000+ | $1,200 | + +/*بیک اپ کے اخراجات سمیت: $50-$100 فی مہینہ $200 فی گھنٹہ کے مفروضے کی بنیاد پر انجینئرنگ کا وقت diff --git a/website/pages/ur/network/curating.mdx b/website/pages/ur/network/curating.mdx index b8baf93a16a1..f975d61a9641 100644 --- a/website/pages/ur/network/curating.mdx +++ b/website/pages/ur/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. 
Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Within the Curator tab in Graph Explorer, curators will be able to signal and un آپ کے سگنل کو خود بخود جدید ترین پروڈکشن کی تعمیر میں منتقل کرنا اس بات کو یقینی بنانے کے لیے قابل قدر ہو سکتا ہے کہ آپ کیوری کی فیس جمع کرتے رہیں۔ جب بھی آپ کیوریشن کرتے ہیں، 1% کیوریشن ٹیکس لاگو ہوتا ہے۔ آپ ہر دفعہ منتقلی پر 0.5% کا کیوریشن ٹیکس ادا کریں گے. سب گراف ڈویلپرز کو نئے ورژنز کثرت سے شائع کرنے کی حوصلہ شکنی کی جاتی ہے - انہیں تمام خود کار طریقے سے منتقل کیوریشن شیئرز پر 0.5% کیوریشن ٹیکس ادا کرنا پڑتا ہے. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## خطرات 1. گراف میں کیوری کی مارکیٹ فطری طور پر جوان ہے اور اس بات کا خطرہ ہے کہ آپ کا %APY مارکیٹ کی نئی حرکیات کی وجہ سے آپ کی توقع سے کم ہو سکتا ہے. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. 
Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. ایک سب گراف ایک بگ کی وجہ سے ناکام ہو سکتا ہے. ایک ناکام سب گراف کیوری کی فیس جمع نہیں کرتا ہے. اس کے نتیجے میں،آپ کو انتظار کرنا پڑے گاجب تک کہ ڈویلپر اس بگ کو کو ٹھیک نہیں کرتا اور نیا ورژن تعینات کرتا ہے. - اگر آپ نےسب گراف کے نۓ ورژن کو سبسکرائب کیا ہے. آپ کے حصص خود بخود اس نئے ورژن میں منتقل ہو جائیں گے۔ اس پر 0.5% کیوریشن ٹیکس لگے گا. @@ -65,7 +65,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th اعلی معیار کے سب گراف تلاش کرنا پیچیدہ کام ہے. لیکن اس کوبہت سے مختلف طریقوں سے رابطہ کیا جا سکتا ہے. بطور کیوریٹر، آپ قابل اعتماد سب گرافس تلاش کرنا چاہتے ہیں جو کیوری کے حجم کو بڑھا رہے ہیں۔ایک قابل اعتماد سب گراف قابل قدر ہو سکتا ہے اگر یہ مکمل، درست، اور ڈیپ کے ڈیٹا کی ضروریات کی حمایت کرتا ہے۔ ایک ناقص تعمیراتی سب گراف کو نظر ثانی یا دوبارہ شائع کرنے کی ضرورت ہو سکتی ہے، اور یہ ناکامی بھی ختم ہو سکتی ہے۔کیوریٹرز کے لیے یہ اہم ہے کہ وہ سب گراف کے فن تعمیر یا کوڈ کا جائزہ لیں تاکہ یہ اندازہ لگایا جا سکے کہ آیا کوئی سب گراف قیمتی ہے۔ اس کے نتیجے میں: -- کیوریٹرز نیٹ ورک کے بارے میں اپنی سمجھ کا استعمال کرتے ہوئے یہ اندازہ لگا سکتے ہیں کہ کس طرح ایک انفرادی سب گراف مستقبل میں کیوری کا حجم زیادہ یا کم پیدا کر سکتا ہے +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. سب گراف کو اپ ڈیٹ کرنے کی کیا قیمت ہے؟ @@ -78,50 +78,14 @@ Migrating your curation shares to a new subgraph version incurs a curation tax o ### 5. کیا میں اپنے کیوریشن شیئرز بیچ سکتا ہوں؟ -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. 
- -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## بانڈنگ کریو 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![فی حصص کی قیمت](/img/price-per-share.png) - -اس کے نتیجے میں،قیمتوں میں مسلسل اضافہ ہوتا ہے،اس کا مطلب ہے کہ وقت گزرنے کے ساتھ ساتھ شیئر خریدنا زیادہ مہنگا ہو جائے گا۔ یہاں پر ایک مثال ہے کی ہمارا کیا مطلب ہے، نیچے بانڈنگ وکر دیکھیں: - -![بانڈنگ وکر](/img/bonding-curve.png) - -فرض کریں ہمارے پاس دو کیوریٹر ہیں جو ایک سب گراف کے لیے شیئر کرتے ہیں: - -- سب گراف پر سب سے پہلے سگنل دینے والا کیوریٹر A ہے۔وکر میں 120,000 GRT شامل کرکے، وہ 2000 حصص کو ٹکسال کرنے کے قابل ہیں. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- چونکہ دونوں کیوریٹر کل کیوریشن شیئرز کا نصف حصہ رکھتے ہیں، اس لیے انہیں کیوریٹر رائلٹی کی مساوی رقم ملے گی. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- باقی کیورٹرز اب اس سب گراف کے لیے تمام کیوریٹر رائلٹی وصول کریں گے. اگر وہ GRT نکالنے کے لیے اپنے حصص جلاتے ہیں، تو وہ 120,000 GRT وصول کریں گے. -- **TLDR:** کیوریشن شیئرز کی GRT ویلیویشن کا تعین بانڈنگ کریو سے ہوتا ہے اور یہ غیر مستحکم ہو سکتا ہے۔بڑے نقصان کا خدشہ ہے۔جلدی سگنل دینے کا مطلب ہے کہ آپ ہر شیئر کے لیے کم GRT لگاتے ہیں۔توسیع کے لحاظ سے، اس کا مطلب ہے کہ آپ اسی سب گراف کے لیے بعد کے کیوریٹروں کے مقابلے فی GRT زیادہ کیوریٹر رائلٹی حاصل کرتے ہیں. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -گراف کے معاملے میں،[بانڈنگ کریو فارمولے پر بنکور کا نفاذ](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) کا فائدہ اٹھایا جاتا ہے. 
- ابھی بھی الجھن میں ہیں، ذیل میں ہماری کیوریشن ویڈیو گائیڈ دیکھیں: diff --git a/website/pages/ur/network/delegating.mdx b/website/pages/ur/network/delegating.mdx index ac3591610138..7c3019cc898b 100644 --- a/website/pages/ur/network/delegating.mdx +++ b/website/pages/ur/network/delegating.mdx @@ -2,13 +2,23 @@ title: ڈیلیگیٹنگ --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## ڈیلیگیٹر گائیڈ -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,64 +34,85 @@ There are three sections in this guide: برے رویۓ پر ڈیلیگیٹرز کو نہیں چھوڑا جا سکتا، لیکن ناقص فیصلہ سازی کی حوصلہ شکنی کے لیے ڈیلیگیٹرز پر ٹیکس ہے جو نیٹ ورک کی سالمیت کو نقصان پہنچا سکتا ہے. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### وفد کی بندش کی مدت Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. 
If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
    - [ڈیلیگیشن ان بانڈنگ](/img/Delegation-Unbonding.png) _ڈیلیگیشن UI میں 0.5% فیس نوٹ کریں، ساتھ ہی 28 دن غیر بندھن کی - مدت۔_ + [ڈیلیگیشن ان بانڈنگ](/img/Delegation-Unbonding.png) _ڈیلیگیشن UI میں 0.5% فیس نوٹ کریں، ساتھ ہی 28 دن + غیر بندھن کی مدت۔_
    ### ڈیلیگیٹرز کے لیے منصفانہ انعامی ادائیگی کے ساتھ ایک قابل اعتماد انڈیکسر کا انتخاب کرنا -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
    - ![انڈیکسنگ ریوارڈ کٹ](/img/Indexing-Reward-Cut.png) *سب سے اوپر انڈیکسر ڈیلیگیٹرز کو 90% انعامات دے رہا ہے. درمیان - والا ڈیلیگیٹرز کو 20% دے رہا ہے۔ نیچے والا ڈیلیگیٹرز کو ~83% دے رہا ہے۔* + ![انڈیکسنگ ریوارڈ کٹ](/img/Indexing-Reward-Cut.png) *سب سے اوپر انڈیکسر ڈیلیگیٹرز کو 90% انعامات دے رہا ہے. درمیان والا ڈیلیگیٹرز کو 20% دے رہا ہے۔ نیچے والا ڈیلیگیٹرز کو ~83% دے رہا ہے۔*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant. +- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects. + +As you can see, in order to choose the right Indexer, you must consider multiple things. -As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. +- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently. +- Many Indexers are very active in Discord and will be happy to answer your questions. +- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network. -### ڈیلیگیٹرز کی متوقع واپسی کا حساب لگانا +## Calculating Delegators Expected Return -A Delegator must consider a lot of factors when determining the return. These include: +A Delegator must consider the following factors to determine a return: -- ایک ٹیکنیکل ڈیلیگیٹر انڈیکسر کی ان کے لیے دستیاب ڈیلیگیٹڈ ٹوکن استعمال کرنے کی صلاحیت کو بھی دیکھ سکتا ہے۔ اگرایک انڈیکسر دستیاب تمام ٹوکن مختص نہیں کر رہا ہے، وہ زیادہ سے زیادہ منافع نہیں کما رہے ہیں جو وہ اپنے یا اپنے ڈیلیگیٹرز کے لیے ہو سکتا ہے. -- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days. +- Consider an Indexer's ability to use the Delegated tokens available to them. + - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators. +- Pay attention to the first few days of delegating. + - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low. 
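To make these numbers concrete, here is a minimal TypeScript sketch of the arithmetic. Only the 0.5% delegation tax and the reward-cut mechanics come from the sections above; the daily reward figure and the pool share are assumptions chosen purely for illustration.

```typescript
// Illustrative only — the daily reward figure and pool share are assumptions, not network data.
const delegated = 1_000                  // GRT you delegate
const delegationTax = 0.005              // 0.5% tax, burned when you delegate
const indexingRewardCut = 0.8            // the Indexer keeps 80%, so Delegators share the remaining 20%
const indexerDailyRewards = 100          // assumed GRT/day in indexing rewards earned by this Indexer
const yourShareOfDelegationPool = 0.01   // assumed fraction of the Indexer's delegation pool that you own

const burned = delegated * delegationTax                                     // 5 GRT burned up front
const toAllDelegatorsPerDay = indexerDailyRewards * (1 - indexingRewardCut)  // 20 GRT/day for all Delegators
const yourRewardPerDay = toAllDelegatorsPerDay * yourShareOfDelegationPool   // 0.2 GRT/day for you
const daysToEarnBackTax = burned / yourRewardPerDay                          // 25 days to recover the tax

console.log({ burned, yourRewardPerDay, daysToEarnBackTax })
```

Under these assumed figures, it would take roughly 25 days of rewards to earn back the delegation tax; a larger pool share or a lower reward cut shortens that break-even time.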
### کیوری فیس میں کمی اور انڈیکسنگ کی فیس میں کمی پر غور کرنا -جیسا کہ اوپر والے حصوں میں بیان کیا گیا ہے، آپ کو ایک انڈیکسر کا انتخاب کرنا چاہیے جو ان کے کیوری کو ترتیب دینے کے بارے میں شفاف اور ایماندار ہو۔ فیس کٹ اور انڈیکسنگ فیس میں کٹوتی۔ ایک ڈیلیگیٹر کو پیرامیٹرز کولڈاؤن ٹائم کو بھی دیکھنا چاہئے تاکہ یہ معلوم ہو سکے کہ ان کے پاس کتنا ٹائم بفر ہے۔ اس کے مکمل ہونے کے بعد، ڈیلیگیٹرز کو ملنے والے انعامات کی مقدار کا حساب لگانا کافی آسان ہے۔ فارمولا ہے: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![ڈیلیگیشن کی تصویر 3](/img/Delegation-Reward-Formula.png) ### انڈیکسر کے ڈیلیگیشن پول پر غور کرنا -ایک اور چیز جس پر ایک ڈیلیگیٹر کو غور کرنا ہوگا وہ یہ ہے کہ ڈیلیگیشن پول کا کتنا تناسب ان کے پاس ہے۔ تمام ڈیلیگیشن انعامات یکساں طور پر بانٹ دیے جاتے ہیں، پول کی ایک سادہ ری بیلنسنگ کے ساتھ جو ڈیلیگیٹر نے پول میں جمع کروائی ہے۔ یہ ڈیلیگیٹر کو پول کا حصہ دیتا ہے: +Delegators should consider the proportion of the Delegation Pool they own. -![فارمولہ شیئر کریں](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![فارمولہ شیئر کریں](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### ڈیلیگیشن کی صلاحیت کو مدنظر رکھتے ہوئے -غور کرنے کی ایک اور چیز ڈیلیگیشن کی صلاحیت ہے۔ فی الحال، ڈیلیگیشن تناسب 16 پر سیٹ ہے۔ اس کا مطلب یہ ہے کہ اگر کسی انڈیکسر نے 1,000,000 GRT کا حصہ لگایا ہے، ان کی ڈیلیگیشن کی صلاحیت ڈیلیگیٹڈ ٹوکنز کی 16,000,000 GRT ہے جسے وہ پروٹوکول میں استعمال کر سکتے ہیں۔ اس رقم پر کوئی بھی ڈیلیگیٹ ٹوکن ڈیلیگیٹر کے تمام انعامات کو کم کردے گا. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. 
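As a rough, hedged sketch of the capacity and pool-share points above, the snippet below reuses the example figures from this section (1,000,000 GRT self-stake, a Delegation Ratio of 16, and 100,000,000 GRT delegated); the 10,000 GRT personal delegation is an assumed value for illustration only.

```typescript
// Illustrative only — figures mirror the example above; your own delegation is an assumption.
const delegationRatio = 16

interface Indexer {
  selfStake: number      // GRT staked by the Indexer itself
  totalDelegated: number // GRT delegated to this Indexer by all Delegators
}

// Delegation above selfStake * 16 is not used in the protocol, so it dilutes every Delegator's rewards.
function effectiveDelegation(indexer: Indexer): number {
  const capacity = indexer.selfStake * delegationRatio
  return Math.min(indexer.totalDelegated, capacity)
}

const overDelegated: Indexer = { selfStake: 1_000_000, totalDelegated: 100_000_000 }

const capacity = overDelegated.selfStake * delegationRatio          // 16,000,000 GRT
const used = effectiveDelegation(overDelegated)                     // 16,000,000 GRT actually working
const unused = overDelegated.totalDelegated - used                  // 84,000,000 GRT earning nothing

// Your rewards scale with your share of the whole delegation pool:
const yourDelegation = 10_000
const yourPoolShare = yourDelegation / overDelegated.totalDelegated // 0.0001

console.log({ capacity, used, unused, yourPoolShare })
```

In an over-delegated pool like this one, most of the delegated GRT sits idle, which is why checking an Indexer's remaining Delegation Capacity before delegating matters.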
@@ -89,16 +120,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### میٹا ماسک "پینڈنگ ٹرانزیکشن" بگ -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. + +#### مثال -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. -For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will only be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## نیٹ ورک UI کے لیے ویڈیو گائیڈ +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/ur/network/developing.mdx b/website/pages/ur/network/developing.mdx index 68572a2ad8f9..be3b7134e2e8 100644 --- a/website/pages/ur/network/developing.mdx +++ b/website/pages/ur/network/developing.mdx @@ -2,52 +2,88 @@ title: ڈویلپنگ --- -ڈویلپرز گراف ایکو سسٹم کا مطلبہ کرنے والا پہلو ہے. ڈویلپرز گرافس بناتے ہیں اور گراف نیٹ ورک میں شائع کرتے ہیں. پھر، وہ اپنی ایپلیکیشنز کو طاقت دینے کے لیے GraphQL کے ساتھ لائیو سب گرافس سے کیوری کرتے ہیں. +To start coding right away, go to [Developer Quick Start](/quick-start/). + +## جائزہ + +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. 
+ +On The Graph, you can: + +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. + +### What is GraphQL? + +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? + +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `subgraph.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). +- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). ## سب گراف لائف سائیکل -نیٹ ورک پر تعینات سب گراف کا ایک متعین لائف سائیکل ہوتا ہے. +Here is a general overview of a subgraph’s lifecycle: -### مقامی طور پر تعمیر کریں +![سب گراف لائف سائیکل](/img/subgraph-lifecycle.png) -جیسا کہ تمام سب گراف کی ترقی کے ساتھ ، یہ مقامی تعمیر اور جانچ کے ساتھ شروع ہوتا ہے.ڈویلپرز ایک ہی مقامی سیٹ اپ کا استعمال کر سکتے ہیں چاہے وہ گراف نیٹ ورک، ہوسٹڈ سروس یا مقامی گراف نوڈ کے لیے تعمیر کر رہے ہوں، `graph-cli` اور `graph-ts` کا فائدہ اٹھاتے ہوئے سب گراف۔ ڈویلپرز کی حوصلہ افزائی کی جاتی ہے کہ وہ اپنے سب گراف کی مضبوطی کو بہتر بنانے کے لیے یونٹ ٹیسٹنگ کے لیے [Matchstick](https://github.com/LimeChain/matchstick) جیسے ٹولز استعمال کریں. +### مقامی طور پر تعمیر کریں -> گراف نیٹ ورک پر خصوصیت اور نیٹ ورک سپورٹ کے لحاظ سے کچھ رکاوٹیں ہیں۔ صرف [تعاون یافتہ نیٹ ورکس](/developing/supported-networks) پر سب گرافس ہی انڈیکسنگ انعامات حاصل کریں گے، اور وہ سب گراف جو IPFS سے ڈیٹا حاصل کرتے ہیں وہ بھی اہل نہیں ہیں. +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Deploy to Subgraph Studio -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. - -### نیٹ ورک پر شائع کریں +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). 
In Subgraph Studio, you can do the following: -جب ڈویلپر اپنی سب گراف سے خوش ہوتا ہے، وہ اسے گراف نیٹ ورک پر شائع کر سکتے ہیں.یہ ایک آن چین ایکشن ہے،جو سب گراف کو رجسٹر کرتا ہے تاکہ اسے انڈیکسرز کے ذریعے دریافت کیا جا سکے۔شائع شدہ سب گراف میں متعلقہ NFT ہے،جو پھر آسانی سے قابل منتقلی ہوتا ہے۔ شائع شدہ سب گراف میں میٹا ڈیٹا سے وابستہ ہے،جو نیٹ ورک کے دوسرے شرکاء کو مفید سیاق و سباق اور معلومات فراہم کرتا ہے. +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. -### انڈیکسنگ کی حوصلہ افزائی کا سگنل +### نیٹ ورک پر شائع کریں -شائع شدہ سب گرافس سگنل کے اضافے کے بغیر انڈیکسرز کے ذریعہ اٹھائے جانے کا امکان نہیں ہے۔ سگنل ایک دیے گئےسب گراف کے ساتھ منسلک GRT مقفل ہے،جوانڈیکسرز کو اشارہ کرتا ہے کہ دیۓ گے سب گراف کو کیوری کا حجم موصول ہوگا،اور اس پر کارروائی کرنے کے لیے دستیاب انڈیکسنگ کے انعامات میں بھی حصہ ڈالتا ہے۔سب گراف ڈویلپرز عام طور پر اپنے سب گراف میں سگنل شامل کریں گے،انڈیکسنگی کی حوصلہ افزائی کے لیے. تھرڈ پارٹی کیوریٹر بھی دیے گئےسب گراف پر اشارہ کر سکتے ہیں،اگر وہ سب گراف کو کیوری کے حجم کو چلانے کا امکان سمجھتے ہیں. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### کیوری کرنا اور ایپلیکیشن ڈویلپمنٹ +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -یک بار جب انڈیکسرز کے ذریعہ سب گراف پر کارروائی ہو جائے اور کیوری کے لیے دستیاب ہو جائے،ڈویلپرز اپنی ایپلی کیشنز میں سب گراف استعمال کرنا شروع کرسکتے ہیں. ڈویلپرز ایک گیٹ وے کے ذریعے سب گرافس سے کیوری کرتے ہیں،جو ان کی کیوریز کو انڈیکسر کو بھیجتا ہے جس نے سب گراف پر کارروائی کی ہے،GRT میں کیوری کی فیس ادا کرنا. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### سب گرافس کو اپ ڈیٹ کرنا +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. 
+### کیوری کرنا اور ایپلیکیشن ڈویلپمنٹ -ایک بار جب سب گراف ڈویلپر اپ گریڈ کرنے کے لیے تیار ہو جائے، وہ اپنےسب گراف کو نئے ورزن کی طرف اشارہ کرنے کے لیے ٹرانزیکشن شروع کر سکتے ہیں۔ سب گراف کو اپ ڈیٹ کرنے سے کسی بھی سگنل کو نئے ورزن میں منتقل کیا جاتا ہے (یہ فرض کرتے ہوئے کہ صارف جس نے سگنل کو "آٹو مائیگریٹ" منتخب کیا ہے)، جس پرمنتقلی ٹیکس بھی لاگو ہوتا ہے۔ اس سگنل کی منتقلی سے انڈیکسرز کو سب گراف کے نئے ورزن کی انڈیکسنگ شروع کرنے کا اشارہ کرنا چاہیے، اس لیے اسے جلد ہی کیوری کے لیے دستیاب ہونا چاہیے. +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### سب گراف کو فرسودہ کرنا +Learn more about [querying subgraphs](/querying/querying-the-graph/). -کسی وقت ایک ڈویلپر فیصلہ کر سکتا ہے کہ انہیں اب شائع شدہ سب گراف کی ضرورت نہیں ہے۔اس وقت وہ سب گراف کو فرسودہ کر سکتے ہیں، جو کیوریٹرز کو کوئی بھی سگنل شدہ GRT واپس کرتا ہے. +### سب گرافس کو اپ ڈیٹ کرنا -### متنوع ڈویلپر کے کردار +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -کچھ ڈویلپرز نیٹ ورک پر مکمل سب گراف لائف سائیکل کے ساتھ مشغول ہوں گے، ان کے اپنے سب گراف پر اشاعت، کیوری اور تکرار کریں گے۔کچھ سب گراف کی ترقی پر توجہ مرکوز کر سکتے ہیں،کھلے APIs کی تعمیر جس پر دوسرے بنا سکتے ہیں۔کچھ ایپلیکیشن فوکسڈ ہو سکتے ہیں، دوسروں کے ذریعے تعینات کردہ سب گرافس سے کیوری کرتے ہیں. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### ڈویلپرز اور نیٹ ورک اکنامکس +### Deprecating & Transferring Subgraphs -ڈویلپر نیٹ ورک میں ایک اہم اقتصادی اداکار ہیں، انڈیکسنگ کی حوصلہ افزائی کے لیے GRT کو لاک اپ کرنا، اور سب گرافس کو اہم طور پر کیوری کرنا، جو کہ نیٹ ورک کا بنیادی ویلیو ایکسچینج ہے۔ جب بھی سب گراف کو اپ ڈیٹ کیا جاتا ہے تو سب گراف ڈویلپر GRT کو بھی جلا دیتے ہیں۔ +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/ur/network/explorer.mdx b/website/pages/ur/network/explorer.mdx index 09a259db5d50..15559398fc2e 100644 --- a/website/pages/ur/network/explorer.mdx +++ b/website/pages/ur/network/explorer.mdx @@ -2,21 +2,35 @@ title: گراف ایکسپلورر --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. 
+ +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## سب گراف -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. +After you just finish deploying and publishing your subgraph in Subgraph Studio, click on the "subgraphs tab” at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![ایکسپلورر امیج 1](/img/Subgraphs-Explorer-Landing.png) -جب آپ سب گراف پر کلک کرتے ہیں، آپ پلے گراؤنڈ میں سوالات کی جانچ کر سکیں گے اور باخبر فیصلے کرنے کے لیے نیٹ ورک کی تفصیلات سے فائدہ اٹھا سکیں گے۔ آپ انڈیکسرز کو اس کی اہمیت اور معیار سے آگاہ کرنے کے لیے اپنے اپنے سب گراف یا دوسروں کے سب گراف پر بھی GRT کا اشارہ دے سکیں گے۔ یہ بہت اہم ہے کیونکہ سب گراف پر سگنلنگ اسے انڈیکس کرنے کی ترغیب دیتا ہے، جس کا مطلب ہے کہ یہ آخر کار سوالات کو پورا کرنے کے لیے نیٹ ورک پر ظاہر ہوگا. +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![ایکسپلورر امیج 2](/img/Subgraph-Details.png) -ہر سب گراف کے سرشار صفحے پر، کئی تفصیلات منظر عام پر آتی ہیں. یہ شامل ہیں: +On each subgraph’s dedicated page, you can do the following: - سب گرافس پر سگنل/غیر سگنل - مزید تفصیلات دیکھیں جیسے چارٹس، موجودہ تعیناتی ID، اور دیگر میٹا ڈیٹا @@ -31,26 +45,32 @@ First things first, if you just finished deploying and publishing your subgraph ## امیدوار -اس ٹیب کے اندر، آپ کو ان تمام لوگوں کا پرندوں کا نظارہ ملے گا جو نیٹ ورک کی سرگرمیوں میں حصہ لے رہے ہیں۔ جیسے انڈیکسرز، ڈیلیگیٹرز اور کیوریٹرز۔ ذیل میں، ہم اس بات کا گہرائی سے جائزہ لیں گے کہ آپ کے لیے ہر ٹیب کا کیا مطلب ہے. +This section provides a bird' s-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### انڈیکسرز ![ایکسپلورر امیج 4](/img/Indexer-Pane.png) -آئیے انڈیکسرز کے ساتھ شروع کریں۔ انڈیکسرز پروٹوکول کی ریڑھ کی ہڈی ہیں، سب گرافس پر داؤ لگاتے ہیں،انڈیکس کرتے ہیں، اور سب گراف استعمال کرنے والے ہر فرد کو کیوریز پیش کرتے ہیں۔ انڈیکسرز ٹیبل میں، آپ انڈیکسرز کے ڈیلیلگیشن کے پیرامیٹرز دیکھ سکیں گے، ان کا حصہ، انہوں نے ہر سب گراف میں کتنا حصہ لگایا ہے، اور کیوری کی فیس اور انڈیکسنگ کے انعامات سے انہوں نے کتنی آمدنی حاصل کی ہے۔ نیچے گہرا غوطہ: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. -- کیوری کی فیس کٹوتی - استفسار کی فیس کی چھوٹ جو انڈیکسر ڈیلیگیٹرز کے ساتھ تقسیم کرتے وقت رکھتا ہے -- مؤثر انعام کٹوتی - انڈیکسنگ انعام کٹوتی ڈیلیگیشن پول پر لاگو ہوتی ہے. 
اگر یہ منفی ہے، تو اس کا مطلب ہے کہ انڈیکسر اپنے انعامات کا کچھ حصہ دے رہا ہے۔ اگر یہ مثبت ہے، تو اس کا مطلب ہے کہ انڈیکسر اپنے کچھ انعامات رکھ رہا ہے -- کولڈاؤن باقی ہے - باقی وقت جب تک کہ انڈیکسر مذکورہ بالا ڈیلیگیشن پیرامیٹرز کو تبدیل نہیں کر سکتا۔ انڈیکسرز جب اپنے ڈیلیگیشن پیرامیٹرز کو اپ ڈیٹ کرتے ہیں تو کولڈاؤن پیریڈز ترتیب دیے جاتے ہیں -- ملکیت - یہ انڈیکسر کا جمع کردہ حصہ ہے، جسے بدنیتی پر مبنی یا غلط رویے کی وجہ سے کم کیا جا سکتا ہے -- ڈیلیگیٹڈ - ڈیلیگیٹرز کی طرف سے حصہ جو انڈیکسر کے ذریعہ مختص کیا جاسکتا ہے، لیکن اسے کم نہیں کیا جاسکتا -- مختص - اس بات کا دعویٰ کریں کہ انڈیکسرز ان سب گرافس کے لیے فعال طور پر مختص کر رہے ہیں جن کی وہ انڈیکس کر رہے ہیں -- دستیاب ڈیلیگیشن کی صلاحیت - ڈیلیگیٹ حصص کی وہ مقدار جو انڈیکسرز کو زیادہ ڈیلیگیٹ ہونے سے پہلے بھی مل سکتی ہے +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - زیادہ سے زیاد ڈیلیلگیشن صلاحیت - ڈیلیلگیٹڈ حصص کی زیادہ سے زیادہ مقدار کو انڈیکسر نتیجہ خیز طور پر قبول کر سکتا ہے۔ ایک اضافی حصص کو مختص کرنے یا انعامات کے حساب کتاب کے لیے استعمال نہیں کیا جا سکتا. -- کیوری کی فیس - یہ وہ کل فیس ہے جو آخری صارفین نے ہر وقت انڈیکسر سے سوالات کے لیے ادا کی ہے +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - انڈیکسرز کے انعامات - یہ انڈیکسر اور ان کے ڈیلیگیٹرز کی طرف سے ہر وقت کمائے گئے کل انڈیکسر انعامات ہیں۔ انڈیکسر انعامات GRT کے اجراء کے ذریعے ادا کیے جاتے ہیں. -انڈیکسر کیوری کی فیس اور انڈیکسر انعامات دونوں حاصل کر سکتے ہیں. عملی طور پر، ایسا اس وقت ہوتا ہے جب نیٹ ورک کے شرکاء GRT انڈیکسر کو ڈیلیگیٹ کرتے ہیں۔ یہ انڈیکسرز کو ان کے انڈیکسر پیرامیٹرز کے لحاظ سے کیوری کی فیس اور انعامات وصول کرنے کے قابل بناتا ہے۔انڈیکسنگ پیرامیٹرز ٹیبل کے دائیں جانب کلک کرکے سیٹ کیے جاتے ہیں، یا انڈیکسر کے پروفائل میں جا کر اور "ڈیلیگیٹ" بٹن پر کلک کر کے. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. 
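These same figures can also be pulled programmatically from The Graph Network subgraph that backs Explorer. The sketch below is illustrative only: the entity and field names (`indexers`, `stakedTokens`, `delegatedTokens`, `queryFeeCut`, `indexingRewardCut`) are assumptions and should be checked against the network subgraph's published schema before use.

```graphql
{
  indexers(first: 5, orderBy: stakedTokens, orderDirection: desc) {
    id
    stakedTokens
    delegatedTokens
    queryFeeCut
    indexingRewardCut
  }
}
```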
انڈیکسر بننے کے بارے میں مزید جاننے کے لیے، آپ [آفیشل دستاویزات](/network/indexing) یا [دی گراف اکیڈمی انڈیکسر گائیڈز۔](https://thegraph.academy/delegators/ پر ایک نظر ڈال سکتے ہیں۔ choosing-indexers/)

@@ -58,9 +78,13 @@ First things first, if you just finished deploying and publishing your subgraph

### کیوریٹرز

-کیوریٹرز سب گراف کا تجزیہ کرتے ہیں تاکہ یہ شناخت کیا جا سکے کہ کون سے سب گراف اعلیٰ ترین معیار کے ہیں۔ ایک بار جب کیوریٹر کو ممکنہ طور پر پرکشش سب گراف مل گیا، وہ اس کے بانڈنگ وکر پر سگنل دے کر اسے درست کر سکتے ہیں۔ ایسا کرنے سے، کیوریٹرز انڈیکسرز کو بتاتے ہیں کہ کون سے سب گراف اعلیٰ معیار کے ہیں اور ان کی ترتیب ہونی چاہیے.
+Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+
+- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
+  - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+  - The bonding curve incentivizes Curators to curate the highest quality data sources.

-کیوریٹر کمیونٹی کے ممبر ہو سکتے ہیں، ڈیٹا صارفین، یا یہاں تک کہ سب گراف ڈویلپرز جو GRT ٹوکن کو بانڈنگ کریو میں جمع کر کے اپنے پلے گراؤنڈ سب گراف پر سگنل دیتے ہیں۔ GRT جمع کر کے، کیوریٹرز ٹکسال ایک سب گراف کے کیوریشن شیئرز۔ نتیجے کے طور پر، کیوریٹرز کیوری فیس کا ایک حصہ حاصل کرنے کے اہل ہوتے ہیں جو انہوں نے جس سب گراف پر اشارہ کیا ہے وہ تیار کرتا ہے۔ بانڈنگ وکر کیوریٹرز کو اعلیٰ ترین کوالٹی ڈیٹا کے ذرائع کو درست کرنے کی ترغیب دیتا ہے۔ اس سیکشن میں کیوریٹر ٹیبل آپ کو یہ دیکھنے کی اجازت دے گا:
+In the Curator table listed below, you can see:

- جس تاریخ کیوریٹر نے کیوریٹنگ شروع کی
- GRT کا نمبر جو جمع کیا گیا تھا
@@ -68,34 +92,36 @@ First things first, if you just finished deploying and publishing your subgraph

![ایکسپلورر امیج 6](/img/Curation-Overview.png)

-اگر آپ کیوریٹر کے کردار کے بارے میں مزید جاننا چاہتے ہیں، تو آپ [گراف اکیڈمی](https://thegraph.academy/curators/) یا کے درج ذیل لنکس پر جا کر ایسا کر سکتے ہیں۔ [آفیشل دستاویزات۔](/network/curating)
+If you want to learn more about the Curator role, you can do so by visiting the [official documentation](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/).

### ڈیلیگیٹرز

-ڈیلیگیٹرز گراف نیٹ ورک کی حفاظت اور ڈیسینٹرالائزیشن کو برقرار رکھنے میں کلیدی کردار ادا کرتے ہیں۔ وہ ایک یا ایک سے زیادہ انڈیکسرز کو GRT ٹوکن ڈیلیگیٹ (یعنی "اسٹیک")کر کے نیٹ ورک میں حصہ لیتے ہیں۔ ڈیلیگیٹرز کے بغیر، انڈیکسرز کے لیے اہم انعامات اور فیسیں حاصل کرنے کا امکان کم ہوتا ہے۔ لہٰذا، انڈیکسرز ڈیلیگیٹرزکو انڈیکسنگ کے انعامات اور استفسار کی فیس کا ایک حصہ پیش کرکے اپنی طرف متوجہ کرنے کی کوشش کرتے ہیں.
+Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple Indexers.

-ڈیلیگیٹرز، بدلے میں، متعدد مختلف متغیرات کی بنیاد پر انڈیکسرز کو منتخب کرتے ہیں، جیسے کہ ماضی کی کارکردگی، انڈیکسنگ کے انعام کی شرح، اور کیوری فیس میں کمی۔ کمیونٹی کے اندر ساکھ بھی اس میں اہم کردار ادا کر سکتی ہے! [گراف ڈسکورڈ](https://discord.gg/graphprotocol) یا [گراف فورم](https://forum.thegraph.com/) کے ذریعے منتخب کردہ انڈیکسر کے ساتھ مربوط ہونے کی سفارش کی جاتی ہے!
+- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! ![ایکسپلورر امیج 7](/img/Delegation-Overview.png) -ڈیلیگیٹرز ٹیبل آپ کو کمیونٹی میں فعال ڈیلیگیٹرز کے ساتھ ساتھ میٹرکس جیسے کہ: +In the Delegators table you can see the active Delegators in the community and important metrics: - انڈیکسرز کی تعداد جن کی طرف ایک ڈیلیگیٹر ڈیلیٹ کر رہا ہے - ایک ڈیلیگیٹر کی حقیقی ڈیلیگیشن - وہ انعامات جو انہوں نے جمع کر لیے ہیں لیکن پروٹوکول سے دستبردار نہیں ہوئے ہیں - وہ انعامات جو انہوں نے پروٹوکول سے واپس لے لیے - پروٹوکول میں ان کے پاس موجود GRT کی کل رقم -- جس تاریخ کو انہوں نے آخری بار ڈیلیگیٹ کیا تھا +- The date they last delegated -اگر آپ ڈیلیگیٹر بننے کے طریقے کے بارے میں مزید جاننا چاہتے ہیں تو مزید نہ دیکھیں! آپ کو صرف یہ کرنا ہے کہ [آفیشل دستاویزات](/network/delegating) یا [گراف اکیڈمی](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers) کی طرف جانا ہے۔ +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## نیٹ ورک -نیٹ ورک سیکشن میں، آپ عالمی KPIs کے ساتھ ساتھ فی دور کی بنیاد پر سوئچ کرنے اور نیٹ ورک میٹرکس کا مزید تفصیل سے تجزیہ کرنے کی صلاحیت دیکھیں گے۔ یہ تفصیلات آپ کو اس بات کا احساس دلائیں گی کہ نیٹ ورک وقت کے ساتھ کیسا کارکردگی دکھا رہا ہے. +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. ### جائزہ -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - موجودہ کل نیٹ ورک کا حصہ - انڈیکسرز اور ان کے ڈیلیگیٹرز کے درمیان حصص کی تقسیم @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - پروٹوکول کے پیرامیٹرز جیسے کیوریشن انعام، افراط زر کی شرح، اور مزید - موجودہ دور کے انعامات اور فیس -چند اہم تفصیلات جو قابل ذکر ہیں: +A few key details to note: -- **استفسار کی فیس صارفین کے ذریعہ تیار کردہ فیس کی نمائندگی کرتی ہے**، اور انڈیکسرز کے ذریعہ ان کا دعویٰ کیا جا سکتا ہے (یا نہیں) کم از کم 7 ادوار کی مدت کے بعد (نیچے ملاحظہ کریں) سب گرافس کے لیے ان کے مختص ہونے کے بعد اور صارفین کے ذریعہ ان کے فراہم کردہ ڈیٹا کی توثیق کر دی گئی ہے. -- **انڈیکس کرنے والے انعامات ان انعامات کی مقدار کی نمائندگی کرتے ہیں جن کا دعویٰ انڈیکسرز کے اپوچ کے دوران نیٹ ورک کے اجراء سے کیا تھا۔** اگرچہ پروٹوکول کا اجراء طے ہے، انعامات صرف اس صورت میں ملتے ہیں جب انڈیکسرز ان سب گرافس کے لیے اپنی مختص رقم کو بند کر دیتے ہیں جو وہ انڈیکس کر رہے ہیں۔ اس طرح انعامات کی فی اپوچ تعداد مختلف ہوتی ہے(یعنی کچھ اپوچس کے دوران، انڈیکسرز نے اجتماعی طور پر بند کر دیے ہوں گے جو کئی دنوں سے کھلے ہوئے ہیں). +- **Query fees represent the fees generated by the consumers**. 
They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![ایکسپلورر امیج 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ The overview section has all the current network metrics as well as some cumulat - فعال ایپوک وہ ہے جس میں انڈیکسرز فی الحال حصص مختص کر رہے ہیں اور کیوری کی فیس جمع کر رہے ہیں - طے پانے والے اپوچس وہ ہیں جن میں ریاستی چینلز آباد ہو رہے ہیں۔ اس کا مطلب یہ ہے کہ اگر صارفین ان کے خلاف تنازعات کھولتے ہیں تو انڈیکسرز کو کم کیا جا سکتا ہے. - تقسیم کرنے والے اپوچس وہ ایپوکس ہیں جن میں ایپوکس کے لیے ریاستی چینلز طے کیے جا رہے ہیں اور انڈیکسرز اپنی کیوری کی فیس میں چھوٹ کا دعویٰ کر سکتے ہیں. - - حتمی شکل کے اپوچس وہ اپوچس ہیں جن میں انڈیکسرز کے ذریعہ دعویٰ کرنے کے لیے کوئی سوال فیس کی چھوٹ باقی نہیں رہتی، اس طرح حتمی شکل دی جاتی ہے. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![ایکسپلورر امیج 9](/img/Epoch-Stats.png) ## آپ کا صارف پروفائل -اب جب کہ ہم نیٹ ورک کے اعدادوشمار کے بارے میں بات کر چکے ہیں، آئیے آپ کے ذاتی پروفائل پر چلتے ہیں۔ آپ کا ذاتی پروفائل آپ کے لیے اپنے نیٹ ورک کی سرگرمی دیکھنے کی جگہ ہے، چاہے آپ نیٹ ورک پر کس طرح شرکت کر رہے ہوں۔ آپ کا کریپٹو والیٹ آپ کے صارف پروفائل کے طور پر کام کرے گا، اور یوزر ڈیش بورڈ کے ساتھ، آپ یہ دیکھ سکیں گے: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### پروفائل کا جائزہ -یہ وہ جگہ ہے جہاں آپ اپنی موجودہ کارروائیوں کو دیکھ سکتے ہیں. یہی وہ جگہ ہے جہاں آپ اپنی پروفائل کی معلومات، تفصیل، اور ویب سائٹ (اگر آپ نے شامل کی ہے) تلاش کر سکتے ہیں. +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![ایکسپلورر امیج 10](/img/Profile-Overview.png) ### سب گرافس ٹیب -اگر آپ سب گرافس ٹیب پر کلک کرتے ہیں، تو آپ کو اپنے شائع شدہ سب گرافس نظر آئیں گے۔ اس میں جانچ کے مقاصد کے لیے CLI کے ساتھ تعینات کوئی بھی سب گرافس شامل نہیں ہوں گے - سب گرافس صرف تب ظاہر ہوں گے جب وہ ڈیسینٹرالائزڈ نیٹ ورک پر شائع کیے جائیں گے. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![ایکسپلورر امیج 11](/img/Subgraphs-Overview.png) ### انڈیکسنگ ٹیب -اگر آپ انڈیکسنگ ٹیب پر کلک کرتے ہیں، آپ کو سب گراف کے لیے تمام فعال اور تاریخی مختص کے ساتھ ایک ٹیبل ملے گا، نیز چارٹس جن کا آپ تجزیہ کر سکتے ہیں اور بطور انڈیکسر اپنی ماضی کی کارکردگی دیکھ سکتے ہیں. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. 
اس حصے میں آپ کے نیٹ انڈیکسر انعامات اور نیٹ استفسار کی فیس کے بارے میں تفصیلات بھی شامل ہوں گی۔ آپ کو درج ذیل میٹرکس نظر آئیں گے: @@ -158,7 +189,9 @@ The overview section has all the current network metrics as well as some cumulat ### ڈیلگیٹنگ ٹیب -ڈیلیگیٹرز گراف نیٹ ورک کے لیے اہم ہیں۔ ایک ڈیلیگیٹرزکو اپنے علم کا استعمال ایک انڈیکسر منتخب کرنے کے لیے کرنا چاہیے جو انعامات پر صحت مندانہ واپسی فراہم کرے۔ یہاں آپ اپنی فعال اور تاریخی ڈیلیگیشن کی تفصیلات حاصل کرسکتے ہیں، انڈیکسرز کے میٹرکس کے ساتھ جن کی طرف آپ نے ڈیلیگیٹ کیا ہے. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. صفحہ کے پہلے نصف میں، آپ اپنے ڈیلیگیشن چارٹ کے ساتھ ساتھ صرف انعامات کا چارٹ دیکھ سکتے ہیں، بائیں طرف، آپ KPIs دیکھ سکتے ہیں جو آپ کی موجودہ ڈیلیگیشن میٹرکس کی عکاسی کرتے ہیں. diff --git a/website/pages/ur/network/indexing.mdx b/website/pages/ur/network/indexing.mdx index f640a613ae68..93e59945ae25 100644 --- a/website/pages/ur/network/indexing.mdx +++ b/website/pages/ur/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap کمیونٹی کے بنائے ہوئے بہت سے ڈیش بورڈز میں زیر التواء انعامات کی قدریں شامل ہیں اور ان اقدامات پر عمل کر کے انہیں آسانی سے دستی طور پر چیک کیا جا سکتا ہے: -1. تمام فعال ایلوکیشنز کی IDs حاصل کرنے کے لیے [mainnet سب گراف](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) کو کیوری کریں: +1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -112,12 +112,12 @@ query indexerAllocations { - **درمیانہ** - پروڈکشن انڈیکسر 100 سب گراف اور 200-500 درخواستیں فی سیکنڈ کو اٹھا سکتا ہے. - **بڑا** - تمام فی الحال زیر استعمال سب گرافس کو انڈیکس کرنے اور متعلقہ ٹریفک کے لیے درخواستیں پیش کرنے کے لیے تیار ہے. -| سیٹ اپ | Postgres
    (CPUs) | Postgres
    (GBs میں میموری) | Postgres
    (TBs میں ڈسک) | VMs
    (CPUs) | VMs
    (GBs میں میموری) | -| --- | :-: | :-: | :-: | :-: | :-: | -| چھوٹا | 4 | 8 | 1 | 4 | 16 | -| معیاری | 8 | 30 | 1 | 12 | 48 | -| درمیانہ | 16 | 64 | 2 | 32 | 64 | -| بڑا | 72 | 468 | 3.5 | 48 | 184 | +| سیٹ اپ | Postgres
    (CPUs) | Postgres
    (GBs میں میموری) | Postgres
    (TBs میں ڈسک) | VMs
    (CPUs) | VMs
    (GBs میں میموری) | +| ------- |:--------------------------:|:------------------------------------:|:---------------------------------:|:---------------------:|:-------------------------------:| +| چھوٹا | 4 | 8 | 1 | 4 | 16 | +| معیاری | 8 | 30 | 1 | 12 | 48 | +| درمیانہ | 16 | 64 | 2 | 32 | 64 | +| بڑا | 72 | 468 | 3.5 | 48 | 184 | ### وہ کون سی چند بنیادی حفاظتی تدابیر ہیں جو ایک انڈیکسر کو اختیار کرنی چاہیے؟ @@ -149,20 +149,20 @@ query indexerAllocations { #### گراف نوڈ -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (سب گراف کی کیوریز کے لیے) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (سب گراف سبسکرپشنز کے لیے) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (تعیناتیوں کے انتظام کے لیے) | / | --admin-port | - | -| 8030 | سب گراف انڈیکسنگ اسٹیٹس API | /graphql | --index-node-port | - | -| 8040 | Prometheus میٹرکس | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | --------------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (سب گراف کی کیوریز کے لیے) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (سب گراف سبسکرپشنز کے لیے) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (تعیناتیوں کے انتظام کے لیے) | / | --admin-port | - | +| 8030 | سب گراف انڈیکسنگ اسٹیٹس API | /graphql | --index-node-port | - | +| 8040 | Prometheus میٹرکس | /metrics | --metrics-port | - | #### انڈیکسر سروس -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
    (ادا شدہ سب گراف کی کیوریز کے لیے) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus میٹرکس | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
    (ادا شدہ سب گراف کی کیوریز کے لیے) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus میٹرکس | /metrics | --metrics-port | - | #### انڈیکسر ایجنٹ @@ -545,7 +545,7 @@ graph indexer status - `graph indexer rules maybe [options] ` — تعیناتی کے لیے `decisionBasis` کو `rules` پر سیٹ کریں، تاکہ انڈیکسر ایجنٹ یہ فیصلہ کرنے کے لیے انڈیکسنگ کے اصول استعمال کرے کہ آیا اس تعیناتی کو انڈیکس کرنا ہے. -- `graph indexer actions get [options] ` - `all` کا استعمال کرتے ہوئے ایک یا زیادہ کارروائیاں حاصل کریں یا تمام کارروائیاں حاصل کرنے کے لیے `action-id` کو خالی چھوڑ دیں. ایک اضافی argument `--status` کو کسی خاص status کی تمام کارروائیاں کو پرنٹ کرنے کے لیے استعمال کیا جا سکتا ہے. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - قطار(Queue) مختص کرنے کی کارروائی diff --git a/website/pages/ur/network/overview.mdx b/website/pages/ur/network/overview.mdx index dd09e61c8425..4ad830a6a841 100644 --- a/website/pages/ur/network/overview.mdx +++ b/website/pages/ur/network/overview.mdx @@ -2,14 +2,20 @@ title: نیٹ ورک کا جائزہ --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## جائزہ +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![ٹوکن اکنامکس](/img/Network-roles@2x.png) -گراف نیٹ ورک کی اقتصادی حفاظت اور کیوری کیے جانے والے ڈیٹا کی سالمیت کو یقینی بنانے کے لیے، شرکاء گراف ٹوکنز ([GRT](/tokenomics)) کو داؤ پر لگاتے اور استعمال کرتے ہیں۔ GRT ایک ورک یوٹیلیٹی ٹوکن ہے جو کہ نیٹ ورک میں وسائل مختص کرنے کے لیے استعمال ہونے والا ERC-20 ہے. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/ur/new-chain-integration.mdx b/website/pages/ur/new-chain-integration.mdx index cc4e1f532644..8a348a4970ea 100644 --- a/website/pages/ur/new-chain-integration.mdx +++ b/website/pages/ur/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: نئے نیٹ ورکس کو انٹیگریٹ کرنا +title: New Chain Integration --- -گراف نوڈ فی الحال ذیل میں دی گئ چین کی اقسام سے ڈیٹا انڈیکس کر سکتا ہے: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- ایتھیریم, بذریعہ EVM JSON-RPC اور [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR، بذریعہ [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos، بذریعہ [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave، ایک [Arweave Firehose] کے ذریعے (https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -اگر آپ ان میں سے کسی بھی چین میں دلچسپی رکھتے ہیں تو، انٹیگریشن گراف نوڈ کی کنفگریشن اور جانچ کا معاملہ ہے۔ +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -اگر آپ مختلف قسم کی چین میں دلچسپی رکھتے ہیں تو، گراف نوڈ کے ساتھ ایک نئی انٹیگریشن ضرور بنایا جانا چاہیے۔ ہمارا تجویز کردہ نقطہ نظر زیر بحث چین کے لیے ایک نیا فائر ہوز تیار کر رہا ہے اور پھر اس فائر ہوز کا گراف نوڈ کے ساتھ انٹیگریشن۔ ذیل میں مزید معلومات ہیں۔ +## Integration Strategies -**1۔ EVM JSON-RPC** +### 1. EVM JSON-RPC -اگر بلاکچین EVM کے برابر ہے اور کلائنٹ/نوڈ معیاری EVM JSON-RPC API کو ظاہر کرتا ہے، تو گراف نوڈ کو نئی چین کی انڈیکس کرنے کے قابل ہونا چاہیے۔ مزید معلومات کے لیے، [EVM JSON-RPC کی ٹیسٹنگ کرنا](new-chain-integration#testing-an-evm-json-rpc) سے رجوع کریں۔ +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. فائر ہوز** +#### EVM JSON-RPC کی ٹیسٹ کرنا -غیر EVM پر مبنی چینز کے لیے، گراف نوڈ کو gRPC اور معروف قسم کی تعریفوں کے ذریعے بلاکچین ڈیٹا کو ہضم کرنا چاہیے۔ یہ [فائر ہوز](فائر ہوز/) کے ذریعے کیا جا سکتا ہے، جو کہ [سٹریمنگ فاسٹ](https://www.streamingfast.io/) کی تیار کردہ ایک نئی ٹیکنالوجی ہے جو فائلوں پر مبنی اور سٹریمنگ کا استعمال کرتے ہوئے ایک انتہائی قابل توسیع انڈیکسنگ بلاکچین حل فراہم کرتی ہے۔ پہلا نقطہ نظر. 
اگر آپ کو فائر ہوز ڈیولپمنٹ میں مدد کی ضرورت ہو تو [سٹریمنگ فاسٹ ٹیم](میل کریں:integrations@streamingfast.io/) سے رابطہ کریں۔ +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## EVM JSON-RPC اور Firehose کے درمیان فرق +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -جب کہ دونوں سب گرافس کے لیے موزوں ہیں، فائر ہوز ہمیشہ ڈویلپرز کے لیے درکار ہوتا ہے جو [سب سٹریمز](سب سٹریمز/) کے ساتھ تعمیر کرنا چاہتے ہیں، جیسا کہ [سب سٹریمز سے چلنے والے سب گرافس](cookbook/substreams-powered-subgraphs/)۔ اس کے علاوہ، فائر ہوز JSON-RPC کے مقابلے میں بہتر انڈیکسنگ کی رفتار کی اجازت دیتا ہے۔ +### 2. Firehose Integration -نئے EVM چین انٹیگریٹرز سب سٹریمز کے فوائد اور اس کے بڑے پیمانے پر متوازی انڈیکسنگ کی صلاحیتوں کو دیکھتے ہوئے، Firehose پر مبنی نقطہ نظر پر بھی غور کر سکتے ہیں۔ دونوں کو سپورٹ کرنے سے ڈویلپرز کو نئی چین کے لیے سب سٹریمز کی تعمیر یا سب گراف کے درمیان انتخاب کرنے کی اجازت ملتی ہے۔ +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **نوٹ**: EVM چینز کے لیے فائر ہوز پر مبنی انٹیگریشن کے لیے اب بھی انڈیکسرز کو چین کے آرکائیو RPC نوڈ کو صحیح طریقے سے سب گرافس کو انڈیکس کرنے کے لیے چلانے کی ضرورت ہوگی۔ یہ فائر ہوز کی سمارٹ کنٹریکٹ سٹیٹ فراہم کرنے میں ناکامی کی وجہ سے ہے جو عام طور پر `eth_call` RPC طریقہ سے قابل رسائی ہے۔ (یہ یاد دلانے کے قابل ہے کہ ایتھ_کالز [ڈویلپرز کے لیے اچھا عمل نہیں ہے](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. 
-## EVM JSON-RPC کی ٹیسٹ کرنا +#### Specific Firehose Instrumentation for EVM (`geth`) chains -گراف نوڈ کے لیے EVM چین سے ڈیٹا ہضم کرنے کے لیے، RPC نوڈ کو درج ذیل EVM JSON RPC طریقوں کو بے نقاب کرنا چاہیے: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. -- `eth_getLogs` -- `eth_call` \_(تاریخی بلاکس کے لیے, EIP-1898 کے ساتھ - آرکائیو نوڈ کی ضرورت ہے): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(کال ہینڈلرز کو سپورٹ کرنے کے لیے گراف نوڈ کے لیے اختیاری طور پر درکار ہے)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### گراف نوڈ کنفگریشن +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**اپنے مقامی ماحول کو تیار کرکے شروع کریں** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## گراف نوڈ کنفگریشن + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [گراف نوڈ کی نقل بنائیں](https://github.com/graphprotocol/graph-node) -2. نئے نیٹ ورک کا نام اور EVM JSON RPC کے مطابق URL کو شامل کرنے کے لیے [اس لائن](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) میں ترمیم کریں۔ - > env var نام ہی تبدیل نہ کریں۔ نیٹ ورک کا نام مختلف ہونے کے باوجود اسے 'ایتھیریم' ہی رہنا چاہیے۔ -3. IPFS نوڈ چلائیں یا گراف کے ذریعہ استعمال کردہ استعمال کریں: https://api.thegraph.com/ipfs/ -**مقامی طور پر سب گراف کو تعینات کر کے انٹیگریشن کی جانچ کریں** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. 
Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. ایک سادہ مثالی سب گراف بنائیں۔ کچھ آپشنز ذیل میں ہیں: - 1. پہلے سے پیک [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) سمارٹ کنٹریکٹ اور سب گراف ایک اچھا نقطہ آغاز ہے - 2. کسی بھی موجودہ سمارٹ کنٹریکٹ یا سولیٹی ڈویلپر ماحول سے مقامی سب گراف کو بوٹسٹریپ کریں [گراف پلگ ان کے ساتھ Hardhat کا استعمال کرتے ہوئے](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. اپنا سب گراف گراف نوڈ میں بنائیں: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. اپنا سب گراف گراف نوڈ پر شائع کریں: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -اگر کوئی خرابی نہیں ہے تو گراف نوڈ کو تعینات سب گراف کو ہم آہنگ کرنا چاہیے۔ اسے مطابقت پذیری کے لیے وقت دیں، پھر لاگز میں پرنٹ کردہ API اینڈ پوائنٹ پر کچھ GraphQL کیوریز بھیجیں۔ +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## فائر ہوز سے چلنے والی ایک نئی چین کو انٹیگریٹ کرنا +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. ایک سادہ مثالی سب گراف بنائیں۔ کچھ آپشنز ذیل میں ہیں: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +اگر کوئی خرابی نہیں ہے تو گراف نوڈ کو تعینات سب گراف کو ہم آہنگ کرنا چاہیے۔ اسے مطابقت پذیری کے لیے وقت دیں، پھر لاگز میں پرنٹ کردہ API اینڈ پوائنٹ پر کچھ GraphQL کیوریز بھیجیں۔ -فائر ہوز اپروچ کا استعمال کرتے ہوئے ایک نئی چین کو انٹیگریٹ کرنا بھی ممکن ہے۔ یہ فی الحال غیر EVM چینز کے لیے بہترین آپشن ہے اور سب سٹریمز سپورٹ کی ضرورت ہے۔ مزید دستاویزات اس بات پر مرکوز ہیں کہ فائر ہوز کیسے کام کرتا ہے، ایک نئی چین کے لیے فائر ہوز سپورٹ شامل کرنا اور اسے گراف نوڈ کے ساتھ انٹیگریٹ کرنا۔ انٹیگریٹز کے لیے تجویز کردہ دستاویزات: +## Substreams-powered Subgraphs -1. [فائر ہوز پر عمومی دستاویزات](فائر ہوز/) -2. [ایک نئی چین کے لیے فائر ہوز سپورٹ شامل کرنا](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [فائر ہوز کے ذریعے ایک نئی چین کے ساتھ گراف نوڈ کو انٹیگریٹ کرنا](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). 
Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/ur/operating-graph-node.mdx b/website/pages/ur/operating-graph-node.mdx index 53ed532c07f8..1591a45dd2d0 100644 --- a/website/pages/ur/operating-graph-node.mdx +++ b/website/pages/ur/operating-graph-node.mdx @@ -77,13 +77,13 @@ cargo run -p graph-node --release -- \ جب یہ چل رہا ہوتا ہے گراف نوڈ مندرجہ ذیل پورٹس کو بے نقاب کرتا ہے: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (سب گراف کی کیوریز کے لیے) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (سب گراف سبسکرپشنز کے لیے) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (تعیناتیوں کے انتظام کے لیے) | / | --admin-port | - | -| 8030 | سب گراف انڈیکسنگ اسٹیٹس API | /graphql | --index-node-port | - | -| 8040 | Prometheus میٹرکس | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | --------------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (سب گراف کی کیوریز کے لیے) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (سب گراف سبسکرپشنز کے لیے) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (تعیناتیوں کے انتظام کے لیے) | / | --admin-port | - | +| 8030 | سب گراف انڈیکسنگ اسٹیٹس API | /graphql | --index-node-port | - | +| 8040 | Prometheus میٹرکس | /metrics | --metrics-port | - | > **اہم**: پورٹس کو عوامی طور پر ظاہر کرنے میں محتاط رہیں - **انتظامی پورٹس** کو بند رکھا جانا چاہیے. اس میں گراف نوڈ کا JSON-RPC اینڈ پوائنٹ شامل ہے. diff --git a/website/pages/ur/querying/graphql-api.mdx b/website/pages/ur/querying/graphql-api.mdx index 356677089b60..3dfdf8d64d89 100644 --- a/website/pages/ur/querying/graphql-api.mdx +++ b/website/pages/ur/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## کیوریز +## What is GraphQL? -اپنے سب گراف اسکیما میں آپ `Entities` نامی اقسام کی وضاحت کرتے ہیں۔ ہر ایک `Entity` قسم کے لیے، ایک `entity` اور `entities` فیلڈ کو اعلی درجے کی `Query` قسم پر تیار کیا جائے گا۔ نوٹ کریں کہ گراف استعمال کرتے وقت `Query` کو `graphql` کیوری کے اوپر شامل کرنے کی ضرورت نہیں ہے. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### مثالیں @@ -21,7 +29,7 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. } ``` -> **نوٹ:** کسی ایک ہستی کے لیے کیوری کرتے وقت، `id` فیلڈ درکار ہے, اور یہ ایک سٹرنگ ہونا چاہیے. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. تمام `Token` اداروں سے کیوری کریں: @@ -36,7 +44,10 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. ### سورٹنگ -کسی مجموعہ سے کیوری کرتے وقت، `orderBy` پیرامیٹر کو کسی خاص وصف کے مطابق ترتیب دینے کے لیے استعمال کیا جا سکتا ہے۔ اضافی طور پر، ترتیب کی سمت بتانے کے لیے `orderDirection` استعمال کیا جا سکتا ہے، `asc` چڑھنے کے لیے یا `desc` اترنے کے لیے. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### مثال @@ -53,7 +64,7 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. گراف نوڈ کے مطابق [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) ہستیوں کو ترتیب دیا جا سکتا ہے نیسٹڈ اداروں کی بنیاد پر. -درج ذیل مثال میں، ہم ٹوکنز کو ان کے مالک کے نام سے ترتیب دیتے ہیں: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. ### صفحہ بندی -کسی مجموعہ سے کیوری کرتے وقت، `first` پیرامیٹر کو مجموعہ کے آغاز سے صفحہ بندی کرنے کے لیے استعمال کیا جا سکتا ہے۔ یہ بات قابل غور ہے کہ ڈیفالٹ ترتیب ترتیب صعودی حرفی ترتیب میں ID کے لحاظ سے ہے ، تخلیق کے وقت سے نہیں. - -مزید، `skip` پیرامیٹر کو ہستیوں کو چھوڑنے اور صفحہ بندی کرنے کے لیے استعمال کیا جا سکتا ہے۔ جیسے `first:100` پہلی 100 ہستیوں کو دکھاتا ہے اور `first:100, skip:100` اگلی 100 ہستیوں کو دکھاتا ہے. 
+When querying a collection, it's best to: -کیوریز کو بہت بڑی `skip` اقدار کے استعمال سے گریز کرنا چاہیے کیونکہ وہ عام طور پر خراب کارکردگی کا مظاہرہ کرتے ہیں۔ بڑی تعداد میں آئٹمز کو بازیافت کرنے کے لیے، کسی خاصیت کی بنیاد پر ہستیوں کے ذریعے صفحہ بنانا بہت بہتر ہے جیسا کہ آخری مثال میں دکھایا گیا ہے. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### `first` استعمال کرنے کی مثال @@ -106,7 +118,7 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. #### `first` اور `id_ge` استعمال کرنے کی مثال -اگر کسی کلائنٹ کو بڑی تعداد میں ہستیوں کو بازیافت کرنے کی ضرورت ہے، کسی وصف پر کیوری کی بنیاد رکھنا اور اس وصف سے فلٹر کرنا زیادہ پرفارمنس ہے۔ مثال کے طور پر، ایک کلائنٹ اس کیوری کا استعمال کرتے ہوئے بڑی تعداد میں ٹوکنز بازیافت کرے گا: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -پہلی بار، یہ `lastID = ""` کے ساتھ کیوری بھیجے گا، اور اس کے بعد کی درخواستوں کے لیے `lastID` کو آخری کی `id` پچھلی درخواست میں آخری ہستی کے وصف پر سیٹ کرے گا۔ یہ نقطہ نظر `skip` اقدار کو بڑھانے کے مقابلے میں نمایاں طور پر بہتر کارکردگی کا مظاہرہ کرے گا. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### فلٹرنگ -آپ مختلف خصوصیات کو فلٹر کرنے کے لیے اپنے کیوریز میں `where` پیرامیٹر استعمال کر سکتے ہیں۔ آپ `where` پیرامیٹر کے اندر متعدد اقدار پر فلٹر کرسکتے ہیں. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### `where` استعمال کرنے کی مثال @@ -155,7 +168,7 @@ query manyTokens($lastID: String) { #### بلاک فلٹرنگ کی مثال -آپ `_change_block(number_gte: Int)` کے ذریعے بھی ہستیوں کو فلٹر کر سکتے ہیں - یہ ان ہستیوں کو فلٹر کرتا ہے جو مخصوص بلاک میں یا اس کے بعد اپ ڈیٹ ہوئے تھے. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. یہ کارآمد ہو سکتا ہے اگر آپ صرف ان ہستیوں کو لانے کے خواہاں ہیں جو تبدیل ہو چکی ہیں، مثال کے طور پر آخری بار جب آپ نے پول کیا تھا۔ یا متبادل طور پر یہ تحقیق کرنا یا ڈیبگ کرنا مفید ہو سکتا ہے کہ آپ کے سب گراف میں ہستی کیسے تبدیل ہو رہی ہیں (اگر بلاک فلٹر کے ساتھ ملایا جائے تو، آپ صرف ان ہستیوں کو الگ تھلگ کر سکتے ہیں جو ایک مخصوص بلاک میں تبدیل ہوئی ہیں). @@ -193,7 +206,7 @@ query manyTokens($lastID: String) { ##### `AND` آپریٹر -مندرجہ ذیل مثال میں، ہم `outcome` `succeeded` اور `number` کے ساتھ چیلنجوں کے لیے فلٹر کر رہے ہیں جو `100` سے زیادہ یا اس کے برابر ہیں. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. 
```graphql { @@ -208,7 +221,7 @@ query manyTokens($lastID: String) { ``` > **Syntactic شوگر:** آپ `and` آپریٹر کو ہٹا کر کوما سے الگ کردہ سب اظہار کو پاس کر کے مذکورہ کیوری کو آسان بنا سکتے ہیں. -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ query manyTokens($lastID: String) { ##### `OR` آپریٹر -مندرجہ ذیل مثال میں، ہم `outcome` `succeeded` اور `number` کے ساتھ چیلنجوں کے لیے فلٹر کر رہے ہیں جو `100` سے زیادہ یا اس کے برابر ہیں. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) آپ اپنے ہستیوں کی حالت کے بارے میں نہ صرف تازہ ترین بلاک کے لیے کیوری کر سکتے ہیں، جو پہلے سے طے شدہ ہے، بلکہ ماضی میں کسی آربٹریری بلاک کے لیے بھی۔ جس بلاک پر کیوری ہونا چاہیے اس کی وضاحت یا تو اس کے بلاک نمبر یا اس کے بلاک ہیش سے کیوریز کے ٹاپ لیول فیلڈز میں `block` دلیل شامل کر کے کی جا سکتی ہے. -اس طرح کے کیوری کا نتیجہ وقت کے ساتھ نہیں بدلے گا، یعنی ماضی کے کسی مخصوص بلاک پر کیوریز کرنے سے وہی نتیجہ آئے گا چاہے اس پر عمل کیا جائے، اس استثنا کے ساتھ کہ اگر آپ چین کے سر کے بالکل قریب بلاک پر کیوری کرتے ہیں۔, نتیجہ تبدیل ہو سکتا ہے اگر وہ بلاک مین چین پر نہ ہو اور چین دوبارہ منظم ہو جائے۔ ایک بار جب کسی بلاک کو حتمی سمجھا جائے تو کیوری کا نتیجہ تبدیل نہیں ہوگا. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -نوٹ کریں کہ موجودہ نفاذ اب بھی کچھ حدود کے تابع ہے جو ان ضمانتوں کی خلاف ورزی کر سکتی ہے. نفاذ ہمیشہ یہ نہیں بتا سکتا کہ دیا گیا بلاک ہیش بالکل بھی مین چین پر نہیں ہے، یا یہ کہ کسی بلاک کے لیے بلاک ہیش کے ذریعے کی گئی کیوری کا نتیجہ جسے ابھی تک حتمی نہیں سمجھا جا سکتا ہے، کیوری کے ساتھ ساتھ چلنے والی بلاک کی تنظیم نو سے متاثر ہو سکتا ہے۔ وہ بلاک ہیش کے ذریعے سوالات کے نتائج کو متاثر نہیں کرتے جب بلاک حتمی ہو اور مین چین پر جانا جاتا ہو۔ [یہ مسئلہ](https://github.com/graphprotocol/graph-node/issues/1405) وضاحت کرتا ہے کہ یہ حدود کیا ہیں. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. #### مثال @@ -322,12 +335,12 @@ _change_block(number_gte: Int) فل ٹیکسٹ سرچ آپریٹرز: -| علامت | آپریٹر | تفصیل | -| --- | --- | --- | -| `&` | `And` | ایک سے زیادہ تلاش کی اصطلاحات کو ایک فلٹر میں یکجا کرنے کے لیے ان ہستیوں کے لیے جس میں فراہم کردہ تمام اصطلاحات شامل ہوں | -| | | `Or` | Or آپریٹر کے ذریعہ الگ کردہ متعدد تلاش کی اصطلاحات کے ساتھ کیوریز فراہم کردہ شرائط میں سے کسی سے بھی مماثلت کے ساتھ تمام ہستیوں کو واپس کریں گے | -| `<>` | `Follow by` | دو الفاظ کے درمیان فاصلہ بتائیں. | -| `:*` | `Prefix` | ایسے الفاظ تلاش کرنے کے لیے پریفکس ​​تلاش کی اصطلاح استعمال کریں جن کا سابقہ ​​مماثل ہو (۲ حروف درکار ہیں.) 
|
+| علامت  | آپریٹر      | تفصیل                                                                                                                                           |
+| ------ | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
+| `&`    | `And`       | ایک سے زیادہ تلاش کی اصطلاحات کو ایک فلٹر میں یکجا کرنے کے لیے ان ہستیوں کے لیے جس میں فراہم کردہ تمام اصطلاحات شامل ہوں                           |
+| &#124; | `Or`        | Or آپریٹر کے ذریعہ الگ کردہ متعدد تلاش کی اصطلاحات کے ساتھ کیوریز فراہم کردہ شرائط میں سے کسی سے بھی مماثلت کے ساتھ تمام ہستیوں کو واپس کریں گے   |
+| `<>`   | `Follow by` | دو الفاظ کے درمیان فاصلہ بتائیں.                                                                                                                  |
+| `:*`   | `Prefix`    | ایسے الفاظ تلاش کرنے کے لیے پریفکس ​​تلاش کی اصطلاح استعمال کریں جن کا سابقہ ​​مماثل ہو (۲ حروف درکار ہیں.)                                          |

#### مثالیں

@@ -376,11 +389,11 @@ _change_block(number_gte: Int)

## سکیما

-آپ کے ڈیٹا کے ماخذ کا اسکیما -- یعنی، ہستی کی اقسام، اقدار اور رشتے جو کیوری کے لیے دستیاب ہیں -- کی وضاحت [GraphQL انٹرفیس ڈیفینیشن لینگویج (IDL)](https://facebook.github.io/graphql/draft/# کے ذریعے کی گئی ہے۔ sec-Type-System).
+The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, is defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).

-GraphQL اسکیمے عام طور پر `queries`، `subscriptions` اور `mutations` کے لیے جڑ کی اقسام کی وضاحت کرتے ہیں۔ گراف صرف `queries` کو سپورٹ کرتا ہے۔ آپ کے سب گراف کے لیے روٹ `Query` قسم خود بخود GraphQL اسکیما سے تیار ہوتی ہے جو آپ کے سب گراف مینی فیسٹ میں شامل ہے.
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).

-> **Note:** ہمارا API تغیرات کو ظاہر نہیں کرتا ہے کیونکہ ڈویلپرز سے توقع کی جاتی ہے کہ وہ اپنی ایپلیکیشنز سے بنیادی بلاکچین کے خلاف براہ راست ٹرانزیکشن جاری کریں.
+> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.

### ہستیوں

@@ -408,7 +421,7 @@ GraphQL اسکیمے عام طور پر `queries`، `subscriptions` اور `muta

اگر کوئی بلاک فراہم کیا جاتا ہے تو، میٹا ڈیٹا اس بلاک کا ہوتا ہے، اگر تازہ ترین انڈیکسڈ بلاک استعمال نہیں کیا جاتا ہے۔ اگر فراہم کیا گیا ہو، تو بلاک سب گراف کے اسٹارٹ بلاک کے بعد ہونا چاہیے، اور حال ہی میں انڈیکس کیے گئے بلاک سے کم یا اس کے برابر ہونا چاہیے.

-`deployment` ایک منفرد ID ہے، جو `subgraph.yaml` فائل کے IPFS CID سے مطابقت رکھتی ہے.
+`deployment` ایک منفرد ID ہے، جو `subgraph.yaml` فائل کے IPFS CID سے مطابقت رکھتی ہے.

`block` تازہ ترین بلاک کے بارے میں معلومات فراہم کرتا ہے (`_meta` کو بھیجی گئی کسی بھی بلاک کی رکاوٹوں کو مدنظر رکھتے ہوئے):

diff --git a/website/pages/ur/querying/querying-best-practices.mdx b/website/pages/ur/querying/querying-best-practices.mdx
index 297edf934885..6abbca4a6d6f 100644
--- a/website/pages/ur/querying/querying-best-practices.mdx
+++ b/website/pages/ur/querying/querying-best-practices.mdx
@@ -2,11 +2,9 @@ title: بہترین طریقوں سے کیوری کرنا
 ---

-گراف بلاکچینز سے ڈیٹا کو کیوری کرنے کا ایک ڈیسینٹرالائزڈ طریقہ فراہم کرتا ہے.
+The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language.

-گراف نیٹ ورک کا ڈیٹا GraphQL API کے ذریعے ظاہر کیا جاتا ہے، جس سے GraphQL لینگویج کے ساتھ ڈیٹا سے کیوری کرنا آسان ہو جاتا ہے.
-
-یہ صفحہ GraphQL لینگویج کے ضروری اصولوں اور GraphQL کیوریز کے بہترین طریقوں کے بارے میں آپ کی رہنمائی کرے گا.
+Learn the essential GraphQL language rules and GraphQL querying best practices.

---

@@ -55,7 +53,7 @@ query [operationName]([variableName]: [variableType]) {

اگرچہ نحوی کرنے اور نہ کرنے کی فہرست طویل ہے، لیکن GraphQL کی کیوریز لکھنے کی بات کرتے وقت ذہن میں رکھنے کے لیے ضروری اصول یہ ہیں:

- ہر ایک `queryName` کو فی آپریشن صرف ایک بار استعمال کیا جانا چاہیے.
-- ہر ایک `field` کو انتخاب میں صرف ایک بار استعمال کیا جانا چاہیے (ہم `token` کے تحت دو بار `id` سے کیوری نہیں کرسکتے ہیں)
+- ہر ایک `field` کو انتخاب میں صرف ایک بار استعمال کیا جانا چاہیے (ہم `token` کے تحت دو بار `id` سے کیوری نہیں کرسکتے ہیں)
- Some `field`s or queries (like `tokens`) return complex types that require a selection of sub-field. Not providing a selection when expected (or providing one when not expected - for example, on `id`) will raise an error. To know a field type, please refer to [Graph Explorer](/network/explorer).
- کسی دلیل کو تفویض کردہ کوئی بھی متغیر اس کی قسم سے مماثل ہونا چاہیے.
- متغیرات کی دی گئی فہرست میں، ان میں سے ہر ایک منفرد ہونا چاہیے.

@@ -71,7 +69,7 @@ GraphQL ایک لینگویج اور کنونشنز کا مجموعہ ہے جو

اس کا مطلب ہے کہ آپ معیاری `fetch` (مقامی طور پر یا `@whatwg-node/fetch` یا `isomorphic-fetch` کے ذریعے) کا استعمال کرتے ہوئے GraphQL API سے کیوری کرسکتے ہیں.

-تاہم، جیسا کہ ["ایک درخواست سے کیوری کرنا"](/querying/querying-from-an-application) میں بتایا گیا ہے، ہم آپ کو ہمارا `graph-client` استعمال کرنے کی تجویز کرتے ہیں جو منفرد خصوصیات کی حمایت کرتا ہے جیسے:
+However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommended to use `graph-client` which supports unique features such as:

- کراس چین سب گراف ہینڈلنگ: ایک کیوری میں متعدد سب گرافس سے کیوری کرنا
- [خودکار بلاک ٹریکنگ](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)

@@ -104,8 +102,6 @@ main()

مزید GraphQL کلائنٹ متبادلات کا احاطہ ["ایک درخواست سے کیوری کرنا"](/querying/querying-from-an-application) میں کیا گیا ہے.

-اب جب کہ ہم نے GraphQL کیوریز کی ترکیب کے بنیادی اصولوں کا احاطہ کیا ہے، آئیے اب GraphQL کیوری تحریر کے بہترین طریقوں کو دیکھتے ہیں.
-
---

## بہترین طریقے

@@ -164,11 +160,11 @@ const result = await execute(query, {

- سرور کی سطح پر **متغیرات کو کیش کیا جا سکتا ہے**
- **کیوریز کا مستحکم طور پر ٹولز کے ذریعے تجزیہ کیا جا سکتا ہے** (مندرجہ ذیل حصوں میں اس پر مزید)

-**نوٹ: جامد کیوریز میں فیلڈز کو مشروط طور پر کیسے شامل کیا جائے**
+### How to include fields conditionally in static queries

-ہم صرف ایک خاص شرط پر `owner` فیلڈ کو شامل کرنا چاہتے ہیں.
+You might want to include the `owner` field only on a particular condition.

-اس کے لیے، ہم ذیل میں `@include(if:...)` ہدایت کا فائدہ اٹھا سکتے ہیں:
+For this, you can leverage the `@include(if:...)` directive as follows:

```tsx
import { execute } from 'your-favorite-graphql-client'

@@ -191,7 +187,7 @@ const result = await execute(query, {
 })
 ```

-نوٹ: مخالف ہدایت `@skip(if: ...)` ہے.
+> نوٹ: مخالف ہدایت `@skip(if: ...)` ہے.

### Ask for what you want

GraphQL اپنی "جو چاہو مانگو" ٹیگ لائن کے لیے مشہو

اس وجہ سے، GraphQL میں، تمام دستیاب فیلڈز کو انفرادی طور پر فہرست بنائے بغیر حاصل کرنے کا کوئی طریقہ نہیں ہے.

-GraphQL APIs سے کیوری کرتے وقت، ہمیشہ صرف ان فیلڈز سے کیوری کرنے کے بارے میں سوچیں جو حقیقت میں استعمال ہوں گے.
- -اوور فیچنگ کی ایک عام وجہ ہستیوں کا مجموعہ ہے۔ پہلے سے طے شدہ طور پر، کیوریز ایک مجموعہ میں 100 ہستیوں کو حاصل کریں گے، جو عام طور پر اس سے کہیں زیادہ ہوتا ہے جو اصل میں استعمال کیا جائے گا، مثلاً، صارف کو دکھانے کے لیے۔ اس لیے کیوریز کو تقریباً ہمیشہ پہلے واضح طور پر سیٹ کرنا چاہیے، اور اس بات کو یقینی بنانا چاہیے کہ وہ صرف اتنی ہی ہستیوں کو حاصل کریں جتنی انھیں درحقیقت ضرورت ہے۔ اس کا اطلاق نہ صرف کیوری میں اعلیٰ سطحی مجموعوں پر ہوتا ہے، بلکہ اس سے بھی زیادہ ہستیوں کے گھریلو مجموعوں پر ہوتا ہے. +- GraphQL APIs سے کیوری کرتے وقت، ہمیشہ صرف ان فیلڈز سے کیوری کرنے کے بارے میں سوچیں جو حقیقت میں استعمال ہوں گے. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. مثال کے طور پر، درج ذیل کیوری میں: @@ -337,8 +332,8 @@ query { اس طرح کے دہرائے جانے والے فیلڈز (`id`, `active`, `status`) بہت سے مسائل لاتے ہیں: -- مزید وسیع کیوریز کے لیے پڑھنا مشکل ہے -- ایسے ٹولز کا استعمال کرتے وقت جو کیوریز کی بنیاد پر ٹائپ اسکرپٹ کی قسمیں تیار کرتے ہیں (_آخری حصے میں اس پر مزید_)، `newDelegate` اور `oldDelegate` کے نتیجے میں دو الگ الگ ان لائن انٹرفیس ہوں گے. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. کیوری کا ایک ریفیکٹر ورژن درج ذیل ہوگا: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -GraphQL `fragment` کا استعمال پڑھنے کی اہلیت کو بہتر بنائے گا (خاص طور پر پیمانے پر) لیکن اس کے نتیجے میں ٹائپ اسکپٹ ٹائپس جینریشن بہتر ہوں گی. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. ٹائپ جنریشن ٹول کا استعمال کرتے وقت، مندرجہ بالا کیوری ایک مناسب `DelegateItemFragment` قسم پیدا کرے گا (_آخری "ٹولز" سیکشن دیکھیں_). ### GraphQL فریگمنٹ کیا کریں اور نہ کریں -**فریگمنٹ بیس ایک ٹائپ کا ہونا چاہیے** +### فریگمنٹ بیس ایک ٹائپ کا ہونا چاہیے ایک فریگمینٹ غیر قابل اطلاق ٹائپ پر مبنی نہیں ہو سکتا، مختصراً، **اس ٹائپ پر جس میں فیلڈز نہیں ہیں**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` ایک **اسکالر** (مقامی "سادہ" ٹائپ) ہے جسے فریگمینٹس کی بنیاد کے طور پر استعمال نہیں کیا جاسکتا. -**فریگمینٹ پھیلانے کا طریقہ** +#### فریگمینٹ پھیلانے کا طریقہ فریگمینٹس کی وضاحت مخصوص ٹائپس پر کی جاتی ہے اور اس کے مطابق کیوریز میں استعمال کیا جانا چاہیے. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { یہاں `Vote` ٹائپ کے فریگمینٹس کو پھیلانا ممکن نہیں ہے. -**فریگمنٹ کو ڈیٹا کی ایٹم بزنس یونٹ کے طور پر بیان کریں** +#### فریگمنٹ کو ڈیٹا کی ایٹم بزنس یونٹ کے طور پر بیان کریں -GraphQL فریگمنٹ کو ان کے استعمال کی بنیاد پر بیان کیا جانا چاہیے. +GraphQL `Fragment`s must be defined based on their usage. زیادہ تر استعمال کے کیس کے لیے، فی ٹائپ کے ایک فریگمینٹ کی وضاحت کرنا (بار بار فیلڈز کے استعمال یا ٹائپ جنریشن کی صورت میں) کافی ہے. -فریگمینٹ استعمال کرنے کے لیے یہاں انگوٹھے کا اصول ہے: +Here is a rule of thumb for using fragments: -- جب ایک ہی قسم کے فیلڈز کو ایک کیوری میں دہرایا جاتا ہے، تو انہیں ایک فریگمینٹ میں گروپ کریں -- جب ایک جیسے لیکن ایک جیسے فیلڈز کو دہرایا نہیں جاتا ہے، تو متعدد فریگمینٹس بنائیں، مثال کے طور پر: +- When fields of the same type are repeated in a query, group them in a `Fragment`. 
+- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## ضروری اوزار +## The Essential Tools ### GraphQL ویب پر مبنی ایکسپلوررز @@ -473,11 +468,11 @@ If you are looking for a more flexible way to debug/test your queries, other sim [GraphQL VSCode ایکسٹینشن](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) حاصل کرنے کے لیے آپ کے ترقیاتی ورک فلو میں ایک بہترین اضافہ ہے: -- نحو کو نمایاں کرنا -- خودکار تکمیل کی تجاویز -- اسکیما کے خلاف توثیق -- ٹکڑے -- ٹکڑوں اور ان پٹ کی اقسام کے لیے تعریف پر جائیں +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types اگر آپ `graphql-eslint` استعمال کر رہے ہیں، تو [ESLint VSCode ایکسٹینشن](https://marketplace. visualstudio. com/items? itemName=dbaeumer. vscode-eslint) ہے آپ کے کوڈ میں موجود غلطیوں اور انتباہات کو درست طریقے سے دیکھنا ضروری ہے. @@ -485,9 +480,9 @@ If you are looking for a more flexible way to debug/test your queries, other sim [JS GraphQL پلگ ان](https://plugins.jetbrains.com/plugin/8097-graphql/) فراہم کر کے GraphQL کے ساتھ کام کرتے ہوئے آپ کے تجربے کو نمایاں طور پر بہتر بنائے گا: -- نحو کو نمایاں کرنا -- خودکار تکمیل کی تجاویز -- اسکیما کے خلاف توثیق -- ٹکڑے +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -اس [ویب سٹورم مضمون](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) پر مزید معلومات جو پلگ ان کی تمام اہم خصوصیات کو ظاہر کرتا ہے. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/ur/quick-start.mdx b/website/pages/ur/quick-start.mdx index 5126122005e2..8cf4683ba7c7 100644 --- a/website/pages/ur/quick-start.mdx +++ b/website/pages/ur/quick-start.mdx @@ -2,24 +2,18 @@ title: فورا شروع کریں --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -اس بات کو یقینی بنائیں کہ آپ کا سب گراف کسی [تعاون یافتہ نیٹ ورک] (/developing/supported-networks) سے ڈیٹا کو ترتیب دے رہا ہے۔. - -یہ گائیڈ یہ فرض کرتے ہوئے لکھی گئ ہے کہ آپ کے پاس ہے: +## Prerequisites for this guide - ایک کرپٹو والیٹ -- ایک سمارٹ کنٹریکٹ ایڈریس جو آپ کی مرضی کے نیٹ ورک پر ہے - -## 1. سب گراف سٹوڈیو پر سب گراف بنائیں - -[سب گراف سٹوڈیو](https://thegraph.com/studio/) پر جائیں اور اپنے والیٹ کو منسلک کریں۔ +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. گراف CLI انسٹال کریں +### ۱. گراف CLI انسٹال کریں -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. 
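For example, you can quickly confirm your environment and look up the latest published CLI version before installing. This is only a sanity check and assumes you use `npm`; substitute the equivalent commands for `yarn` or `pnpm`:

```sh
# Confirm Node.js and npm are available on your machine
node --version
npm --version

# Look up the latest published version of the Graph CLI on npm
npm view @graphprotocol/graph-cli version
```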
اپنی مقامی مشین پر، درج زیل کمانڈز میں سے ایک کو رن کریں: @@ -35,133 +29,161 @@ npm install -g @graphprotocol/graph-cli@latest yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> آپ اپنے مخصوص سب گراف کے لیے سب گراف کے پیج پر [سب گراف سٹوڈیو](https://thegraph.com/studio/) میں کمانڈز تلاش کر سکتے ہیں. +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +آپ اپنے مخصوص سب گراف کے لیے سب گراف کے پیج پر [سب گراف سٹوڈیو](https://thegraph.com/studio/) میں کمانڈز تلاش کر سکتے ہیں. -جب آپ اپنے سب گراف کو شروع کرتے ہیں, CLI ٹول درج ذیل معلومات کے لۓ آپ سے پوچھے گا: +When you initialize your subgraph, the CLI will ask you for the following information: -- پروٹوکول: پروٹوکول منتخب جس سے آپ کا سب گراف ڈیٹا انڈیکس کرے گا -- سب گراف سلگ: اپنے سب گراف کے لیےؑ نام رکھیں. آپ کا سب گراف سلگ آپ کع سب گراف کا شناخت کنندہ ہے. -- سب گراف بنانے کے لیۓ ڈائریکٹری: اپنی مقامی ڈائریکٹری منتخب کریں -- ایتھیریم نیٹ ورک(اختیاری): آپ کو یہ بتانے کی ضرورت ہو سکتی ہے کہ آپ کا سب گراف کس EVM سے مطابقت رکھنے والے نیٹ ورک سے ڈیٹا کو انڈیکس کرے گا -- کنٹریکٹ ایڈریس: وہ سمارٹ کنٹریکٹ ایڈریس تلاش کریں جس سے آپ ڈیٹا کیوری کرنا چاہتے ہیں -- ABI: اگر ABI خود بخود نہیں ہے، آپ کو اسے JSON فائل کے طور پر دستی طور پر ان پٹ کرنے کی ضرورت ہوگی -- سٹارٹ بلاک: یہ تجویز کیا جاتا ہے کے آپ وقت بچانے کے لیۓ سٹارٹ بلاک میں ان پٹ کریں جبکہ آپ کا سب گراف بلاکچین ڈیٹا کو انڈیکس کرتا ہے۔ آپ اس بلاک کو تلاش کر کے سٹارٹ بلاک کا پتہ لگا سکتے ہیں جہاں آپ کا کنٹریکٹ تعینات کیا گیا تھا. -- کنٹریکٹ کا نام: اپنے کنٹریکٹ کا نام درج کریں -- کنٹریکٹ کے واقعات کو انڈیکس کریں بطور ادارے: یہ تجویز کیا جاتا ہے کہ آپ اسے درست پر سیٹ کریں کیونکہ یہ خود بخود ہر خارج ہونے والے واقع کے لیے آپ کے سب گراف میں میپنگس کا اضافہ کر دے گا۔ -- ایک اور کنٹریکٹ شامل کریں(اختیاری): آپ ایک اور کنٹریکٹ شامل کر سکتے ہیں۔ +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. 
+- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. اپنے سب گراف کو شروع کرتے وقت کیا توقع کی جائے اس کی مثال کے لیے درج ذیل اسکرین شاٹ دیکھیں: ![سب گراف کمانڈ](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -پچھلی کمانڈز ایک سکیفولڈ سب گراف بناتی ہیں جسے آپ اپنے سب گراف کی تعمیر کے لیے نقطہ آغاز کے طور پر استعمال کر سکتے ہیں۔ سب گراف میں تبدیلی کرتے وقت، آپ بنیادی طور پر تین فائلوں کے ساتھ کام کریں گے: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -ایک بار آپ کا سب گراف لکھا جائے، درج ذیل کمانڈز رن کریں: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. ایک بار آپ کا سب گراف لکھا جائے، درج ذیل کمانڈز رن کریں: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- اپنے سب گراف کی تصدیق اور اسے تعینات کریں. تعیناتی کی کلید آپ کو سب گراف پیج پر ملے گی جو سب گراف سٹوڈیو میں موجود ہے. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -آپ سے ورژن کا لیبل طلب کیا جائے گا۔ `0.0.1` کی طرح ورژن بنانے کے لیے [سیمور](https://semver.org/) استعمال کرنے کی پر زور سفارش کی جاتی ہے۔ کو بتاتی ہے، آپ کسی بھی سٹرنگ کو ورژن کے طور پر منتخب کرنے کے لیے آزاد ہیں جیسے: `v1`، `version1`، `asdf`۔ - -## 6. اپنے سب گراف کو ٹیسٹ کریں - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. 
- -لوگز آپ کو بتائیں گے اکر آپ کے سب گراف میں مسائل ہیں۔ آپریشنل سب گراف کے لوگز اس طرح کے دکھیں گے: - -![سب گراف لاگز](/img/subgraph-logs-image.png) - -اگر آپ کا سب گراف ناکام ہو رہا ہے، تو آپ GraphiQL پلے گراؤنڈ کا استعمال کر کے سب گراف کی صحت کے بارے میں کیوری کر سکتے ہیں۔ نوٹ کریں کہ آپ نیچے دیے گئے کیوری سے فائدہ اٹھا سکتے ہیں اور اپنے سب گراف کے لیے اپنی تعیناتی شناخت درج کر سکتے ہیں۔ اس صورت میں، `Qm...` تعیناتی شناخت ہے (جو **تفصیلات** کے تحت سب گراف کے پیج پر واقع ہو سکتی ہے)۔ ذیل کا کیوری آپ کو بتائے گا کہ سب گراف کب ناکام ہوجاتا ہے، لہذا آپ اس کے مطابق ڈی بگ کرسکتے ہیں: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -وہ نیٹ ورک منتخب کریں جس پر آپ اپنا سب گراف شائع کرنا چاہتے ہیں۔ Arbitrum One کے سب گراف شائع کرنے کی سفارش کی جاتی ہے تاکہ [تیز تر ٹرانزیکشن کی رفتار اور گیس کے کم اخراجات] (/arbitrum/arbitrum-faq) سے فائدہ اٹھایا جا سکے۔ +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![سب گراف لاگز](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -گیس کی قیمتیں بچانے کے لیے، جب آپ اپنا سب گراف گراف کے ڈیسینٹرالائزڈ نیٹ ورک پر شائع کرتے ہیں تو آپ اس بٹن کو منتخب کرکے اپنے سب گراف کو اسی ٹرانزیکشن میں درست کر سکتے ہیں جسے آپ نے شائع کیا تھا: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![سب گراف شائع کریں](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. 
+ +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![سب گراف شائع کریں](/img/publish-and-signal-tx.png) -اب، آپ اپنے سب گراف کی کیوریز کو اپنے سب گراف کے کیوری URL پر بھیج کر اپنے سب گراف سے کیوری کر سکتے ہیں، جسے آپ کیوری کے بٹن پر کلک کر کے تلاش کر سکتے ہیں. +### 8. Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -اپنے سب گراف سے ڈیٹا کیوری کرنے کے بارے میں مزید معلومات کے لیے، مزید پڑھیں [یہاں](/querying/querying-the-graph/)۔ +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/ur/release-notes/assemblyscript-migration-guide.mdx b/website/pages/ur/release-notes/assemblyscript-migration-guide.mdx index 31439d43c505..9c8d0512a4eb 100644 --- a/website/pages/ur/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/ur/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - اگر آپ کے پاس متغیر شیڈونگ ہے تو آپ کو اپنے ڈپلیکیٹ متغیرات کا نام تبدیل کرنے کی ضرورت ہوگی. - ### کالعدم موازنہ - اپنے سب گراف پر اپ گریڈ کرنے سے، بعض اوقات آپ کو اس طرح کی غلطیاں مل سکتی ہیں: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - حل کرنے کے لیے آپ صرف `if` اسٹیٹمنٹ کو اس طرح تبدیل کر سکتے ہیں: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? 
container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - اس مسئلے کو حل کرنے کے لیے، آپ اس پراپرٹی تک رسائی کے لیے ایک متغیر بنا سکتے ہیں تاکہ مرتب کرنے والا منسوخی چیک میجک کر سکے: ```typescript diff --git a/website/pages/ur/release-notes/graphql-validations-migration-guide.mdx b/website/pages/ur/release-notes/graphql-validations-migration-guide.mdx index fba78a067915..3d1289e29dd2 100644 --- a/website/pages/ur/release-notes/graphql-validations-migration-guide.mdx +++ b/website/pages/ur/release-notes/graphql-validations-migration-guide.mdx @@ -4,7 +4,8 @@ title: GraphQL کی توثیق کی منتقلی گائیڈ جلد ہی `گراف نوڈ` [GraphQL توثیق کی تفصیلات](https://spec.graphql.org/June2018/#sec-Validation) کی 100% کوریج کو سپورٹ کرے گا. -`گراف نوڈ` کے پچھلے ورژن تمام توثیقوں کی حمایت نہیں کرتے تھے اور زیادہ خوبصورت جوابات فراہم کرتے تھے - لہذا، ابہام کی صورت میں، `گراف نوڈ` غلط گراف کیو ایل آپریشن کے اجزاء کو نظر انداز کر رہا تھا. +`گراف نوڈ` کے پچھلے ورژن تمام توثیقوں کی حمایت نہیں کرتے تھے اور زیادہ خوبصورت جوابات فراہم کرتے تھے - لہذا، ابہام کی صورت میں، `گراف نوڈ` غلط گراف کیو ایل + آپریشن کے اجزاء کو نظر انداز کر رہا تھا. GraphQL ویلیڈیشن سپورٹ آنے والی نئی خصوصیات اور گراف نیٹ ورک کے پیمانے پر کارکردگی کا ایک ستون ہے. diff --git a/website/pages/ur/sps/introduction.mdx b/website/pages/ur/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/ur/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. + +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/ur/sps/triggers-example.mdx b/website/pages/ur/sps/triggers-example.mdx new file mode 100644 index 000000000000..062ad433e77b --- /dev/null +++ b/website/pages/ur/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## شرطیں + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. 
+- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into subgraph entities:

```ts
import { Protobuf } from 'as-proto/assembly'
import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
import { MyTransfer } from '../generated/schema'

export function handleTriggers(bytes: Uint8Array): void {
  // Decode the raw Substreams bytes into the generated protoEvents object
  const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)

  for (let i = 0; i < input.data.length; i++) {
    const event = input.data[i]

    if (event.transfer != null) {
      // Create one MyTransfer entity per SPL token transfer event
      let entity_id: string = `${event.txnId}-${i}`
      const entity = new MyTransfer(entity_id)
      entity.amount = event.transfer!.instruction!.amount.toString()
      entity.source = event.transfer!.accounts!.source
      entity.designation = event.transfer!.accounts!.destination

      if (event.transfer!.accounts!.signer!.single != null) {
        entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
      } else if (event.transfer!.accounts!.signer!.multisig != null) {
        entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
      }
      entity.save()
    }
  }
}
```

## Step 5: Generate Protobuf Files

To generate Protobuf objects in AssemblyScript, run the following command:

```bash
npm run protogen
```

This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.

## Conclusion

You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.

For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/pages/ur/sps/triggers.mdx b/website/pages/ur/sps/triggers.mdx
new file mode 100644
index 000000000000..ed19635d4768
--- /dev/null
+++ b/website/pages/ur/sps/triggers.mdx
@@ -0,0 +1,37 @@
+---
+title: Substreams Triggers
+---
+
+Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework.
+
+> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container.
+
+The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.

```tsx
export function handleTransactions(bytes: Uint8Array): void {
  let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
  if (transactions.length == 0) {
    log.info('No transactions found', [])
    return
  }

  for (let i = 0; i < transactions.length; i++) {
    // 2.
    let transaction = transactions[i]

    let entity = new Transaction(transaction.hash) // 3.
    entity.from = transaction.from
    entity.to = transaction.to
    entity.save()
  }
}
```

Here's what you’re seeing in the `mappings.ts` file:

1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object can then be used like any other AssemblyScript object
2. 
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/ur/substreams.mdx b/website/pages/ur/substreams.mdx index 060d72a059d0..4a0f47205092 100644 --- a/website/pages/ur/substreams.mdx +++ b/website/pages/ur/substreams.mdx @@ -4,9 +4,11 @@ title: سب سٹریمز ![سب سٹریمز لوگو](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## سب سٹریمز 4 مراحل میں کیسے کام کرتا ہے۔ @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### اپنے علم کو وسعت دیں - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/ur/sunrise.mdx b/website/pages/ur/sunrise.mdx index 17e5e917240a..89e51796c01e 100644 --- a/website/pages/ur/sunrise.mdx +++ b/website/pages/ur/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -یہ منصوبہ گراف ایکو سسٹم کی کئی پچھلی پیشرفتوں پر مبنی ہے، بشمول نئے شائع شدہ سب گرافس پر کیوریز پیش کرنے کے لیے ایک اپ گریڈ انڈیکسر، اور نئے بلاکچین نیٹ ورکس کو گراف میں ضم کرنے کی صلاحیت. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## گراف نیٹ ورک میں سب گراف کو اپ گریڈ کرنا +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. Once you have generated an API key, you can begin making queries immediately. 
[Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -معاون چینز کی ایک جامع فہرست [یہاں](/developing/supported-networks/) تلاش کریں. +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### ایج اور نوڈ اپ گریڈ انڈیکسر کیوں چلا رہا ہے؟ -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? -Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. 
This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -اپ گریڈ انڈیکسر انڈیکسر کمیونٹی کو گراف نیٹ ورک پر سب گرافس اور نئی چینز کی ممکنہ مانگ کے بارے میں معلومات بھی فراہم کرتا ہے. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### ڈیلیگیٹرز کے لیے اس کا کیا مطلب ہے؟ -اپ گریڈ انڈیکسر ڈیلیگیٹرز کے لیے ایک طاقتور موقع فراہم کرتا ہے۔ چونکہ زیادہ سب گراف ہوسٹڈ سروس سے گراف نیٹ ورک میں اپ گریڈ ہوتے ہیں، ڈیلیگیٹرز نیٹ ورک کی بڑھتی ہوئی سرگرمی سے فائدہ اٹھاتے ہیں. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### کیا اپ گریڈ انڈیکسر انعامات کے لیے موجودہ انڈیکسرز کا مقابلہ کرے گا؟ +### Did the upgrade Indexer compete with existing Indexers for rewards? -نہیں، اپ گریڈ انڈیکسر صرف فی سب گراف کم از کم رقم مختص کرے گا اور انڈیکسنگ انعامات جمع نہیں کرے گا. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### یہ سب گراف ڈویلپرز کو کیسے متاثر کرے گا؟ +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### اس سے ڈیٹا صارفین کو کیسے فائدہ ہوتا ہے؟ +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### اپ گریڈ انڈیکسر کیوریز کی قیمت کیسے لگائے گا؟ - -اپ گریڈ انڈیکسر مارکیٹ ریٹ پر کیوریز کی قیمت لگائے گا تاکہ کیوری فیس مارکیٹ پر اثر انداز نہ ہو. - -### اپ گریڈ انڈیکسر کے سب گراف کو سپورٹ کرنے سے روکنے کے لیے کیا معیار ہیں؟ - -اپ گریڈ انڈیکسر ایک سب گراف پر کام کرے گا جب تک کہ یہ کم از کم 3 دیگر انڈیکسرز کے ذریعہ پیش کردہ مستقل کیوریز کے ساتھ کافی اور کامیابی کے ساتھ پیش نہیں کیا جاتا ہے. 
- -مزید برآں، اپ گریڈ انڈیکسر سب گراف کو سپورٹ کرنا بند کر دے گا اگر اس سے پچھلے 30 دنوں میں کیوری نہیں کیا گیا ہے. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### کیا مجھے اپنا انفراسٹرکچر چلانے کی ضرورت ہے؟ - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -ایک بار جب آپ کا سب گراف مناسب کیوریشن سگنل تک پہنچ جاتا ہے اور دوسرے انڈیکسرز اس کی حمایت کرنا شروع کر دیتے ہیں، تو اپ گریڈ انڈیکسر آہستہ آہستہ ختم ہو جائے گا، جس سے دوسرے انڈیکسرز کو انڈیکسنگ انعامات اور کیوری کی فیسیں جمع کرنے کی اجازت مل جائے گی. - -### کیا مجھے اپنے انڈیکسنگ انفراسٹرکچر کی میزبانی کرنی چاہئے؟ - -گراف نیٹ ورک کے استعمال کے مقابلے میں آپ کے اپنے پروجیکٹ کے لیے انفراسٹرکچر چلانا [نمایاں طور پر زیادہ وسائل والا](/network/benefits/) ہے. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -یہ کہا جا رہا ہے، اگر آپ ابھی بھی [گراف نوڈ](https://github.com/graphprotocol/graph-node) چلانے میں دلچسپی رکھتے ہیں، تو گراف نیٹ ورک میں شامل ہونے پر غور کریں [بطور انڈیکسر](https://thegraph. com/blog/how-to-become-indexer/) اپنے سب گراف اور دیگر پر ڈیٹا پیش کرکے انڈیکسنگ انعامات اور کیوری کی فیس حاصل کرنے کے لیے. - -### کیا مجھے سینٹرلائزڈ انڈیکسنگ فراہم کنندہ استعمال کرنا چاہیے؟ - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. 
- -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -سینٹرلائزڈ ہوسٹنگ پر گراف کے فوائد کی تفصیلی بریک ڈاؤن یہ ہے: +### How does the upgrade Indexer price queries? -- **لچک اور ریڈینڈینسی**: ڈیسینٹرالائزڈ نظام اپنی تقسیم شدہ نوعیت کی وجہ سے فطری طور پر زیادہ مضبوط اور لچکدار ہوتے ہیں۔ ڈیٹا کسی ایک سرور یا مقام پر محفوظ نہیں ہوتا ہے۔ اس کے بجائے، یہ دنیا بھر میں سینکڑوں آزاد انڈیکسرز کے ذریعہ پیش کیا جاتا ہے۔ اگر ایک نوڈ ناکام ہو جاتا ہے تو یہ ڈیٹا کے ضائع ہونے یا سروس میں رکاوٹ کا خطرہ کم کرتا ہے، جس سے غیر معمولی اپ ٹائم ہوتا ہے (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **سروس کا معیار**: متاثر کن اپ ٹائم کے علاوہ، گراف نیٹ ورک میں ~106ms میڈین کیوری کی رفتار (لیٹنسی) اور میزبان متبادل کے مقابلے میں کیوری کی کامیابی کی اعلی شرحیں شامل ہیں۔ مزید پڑھیں [اس بلاگ] میں (https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -جس طرح آپ نے اپنے بلاکچین نیٹ ورک کو اس کی ڈیسینٹرالائزڈ، سیکورٹی اور شفافیت کے لیے منتخب کیا ہے، اسی طرح گراف نیٹ ورک کا انتخاب انہی اصولوں کی توسیع ہے۔ اپنے ڈیٹا انفراسٹرکچر کو ان اقدار کے ساتھ سیدھ میں لا کر، آپ ایک مربوط، لچکدار، اور اعتماد پر مبنی ترقیاتی ماحول کو یقینی بناتے ہیں. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/ur/supported-network-requirements.mdx b/website/pages/ur/supported-network-requirements.mdx index f4b5a7768f13..445fe81bc504 100644 --- a/website/pages/ur/supported-network-requirements.mdx +++ b/website/pages/ur/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| نیٹ ورک | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| نیٹ ورک | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVMe preferred)
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/ur/tap.mdx b/website/pages/ur/tap.mdx new file mode 100644 index 000000000000..9a4ea21f0670 --- /dev/null +++ b/website/pages/ur/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## جائزہ + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
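+
+To make the relationship between receipts and RAVs concrete, here is a minimal TypeScript sketch of the aggregation idea described above. It is purely illustrative — the field names and the `aggregate` helper are assumptions made for this example and are not the actual `tap_core` or `indexer-rs` API.
+
+```typescript
+// Conceptual sketch only: receipts for one allocation are folded into a RAV
+// whose aggregate value can only grow as newer receipts are added.
+interface Receipt {
+  allocationId: string
+  value: bigint // query fee for a single query
+}
+
+interface Rav {
+  allocationId: string
+  valueAggregate: bigint // running total of everything aggregated so far
+}
+
+function aggregate(allocationId: string, previous: Rav | null, receipts: Receipt[]): Rav {
+  const base = previous ? previous.valueAggregate : 0n
+  const added = receipts
+    .filter((r) => r.allocationId === allocationId)
+    .reduce((sum, r) => sum + r.value, 0n)
+  // Re-aggregating a RAV with newer receipts always yields a RAV of equal or greater value.
+  return { allocationId, valueAggregate: base + added }
+}
+
+const first = aggregate("alloc-1", null, [
+  { allocationId: "alloc-1", value: 100n },
+  { allocationId: "alloc-1", value: 250n },
+])
+const updated = aggregate("alloc-1", first, [{ allocationId: "alloc-1", value: 50n }])
+console.log(first.valueAggregate, updated.valueAggregate) // 350n 400n
+```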
+
+## Blockchain Addresses
+
+### Contracts
+
+| Contract            | Arbitrum Sepolia (421614)                    | Arbitrum Mainnet (42161)                     |
+| ------------------- | -------------------------------------------- | -------------------------------------------- |
+| TAP Verifier        | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` |
+| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` |
+| Escrow              | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` |
+
+### Gateway
+
+| Component  | Edge and Node Mainnet (Arbitrum Sepolia)      | Edge and Node Testnet (Arbitrum Mainnet)      |
+| ---------- | --------------------------------------------- | --------------------------------------------- |
+| Sender     | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467`  | `0xC3dDf37906724732FfD748057FEBe23379b0710D`  |
+| Signers    | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211`  | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE`  |
+| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
+
+### تقاضے
+
+In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can query it via The Graph Network or host it yourself on your own `graph-node`.
+
+- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+
+> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+
+## Migration Guide
+
+### Software versions
+
+| Component       | ورزن        | Image Link                                                                                                                 |
+| --------------- | ----------- | -------------------------------------------------------------------------------------------------------------------------- |
+| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) |
+| indexer-agent   | PR #995     | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80)          |
+| tap-agent       | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6)        |
+
+### Steps
+
+1. **Indexer Agent**
+
+   - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components).
+   - Pass the new `--tap-subgraph-endpoint` argument to activate the new TAP codepaths and enable redeeming of TAP RAVs.
+
+2. **Indexer Service**
+
+   - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+   - Like the older version, Indexer Service can easily be scaled horizontally. It is still stateless.
+
+3. **TAP Agent**
+
+   - Run a _single_ instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
+
+4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +نوٹس: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/vi/about.mdx b/website/pages/vi/about.mdx index 461f029fbd25..b45ddbbc5c39 100644 --- a/website/pages/vi/about.mdx +++ b/website/pages/vi/about.mdx @@ -2,46 +2,66 @@ title: Về The Graph --- -Trang này sẽ giải thích The Graph là gì và cách bạn có thể bắt đầu. - ## What is The Graph? -The Graph is a decentralized protocol for indexing and querying blockchain data. The Graph makes it possible to query data that is difficult to query directly. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -Các dự án với các hợp đồng thông minh phức tạp như [Uniswap](https://uniswap.org/) và các sáng kiến NFT như [Bored Ape Yacht Club](https://boredapeyachtclub.com/) lưu trữ dữ liệu trên chuỗi khối Ethereum, khiến việc đọc bất kỳ thứ gì khác ngoài dữ liệu cơ bản trực tiếp từ chuỗi khối này thực sự khó khăn. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +### How The Graph Functions -**Lập chỉ mục dữ liệu blockchain thực sự rất rất khó.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). 
+- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## Cách thức hoạt động của The Graph +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graph tìm hiểu những gì và cách thức lập chỉ mục dữ liệu Ethereum dựa trên mô tả subgraph, được gọi là bản kê khai subgraph (subgraph manifest). Mô tả subgraph xác định các hợp đồng thông minh quan tâm cho một subgraph, các sự kiện trong các hợp đồng đó cần chú ý và cách ánh xạ dữ liệu sự kiện với dữ liệu mà The Graph sẽ lưu trữ trong cơ sở dữ liệu của nó. +- When creating a subgraph, you need to write a subgraph manifest. -Khi bạn đã viết một `subgraph manifest`, bạn sử dụng Graph CLI để lưu trữ định nghĩa trong IPFS và yêu cầu indexer bắt đầu lập chỉ mục dữ liệu cho subgraph đó. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -Biểu đồ này cung cấp chi tiết hơn về luồng dữ liệu khi một tệp kê khai subgraph đã được triển khai, xử lý các giao dịch Ethereum: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) Quy trình thực hiện theo các bước sau: -1. A dapp adds data to Ethereum through a transaction on a smart contract. -2. Hợp đồng thông minh phát ra một hoặc nhiều sự kiện trong khi xử lý giao dịch. -3. Graph Node liên tục quét Ethereum để tìm các khối mới và dữ liệu cho subgraph của bạn mà chúng có thể chứa. -4. Graph Node tìm các sự kiện Ethereum cho subgraph của bạn trong các khối này và chạy các trình xử lý ánh xạ mà bạn đã cung cấp. Ánh xạ là một mô-đun WASM tạo hoặc cập nhật các thực thể dữ liệu mà Graph Node lưu trữ để đáp ứng với các sự kiện Ethereum. -5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +1. A dapp adds data to Ethereum through a transaction on a smart contract. +2. Hợp đồng thông minh phát ra một hoặc nhiều sự kiện trong khi xử lý giao dịch. +3. Graph Node liên tục quét Ethereum để tìm các khối mới và dữ liệu cho subgraph của bạn mà chúng có thể chứa. +4. Graph Node tìm các sự kiện Ethereum cho subgraph của bạn trong các khối này và chạy các trình xử lý ánh xạ mà bạn đã cung cấp. Ánh xạ là một mô-đun WASM tạo hoặc cập nhật các thực thể dữ liệu mà Graph Node lưu trữ để đáp ứng với các sự kiện Ethereum. +5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. 
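+
+As a concrete illustration of step 5, the sketch below shows a dapp sending a GraphQL query to a Graph Node endpoint with plain `fetch`. The endpoint URL and the `transfers` entity are placeholders — substitute your own subgraph's query URL and schema.
+
+```typescript
+// Illustrative only: replace the endpoint and the queried entity with your own subgraph's.
+async function querySubgraph(): Promise<void> {
+  const endpoint = "https://<your-graph-node-or-gateway>/subgraphs/name/example/my-subgraph"
+  const query = `{ transfers(first: 5) { id from to value } }`
+
+  const response = await fetch(endpoint, {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify({ query }),
+  })
+  const { data } = await response.json()
+  console.log(data) // entities indexed by Graph Node, ready to render in the dapp's UI
+}
+
+querySubgraph()
+```
+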
## Bước tiếp theo -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/vi/arbitrum/arbitrum-faq.mdx b/website/pages/vi/arbitrum/arbitrum-faq.mdx index a36b0103772f..9c12c8816259 100644 --- a/website/pages/vi/arbitrum/arbitrum-faq.mdx +++ b/website/pages/vi/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum FAQ Click [here](#billing-on-arbitrum-faqs) if you would like to skip to the Arbitrum Billing FAQs. -## Why is The Graph implementing an L2 Solution? +## Why did The Graph implement an L2 Solution? -By scaling The Graph on L2, network participants can expect: +By scaling The Graph on L2, network participants can now benefit from: - Upwards of 26x savings on gas fees @@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can expect: - Security inherited from Ethereum -Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers could open and close allocations to index a greater number of subgraphs with greater frequency, developers could deploy and update subgraphs with greater ease, Delegators could delegate GRT with increased frequency, and Curators could add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion. @@ -41,27 +41,21 @@ To take advantage of using The Graph on L2, use this dropdown switcher to toggle ## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now? -There is no immediate action required, however, network participants are encouraged to begin moving to Arbitrum to take advantage of the benefits of L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -Core developer teams are working to create L2 transfer tools that will make it significantly easier to move delegation, curation, and subgraphs to Arbitrum. 
Network participants can expect L2 transfer tools to be available by summer of 2023. +All indexing rewards are now entirely on Arbitrum. -As of April 10th, 2023, 5% of all indexing rewards are being minted on Arbitrum. As network participation increases, and as the Council approves it, indexing rewards will gradually shift from Ethereum to Arbitrum, eventually moving entirely to Arbitrum. - -## If I would like to participate in the network on L2, what should I do? - -Please help [test the network](https://testnet.thegraph.com/explorer) on L2 and report feedback about your experience in [Discord](https://discord.gg/graphprotocol). - -## Are there any risks associated with scaling the network to L2? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Will existing subgraphs on Ethereum continue to work? +## Are existing subgraphs on Ethereum working? -Yes, The Graph Network contracts will operate in parallel on both Ethereum and Arbitrum until moving fully to Arbitrum at a later date. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## Will GRT have a new smart contract deployed on Arbitrum? +## Does GRT have a new smart contract deployed on Arbitrum? Yes, GRT has an additional [smart contract on Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). However, the Ethereum mainnet [GRT contract](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) will remain operational. diff --git a/website/pages/vi/billing.mdx b/website/pages/vi/billing.mdx index 37f9c840d00b..dec5cfdadc12 100644 --- a/website/pages/vi/billing.mdx +++ b/website/pages/vi/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. 
Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. 
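+
+For example, here is that estimate worked through with illustrative numbers (your own traffic figures will differ):
+
+```typescript
+// Rough monthly-query estimate; the visit and query counts below are assumptions.
+const dailyVisits = 5_000
+const queriesPerPageOpen = 10 // queries your most active page makes when it loads
+const daysPerMonth = 30
+
+const estimatedMonthlyQueries = dailyVisits * queriesPerPageOpen * daysPerMonth
+console.log(estimatedMonthlyQueries) // 1,500,000 — in line with the 1M-2M starting range
+```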
@@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/vi/chain-integration-overview.mdx b/website/pages/vi/chain-integration-overview.mdx index 2fe6c2580909..a142f3f817f9 100644 --- a/website/pages/vi/chain-integration-overview.mdx +++ b/website/pages/vi/chain-integration-overview.mdx @@ -6,12 +6,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 1. Technical Integration -- Teams work on a Graph Node integration and Firehose for non-EVM based chains. [Here's how](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. ## Stage 2. Integration Validation -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON RPC or Firehose endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph Indexers test the integration on The Graph's testnet. - Core developers and Indexers monitor stability, performance, and data determinism. @@ -38,7 +38,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval. -### 3. How much time will this process take? +### 3. 
How much time will the process of reaching full protocol support take? The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. @@ -46,4 +46,4 @@ Protocol support for indexing rewards depends on the stakeholders' bandwidth to ### 4. How will priorities be handled? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/vi/cookbook/arweave.mdx b/website/pages/vi/cookbook/arweave.mdx index af52f6bcfe20..b01b9d7665d0 100644 --- a/website/pages/vi/cookbook/arweave.mdx +++ b/website/pages/vi/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/vi/cookbook/base-testnet.mdx b/website/pages/vi/cookbook/base-testnet.mdx index 4aa3b662be8f..70582e0d121e 100644 --- a/website/pages/vi/cookbook/base-testnet.mdx +++ b/website/pages/vi/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ Your subgraph slug is an identifier for your subgraph. The CLI tool will walk yo The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph. - Ánh xạ AssemblyScript (mapping.ts) - Đây là mã dịch dữ liệu từ các nguồn dữ liệu của bạn sang các thực thể được xác định trong lược đồ. -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). 
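+
+To make the three files concrete, here is a minimal AssemblyScript mapping sketch in the style of the generated scaffold. The import paths, event, and entity names are assumptions standing in for what `graph codegen` would produce for a hypothetical ERC-20-style contract — yours will differ.
+
+```typescript
+// Illustrative mapping: translates a Transfer event from the data source into a
+// Transfer entity defined in schema.graphql. All names below are placeholders.
+import { Transfer as TransferEvent } from "../generated/MyContract/MyContract"
+import { Transfer } from "../generated/schema"
+
+export function handleTransfer(event: TransferEvent): void {
+  // Transaction hash + log index gives a unique, deterministic entity ID.
+  let id = event.transaction.hash.toHex() + "-" + event.logIndex.toString()
+  let entity = new Transfer(id)
+  entity.from = event.params.from
+  entity.to = event.params.to
+  entity.value = event.params.value
+  entity.blockNumber = event.block.number
+  entity.save()
+}
+```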
diff --git a/website/pages/vi/cookbook/cosmos.mdx b/website/pages/vi/cookbook/cosmos.mdx index 21a83eaa8baa..7be4dd039524 100644 --- a/website/pages/vi/cookbook/cosmos.mdx +++ b/website/pages/vi/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/vi/cookbook/grafting.mdx b/website/pages/vi/cookbook/grafting.mdx index 3b4f46fe9bd1..c37ae95c28e0 100644 --- a/website/pages/vi/cookbook/grafting.mdx +++ b/website/pages/vi/cookbook/grafting.mdx @@ -22,7 +22,7 @@ For more information, you can check: - [Ghép](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic usecase. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ In this tutorial, we will be covering a basic usecase. We will replace an existi ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. ## Grafting Manifest Definition @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. 
## Additional Resources -If you want more experience with grafting, here's a few examples for popular contracts: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/vi/cookbook/near.mdx b/website/pages/vi/cookbook/near.mdx index bf0c1f2a0d92..2f54699539cb 100644 --- a/website/pages/vi/cookbook/near.mdx +++ b/website/pages/vi/cookbook/near.mdx @@ -37,7 +37,7 @@ There are three aspects of subgraph definition: **schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph#the-graphql-schema). -**AssemblyScript Mappings:** [AssemblyScript code](/developing/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. During subgraph development there are two key commands: @@ -98,7 +98,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/assemblyscript-api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/developing/assemblyscript-api) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/assemblyscript-api#json-api) to allow developers to easily process these logs. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph diff --git a/website/pages/vi/cookbook/subgraph-uncrashable.mdx b/website/pages/vi/cookbook/subgraph-uncrashable.mdx index 989310a3f9a0..0cc91a0fa2c3 100644 --- a/website/pages/vi/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/vi/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Safe Subgraph Code Generator - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. 
This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. These logs can be viewed in the The Graph's hosted service under the 'Logs' section. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. diff --git a/website/pages/vi/cookbook/upgrading-a-subgraph.mdx b/website/pages/vi/cookbook/upgrading-a-subgraph.mdx index 4917df67de8d..81de1ea90ad4 100644 --- a/website/pages/vi/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/vi/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ Make sure **Update Subgraph Details in Explorer** is checked and click on **Save ## Deprecating a Subgraph on The Graph Network -Follow the steps [here](/managing/deprecating-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## Querying a Subgraph + Billing on The Graph Network diff --git a/website/pages/vi/deploying/multiple-networks.mdx b/website/pages/vi/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..dc2b8e533430 --- /dev/null +++ b/website/pages/vi/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## Deploying the subgraph to multiple networks + +In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... 
+} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +This is what your networks config file should look like: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Now we can run one of the following commands: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Using subgraph.yaml template + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +and + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... 
+ "mustache": "^3.1.0" + } +} +``` + +To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Sepolia: +yarn prepare:sepolia && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. + +## Subgraph Studio subgraph archive policy + +A subgraph version in Studio is archived if and only if it meets the following criteria: + +- The version is not published to the network (or pending publish) +- The version was created 45 or more days ago +- The subgraph hasn't been queried in 30 days + +In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. + +Every subgraph affected with this policy has an option to bring the version in question back. + +## Checking subgraph health + +If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/vi/developing/creating-a-subgraph.mdx b/website/pages/vi/developing/creating-a-subgraph.mdx index c5553602a819..df2ed25fc3d8 100644 --- a/website/pages/vi/developing/creating-a-subgraph.mdx +++ b/website/pages/vi/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: Creating a Subgraph --- -A subgraph extracts data from a blockchain, processing it and storing it so that it can be easily queried via GraphQL. 
+This detailed guide provides instructions to successfully create a subgraph. -![Defining a Subgraph](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -Định nghĩa subgraph bao gồm một số tệp: +![Defining a Subgraph](/img/defining-a-subgraph.png) -- `subgraph.yaml`: một tệp YAML chứa tệp kê khai subgraph +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: một lược đồ GraphQL xác định dữ liệu nào được lưu trữ cho subgraph của bạn và cách truy vấn nó qua GraphQL +## Getting Started -- `Ánh xạ AssemblyScript`: Mã [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) dịch từ dữ liệu sự kiện sang các thực thể được xác định trong lược đồ của bạn (ví dụ: `mapping.ts` trong hướng dẫn này) +### Cài đặt Graph CLI -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## Cài đặt Graph CLI +On your local machine, run one of the following commands: -Graph CLI được viết bằng JavaScript và bạn sẽ cần cài đặt `yarn` hoặc `npm` để dùng nó; Chúng ta sẽ giả định rằng bạn đã có yarn trong những các bước sau. +#### Using [npm](https://www.npmjs.com/) -Một khi bạn có `yarn`, cài đặt Graph CLI bằng cách chạy +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Cài đặt bằng yarn:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**Cài đặt bằng npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. 
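As a quick sanity check after installation, you can confirm the CLI is available on your `PATH` (a minor sketch; the reported version will vary):

```sh
graph --version
```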
+## Create a subgraph -## Từ Một Hợp đồng Hiện có +### From an existing contract -Lệnh sau tạo một subgraph lập chỉ mục tất cả các sự kiện của một hợp đồng hiện có. Nó cố gắng lấy ABI hợp đồng từ Etherscan và quay trở lại yêu cầu đường dẫn tệp cục bộ. Nếu thiếu bất kỳ đối số tùy chọn nào, nó sẽ đưa bạn đến một biểu mẫu tương tác. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -`` là ID của subgraph của bạn trong Subgraph Studio, bạn có thể tìm thấy mã này trên trang chi tiết subgraph của mình. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## Từ một Subgraph mẫu +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -Chế độ thứ hai mà `graph init` hỗ trợ là tạo một dự án mới từ một subgraph mẫu. Lệnh sau thực hiện điều này: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Add New dataSources To An Existing Subgraph +## Add new `dataSources` to an existing subgraph -Since `v0.31.0` the `graph-cli` supports adding new dataSources to an existing subgraph through the `graph add` command. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
    [] @@ -78,22 +88,45 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -The `add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option), and will create a new `dataSource` in the same way that `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: + + - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- The contract `address` will be written to the `networks.json` for the relevant network. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: +## Components of a subgraph -- If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. -- If `false`: a new entity & event handler should be created with `${dataSourceName}{EventName}`. +### Tệp kê khai Subgraph -The contract `address` will be written to the `networks.json` for the relevant network. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **Note:** When using the interactive cli, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. +The **subgraph definition** consists of the following files: -## Tệp kê khai Subgraph +- `subgraph.yaml`: Contains the subgraph manifest -Tệp kê khai subgraph `subgraph.yaml` xác định các hợp đồng thông minh lập chỉ mục subgraph của bạn, các sự kiện từ các hợp đồng này cần chú ý đến và cách ánh xạ dữ liệu sự kiện tới các thực thể mà Graph Node lưu trữ và cho phép truy vấn. Bạn có thể tìm thấy thông số kỹ thuật đầy đủ cho các tệp kê khai subgraph [tại đây](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -Đối với subgraph mẫu, `subgraph.yaml` là: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ A single subgraph can index data from multiple smart contracts. Add an entry for Các trình kích hoạt cho nguồn dữ liệu trong một khối được sắp xếp theo quy trình sau: -1. Trình kích hoạt sự kiện và cuộc gọi được sắp xếp đầu tiên theo chỉ mục giao dịch trong khối. -2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. -3. Trình kích hoạt chặn được chạy sau trình kích hoạt sự kiện và cuộc gọi, theo thứ tự chúng được xác định trong tệp kê khai. +1. Trình kích hoạt sự kiện và cuộc gọi được sắp xếp đầu tiên theo chỉ mục giao dịch trong khối. +2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. +3. Trình kích hoạt chặn được chạy sau trình kích hoạt sự kiện và cuộc gọi, theo thứ tự chúng được xác định trong tệp kê khai. These ordering rules are subject to change. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. 
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| Phiên bản | Ghi chú phát hành | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| Phiên bản | Ghi chú phát hành | +|:---------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### Nhận các ABI @@ -442,16 +475,16 @@ For some entity types the `id` is constructed from the id's of two other entitie We support the following scalars in our GraphQL API: -| Loại | Miêu tả | -| --- | --- | -| `Bytes` | Mảng byte, được biểu diễn dưới dạng chuỗi thập lục phân. Thường được sử dụng cho các mã băm và địa chỉ Ethereum. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. 
| -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| Loại | Miêu tả | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Mảng byte, được biểu diễn dưới dạng chuỗi thập lục phân. Thường được sử dụng cho các mã băm và địa chỉ Ethereum. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enums @@ -593,7 +626,7 @@ This more elaborate way of storing many-to-many relationships will result in les #### Adding comments to the schema -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Lưu ý:** Nguồn dữ liệu mới sẽ chỉ xử lý các lệnh gọi và sự kiện cho khối mà nó được tạo và tất cả các khối tiếp theo, nhưng sẽ không xử lý dữ liệu lịch sử, tức là dữ liệu được chứa trong các khối trước đó. -> +> > Nếu các khối trước đó chứa dữ liệu có liên quan đến nguồn dữ liệu mới, tốt nhất là lập chỉ mục dữ liệu đó bằng cách đọc trạng thái hiện tại của hợp đồng và tạo các thực thể đại diện cho trạng thái đó tại thời điểm nguồn dữ liệu mới được tạo. ### Bối cảnh Nguồn Dữ liệu @@ -930,7 +963,7 @@ dataSources: ``` > **Lưu ý:** Khối tạo hợp đồng có thể được nhanh chóng tra cứu trên Etherscan: -> +> > 1. Tìm kiếm hợp đồng bằng cách nhập địa chỉ của nó vào thanh tìm kiếm. > 2. Nhấp vào băm giao dịch tạo trong phần `Contract Creator`. > 3. Tải trang chi tiết giao dịch nơi bạn sẽ tìm thấy khối bắt đầu cho hợp đồng đó. @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. 
`"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. ``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### Create a new handler to process files -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). The CID of the file as a readable string can be accessed via the `dataSource` as follows: diff --git a/website/pages/vi/developing/developer-faqs.mdx b/website/pages/vi/developing/developer-faqs.mdx index 9fd66e6afc9b..87a735748672 100644 --- a/website/pages/vi/developing/developer-faqs.mdx +++ b/website/pages/vi/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: Câu hỏi thường gặp dành cho nhà phát triển --- -## 1. What is a subgraph? +This page summarizes some of the most common questions for developers building on The Graph. -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using the Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available to be queried by subgraph consumers. +## Subgraph Related -## 2. Can I delete my subgraph? +### 1. What is a subgraph? -Không thể xóa các subgraph sau khi chúng được tạo. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Can I change my subgraph name? +### 2. What is the first step to create a subgraph? -Không. Khi một subgraph được tạo, không thể thay đổi tên. Hãy đảm bảo suy nghĩ kỹ về điều này trước khi bạn tạo subgraph của mình để các dapp khác có thể dễ dàng tìm kiếm và nhận dạng được. +To successfully create a subgraph, you will need to install The Graph CLI. 
Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Can I change the GitHub account associated with my subgraph? +### 3. Can I still create a subgraph if my smart contracts don't have events? -Không. Sau khi tạo subgraph, không thể thay đổi tài khoản GitHub được liên kết. Hãy đảm bảo suy nghĩ kỹ về điều này trước khi bạn tạo subgraph của mình. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. Am I still able to create a subgraph if my smart contracts don't have events? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are by far the fastest way to retrieve useful data. +### 4. Can I change the GitHub account associated with my subgraph? -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended, as performance will be significantly slower. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. How do I update a subgraph on mainnet? -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. How are templates different from data sources? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) upfront you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +Bạn phải triển khai lại subgraph, nhưng nếu ID subgraph (mã băm IPFS) không thay đổi, nó sẽ không phải đồng bộ hóa từ đầu. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. 
+ +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +Trong một subgraph, các sự kiện luôn được xử lý theo thứ tự chúng xuất hiện trong các khối, bất kể điều đó có qua nhiều hợp đồng hay không. + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates). -## 8. How do I make sure I'm using the latest version of graph-node for my local deployments? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -Bạn có thể chạy lệnh sau: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. -**LƯU Ý:** docker / docker-compose sẽ luôn sử dụng bất kỳ phiên bản graph-node nào được kéo vào lần đầu tiên bạn chạy nó, vì vậy điều quan trọng là phải làm điều này để đảm bảo bạn được cập nhật phiên bản mới nhất của graph-node. +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. How do I call a contract function or access a public state variable from my subgraph mappings? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +Bạn có thể chạy lệnh sau: -## 11. I want to contribute or add a GitHub issue. Where can I find the open source repositories? 
+```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? Nếu chỉ một thực thể được tạo trong sự kiện và nếu không có gì tốt hơn khả dụng, thì chỉ mục log + băm giao dịch sẽ là duy nhất. Bạn có thể làm xáo trộn chúng bằng cách chuyển đổi nó thành Byte và sau đó chuyển nó qua`crypto.keccak256` nhưng điều này sẽ không làm cho nó độc đáo hơn. -## 13. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 15. Can I delete my subgraph? -Trong một subgraph, các sự kiện luôn được xử lý theo thứ tự chúng xuất hiện trong các khối, bất kể điều đó có qua nhiều hợp đồng hay không. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +You can find the list of the supported networks [here](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Đúng. Bạn có thể thực hiện việc này bằng cách nhập `graph-ts` theo ví dụ bên dưới: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Can I import ethers.js or other JS libraries into my subgraph mappings? - -Hiện tại thì không, vì các ánh xạ được viết bằng AssemblyScript. Một giải pháp thay thế khả thi cho điều này là lưu trữ dữ liệu thô trong các thực thể và thực hiện logic yêu cầu thư viện JS trên máy khách. +## Indexing & Querying Related -## 17. Is it possible to specify what block to start indexing on? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? 
My subgraph is taking a very long time to sync -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Có! Hãy thử lệnh sau, thay thế "organization/subgraphName" bằng tổ chức dưới nó được xuất bản và tên của subgraph của bạn: @@ -102,19 +121,7 @@ Có! Hãy thử lệnh sau, thay thế "organization/subgraphName" bằng tổ c curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. What networks are supported by The Graph? - -You can find the list of the supported networks [here](/developing/supported-networks). - -## 21. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? - -Bạn phải triển khai lại subgraph, nhưng nếu ID subgraph (mã băm IPFS) không thay đổi, nó sẽ không phải đồng bộ hóa từ đầu. - -## 22. Is this possible to use Apollo Federation on top of graph-node? - -Federation chưa được hỗ trợ, mặc dù chúng tôi muốn hỗ trợ nó trong tương lai. Hiện tại, điều bạn có thể làm là sử dụng tính năng ghép lược đồ, trên máy khách hoặc thông qua dịch vụ proxy. - -## 23. Is there a limit to how many objects The Graph can return per query? +### 22. Is there a limit to how many objects The Graph can return per query? By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: @@ -122,24 +129,19 @@ By default, query responses are limited to 100 items per collection. If you want someCollection(first: 1000, skip: ) { ... } ``` -## 24. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 25. Where do I go to find my current subgraph on the hosted service? - -Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? - -The Graph will never charge for the hosted service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. The hosted service was always a temporary step to help get to the decentralized network. 
Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. - -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/vi/developing/graph-ts/api.mdx b/website/pages/vi/developing/graph-ts/api.mdx index 5cf44ec93b76..cff2389b0279 100644 --- a/website/pages/vi/developing/graph-ts/api.mdx +++ b/website/pages/vi/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features. 
+You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript).
+
+Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki).

## API Reference

@@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs:

The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph.

-| Version | Release notes |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
-| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types<br />Added `receipt` field to the Ethereum Event object |
-| 0.0.6 | Added `nonce` field to the Ethereum Transaction object<br />Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))<br />`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object<br />`ethereum.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| Version | Release notes |
+| :-----: | ------------- |
+| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
+| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types<br />Added `receipt` field to the Ethereum Event object |
+| 0.0.6 | Added `nonce` field to the Ethereum Transaction object<br />Added `baseFeePerGas` to the Ethereum Block object |
+| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))<br />`ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
+| 0.0.3 | Added `from` field to the Ethereum Call object<br />`ethereum.call.address` renamed to `ethereum.call.to` |
+| 0.0.2 | Added `input` field to the Ethereum Transaction object |

### Built-in Types

@@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void {

When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters.

-Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID.
+Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used.
+
+> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID.

 #### Loading entities from the store

@@ -268,15 +272,18 @@ if (transfer == null) {
 // Use the Transfer entity as before
 ```

-As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value.
+As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value.

-> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities.
+> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities.

 #### Looking up entities created within a block

 As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types.

-The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a Transaction from some on-chain event, and a later handler wants to access this transaction if it exists. In the case where the transaction does not exist, the subgraph will have to go to the database just to find out that the entity does not exist; if the subgraph author already knows that the entity must have been created in the same block, using loadInBlock avoids this database roundtrip. For some subgraphs, these missed lookups can contribute significantly to the indexing time.
+The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists.
+
+- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
+- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Any other contract that is part of the subgraph can be imported from the generat #### Xử lý các lệnh gọi được hoàn nguyên -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Mã hóa / Giải mã ABI diff --git a/website/pages/vi/developing/supported-networks.mdx b/website/pages/vi/developing/supported-networks.mdx index 2782a2c04b10..2a9585d3213f 100644 --- a/website/pages/vi/developing/supported-networks.mdx +++ b/website/pages/vi/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/vi/developing/unit-testing-framework.mdx b/website/pages/vi/developing/unit-testing-framework.mdx index 064373871b85..1e9c0c794e3f 100644 --- a/website/pages/vi/developing/unit-testing-framework.mdx +++ b/website/pages/vi/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ The log output includes the test run duration. Here's an example: > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -This means you have used `console.log` in your code, which is not supported by AssemblyScript. 
Please consider using the [Logging API](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version. diff --git a/website/pages/vi/glossary.mdx b/website/pages/vi/glossary.mdx index cd24a22fd4d5..2978ecce3561 100644 --- a/website/pages/vi/glossary.mdx +++ b/website/pages/vi/glossary.mdx @@ -10,11 +10,9 @@ title: Glossary - **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **Indexers**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. @@ -24,17 +22,17 @@ title: Glossary - **Indexer's Self Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. 
+- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegators**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curators**: Network participants that identify high-quality subgraphs, and “curate” them (i.e., signal GRT on them) in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. Indexers earn indexing rewards proportional to the signal on a subgraph. We see a correlation between the amount of GRT signalled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. -- **Subgraph Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a subgraph. - **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. @@ -46,11 +44,11 @@ title: Glossary 1. **Active**: An allocation is considered active when it is created on-chain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. 
- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: Glossary - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. 
It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way. @@ -78,10 +76,6 @@ title: Glossary - **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake. -- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network. - -- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2). - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. diff --git a/website/pages/vi/index.json b/website/pages/vi/index.json index d59c9ad93fa6..f8e880b2de89 100644 --- a/website/pages/vi/index.json +++ b/website/pages/vi/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "Tạo một Subgraph", "description": "Sử dụng Studio để tạo các subgraph" - }, - "migrateFromHostedService": { - "title": "Upgrade from the hosted service", - "description": "Upgrading subgraphs to The Graph Network" } }, "networkRoles": { diff --git a/website/pages/vi/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/vi/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..6bdd183f72d5 --- /dev/null +++ b/website/pages/vi/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## Transferring ownership of a subgraph + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. + +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-addres +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2. 
Choose the address that you would like to transfer the subgraph to: + + ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) + +Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: + +![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) + +## Deprecating a subgraph + +Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. + +### Step-by-Step + +To deprecate your subgraph, do the following: + +1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). +2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. +3. Your subgraph will no longer appear in searches on Graph Explorer. + +**Please note the following:** + +- The owner's wallet should call the `deprecateSubgraph` function. +- Curators will not be able to signal on the subgraph anymore. +- Curators that already signaled on the subgraph can withdraw their signal at an average share price. +- Deprecated subgraphs will show an error message. + +> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/vi/mips-faqs.mdx b/website/pages/vi/mips-faqs.mdx index 89bcf6131bd7..4b4a069b7430 100644 --- a/website/pages/vi/mips-faqs.mdx +++ b/website/pages/vi/mips-faqs.mdx @@ -6,10 +6,6 @@ title: MIPs FAQs > Note: the MIPs program is closed as of May 2023. Thank you to all the Indexers who participated! -It's an exciting time to be participating in The Graph ecosystem! During [Graph Day 2022](https://thegraph.com/graph-day/2022/) Yaniv Tal announced the [sunsetting of the hosted service](https://thegraph.com/blog/sunsetting-hosted-service/), a moment The Graph ecosystem has been working towards for many years. - -To support the sunsetting of the hosted service and the migration of all of it's activity to the decentralized network, The Graph Foundation has announced the [Migration Infrastructure Providers (MIPs) program](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program). - The MIPs program is an incentivization program for Indexers to support them with resources to index chains beyond Ethereum mainnet and help The Graph protocol expand the decentralized network into a multi-chain infrastructure layer. The MIPs program has allocated 0.75% of the GRT supply (75M GRT), with 0.5% to reward Indexers who contribute to bootstrapping the network and 0.25% allocated to Network Grants for subgraph developers using multi-chain subgraphs. 
diff --git a/website/pages/vi/network/benefits.mdx b/website/pages/vi/network/benefits.mdx index 048ef59484eb..a3da75477010 100644 --- a/website/pages/vi/network/benefits.mdx +++ b/website/pages/vi/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Cost Comparison | Self Hosted | Mạng The Graph | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $0+ | $0 per month | -| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | -| Cost per query | $0 | $0 | -| Cơ sở hạ tầng | Centralized | Decentralized | -| Geographic redundancy | $750+ per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $750+ | $0 | +| Cost Comparison | Self Hosted | Mạng The Graph | +|:----------------------------:|:---------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $0+ | $0 per month | +| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | +| Cost per query | $0 | $0 | +| Cơ sở hạ tầng | Centralized | Decentralized | +| Geographic redundancy | $750+ per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Cost Comparison | Self Hosted | Mạng The Graph | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $500 per month | $120 per month | -| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~3,000,000 | -| Cost per query | $0 | $0.00004 | -| Cơ sở hạ tầng | Centralized | Decentralized | -| Engineering expense | $200 per hour | Included | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $1,650+ | $120 | +| Cost Comparison | Self Hosted | Mạng The Graph | +|:----------------------------:|:------------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $500 per month | $120 per month | +| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~3,000,000 | +| Cost per query | $0 | $0.00004 | +| Cơ sở hạ tầng | Centralized | Decentralized | +| Engineering expense | $200 per hour | Included | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Cost Comparison | Self Hosted | Mạng The Graph | -| :-: | :-: | :-: | -| Monthly server cost\* | $1100 per month, per node | $0 | -| Query costs | $4000 | $1,200 per month | -| Number of nodes needed | 10 | Not applicable | -| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~30,000,000 | -| Cost per query | $0 | $0.00004 | -| Cơ sở 
hạ tầng | Centralized | Decentralized | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $11,000+ | $1,200 | +| Cost Comparison | Self Hosted | Mạng The Graph | +|:----------------------------:|:-------------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $1100 per month, per node | $0 | +| Query costs | $4000 | $1,200 per month | +| Number of nodes needed | 10 | Not applicable | +| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~30,000,000 | +| Cost per query | $0 | $0.00004 | +| Cơ sở hạ tầng | Centralized | Decentralized | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $11,000+ | $1,200 | \*including costs for backup: $50-$100 per month diff --git a/website/pages/vi/network/curating.mdx b/website/pages/vi/network/curating.mdx index c349ce0ccee3..cee61a8c8698 100644 --- a/website/pages/vi/network/curating.mdx +++ b/website/pages/vi/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. 
If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Signaling on a specific version is especially useful when one subgraph is used b Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Những rủi ro 1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. Một subgraph có thể thất bại do một lỗi. Một subgraph thất bại không tích lũy phí truy vấn. Do đó, bạn sẽ phải đợi cho đến khi nhà phát triển sửa lỗi và triển khai phiên bản mới. - Nếu bạn đã đăng ký phiên bản mới nhất của một subgraph, các cổ phần của bạn sẽ tự động chuyển sang phiên bản mới đó. Điều này sẽ phát sinh một khoản thuế curation 0.5%. @@ -65,7 +65,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dApp’s data needs. 
A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result: -- Curator có thể sử dụng sự hiểu biết của họ về mạng để thử và dự đoán cách một subgraph riêng lẻ có thể tạo ra khối lượng truy vấn cao hơn hoặc thấp hơn trong tương lai +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. What’s the cost of updating a subgraph? @@ -78,50 +78,14 @@ It’s suggested that you don’t update your subgraphs too frequently. See the ### 5. Tôi có thể bán cổ phần curation của mình không? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. 
For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Căn bản về Bonding Curve - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Giá mỗi cổ phần](/img/price-per-share.png) - -Kết quả là, giá tăng tuyến tính, có nghĩa là giá mua cổ phần sẽ đắt hơn theo thời gian. Dưới đây là một ví dụ về ý của chúng tôi, hãy xem đường cong liên kết bên dưới: - -![Đường cong liên kết](/img/bonding-curve.png) - -Hãy xem xét chúng ta có hai người curator cùng đúc cổ phần của một subgraph: - -- Curator A là người đầu tiên phát tín hiệu trên subgraph này. Bằng cách thêm 120,000 GRT vào đường cong, anh ấy có thể kiếm được 2000 cổ phần. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Vì cả hai người curator này đều nắm giữ một nửa tổng số cổ phần curate, họ sẽ nhận được một số tiền bản quyền của curator bằng nhau. -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- Người curator còn lại bây giờ sẽ nhận được tất cả tiền bản quyền của curator cho subgraph đó. Nếu đốt cổ phần để rút GRT, anh ấy sẽ nhận được 120,000 GRT. -- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signaling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -Trong trường hợp của The Graph, [Triển khai của Bancor về công thức đường cong liên kết](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) được sử dụng. - Vẫn còn thắc mắc? Xem video hướng dẫn Curation của chúng tôi bên dưới: diff --git a/website/pages/vi/network/delegating.mdx b/website/pages/vi/network/delegating.mdx index 94555fdaaf3b..d0bd86971cf3 100644 --- a/website/pages/vi/network/delegating.mdx +++ b/website/pages/vi/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegating --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. 
+ +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Hướng dẫn Delegator -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,15 +34,19 @@ Lưu ý quan trọng là mỗi lần bạn ủy quyền, bạn sẽ bị tính p Delegators cannot be slashed for bad behavior, but there is a tax on Delegators to disincentivize poor decision-making that could harm the integrity of the network. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### Khoảng thời gian bỏ ràng buộc ủy quyền Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
    ![Delegation unbonding](/img/Delegation-Unbonding.png) Lưu ý khoản phí 0.5% trong Giao diện người dùng Ủy quyền, cũng @@ -41,47 +55,65 @@ Another thing to consider is how to choose an Indexer wisely. If you choose an I ### Chọn một indexer đáng tin cậy với phần thưởng hợp lý cho delegator -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
    ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) The top Indexer is giving Delegators 90% of the rewards. The middle ones are giving Delegators 20%. The bottom ones are giving Delegators roughly 83%.
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant.
+- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects.
+
+As you can see, in order to choose the right Indexer, you must consider multiple things.

-As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
+- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently.
+- Many Indexers are very active in Discord and will be happy to answer your questions.
+- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.

-### Calculating a Delegator's Expected Return
+## Calculating a Delegator's Expected Return

-A Delegator must consider a lot of factors when determining the return. These include:
+A Delegator must consider the following factors to determine a return:

-- A technically minded Delegator can also look at how the Indexer uses the Delegated tokens available to them. If an Indexer is not allocating all the available tokens, they will not earn the maximum profit they could for themselves or their Delegators.
-- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days.
+- Consider an Indexer's ability to use the Delegated tokens available to them.
+  - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators.
+- Pay attention to the first few days of delegating.
+  - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low.

 ### Considering the Query Fee Cut and Indexing Reward Cut

-As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the Delegators are getting.
The formula is: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Delegation Image 3](/img/Delegation-Reward-Formula.png) ### Xem xét Delegation pool của Indexer -Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the Delegator a share of the pool: +Delegators should consider the proportion of the Delegation Pool they own. -![Share formula](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Share formula](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Xem xét Delegation Capacity (Năng lực Ủy quyền) -Another thing to consider is the delegation capacity. Currently, the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -89,16 +121,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### MetaMask "Pending Transaction" Bug -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. 
+ +#### Ví dụ -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. -For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will only be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## Video guide for the network UI +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/vi/network/developing.mdx b/website/pages/vi/network/developing.mdx index 1b76eb94ccca..e7b71039ab2f 100644 --- a/website/pages/vi/network/developing.mdx +++ b/website/pages/vi/network/developing.mdx @@ -2,52 +2,88 @@ title: Developing --- -Developers are the demand side of The Graph ecosystem. Developers build subgraphs and publish them to The Graph Network. Then, they query live subgraphs with GraphQL in order to power their applications. +To start coding right away, go to [Developer Quick Start](/quick-start/). + +## Tổng quan + +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. + +On The Graph, you can: + +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. + +### What is GraphQL? + +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. 
+- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? + +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `subgraph.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). +- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). ## Subgraph Lifecycle -Subgraphs deployed to the network have a defined lifecycle. +Here is a general overview of a subgraph’s lifecycle: -### Build locally +![Subgraph Lifecycle](/img/subgraph-lifecycle.png) -As with all subgraph development, it starts with local development and testing. Developers can use the same local setup whether they are building for The Graph Network, the hosted service or a local Graph Node, leveraging `graph-cli` and `graph-ts` to build their subgraph. Developers are encouraged to use tools such as [Matchstick](https://github.com/LimeChain/matchstick) for unit testing to improve the robustness of their subgraphs. +### Build locally -> There are certain constraints on The Graph Network, in terms of feature and network support. Only subgraphs on [supported networks](/developing/supported-networks) will earn indexing rewards, and subgraphs which fetch data from IPFS are also not eligible. +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Deploy to Subgraph Studio -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. - -### Publish to the Network +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -When the developer is happy with their subgraph, they can publish it to The Graph Network. This is an on-chain action, which registers the subgraph so that it is discoverable by Indexers. Published subgraphs have a corresponding NFT, which is then easily transferable. The published subgraph has associated metadata, which provides other network participants with useful context and information. +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. 
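To make the `mappings` file described in the subgraph files list above more concrete, here is a minimal sketch of an AssemblyScript event handler. The `Token` contract, its `Transfer(from, to, value)` event, and the `Transfer` entity are assumptions for illustration only; in a real project the imports come from the `graph codegen` output for your own subgraph:

```typescript
// Minimal mapping sketch. `Token`, its `Transfer` event, and the `Transfer`
// entity are hypothetical names generated by `graph codegen` from an assumed
// subgraph.yaml and schema.graphql; adjust them to your own subgraph.
import { Transfer as TransferEvent } from '../generated/Token/Token'
import { Transfer } from '../generated/schema'

export function handleTransfer(event: TransferEvent): void {
  // Build a unique entity ID from the transaction hash and log index.
  let id = event.transaction.hash.toHex() + '-' + event.logIndex.toString()
  let transfer = new Transfer(id)
  transfer.from = event.params.from
  transfer.to = event.params.to
  transfer.amount = event.params.value
  // Persist the entity so it can be queried via GraphQL.
  transfer.save()
}
```

A handler like this runs whenever Graph Node encounters the corresponding event, and the saved entities become queryable through the subgraph's GraphQL API.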
-### Signal to Encourage Indexing +### Publish to the Network -Published subgraphs are unlikely to be picked up by Indexers without the addition of signal. Signal is locked GRT associated with a given subgraph, which indicates to Indexers that a given subgraph will receive query volume, and also contributes to the indexing rewards available for processing it. Subgraph developers will generally add signal to their subgraph, in order to encourage indexing. Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Querying & Application Development +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -Once a subgraph has been processed by Indexers and is available for querying, developers can start to use the subgraph in their applications. Developers query subgraphs via a gateway, which forwards their queries to an Indexer who has processed the subgraph, paying query fees in GRT. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Updating Subgraphs +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### Querying & Application Development -Once the Subgraph Developer is ready to update, they can initiate a transaction to point their subgraph at the new version. Updating the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. 
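Under the hood, querying a published subgraph from an application is an ordinary GraphQL-over-HTTP request to the gateway, authenticated with an API key. The sketch below is illustrative only: the API key, subgraph ID, and the entity queried are placeholders, and the query must match the target subgraph's own schema.

```bash
# Hypothetical example: substitute an API key and subgraph ID from Subgraph Studio
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "{ tokens(first: 5) { id } }"}' \
  "https://gateway.thegraph.com/api/<API_KEY>/subgraphs/id/<SUBGRAPH_ID>"
```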
+Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Deprecating Subgraphs +Learn more about [querying subgraphs](/querying/querying-the-graph/). -At some point a developer may decide that they no longer need a published subgraph. At that point they may deprecate the subgraph, which returns any signalled GRT to the Curators. +### Updating Subgraphs -### Diverse Developer Roles +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Some developers will engage with the full subgraph lifecycle on the network, publishing, querying and iterating on their own subgraphs. Some may be focused on subgraph development, building open APIs which others can build on. Some may be application focused, querying subgraphs deployed by others. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Developers and Network Economics +### Deprecating & Transferring Subgraphs -Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is updated. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/vi/network/explorer.mdx b/website/pages/vi/network/explorer.mdx index f3c441eb0ab1..85dded99b651 100644 --- a/website/pages/vi/network/explorer.mdx +++ b/website/pages/vi/network/explorer.mdx @@ -2,21 +2,35 @@ title: Trình khám phá Graph --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgraphs -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. 
+After you finish deploying and publishing your subgraph in Subgraph Studio, click on the "Subgraphs" tab at the top of the navigation bar to access the following:
+
+- Your own finished subgraphs
+- Subgraphs published by others
+- The exact subgraph you want (based on the date created, signal amount, or name).
 
 ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png)
 
-Khi bạn nhấp vào một subgraph, bạn sẽ có thể thử các truy vấn trong playground và có thể tận dụng chi tiết mạng để đưa ra quyết định sáng suốt. Bạn cũng sẽ có thể báo hiệu GRT trên subgraph của riêng bạn hoặc các subgraph của người khác để làm cho các indexer nhận thức được tầm quan trọng và chất lượng của nó. Điều này rất quan trọng vì việc báo hiệu trên một subgraph khuyến khích nó được lập chỉ mục, có nghĩa là nó sẽ xuất hiện trên mạng để cuối cùng phục vụ các truy vấn.
+When you click into a subgraph, you will be able to do the following:
+
+- Test queries in the playground and be able to leverage network details to make informed decisions.
+- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality.
+- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
 
 ![Explorer Image 2](/img/Subgraph-Details.png)
 
-Trên trang chuyên dụng của mỗi subgraph, một số chi tiết được hiển thị. Bao gồm:
+On each subgraph’s dedicated page, you can do the following:
 
 - Báo hiệu / Hủy báo hiệu trên subgraph
 - Xem thêm chi tiết như biểu đồ, ID triển khai hiện tại và siêu dữ liệu khác
@@ -31,26 +45,32 @@ Trên trang chuyên dụng của mỗi subgraph, một số chi tiết được
 
 ## Những người tham gia
 
-Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in-depth review of what each tab means for you.
+This section provides a bird's-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.
 
 ### 1. Indexers
 
 ![Explorer Image 4](/img/Indexer-Pane.png)
 
-Hãy bắt đầu với Indexers (Người lập chỉ mục). Các Indexers là xương sống của giao thức, là những người đóng góp vào các subgraph, lập chỉ mục chúng và phục vụ các truy vấn cho bất kỳ ai sử dụng subgraph. Trong bảng Indexers, bạn sẽ có thể thấy các thông số ủy quyền của Indexer, lượng stake của họ, số lượng họ đã stake cho mỗi subgraph và doanh thu mà họ đã kiếm được từ phí truy vấn và phần thưởng indexing. Đi sâu hơn:
+Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
 
-- Phần Cắt Phí Truy vấn - là % hoàn phí truy vấn mà Indexer giữ lại khi ăn chia với Delegators
-- Phần Cắt Thưởng Hiệu quả - phần thưởng indexing được áp dụng cho nhóm ủy quyền (delegation pool). Nếu là âm, điều đó có nghĩa là Indexer đang cho đi một phần phần thưởng của họ. Nếu là dương, điều đó có nghĩa là Indexer đang giữ lại một số phần thưởng của họ
-- Cooldown Remaining (Thời gian chờ còn lại) - thời gian còn lại cho đến khi Indexer có thể thay đổi các thông số ủy quyền ở trên.
Thời gian chờ Cooldown được Indexers thiết lập khi họ cập nhật thông số ủy quyền của mình -- Được sở hữu - Đây là tiền stake Indexer đã nạp vào, có thể bị phạt cắt giảm (slashed) nếu có hành vi độc hại hoặc không chính xác -- Được ủy quyền - Lượng stake từ các Delegator có thể được Indexer phân bổ, nhưng không thể bị phạt cắt giảm -- Được phân bổ - phần stake mà Indexers đang tích cực phân bổ cho các subgraph mà họ đang lập chỉ mục -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. -- Phí Truy vấn - đây là tổng số phí mà người dùng cuối đã trả cho các truy vấn từ Indexer đến hiện tại +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Thưởng Indexer - đây là tổng phần thưởng indexer mà Indexer và các Delegator của họ kiếm được cho đến hiện tại. Phần thưởng Indexer được trả thông qua việc phát hành GRT. -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking on the right-hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. To learn more about how to become an Indexer, you can take a look at the [official documentation](/network/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ To learn more about how to become an Indexer, you can take a look at the [offici ### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. 
+Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+
+- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
+  - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+  - The bonding curve incentivizes Curators to curate the highest quality data sources.
 
-Curators có thể là các thành viên cộng đồng, người tiêu dùng dữ liệu hoặc thậm chí là nhà phát triển subgraph, những người báo hiệu trên subgraph của chính họ bằng cách nạp token GRT vào một đường cong liên kết. Bằng cách nạp GRT, Curator đúc ra cổ phần curation của một subgraph. Kết quả là, Curators có đủ điều kiện để kiếm một phần phí truy vấn mà subgraph mà họ đã báo hiệu tạo ra. Đường cong liên kết khuyến khích Curators quản lý các nguồn dữ liệu chất lượng cao nhất. Bảng Curator trong phần này sẽ cho phép bạn xem:
+In the Curator table listed below, you can see:
 
 - Ngày Curator bắt đầu curate
 - Số GRT đã được nạp
@@ -68,34 +92,36 @@ Curators có thể là các thành viên cộng đồng, người tiêu dùng d
 
 ![Explorer Image 6](/img/Curation-Overview.png)
 
-If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/network/curating)
+If you want to learn more about the Curator role, you can do so by visiting the [official documentation](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/).
 
 ### 3. Delegators
 
-Delegators (Người Ủy quyền) đóng một vai trò quan trọng trong việc duy trì tính bảo mật và phân quyền của Mạng The Graph. Họ tham gia vào mạng bằng cách ủy quyền (tức là "staking") token GRT cho một hoặc nhiều indexer. Không có những Delegator, các Indexer ít có khả năng kiếm được phần thưởng và phí đáng kể. Do đó, Indexer tìm cách thu hút Delegator bằng cách cung cấp cho họ một phần của phần thưởng lập chỉ mục và phí truy vấn mà họ kiếm được.
+Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple Indexers.
 
-Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)!
+- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees.
+- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts.
+- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)!
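Many of these selection variables can also be pulled programmatically from the mainnet network subgraph referenced elsewhere in these docs. The query below is only a sketch: the field names are illustrative assumptions and should be checked against the network subgraph's schema before relying on them.

```graphql
# Illustrative only — verify field names against the network subgraph schema
{
  indexers(first: 10, orderBy: stakedTokens, orderDirection: desc) {
    id
    stakedTokens
    delegatedTokens
    queryFeeCut
    indexingRewardCut
  }
}
```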
![Explorer Image 7](/img/Delegation-Overview.png) -Bảng Delegators sẽ cho phép bạn xem các Delegator đang hoạt động trong cộng đồng, cũng như các chỉ số như: +In the Delegators table you can see the active Delegators in the community and important metrics: - Số lượng Indexers mà một Delegator đang ủy quyền cho - Ủy quyền ban đầu của Delegator - Phần thưởng họ đã tích lũy nhưng chưa rút khỏi giao thức - Phần thưởng đã ghi nhận ra mà họ rút khỏi giao thức - Tổng lượng GRT mà họ hiện có trong giao thức -- Ngày họ ủy quyền lần cuối cùng +- The date they last delegated -If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## Mạng lưới -Trong phần Mạng lưới, bạn sẽ thấy các KPI toàn cầu cũng như khả năng chuyển sang cơ sở từng epoch và phân tích các chỉ số mạng chi tiết hơn. Những chi tiết này sẽ cho bạn biết mạng hoạt động như thế nào theo thời gian. +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. ### Tổng quan -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - Tổng stake mạng hiện tại - Phần chia stake giữa Indexer và các Delegator của họ @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - Các thông số giao thức như phần thưởng curation, tỷ lệ lạm phát,... - Phần thưởng và phí của epoch hiện tại -Một vài chi tiết quan trọng đáng được đề cập: +A few key details to note: -- **Phí truy vấn đại diện cho phí do người tiêu dùng tạo ra**, và chúng có thể được Indexer yêu cầu (hoặc không) sau một khoảng thời gian ít nhất 7 epochs (xem bên dưới) sau khi việc phân bổ của họ cho các subgraph đã được đóng lại và dữ liệu mà chúng cung cấp đã được người tiêu dùng xác thực. -- **Phần thưởng Indexing đại diện cho số phần thưởng mà Indexer đã yêu cầu được từ việc phát hành mạng trong epoch đó.** Mặc dù việc phát hành giao thức đã được cố định, nhưng phần thưởng chỉ nhận được sau khi Indexer đóng phân bổ của họ cho các subgraph mà họ đã lập chỉ mục. Do đó, số lượng phần thưởng theo từng epoch khác nhau (nghĩa là trong một số epoch, Indexer có thể đã đóng chung các phân bổ đã mở trong nhiều ngày). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). 
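For readers who prefer to pull these network-level figures directly rather than reading them off the dashboard, similar per-epoch data can be fetched from the network subgraph. The sketch below is illustrative only; the entity and field names are assumptions to verify against that subgraph's schema.

```graphql
# Illustrative only — confirm entity and field names against the network subgraph schema
{
  epoches(first: 5, orderBy: startBlock, orderDirection: desc) {
    id
    startBlock
    endBlock
    totalQueryFees
    totalRewards
  }
}
```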
![Explorer Image 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ In the Epochs section, you can analyze on a per-epoch basis, metrics such as: - Epoch đang hoạt động là epoch mà Indexer hiện đang phân bổ cổ phần và thu phí truy vấn - Epoch đang giải quyết là những epoch mà các kênh trạng thái đang được giải quyết. Điều này có nghĩa là Indexers có thể bị phạt cắt giảm nếu người tiêu dùng công khai tranh chấp chống lại họ. - Epoch đang phân phối là epoch trong đó các kênh trạng thái cho các epoch đang được giải quyết và Indexer có thể yêu cầu hoàn phí truy vấn của họ. - - Epoch được hoàn tất là những epoch không còn khoản hoàn phí truy vấn nào để Indexer yêu cầu, do đó sẽ được hoàn thiện. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![Explorer Image 9](/img/Epoch-Stats.png) ## Hồ sơ Người dùng của bạn -Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Tổng quan Hồ sơ -Đây là nơi bạn có thể xem bất kỳ hành động hiện tại nào bạn đã thực hiện. Đây cũng là nơi bạn có thể tìm thấy thông tin hồ sơ, mô tả và trang web của mình (nếu bạn đã thêm). +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![Explorer Image 10](/img/Profile-Overview.png) ### Tab Subgraphs -Nếu bạn nhấp vào tab Subgraphs, bạn sẽ thấy các subgraph đã xuất bản của mình. Điều này sẽ không bao gồm bất kỳ subgraph nào được triển khai với CLI cho mục đích thử nghiệm - các subgraph sẽ chỉ hiển thị khi chúng được xuất bản lên mạng phi tập trung. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Tab Indexing -Nếu bạn nhấp vào tab Indexing, bạn sẽ tìm thấy một bảng với tất cả các phân bổ hiện hoạt và lịch sử cho các subgraph, cũng như các biểu đồ mà bạn có thể phân tích và xem hiệu suất trước đây của mình với tư cách là Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. Phần này cũng sẽ bao gồm thông tin chi tiết về phần thưởng Indexer ròng của bạn và phí truy vấn ròng. Bạn sẽ thấy các số liệu sau: @@ -158,7 +189,9 @@ Phần này cũng sẽ bao gồm thông tin chi tiết về phần thưởng Ind ### Tab Delegating -Delegator rất quan trọng đối với Mạng The Graph. Một Delegator phải sử dụng kiến thức của họ để chọn một Indexer sẽ mang lại lợi nhuận lành mạnh từ các phần thưởng. Tại đây, bạn có thể tìm thấy thông tin chi tiết về các ủy quyền đang hoạt động và trong lịch sử của mình, cùng với các chỉ số của Indexer mà bạn đã ủy quyền. +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. 
+ +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. Trong nửa đầu của trang, bạn có thể thấy biểu đồ ủy quyền của mình, cũng như biểu đồ chỉ có phần thưởng. Ở bên trái, bạn có thể thấy các KPI phản ánh các chỉ số ủy quyền hiện tại của bạn. diff --git a/website/pages/vi/network/indexing.mdx b/website/pages/vi/network/indexing.mdx index fca0bd12028a..6eeb274c4446 100644 --- a/website/pages/vi/network/indexing.mdx +++ b/website/pages/vi/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Nhiều trang tổng quan (dashboard) do cộng đồng tạo bao gồm các giá trị phần thưởng đang chờ xử lý và bạn có thể dễ dàng kiểm tra chúng theo cách thủ công bằng cách làm theo các bước sau: -1. Truy vấn [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) để nhận ID cho tất cả phần phân bổ đang hoạt động: +1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -112,12 +112,12 @@ Indexer có thể tự phân biệt bản thân bằng cách áp dụng các k - **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. - **Lớn** - Được chuẩn bị để index tất cả các subgraph hiện đang được sử dụng và phục vụ các yêu cầu cho lưu lượng truy cập liên quan. -| Cài đặt | Postgres
    (CPUs) | Postgres
    (bộ nhớ tính bằng GB) | Postgres
    (đĩa tính bằng TB) | VMs
    (CPUs) | VMs
    (bộ nhớ tính bằng GB) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Nhỏ | 4 | 8 | 1 | 4 | 16 | -| Tiêu chuẩn | 8 | 30 | 1 | 12 | 48 | -| Trung bình | 16 | 64 | 2 | 32 | 64 | -| Lớn | 72 | 468 | 3.5 | 48 | 184 | +| Cài đặt | Postgres
    (CPUs) | Postgres
    (bộ nhớ tính bằng GB) | Postgres
    (đĩa tính bằng TB) | VMs
    (CPUs) | VMs
    (bộ nhớ tính bằng GB) | +| ----------- |:--------------------------:|:-----------------------------------------:|:--------------------------------------:|:---------------------:|:------------------------------------:| +| Nhỏ | 4 | 8 | 1 | 4 | 16 | +| Tiêu chuẩn | 8 | 30 | 1 | 12 | 48 | +| Trung bình | 16 | 64 | 2 | 32 | 64 | +| Lớn | 72 | 468 | 3.5 | 48 | 184 | ### What are some basic security precautions an Indexer should take? @@ -149,20 +149,20 @@ Lưu ý: Để hỗ trợ mở rộng quy mô nhanh, bạn nên tách các mối #### Graph Node -| Cổng | Mục đích | Tuyến | Đối số CLI | Biến môi trường | -| --- | --- | --- | --- | --- | -| 8000 | Máy chủ GraphQL HTTP
    (cho các truy vấn subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (cho các đăng ký subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (để quản lý triển khai) | / | --admin-port | - | -| 8030 | API trạng thái lập chỉ mục Subgraph | /graphql | --index-node-port | - | -| 8040 | Số liệu Prometheus | /metrics | --metrics-port | - | +| Cổng | Mục đích | Tuyến | Đối số CLI | Biến môi trường | +| ---- | ----------------------------------------------------------- | ---------------------------------------------------- | ----------------- | --------------- | +| 8000 | Máy chủ GraphQL HTTP
    (cho các truy vấn subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (cho các đăng ký subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (để quản lý triển khai) | / | --admin-port | - | +| 8030 | API trạng thái lập chỉ mục Subgraph | /graphql | --index-node-port | - | +| 8040 | Số liệu Prometheus | /metrics | --metrics-port | - | #### Dịch vụ Indexer -| Cổng | Mục đích | Tuyến | Đối số CLI | Biến môi trường | -| --- | --- | --- | --- | --- | -| 7600 | Máy chủ GraphQL HTTP
    (cho các truy vấn subgraph có trả phí) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Số liệu Prometheus | /metrics | --metrics-port | - | +| Cổng | Mục đích | Tuyến | Đối số CLI | Biến môi trường | +| ---- | ---------------------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | Máy chủ GraphQL HTTP
    (cho các truy vấn subgraph có trả phí) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Số liệu Prometheus | /metrics | --metrics-port | - | #### Đại lý Indexer @@ -545,7 +545,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additonal argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Queue allocation action diff --git a/website/pages/vi/network/overview.mdx b/website/pages/vi/network/overview.mdx index bfdd5a7ea294..0779d9a6cb00 100644 --- a/website/pages/vi/network/overview.mdx +++ b/website/pages/vi/network/overview.mdx @@ -2,14 +2,20 @@ title: Network Overview --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Tổng quan +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Token Economics](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20 used to allocate resources in the network. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/vi/new-chain-integration.mdx b/website/pages/vi/new-chain-integration.mdx index 35b2bc7c8b4a..534d6701efdb 100644 --- a/website/pages/vi/new-chain-integration.mdx +++ b/website/pages/vi/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integrating New Networks +title: New Chain Integration --- -Graph Node can currently index data from the following chain types: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via a [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via a [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via an [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -If you are interested in any of those chains, integration is a matter of Graph Node configuration and testing. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. For more information, refer to [Testing an EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Testing an EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Difference between EVM JSON-RPC & Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -While the two are suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](substreams/), like building [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). 
In addition, Firehose allows for improved indexing speeds when compared to JSON-RPC. +### 2. Firehose Integration -New EVM chain integrators may also consider the Firehose-based approach, given the benefits of substreams and its massive parallelized indexing capabilities. Supporting both allows developers to choose between building substreams or subgraphs for the new chain. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **NOTE**: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that eth_calls are [not a good practice for developers](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Testing an EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON RPC methods: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. 
-- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(optionally required for Graph Node to support call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graph Node Configuration +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Start by preparing your local environment** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graph Node Configuration + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON RPC compliant URL - > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ -**Test the integration by locally deploying a subgraph** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Create a simple example subgraph. Some options are below: - 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point - 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. 
It must remain `ethereum` even if the network name is different. -Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integrating a new Firehose-enabled chain +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Create a simple example subgraph. Some options are below: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. -Integrating a new chain is also possible using the Firehose approach. This is currently the best option for non-EVM chains and a requirement for substreams support. Additional documentation focuses on how Firehose works, adding Firehose support for a new chain and integrating it with Graph Node. Recommended docs for integrators: +## Substreams-powered Subgraphs -1. [General docs on Firehose](firehose/) -2. [Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrating Graph Node with a new chain via Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/vi/operating-graph-node.mdx b/website/pages/vi/operating-graph-node.mdx index 82cebd402554..9185043e72a4 100644 --- a/website/pages/vi/operating-graph-node.mdx +++ b/website/pages/vi/operating-graph-node.mdx @@ -77,13 +77,13 @@ A complete Kubernetes example configuration can be found in the [indexer reposit When it is running Graph Node exposes the following ports: -| Cổng | Mục đích | Tuyến | Đối số CLI | Biến môi trường | -| --- | --- | --- | --- | --- | -| 8000 | Máy chủ GraphQL HTTP
    (cho các truy vấn subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (cho các đăng ký subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (để quản lý triển khai) | / | --admin-port | - | -| 8030 | API trạng thái lập chỉ mục Subgraph | /graphql | --index-node-port | - | -| 8040 | Số liệu Prometheus | /metrics | --metrics-port | - | +| Cổng | Mục đích | Tuyến | Đối số CLI | Biến môi trường | +| ---- | ----------------------------------------------------------- | ---------------------------------------------------- | ----------------- | --------------- | +| 8000 | Máy chủ GraphQL HTTP
    (cho các truy vấn subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (cho các đăng ký subgraph) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (để quản lý triển khai) | / | --admin-port | - | +| 8030 | API trạng thái lập chỉ mục Subgraph | /graphql | --index-node-port | - | +| 8040 | Số liệu Prometheus | /metrics | --metrics-port | - | > **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. diff --git a/website/pages/vi/querying/graphql-api.mdx b/website/pages/vi/querying/graphql-api.mdx index 8e1257c0b74b..65d0a40a7bd4 100644 --- a/website/pages/vi/querying/graphql-api.mdx +++ b/website/pages/vi/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Các truy vấn +## What is GraphQL? -Trong lược đồ subgraph của bạn, bạn xác định các loại được gọi là `Entities`. Với mỗi loại `Entity`, một trường `entity` và `entities` sẽ được tạo ở loại `Query` cấp cao nhất. Lưu ý là `query` không cần phải được bao gồm ở đầu truy vấn `graphql` khi sử dụng The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Examples @@ -21,7 +29,7 @@ Truy vấn cho một thực thể `Token` được xác định trong lược đ } ``` -> **Note:** When querying for a single entity, the `id` field is required, and it must be a string. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Query all `Token` entities: @@ -36,7 +44,10 @@ Query all `Token` entities: ### Sorting -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### Ví dụ @@ -53,7 +64,7 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. -In the following example, we sort the tokens by the name of their owner: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ In the following example, we sort the tokens by the name of their owner: ### Pagination -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. - -Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. 
+When querying a collection, it's best to: -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### Example using `first` @@ -106,7 +118,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect #### Example using `first` and `id_ge` -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtering -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Example using `where` @@ -155,7 +168,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### Example for block filtering -You can also filter entities by the `_change_block(number_gte: Int)` - this filters entities which were updated in or after the specified block. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). @@ -193,7 +206,7 @@ As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/release ##### `AND` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. 
```graphql { @@ -208,7 +221,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ``` > **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ##### `OR` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. 
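For instance, pinning a query to a specific historical block number takes the following shape; this sketch reuses the placeholder `challenges` entity from the filtering examples above rather than any particular subgraph's schema.

```graphql
{
  challenges(block: { number: 8000000 }, first: 5) {
    id
    outcome
  }
}
```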
#### Ví dụ @@ -322,12 +335,12 @@ Fulltext search queries have one required field, `text`, for supplying search te Fulltext search operators: -| Biểu tượng | Toán tử | Miêu tả | -| --- | --- | --- | -| `&` | `And` | Để kết hợp nhiều cụm từ tìm kiếm thành một bộ lọc cho các thực thể bao gồm tất cả các cụm từ được cung cấp | -| | | `Or` | Các truy vấn có nhiều cụm từ tìm kiếm được phân tách bằng toán tử hoặc sẽ trả về tất cả các thực thể có kết quả khớp với bất kỳ cụm từ nào được cung cấp | -| `<->` | `Follow by` | Chỉ định khoảng cách giữa hai từ. | -| `:*` | `Prefix` | Sử dụng cụm từ tìm kiếm tiền tố để tìm các từ có tiền tố khớp với nhau (yêu cầu 2 ký tự.) | +| Biểu tượng | Toán tử | Miêu tả | +| ----------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `&` | `And` | Để kết hợp nhiều cụm từ tìm kiếm thành một bộ lọc cho các thực thể bao gồm tất cả các cụm từ được cung cấp | +| | | `Or` | Các truy vấn có nhiều cụm từ tìm kiếm được phân tách bằng toán tử hoặc sẽ trả về tất cả các thực thể có kết quả khớp với bất kỳ cụm từ nào được cung cấp | +| `<->` | `Follow by` | Chỉ định khoảng cách giữa hai từ. | +| `:*` | `Prefix` | Sử dụng cụm từ tìm kiếm tiền tố để tìm các từ có tiền tố khớp với nhau (yêu cầu 2 ký tự.) | #### Examples @@ -376,11 +389,11 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 ## Lược đồ -The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### Entities diff --git a/website/pages/vi/querying/querying-best-practices.mdx b/website/pages/vi/querying/querying-best-practices.mdx index 32d1415b20fa..5654cf9e23a5 100644 --- a/website/pages/vi/querying/querying-best-practices.mdx +++ b/website/pages/vi/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: Querying Best Practices --- -The Graph provides a decentralized way to query data from blockchains. +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. 
-The Graph network's data is exposed through a GraphQL API, making it easier to query data with the GraphQL language. - -This page will guide you through the essential GraphQL language rules and GraphQL queries best practices. +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQL is a language and set of conventions that transport over HTTP. It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). -However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), we recommend you to use our `graph-client` that supports unique features such as: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() More GraphQL client alternatives are covered in ["Querying from an Application"](/querying/querying-from-an-application). -Now that we covered the basic rules of GraphQL queries syntax, let's now look at the best practices of GraphQL query writing. - --- ## Best Practices @@ -164,11 +160,11 @@ Doing so brings **many advantages**: - **Variables can be cached** at server-level - **Queries can be statically analyzed by tools** (more on this in the following sections) -**Note: How to include fields conditionally in static queries** +### How to include fields conditionally in static queries -We might want to include the `owner` field only on a particular condition. +You might want to include the `owner` field only on a particular condition. -For this, we can leverage the `@include(if:...)` directive as follows: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Note: The opposite directive is `@skip(if: ...)`. +> Note: The opposite directive is `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL became famous for its "Ask for what you want" tagline. For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. -When querying GraphQL APIs, always think of querying only the fields that will be actually used. - -A common cause of over-fetching is collections of entities. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. Queries should therefore almost always set first explicitly, and make sure they only fetch as many entities as they actually need. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- When querying GraphQL APIs, always think of querying only the fields that will be actually used. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. 
For example, in the following query: @@ -337,8 +332,8 @@ query { Such repeated fields (`id`, `active`, `status`) bring many issues: -- harder to read for more extensive queries -- when using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. A refactored version of the query would be the following: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) but also will result in better TypeScript types generation. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). ### GraphQL Fragment do's and don'ts -**Fragment base must be a type** +### Fragment base must be a type A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. -**How to spread a Fragment** +#### How to spread a Fragment Fragments are defined on specific types and should be used accordingly in queries. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { It is not possible to spread a fragment of type `Vote` here. -**Define Fragment as an atomic business unit of data** +#### Define Fragment as an atomic business unit of data -GraphQL Fragment must be defined based on their usage. +GraphQL `Fragment`s must be defined based on their usage. For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. -Here is a rule of thumb for using Fragment: +Here is a rule of thumb for using fragments: -- when fields of the same type are repeated in a query, group them in a Fragment -- when similar but not the same fields are repeated, create multiple fragments, ex: +- When fields of the same type are repeated in a query, group them in a `Fragment`. +- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## The essential tools +## The Essential Tools ### GraphQL web-based explorers @@ -473,11 +468,11 @@ This will allow you to **catch errors without even testing queries** on the play The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets -- go to definition for fragments and input types +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. 
@@ -485,9 +480,9 @@ If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketp The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -More information on this [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) that showcases all the plugin's main features. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/vi/quick-start.mdx b/website/pages/vi/quick-start.mdx index d091de61071e..716f1b8d241e 100644 --- a/website/pages/vi/quick-start.mdx +++ b/website/pages/vi/quick-start.mdx @@ -2,24 +2,18 @@ title: Bắt đầu nhanh --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Ensure that your subgraph will be indexing data from a [supported network](/developing/supported-networks). - -This guide is written assuming that you have: +## Prerequisites for this guide - A crypto wallet -- A smart contract address on the network of your choice - -## 1. Create a subgraph on Subgraph Studio - -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Install the Graph CLI +### 1. Cài đặt Graph CLI -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. On your local machine, run one of the following commands: @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. 
Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -When you initialize your subgraph, the CLI tool will ask you for the following information: +When you initialize your subgraph, the CLI will ask you for the following information: -- Protocol: choose the protocol your subgraph will be indexing data from -- Subgraph slug: create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- Directory to create the subgraph in: choose your local directory -- Ethereum network(optional): you may need to specify which EVM-compatible network your subgraph will be indexing data from -- Contract address: Locate the smart contract address you’d like to query data from -- ABI: If the ABI is not autopopulated, you will need to input it manually as a JSON file -- Start Block: it is suggested that you input the start block to save time while your subgraph indexes blockchain data. You can locate the start block by finding the block where your contract was deployed. -- Contract Name: input the name of your contract -- Index contract events as entities: it is suggested that you set this to true as it will automatically add mappings to your subgraph for every emitted event -- Add another contract(optional): you can add another contract +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. See the following screenshot for an example for what to expect when initializing your subgraph: ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -The previous commands create a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. 
-- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -Once your subgraph is written, run the following commands: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. Once your subgraph is written, run the following commands: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- Xác thực và triển khai subgraph của bạn. Bạn có thể tìm thấy khóa triển khai trên trang Subgraph trong Subgraph Studio. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. Test your subgraph - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -The logs will tell you if there are any errors with your subgraph. The logs of an operational subgraph will look like this: - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. 
Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -To save on gas costs, you can curate your subgraph in the same transaction that you published it by selecting this button when you publish your subgraph to The Graph’s decentralized network: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -Now, you can query your subgraph by sending GraphQL queries to your subgraph’s Query URL, which you can find by clicking on the query button. +### 8. 
Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/vi/release-notes/assemblyscript-migration-guide.mdx b/website/pages/vi/release-notes/assemblyscript-migration-guide.mdx index 69c36218d8af..8536b657e78a 100644 --- a/website/pages/vi/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/vi/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - Bạn sẽ cần đổi tên các biến trùng lặp của mình nếu bạn có che biến. - ### So sánh Null - Bằng cách thực hiện nâng cấp trên subgraph của bạn, đôi khi bạn có thể gặp các lỗi như sau: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - Để giải quyết, bạn có thể chỉ cần thay đổi câu lệnh `if` thành một cái gì đó như sau: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - Để khắc phục sự cố này, bạn có thể tạo một biến cho quyền truy cập thuộc tính đó để trình biên dịch có thể thực hiện phép thuật kiểm tra tính nullability: ```typescript diff --git a/website/pages/vi/sps/introduction.mdx b/website/pages/vi/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/vi/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. 
+ +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/vi/sps/triggers-example.mdx b/website/pages/vi/sps/triggers-example.mdx new file mode 100644 index 000000000000..fc422a195436 --- /dev/null +++ b/website/pages/vi/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## Điều kiện tiên quyết + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: + +```ts +import { Protobuf } from 'as-proto/assembly' +import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events' +import { MyTransfer } from '../generated/schema' + +export function handleTriggers(bytes: Uint8Array): void { + const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode) + + for (let i = 0; i < input.data.length; i++) { + const event = input.data[i] + + if (event.transfer != null) { + let entity_id: string = `${event.txnId}-${i}` + const entity = new MyTransfer(entity_id) + entity.amount = event.transfer!.instruction!.amount.toString() + entity.source = event.transfer!.accounts!.source + entity.designation = event.transfer!.accounts!.destination + + if (event.transfer!.accounts!.signer!.single != null) { + entity.signers = [event.transfer!.accounts!.signer!.single!.signer] + } else if (event.transfer!.accounts!.signer!.multisig != null) { + entity.signers = event.transfer!.accounts!.signer!.multisig!.signers + } + entity.save() + } + } +} +``` + +## Step 5: Generate Protobuf Files + +To generate Protobuf objects in AssemblyScript, run the following command: + +```bash +npm run protogen +``` + +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. + +## Conclusion + +You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case. + +For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana). diff --git a/website/pages/vi/sps/triggers.mdx b/website/pages/vi/sps/triggers.mdx new file mode 100644 index 000000000000..ed19635d4768 --- /dev/null +++ b/website/pages/vi/sps/triggers.mdx @@ -0,0 +1,37 @@ +--- +title: Substreams Triggers +--- + +Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework. + +> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container. + +The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. + +```tsx +export function handleTransactions(bytes: Uint8Array): void { + let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).trasanctions // 1. + if (transactions.length == 0) { + log.info('No transactions found', []) + return + } + + for (let i = 0; i < transactions.length; i++) { + // 2. + let transaction = transactions[i] + + let entity = new Transaction(transaction.hash) // 3. + entity.from = transaction.from + entity.to = transaction.to + entity.save() + } +} +``` + +Here's what you’re seeing in the `mappings.ts` file: + +1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object +2. 
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/vi/substreams.mdx b/website/pages/vi/substreams.mdx index 710e110012cc..a838a6924e2f 100644 --- a/website/pages/vi/substreams.mdx +++ b/website/pages/vi/substreams.mdx @@ -4,9 +4,11 @@ title: Substreams ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## How Substreams Works in 4 Steps @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/vi/sunrise.mdx b/website/pages/vi/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/vi/sunrise.mdx +++ b/website/pages/vi/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. 
Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? 
-Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? 
- -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. 
The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/vi/supported-network-requirements.mdx b/website/pages/vi/supported-network-requirements.mdx index 50cd5e88b459..ee47dc38cc29 100644 --- a/website/pages/vi/supported-network-requirements.mdx +++ b/website/pages/vi/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Mạng lưới | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| Mạng lưới | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVMe preferred)<br />
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/vi/tap.mdx b/website/pages/vi/tap.mdx new file mode 100644 index 000000000000..0c5c21395165 --- /dev/null +++ b/website/pages/vi/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## Tổng quan + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
+ +## Blockchain Addresses + +### Contracts + +| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| ------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | +| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | +| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | + +### Gateway + +| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| ---------- | --------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | + +### Requirements + +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. + +- [Graph TAP Aribtrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) + +> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployement. As a result, you have to index it manually. + +## Migration Guide + +### Software versions + +| Component | Phiên bản | Image Link | +| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- | +| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) | +| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) | +| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) | + +### Steps + +1. **Indexer Agent** + + - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). + - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + +2. **Indexer Service** + + - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless. + +3. **TAP Agent** + + - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + +4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Notes: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/yo/about.mdx b/website/pages/yo/about.mdx index 36c6a49f8fbc..9c21bf00d08f 100644 --- a/website/pages/yo/about.mdx +++ b/website/pages/yo/about.mdx @@ -2,46 +2,66 @@ title: About The Graph --- -This page will explain what The Graph is and how you can get started. - ## What is The Graph? -The Graph is a decentralized protocol for indexing and querying blockchain data. The Graph makes it possible to query data that is difficult to query directly. +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. 
+ +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. -Projects with complex smart contracts like [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it really difficult to read anything other than basic data directly from the blockchain. +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. +### How The Graph Functions -**Indexing blockchain data is really, really hard.** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). 
+- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## How The Graph Works +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -The Graph learns what and how to index Ethereum data based on subgraph descriptions, known as the subgraph manifest. The subgraph description defines the smart contracts of interest for a subgraph, the events in those contracts to pay attention to, and how to map event data to data that The Graph will store in its database. +- When creating a subgraph, you need to write a subgraph manifest. -Once you have written a `subgraph manifest`, you use the Graph CLI to store the definition in IPFS and tell the indexer to start indexing data for that subgraph. +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -This diagram gives more detail about the flow of data once a subgraph manifest has been deployed, dealing with Ethereum transactions: +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![A graphic explaining how The Graph uses Graph Node to serve queries to data consumers](/img/graph-dataflow.png) The flow follows these steps: -1. A dapp adds data to Ethereum through a transaction on a smart contract. -2. The smart contract emits one or more events while processing the transaction. -3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. -4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. -5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. +1. A dapp adds data to Ethereum through a transaction on a smart contract. +2. The smart contract emits one or more events while processing the transaction. +3. Graph Node continually scans Ethereum for new blocks and the data for your subgraph they may contain. +4. Graph Node finds Ethereum events for your subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events. +5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats. 
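To make step 5 above more concrete, here is a minimal sketch of the kind of GraphQL query a dapp might send to a Graph Node endpoint. The `tokens` entity, its fields, and the filter value are illustrative assumptions only and do not belong to any particular subgraph.

```graphql
# Hypothetical query for demonstration — the entity and field names are assumptions.
{
  tokens(first: 5, where: { owner: "0x0000000000000000000000000000000000000000" }) {
    id
    owner
    tokenURI
  }
}
```

Graph Node translates a request like this into queries against its underlying data store and returns only the matching entities, which is what keeps the final step of the cycle fast enough for a browser-based dapp.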
## Next Steps -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/yo/arbitrum/arbitrum-faq.mdx b/website/pages/yo/arbitrum/arbitrum-faq.mdx index 17fc65167f06..d35ec825e6f7 100644 --- a/website/pages/yo/arbitrum/arbitrum-faq.mdx +++ b/website/pages/yo/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum FAQ Te [ibi yìí](#ìwé ọ̀rọ̀ lórí àwọn Ìbéèrè loorekoore tí Arbitrum) Tí ó ba fe jásí Àwọn ìbéèrè tí àwọn ènìyàn sáábà máa ń Béèrè Lórí Arbitrum. -## Kíni ìdí tí The Graph ń ṣé ìmúṣẹ ojútùú L2? +## Why did The Graph implement an L2 Solution? -Nípa wí wọn The Graph lórí L2, àwọn olukopa nẹtiwọọki lè nírètí: +By scaling The Graph on L2, network participants can now benefit from: - Ìṣirò ọ̀nà mẹ́rìndínlọ́gbọ̀n itulara lórí owo gaasi @@ -14,7 +14,7 @@ Nípa wí wọn The Graph lórí L2, àwọn olukopa nẹtiwọọki lè nírèt - Ààbò jíjógún lati ọdọ Ethereum -Gidiwọn àwọn àdéhùn jíjáfáfá ìlànà lórí L2 ngbanilaaye àwọn olukopa nẹtiwọọki làti ṣé ajọṣepọ nígbà gbogbo ní ìdíyelé idinku nínú àwọn ìdíyelé gaasi. Fún àpẹẹrẹ, Àwọn alatọka lè kopa ninu ṣíṣí ati títí awọn ipin si atọ́ka nọmba ti o tóbi ju ti àwọn Subgrafu pẹlu igbohunsafẹfẹ nla, àwọn olupilẹṣẹ lè ràn lọ àti imudojuiwọn àwọn Subgrafu pẹ̀lú irọrun ńlá, Àwọn aṣojú lè ṣé aṣojú GRT pẹ̀lú igbohunsafẹfẹ ti o pọ si, ati Awọn olutọpa le ṣafikun tabi yọ ami ifihan kuro si nọmba nla ti awọn Subgrafu-awọn iṣe ti a ti ro tẹlẹ ju iye owo idinamọ lati ṣe nigbagbogbo nitori gaasi. +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. Agbegbe The Graph pinnu lati lọ siwaju pẹlu Arbitrum ni ọdun to kọja lẹhin abajade ti ijiroro [GIP-0031] (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) @@ -41,27 +41,21 @@ Lati lo anfani ti lilo Bola lori L2, lo switcher dropdown yii lati yi laarin aw ## Gẹgẹbi olupilẹṣẹ Subgrafu, Olumulo data, Atọka, Curator, tabi Aṣoju, kini Mo nilo lati ṣe ni bayi? -Ko si igbese lẹsẹkẹsẹ ti o nílò láti ṣe, sibẹsibẹ, awọn olukopa nẹtiwọọki ni iwuri lati bẹrẹ gbigbe si Arbitrum lati lo awọn anfani ti L2 +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. 
-Awọn ẹgbẹ olupilẹṣẹ imojuto n ṣiṣẹ lati ṣẹda awọn irinṣẹ gbigbe L2 ti yoo jẹ ki o rọrun pupọ lati gbe aṣoju, itọju, ati awọn ipin si Arbitrum. Awọn olukopa nẹtiwọki le nireti awọn irinṣẹ gbigbe L2 lati wa nipasẹ ooru ti 2023 +All indexing rewards are now entirely on Arbitrum. -Ni Oṣu Kẹrin Ọjọ Kẹ̀wá tí Ọdun 2023, 5% ti gbogbo awọn ere alatoka ni a nṣe lori Arbitrum. Bi ikopa nẹtiwọọki ti n pọ si, ati bi Igbimọ ṣe fọwọsi rẹ, awọn ere itọka yoo yipada ni kutukutu lati Ethereum si Arbitrum, nikẹhin gbigbe patapata si Arbitrum. - -## Ti èmi yóò fẹ́ láti kópa nínú nẹtiwọki lórí L2, kíni ó yẹ kí ń ṣé? - -Jọwọ ṣe iranlọwọ [ṣe idanwo netiwọki](https://testnet.thegraph.com/explorer) lori L2 ki o jabo esi nipa iriri rẹ ni [Discord](https://discord.gg/graphprotocol). - -## Ṣe awọn ewu eyikeyi wa ni nkan ṣe pẹlu iwọn nẹtiwọọki si L2? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). Ohùn gbogbo tí ní ìdánwò dáradára, àti pé èrò airotẹlẹ kàn wá ní ayé láti ríi dájú wípé ìyípadà ailewu àti ailẹgbẹ. Àwọn àlàyé lè ṣé rí [níbi yìí](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20). -## Ǹjẹ́ àwọn Subgrafu tí o wà tẹlẹ lórí Ethereum yóò tẹsiwaju láti ṣiṣẹ́? +## Are existing subgraphs on Ethereum working? -Bẹẹni, Àwọn àdéhùn Nẹtiwọọki lórí The Graph yóò ṣiṣẹ́ ní afiwe lórí méjèèjì Ethereum àti Arbitrum títí gbígbé ní kíkún sì Arbitrum ni ọjọ mìíràn. +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## Se GRT a ni awọn adehun ọlọgbọn ransogun lori Arbitrum? +## Does GRT have a new smart contract deployed on Arbitrum? Bẹẹni, GRT ni afikun [adehun ọlọgbọn lori Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7). Sibẹsibẹ, mainnet ti Ethereum[adehun GRT](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7) yoo wa ni ṣiṣiṣẹ. diff --git a/website/pages/yo/billing.mdx b/website/pages/yo/billing.mdx index 37f9c840d00b..dec5cfdadc12 100644 --- a/website/pages/yo/billing.mdx +++ b/website/pages/yo/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. You'll be redirected to the wallet selection page. Select your wallet and click on "Connect". 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. 
Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. 
For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/yo/chain-integration-overview.mdx b/website/pages/yo/chain-integration-overview.mdx index 2fe6c2580909..a142f3f817f9 100644 --- a/website/pages/yo/chain-integration-overview.mdx +++ b/website/pages/yo/chain-integration-overview.mdx @@ -6,12 +6,12 @@ A transparent and governance-based integration process was designed for blockcha ## Stage 1. Technical Integration -- Teams work on a Graph Node integration and Firehose for non-EVM based chains. [Here's how](/new-chain-integration/). +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - Teams initiate the protocol integration process by creating a Forum thread [here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71) (New Data Sources sub-category under Governance & GIPs). Using the default Forum template is mandatory. ## Stage 2. Integration Validation -- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON RPC or Firehose endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph Indexers test the integration on The Graph's testnet. - Core developers and Indexers monitor stability, performance, and data determinism. @@ -38,7 +38,7 @@ This process is related to the Subgraph Data Service, applicable only to new Sub This would only impact protocol support for indexing rewards on Substreams-powered subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. 
Anyone can create the PR and GIP; the Foundation would help with Council approval. -### 3. How much time will this process take? +### 3. How much time will the process of reaching full protocol support take? The time to mainnet is expected to be several weeks, varying based on the time of integration development, whether additional research is required, testing and bug fixes, and, as always, the timing of the governance process that requires community feedback. @@ -46,4 +46,4 @@ Protocol support for indexing rewards depends on the stakeholders' bandwidth to ### 4. How will priorities be handled? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. +Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/yo/cookbook/arweave.mdx b/website/pages/yo/cookbook/arweave.mdx index 15538454e3ff..b079da30a013 100644 --- a/website/pages/yo/cookbook/arweave.mdx +++ b/website/pages/yo/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/yo/cookbook/base-testnet.mdx b/website/pages/yo/cookbook/base-testnet.mdx index 3a1d98a44103..0cc5ad365dfd 100644 --- a/website/pages/yo/cookbook/base-testnet.mdx +++ b/website/pages/yo/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ Your subgraph slug is an identifier for your subgraph. The CLI tool will walk yo The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph. - AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). 
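As a rough illustration of the `schema.graphql` file described in the list above, a minimal entity definition might look like the following. The `Transfer` entity and its fields are assumptions made for the sake of example and are not part of the generated scaffold.

```graphql
# Illustrative entity only — the name and fields are assumptions, not scaffold output.
type Transfer @entity {
  id: ID!         # unique identifier for each indexed transfer
  from: Bytes!    # sender address
  to: Bytes!      # recipient address
  amount: BigInt! # value transferred
}
```

Entities declared this way become queryable once the mappings in `mapping.ts` create and save them, which is why the schema and the mappings need to stay in sync when you extend the subgraph.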
diff --git a/website/pages/yo/cookbook/cosmos.mdx b/website/pages/yo/cookbook/cosmos.mdx index 5e9edfd82931..a8c359b3098c 100644 --- a/website/pages/yo/cookbook/cosmos.mdx +++ b/website/pages/yo/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/assemblyscript-api/). +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/yo/cookbook/grafting.mdx b/website/pages/yo/cookbook/grafting.mdx index 6b4f419390d5..6c3b85419af9 100644 --- a/website/pages/yo/cookbook/grafting.mdx +++ b/website/pages/yo/cookbook/grafting.mdx @@ -22,7 +22,7 @@ For more information, you can check: - [Grafting](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -In this tutorial, we will be covering a basic usecase. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ In this tutorial, we will be covering a basic usecase. We will replace an existi ### Why Is This Important? -Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted. ## Grafting Manifest Definition @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. 
## Additional Resources -If you want more experience with grafting, here's a few examples for popular contracts: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/yo/cookbook/near.mdx b/website/pages/yo/cookbook/near.mdx index 28486f8bb0be..a4f27caf6f3c 100644 --- a/website/pages/yo/cookbook/near.mdx +++ b/website/pages/yo/cookbook/near.mdx @@ -37,7 +37,7 @@ There are three aspects of subgraph definition: **schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph#the-graphql-schema). -**AssemblyScript Mappings:** [AssemblyScript code](/developing/assemblyscript-api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. During subgraph development there are two key commands: @@ -98,7 +98,7 @@ Schema definition describes the structure of the resulting subgraph database and The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/). -NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/assemblyscript-api). +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ These types are passed to block & receipt handlers: - Block handlers will receive a `Block` - Receipt handlers will receive a `ReceiptWithOutcome` -Otherwise, the rest of the [AssemblyScript API](/developing/assemblyscript-api) is available to NEAR subgraph developers during mapping execution. +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/assemblyscript-api#json-api) to allow developers to easily process these logs. +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. ## Deploying a NEAR Subgraph diff --git a/website/pages/yo/cookbook/subgraph-uncrashable.mdx b/website/pages/yo/cookbook/subgraph-uncrashable.mdx index 989310a3f9a0..0cc91a0fa2c3 100644 --- a/website/pages/yo/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/yo/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: Safe Subgraph Code Generator - The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. 
This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function. -- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. These logs can be viewed in the The Graph's hosted service under the 'Logs' section. +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command. diff --git a/website/pages/yo/cookbook/upgrading-a-subgraph.mdx b/website/pages/yo/cookbook/upgrading-a-subgraph.mdx index 5502b16d9288..a546f02c0800 100644 --- a/website/pages/yo/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/yo/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ Make sure **Update Subgraph Details in Explorer** is checked and click on **Save ## Deprecating a Subgraph on The Graph Network -Follow the steps [here](/managing/deprecating-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## Querying a Subgraph + Billing on The Graph Network diff --git a/website/pages/yo/deploying/multiple-networks.mdx b/website/pages/yo/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..dc2b8e533430 --- /dev/null +++ b/website/pages/yo/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## Deploying the subgraph to multiple networks + +In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different. + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... 
+} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +This is what your networks config file should look like: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +Now we can run one of the following commands: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### Using subgraph.yaml template + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +and + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... + "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... 
+ "mustache": "^3.1.0" + } +} +``` + +To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Sepolia: +yarn prepare:sepolia && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. + +## Subgraph Studio subgraph archive policy + +A subgraph version in Studio is archived if and only if it meets the following criteria: + +- The version is not published to the network (or pending publish) +- The version was created 45 or more days ago +- The subgraph hasn't been queried in 30 days + +In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. + +Every subgraph affected with this policy has an option to bring the version in question back. + +## Checking subgraph health + +If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators. + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/yo/developing/creating-a-subgraph.mdx b/website/pages/yo/developing/creating-a-subgraph.mdx index b4a2f306d8ed..2a97c2f051a0 100644 --- a/website/pages/yo/developing/creating-a-subgraph.mdx +++ b/website/pages/yo/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: Creating a Subgraph --- -A subgraph extracts data from a blockchain, processing it and storing it so that it can be easily queried via GraphQL. 
+This detailed guide provides instructions to successfully create a subgraph. -![Defining a Subgraph](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -The subgraph definition consists of a few files: +![Defining a Subgraph](/img/defining-a-subgraph.png) -- `subgraph.yaml`: a YAML file containing the subgraph manifest +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: a GraphQL schema that defines what data is stored for your subgraph, and how to query it via GraphQL +## Getting Started -- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates from the event data to the entities defined in your schema (e.g. `mapping.ts` in this tutorial) +### Install the Graph CLI -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## Install the Graph CLI +On your local machine, run one of the following commands: -The Graph CLI is written in JavaScript, and you will need to install either `yarn` or `npm` to use it; it is assumed that you have yarn in what follows. +#### Using [npm](https://www.npmjs.com/) -Once you have `yarn`, install the Graph CLI by running +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**Install with yarn:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**Install with npm:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. 
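+Once the CLI is installed, it can help to confirm that the `graph` command is available before creating a project. A minimal check (a sketch only; the exact output depends on the version you installed):
+
+```bash
+# Print the installed Graph CLI version to confirm the install succeeded
+graph --version
+```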
+## Create a subgraph -## From An Existing Contract +### From an existing contract -The following command creates a subgraph that indexes all events of an existing contract. It attempts to fetch the contract ABI from Etherscan and falls back to requesting a local file path. If any of the optional arguments are missing, it takes you through an interactive form. +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -The `` is the ID of your subgraph in Subgraph Studio, it can be found on your subgraph details page. +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## From An Example Subgraph +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -The second mode `graph init` supports is creating a new project from an example subgraph. The following command does this: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## Add New dataSources To An Existing Subgraph +## Add new `dataSources` to an existing subgraph -Since `v0.31.0` the `graph-cli` supports adding new dataSources to an existing subgraph through the `graph add` command. +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
    [] @@ -78,22 +88,45 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -The `add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option), and will create a new `dataSource` in the same way that `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: + + - If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- The contract `address` will be written to the `networks.json` for the relevant network. + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -The `--merge-entities` option identifies how the developer would like to handle `entity` and `event` name conflicts: +## Components of a subgraph -- If `true`: the new `dataSource` should use existing `eventHandlers` & `entities`. -- If `false`: a new entity & event handler should be created with `${dataSourceName}{EventName}`. +### The Subgraph Manifest -The contract `address` will be written to the `networks.json` for the relevant network. +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **Note:** When using the interactive cli, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. +The **subgraph definition** consists of the following files: -## The Subgraph Manifest +- `subgraph.yaml`: Contains the subgraph manifest -The subgraph manifest `subgraph.yaml` defines the smart contracts your subgraph indexes, which events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -For the example subgraph, `subgraph.yaml` is: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). 
+ +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ A single subgraph can index data from multiple smart contracts. Add an entry for The triggers for a data source within a block are ordered using the following process: -1. Event and call triggers are first ordered by transaction index within the block. -2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. -3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. +1. Event and call triggers are first ordered by transaction index within the block. +2. Event and call triggers within the same transaction are ordered using a convention: event triggers first then call triggers, each type respecting the order they are defined in the manifest. +3. Block triggers are run after event and call triggers, in the order they are defined in the manifest. These ordering rules are subject to change. @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. 
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| Version | Release notes | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | -| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| Version | Release notes | +|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | +| 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### Getting The ABIs @@ -442,16 +475,16 @@ For some entity types the `id` is constructed from the id's of two other entitie We support the following scalars in our GraphQL API: -| Type | Description | -| --- | --- | -| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | -| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | -| `Boolean` | Scalar for `boolean` values. | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. 
| -| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| Type | Description | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. | +| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. | +| `Boolean` | Scalar for `boolean` values. | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. | +| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### Enums @@ -593,7 +626,7 @@ This more elaborate way of storing many-to-many relationships will result in les #### Adding comments to the schema -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data that is contained in prior blocks. -> +> > If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source is created. ### Data Source Context @@ -930,7 +963,7 @@ dataSources: ``` > **Note:** The contract creation block can be quickly looked up on Etherscan: -> +> > 1. Search for the contract by entering its address in the search bar. > 2. Click on the creation transaction hash in the `Contract Creator` section. > 3. Load the transaction details page where you'll find the start block for that contract. @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. 
`"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. ``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### Create a new handler to process files -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). The CID of the file as a readable string can be accessed via the `dataSource` as follows: diff --git a/website/pages/yo/developing/developer-faqs.mdx b/website/pages/yo/developing/developer-faqs.mdx index 7a15b1eb60ef..6b7c13479e6b 100644 --- a/website/pages/yo/developing/developer-faqs.mdx +++ b/website/pages/yo/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: Olùgbéejáde FAQs --- -## 1. What is a subgraph? +This page summarizes some of the most common questions for developers building on The Graph. -A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using the Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available to be queried by subgraph consumers. +## Subgraph Related -## 2. Can I delete my subgraph? +### 1. What is a subgraph? -It is not possible to delete subgraphs once they are created. +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. Can I change my subgraph name? +### 2. What is the first step to create a subgraph? -No. Once a subgraph is created, the name cannot be changed. Make sure to think of this carefully before you create your subgraph so it is easily searchable and identifiable by other dapps. +To successfully create a subgraph, you will need to install The Graph CLI. 
Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. Can I change the GitHub account associated with my subgraph? +### 3. Can I still create a subgraph if my smart contracts don't have events? -No. Once a subgraph is created, the associated GitHub account cannot be changed. Make sure to think of this carefully before you create your subgraph. +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. Am I still able to create a subgraph if my smart contracts don't have events? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are by far the fastest way to retrieve useful data. +### 4. Can I change the GitHub account associated with my subgraph? -If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended, as performance will be significantly slower. +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. Is it possible to deploy one subgraph with the same name for multiple networks? +### 5. How do I update a subgraph on mainnet? -You will need separate names for multiple networks. While you can't have different subgraphs under the same name, there are convenient ways of having a single codebase for multiple networks. Find more on this in our documentation: [Redeploying a Subgraph](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. How are templates different from data sources? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -Templates allow you to create data sources on the fly, while your subgraph is indexing. It might be the case that your contract will spawn new contracts as people interact with it, and since you know the shape of those contracts (ABI, events, etc) upfront you can define how you want to index them in a template and when they are spawned your subgraph will create a dynamic data source by supplying the contract address. +You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. 
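+That said, many common needs (big-number arithmetic, hashing, JSON parsing) are already covered by the `graph-ts` library, so an external JS dependency is often unnecessary. A minimal, illustrative sketch (not tied to any particular subgraph):
+
+```typescript
+import { BigInt, ByteArray, crypto } from '@graphprotocol/graph-ts'
+
+// Big-number arithmetic and hashing are built into graph-ts
+let total = BigInt.fromI32(40).plus(BigInt.fromI32(2))
+let digest = crypto.keccak256(ByteArray.fromUTF8('hello'))
+```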
+
+One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client.
+
+### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events?
+
+Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.
+
+### 10. How are templates different from data sources?
+
+Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address.

Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph#data-source-templates).

-## 8. How do I make sure I'm using the latest version of graph-node for my local deployments?
+### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?

-You can run the following command:
+Yes. In the `graph init` command itself, you can add multiple dataSources by entering contracts one after the other.

-```sh
-docker pull graphprotocol/graph-node:latest
-```
+You can also use the `graph add` command to add a new dataSource.

-**NOTE:** docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so it is important to do this to make sure you are up to date with the latest version of graph-node.
+### 12. In what order are the event, block, and call handlers triggered for a data source?

-## 9. How do I call a contract function or access a public state variable from my subgraph mappings?
+Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first, then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. These ordering rules are subject to change.

-Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state).
+When new dynamic data sources are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered.

-## 10. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`?
+### 13. How do I make sure I'm using the latest version of graph-node for my local deployments?

-Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource.
+You can run the following command:

-## 11. I want to contribute or add a GitHub issue. Where can I find the open source repositories?
+```sh +docker pull graphprotocol/graph-node:latest +``` -- [graph-node](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. What is the recommended way to build "autogenerated" ids for an entity when handling events? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique. -## 13. When listening to multiple contracts, is it possible to select the contract order to listen to events? +### 15. Can I delete my subgraph? -Within a subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not. +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +You can find the list of the supported networks [here](/developing/supported-networks). + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? Yes. You can do this by importing `graph-ts` as per the example below: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. Can I import ethers.js or other JS libraries into my subgraph mappings? - -Not currently, as mappings are written in AssemblyScript. One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. +## Indexing & Querying Related -## 17. Is it possible to specify what block to start indexing on? +### 19. Is it possible to specify what block to start indexing on? -Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync +### 20. What are some tips to increase the performance of indexing? 
My subgraph is taking a very long time to sync -Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. Is there a way to query the subgraph directly to determine the latest block number it has indexed? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? Yes! Try the following command, substituting "organization/subgraphName" with the organization under it is published and the name of your subgraph: @@ -102,19 +121,7 @@ Yes! Try the following command, substituting "organization/subgraphName" with th curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. What networks are supported by The Graph? - -You can find the list of the supported networks [here](/developing/supported-networks). - -## 21. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? - -You have to redeploy the subgraph, but if the subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning. - -## 22. Is this possible to use Apollo Federation on top of graph-node? - -Federation is not supported yet, although we do want to support it in the future. At the moment, something you can do is use schema stitching, either on the client or via a proxy service. - -## 23. Is there a limit to how many objects The Graph can return per query? +### 22. Is there a limit to how many objects The Graph can return per query? By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: @@ -122,24 +129,19 @@ By default, query responses are limited to 100 items per collection. If you want someCollection(first: 1000, skip: ) { ... } ``` -## 24. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 25. Where do I go to find my current subgraph on the hosted service? - -Head over to the hosted service in order to find subgraphs that you or others deployed to the hosted service. You can find it [here](https://thegraph.com/hosted-service). - -## 26. Will the hosted service start charging query fees? - -The Graph will never charge for the hosted service. The Graph is a decentralized protocol, and charging for a centralized service is not aligned with The Graph’s values. 
The hosted service was always a temporary step to help get to the decentralized network. Developers will have a sufficient amount of time to upgrade to the decentralized network as they are comfortable. - -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [graph-node](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/yo/developing/graph-ts/api.mdx b/website/pages/yo/developing/graph-ts/api.mdx index 46442dfa941e..8fc1f4b48b61 100644 --- a/website/pages/yo/developing/graph-ts/api.mdx +++ b/website/pages/yo/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> Note: if you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, you're using an older version of AssemblyScript, we recommend taking a look at the [`Migration Guide`](/release-notes/assemblyscript-migration-guide) +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -This page documents what built-in APIs can be used when writing subgraph mappings. Two kinds of APIs are available out of the box: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- code generated from subgraph files by `graph codegen`. +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). 
Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features.
+You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript).
+
+Since mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki).

 ## API Reference

@@ -27,16 +29,16 @@ The `@graphprotocol/graph-ts` library provides the following APIs:

 The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph.

-| Version | Release notes |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
-| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | -| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | -| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | -| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | -| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | -| 0.0.2 | Added `input` field to the Ethereum Transaction object | +| Version | Release notes | +| :-----: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types
    Added `receipt` field to the Ethereum Event object | +| 0.0.6 | Added `nonce` field to the Ethereum Transaction object
    Added `baseFeePerGas` to the Ethereum Block object | +| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide))
    `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` | +| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object | +| 0.0.3 | Added `from` field to the Ethereum Call object
    `etherem.call.address` renamed to `ethereum.call.to` | +| 0.0.2 | Added `input` field to the Ethereum Transaction object | ### Built-in Types @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { When a `Transfer` event is encountered while processing the chain, it is passed to the `handleTransfer` event handler using the generated `Transfer` type (aliased to `TransferEvent` here to avoid a naming conflict with the entity type). This type allows accessing data such as the event's parent transaction and its parameters. -Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### Loading entities from the store @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may thus be necessary to check for the `null` case before using the value. +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **Note:** Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### Looking up entities created withing a block As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotocol/graph-cli` v0.49.0 the `loadInBlock` method is available on all entity types. -The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a Transaction from some on-chain event, and a later handler wants to access this transaction if it exists. In the case where the transaction does not exist, the subgraph will have to go to the database just to find out that the entity does not exist; if the subgraph author already knows that the entity must have been created in the same block, using loadInBlock avoids this database roundtrip. For some subgraphs, these missed lookups can contribute significantly to the indexing time. +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. 
+- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -503,7 +510,9 @@ Any other contract that is part of the subgraph can be imported from the generat #### Handling Reverted Calls -If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -515,7 +524,7 @@ if (callResult.reverted) { } ``` -Note that a Graph node connected to a Geth or Infura client may not detect all reverts, if you rely on this we recommend using a Graph node connected to a Parity client. +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. #### Encoding/Decoding ABI diff --git a/website/pages/yo/developing/supported-networks.mdx b/website/pages/yo/developing/supported-networks.mdx index 7c2d8d858261..797202065e99 100644 --- a/website/pages/yo/developing/supported-networks.mdx +++ b/website/pages/yo/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md). diff --git a/website/pages/yo/developing/unit-testing-framework.mdx b/website/pages/yo/developing/unit-testing-framework.mdx index f826a5ccb209..308135181ccb 100644 --- a/website/pages/yo/developing/unit-testing-framework.mdx +++ b/website/pages/yo/developing/unit-testing-framework.mdx @@ -1368,18 +1368,18 @@ The log output includes the test run duration. Here's an example: > Critical: Could not create WasmInstance from valid module with context: unknown import: wasi_snapshot_preview1::fd_write has not been defined -This means you have used `console.log` in your code, which is not supported by AssemblyScript. 
Please consider using the [Logging API](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version. diff --git a/website/pages/yo/glossary.mdx b/website/pages/yo/glossary.mdx index cd24a22fd4d5..2978ecce3561 100644 --- a/website/pages/yo/glossary.mdx +++ b/website/pages/yo/glossary.mdx @@ -10,11 +10,9 @@ title: Glossary - **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network. -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted service**: A temporary scaffold service for building and querying subgraphs as The Graph's decentralized network is maturing its cost of service, quality of service, and developer experience. - -- **Indexers**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards. @@ -24,17 +22,17 @@ title: Glossary - **Indexer's Self Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit. -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service. 
+- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegators**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned. -- **Curators**: Network participants that identify high-quality subgraphs, and “curate” them (i.e., signal GRT on them) in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. Indexers earn indexing rewards proportional to the signal on a subgraph. We see a correlation between the amount of GRT signalled and the number of Indexers indexing a subgraph. +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned. -- **Subgraph Consumer**: Any application or user that queries a subgraph. +- **Data Consumer**: Any application or user that queries a subgraph. - **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network. @@ -46,11 +44,11 @@ title: Glossary 1. **Active**: An allocation is considered active when it is created on-chain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated. - 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. 
- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs. -- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed. Specifically, the Indexer will lose 2.5% of their self-stake of GRT. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. - **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network. @@ -62,11 +60,11 @@ title: Glossary - **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network. -- **POI or Proof of Indexing**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent Proof of Indexing (POI). Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer. +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: The Indexer agent is part of the indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. +- **Indexer agent**: The Indexer agent is part of the Indexer stack. 
It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations.

- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way.

@@ -78,10 +76,6 @@ title: Glossary

- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self stake.

-- **_Upgrading_ a subgraph to The Graph Network**: The process of moving a subgraph from the hosted service to The Graph Network.
-
-- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings.
+- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings.

- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2).
-
-- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024.
diff --git a/website/pages/yo/index.json b/website/pages/yo/index.json
index 74fc544251b1..b22a82414173 100644
--- a/website/pages/yo/index.json
+++ b/website/pages/yo/index.json
@@ -21,10 +21,6 @@
     "createASubgraph": {
       "title": "Ṣẹda Subgraph kan",
       "description": "Lo Studio lati ṣẹda awọn subgraphs"
-    },
-    "migrateFromHostedService": {
-      "title": "Upgrade from the hosted service",
-      "description": "Upgrading subgraphs to The Graph Network"
     }
   },
   "networkRoles": {
diff --git a/website/pages/yo/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/yo/managing/transfer-and-deprecate-a-subgraph.mdx
new file mode 100644
index 000000000000..6bdd183f72d5
--- /dev/null
+++ b/website/pages/yo/managing/transfer-and-deprecate-a-subgraph.mdx
@@ -0,0 +1,65 @@
+---
+title: Transfer and Deprecate a Subgraph
+---
+
+## Transferring ownership of a subgraph
+
+Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on the ERC-721 standard, which facilitates transfers between accounts on The Graph Network.
+
+**Please note the following:**
+
+- Whoever owns the NFT controls the subgraph.
+- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network.
+- You can easily move control of a subgraph to a multi-sig.
+- A community member can create a subgraph on behalf of a DAO.
+
+### View your subgraph as an NFT
+
+To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
+
+```
+https://opensea.io/your-wallet-address
+```
+
+Or a wallet explorer like **Rainbow.me**:
+
+```
+https://rainbow.me/your-wallet-address
+```
+
+### Step-by-Step
+
+To transfer ownership of a subgraph, do the following:
+
+1. Use the UI built into Subgraph Studio:
+
+   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png)
+
+2. Choose the address that you would like to transfer the subgraph to:
+
+   ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-2.png)
+
+Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea:
+
+![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png)
+
+## Deprecating a subgraph
+
+Although you cannot delete a subgraph, you can deprecate it on Graph Explorer.
+
+### Step-by-Step
+
+To deprecate your subgraph, do the following:
+
+1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract).
+2. Call `deprecateSubgraph` with your `SubgraphID` as your argument.
+3. Your subgraph will no longer appear in searches on Graph Explorer.
+
+**Please note the following:**
+
+- The owner's wallet should call the `deprecateSubgraph` function.
+- Curators will not be able to signal on the subgraph anymore.
+- Curators that already signaled on the subgraph can withdraw their signal at an average share price.
+- Deprecated subgraphs will show an error message.
+
+> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively.
diff --git a/website/pages/yo/mips-faqs.mdx b/website/pages/yo/mips-faqs.mdx
index ae460989f96e..1f7553923765 100644
--- a/website/pages/yo/mips-faqs.mdx
+++ b/website/pages/yo/mips-faqs.mdx
@@ -6,10 +6,6 @@ title: MIPs FAQs

> Note: the MIPs program is closed as of May 2023. Thank you to all the Indexers who participated!

-It's an exciting time to be participating in The Graph ecosystem! During [Graph Day 2022](https://thegraph.com/graph-day/2022/) Yaniv Tal announced the [sunsetting of the hosted service](https://thegraph.com/blog/sunsetting-hosted-service/), a moment The Graph ecosystem has been working towards for many years.
-
-To support the sunsetting of the hosted service and the migration of all of it's activity to the decentralized network, The Graph Foundation has announced the [Migration Infrastructure Providers (MIPs) program](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program).
-
The MIPs program is an incentivization program for Indexers to support them with resources to index chains beyond Ethereum mainnet and help The Graph protocol expand the decentralized network into a multi-chain infrastructure layer. The MIPs program has allocated 0.75% of the GRT supply (75M GRT), with 0.5% to reward Indexers who contribute to bootstrapping the network and 0.25% allocated to Network Grants for subgraph developers using multi-chain subgraphs.
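For teams that script their operations rather than use the Arbiscan write-proxy UI, the deprecation step in the new page above can also be performed from code. The following is only a sketch: it assumes ethers v6, a public Arbitrum One RPC endpoint, and that the proxy at the address linked above exposes `deprecateSubgraph(uint256)` exactly as step 2 describes — treat the ABI fragment, subgraph ID, and key handling as assumptions to verify on Arbiscan before sending anything.

```typescript
import { ethers } from "ethers"

// Address of the Arbitrum One contract linked in the deprecation steps above.
const GNS_PROXY = "0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec"

// Hypothetical placeholders — replace with your own SubgraphID and signer key.
const SUBGRAPH_ID = 1n
const OWNER_KEY = process.env.OWNER_PRIVATE_KEY ?? ""

async function deprecate(): Promise<void> {
  const provider = new ethers.JsonRpcProvider("https://arb1.arbitrum.io/rpc")
  const owner = new ethers.Wallet(OWNER_KEY, provider) // must be the subgraph owner's wallet

  const gns = new ethers.Contract(
    GNS_PROXY,
    ["function deprecateSubgraph(uint256 _subgraphID)"], // assumed signature, per step 2 above
    owner
  )

  const tx = await gns.deprecateSubgraph(SUBGRAPH_ID)
  console.log("Submitted:", tx.hash)
  await tx.wait() // once mined, the subgraph stops appearing in Graph Explorer searches
}

deprecate().catch(console.error)
```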
diff --git a/website/pages/yo/network/benefits.mdx b/website/pages/yo/network/benefits.mdx index 26a350d7af68..7ab5fc984b18 100644 --- a/website/pages/yo/network/benefits.mdx +++ b/website/pages/yo/network/benefits.mdx @@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| Cost Comparison | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $0+ | $0 per month | -| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | -| Cost per query | $0 | $0 | -| Infrastructure | Centralized | Decentralized | -| Geographic redundancy | $750+ per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $750+ | $0 | +| Cost Comparison | Self Hosted | The Graph Network | +|:----------------------------:|:---------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $0+ | $0 per month | +| Engineering time | $400 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) | +| Cost per query | $0 | $0 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $750+ per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $750+ | $0 | ## Medium Volume User (~3M queries per month) -| Cost Comparison | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Monthly server cost\* | $350 per month | $0 | -| Query costs | $500 per month | $120 per month | -| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~3,000,000 | -| Cost per query | $0 | $0.00004 | -| Infrastructure | Centralized | Decentralized | -| Engineering expense | $200 per hour | Included | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $1,650+ | $120 | +| Cost Comparison | Self Hosted | The Graph Network | +|:----------------------------:|:------------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $350 per month | $0 | +| Query costs | $500 per month | $120 per month | +| Engineering time | $800 per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~3,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Engineering expense | $200 per hour | Included | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $1,650+ | $120 | ## High Volume User (~30M queries per month) -| Cost Comparison | Self Hosted | The Graph Network | -| :-: | :-: | :-: | -| Monthly server cost\* | $1100 per month, per node | $0 | -| Query costs | $4000 | $1,200 per month | -| Number of nodes needed | 10 | Not applicable | -| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | -| Queries per month | Limited to infra capabilities | ~30,000,000 | -| Cost per query | $0 | 
$0.00004 | -| Infrastructure | Centralized | Decentralized | -| Geographic redundancy | $1,200 in total costs per additional node | Included | -| Uptime | Varies | 99.9%+ | -| Total Monthly Costs | $11,000+ | $1,200 | +| Cost Comparison | Self Hosted | The Graph Network | +|:----------------------------:|:-------------------------------------------:|:---------------------------------------------------------------:| +| Monthly server cost\* | $1100 per month, per node | $0 | +| Query costs | $4000 | $1,200 per month | +| Number of nodes needed | 10 | Not applicable | +| Engineering time | $6,000 or more per month | None, built into the network with globally distributed Indexers | +| Queries per month | Limited to infra capabilities | ~30,000,000 | +| Cost per query | $0 | $0.00004 | +| Infrastructure | Centralized | Decentralized | +| Geographic redundancy | $1,200 in total costs per additional node | Included | +| Uptime | Varies | 99.9%+ | +| Total Monthly Costs | $11,000+ | $1,200 | \*including costs for backup: $50-$100 per month diff --git a/website/pages/yo/network/curating.mdx b/website/pages/yo/network/curating.mdx index fb2107c53884..b2864660fe8c 100644 --- a/website/pages/yo/network/curating.mdx +++ b/website/pages/yo/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. 
If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). @@ -34,7 +34,7 @@ Signaling on a specific version is especially useful when one subgraph is used b Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares. -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## Risks 1. The query market is inherently young at The Graph and there is risk that your %APY may be lower than you expect due to nascent market dynamics. -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. A subgraph can fail due to a bug. A failed subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version. - If you are subscribed to the newest version of a subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax. @@ -78,50 +78,14 @@ It’s suggested that you don’t update your subgraphs too frequently. See the ### 5. Can I sell my curation shares? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. 
The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. -- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## Bonding Curve 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![Price per shares](/img/price-per-share.png) - -As a result, price increases linearly, meaning that it will get more expensive to purchase a share over time. Here’s an example of what we mean, see the bonding curve below: - -![Bonding curve](/img/bonding-curve.png) - -Consider we have two curators that mint shares for a subgraph: - -- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2000 shares. -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- Since both curators hold half the total of curation shares, they would receive an equal amount of curator royalties. 
-- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- The remaining curator would now receive all the curator royalties for that subgraph. If they were to burn their shares to withdraw GRT, they would receive 120,000 GRT. -- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signaling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph. - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged. - Still confused? Check out our Curation video guide below: diff --git a/website/pages/yo/network/delegating.mdx b/website/pages/yo/network/delegating.mdx index 81824234e072..f7430c5746ae 100644 --- a/website/pages/yo/network/delegating.mdx +++ b/website/pages/yo/network/delegating.mdx @@ -2,13 +2,23 @@ title: Delegating --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## Delegator Guide -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,15 +34,19 @@ Listed below are the main risks of being a Delegator in the protocol. 
Delegators cannot be slashed for bad behavior, but there is a tax on Delegators to disincentivize poor decision-making that could harm the integrity of the network. -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### The delegation unbonding period Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. +### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely.
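To make the "earn back the tax" reasoning above concrete, here is a rough sketch. The daily reward rate and the Indexer's reward cut are hypothetical placeholder inputs, not protocol constants — look up real values for the Indexer you are considering.

```typescript
// Rough break-even sketch for the 0.5% delegation tax described above.
const DELEGATION_TAX = 0.005 // 0.5% of the delegated GRT is burned

function breakEvenDays(
  delegatedGrt: number,
  assumedDailyRewardRate: number, // e.g. 0.0003/day ≈ ~11% per year, purely illustrative
  indexerRewardCut: number // e.g. 0.2 means the Indexer keeps 20% of rewards
): number {
  const burned = delegatedGrt * DELEGATION_TAX
  const stakedAfterTax = delegatedGrt - burned
  const dailyDelegatorReward = stakedAfterTax * assumedDailyRewardRate * (1 - indexerRewardCut)
  return burned / dailyDelegatorReward
}

// Delegating 1,000 GRT burns 5 GRT up front; with the illustrative inputs above it
// takes roughly 21 days of rewards to earn those 5 GRT back.
console.log(breakEvenDays(1_000, 0.0003, 0.2).toFixed(1), "days")
```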
    ![Delegation unbonding](/img/Delegation-Unbonding.png) _Note the 0.5% fee in the Delegation UI, as well as the 28 day @@ -41,47 +55,65 @@ Another thing to consider is how to choose an Indexer wisely. If you choose an I ### Choosing a trustworthy Indexer with a fair reward payout for Delegators -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +#### Delegation Parameters + +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
    ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *The top Indexer is giving Delegators 90% of the rewards. The middle one is giving Delegators 20%. The bottom one is giving Delegators ~83%.*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant.
+- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects.
+
+As you can see, in order to choose the right Indexer, you must consider multiple things.

-As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
+- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently.
+- Many Indexers are very active in Discord and will be happy to answer your questions.
+- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.

-### Calculating Delegators expected return
+## Calculating Delegators Expected Return

-A Delegator must consider a lot of factors when determining the return. These include:
+A Delegator must consider the following factors to determine a return:

-- A technical Delegator can also look at the Indexer's ability to use the Delegated tokens available to them. If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators.
-- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days.
+- Consider an Indexer's ability to use the Delegated tokens available to them.
+  - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators.
+- Pay attention to the first few days of delegating.
+  - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low.

### Considering the query fee cut and indexing fee cut

-As described in the above sections, you should choose an Indexer that is transparent and honest about setting their Query Fee Cut and Indexing Fee Cuts. A Delegator should also look at the Parameters Cooldown time to see how much of a time buffer they have. After that is done, it is fairly simple to calculate the amount of rewards the Delegators are getting.
The formula is: +You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting. + +The formula is: ![Delegation Image 3](/img/Delegation-Reward-Formula.png) ### Considering the Indexer's delegation pool -Another thing a Delegator has to consider is what proportion of the Delegation Pool they own. All delegation rewards are shared evenly, with a simple rebalancing of the pool determined by the amount the Delegator has deposited into the pool. This gives the Delegator a share of the pool: +Delegators should consider the proportion of the Delegation Pool they own. -![Share formula](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![Share formula](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### Considering the delegation capacity -Another thing to consider is the delegation capacity. Currently, the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -89,16 +121,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### MetaMask "Pending Transaction" Bug -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. 
+
+#### Example

-At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts.
+Let's say you attempt to delegate with an insufficient gas fee relative to the current prices.

-For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.
+- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. When this happens, you can attempt subsequent transactions, but these will not be processed until the initial transaction is mined, because transactions for an address must be processed in order.
+- In such cases, these transactions can be cancelled in MetaMask, but the transaction attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.

-A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.
+A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.

-## Video guide for the network UI
+## Video Guide

-This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI.
+This video guide reviews this page while interacting with the UI.

diff --git a/website/pages/yo/network/developing.mdx b/website/pages/yo/network/developing.mdx
index 1b76eb94ccca..81231c36ad59 100644
--- a/website/pages/yo/network/developing.mdx
+++ b/website/pages/yo/network/developing.mdx
@@ -2,52 +2,88 @@ title: Developing
---

-Developers are the demand side of The Graph ecosystem. Developers build subgraphs and publish them to The Graph Network. Then, they query live subgraphs with GraphQL in order to power their applications.
+To start coding right away, go to [Developer Quick Start](/quick-start/).
+
+## Overview
+
+As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue.
+
+On The Graph, you can:
+
+1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing subgraphs.
+
+### What is GraphQL?
+
+- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+
+### Developer Actions
+
+- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
+- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
+- Deploy, publish and signal your subgraphs within The Graph Network.
+
+## Subgraph Specifics
+
+### What are subgraphs?
+
+A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
+
+A subgraph primarily consists of the following files:
+
+- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest).
+- `schema.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema).
+- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates event data into the entities defined in your schema.
+
+Learn the detailed specifics needed to [create a subgraph](/developing/creating-a-subgraph/).

## Subgraph Lifecycle

-Subgraphs deployed to the network have a defined lifecycle.
+Here is a general overview of a subgraph’s lifecycle:

-### Build locally
+![Subgraph Lifecycle](/img/subgraph-lifecycle.png)

-As with all subgraph development, it starts with local development and testing. Developers can use the same local setup whether they are building for The Graph Network, the hosted service or a local Graph Node, leveraging `graph-cli` and `graph-ts` to build their subgraph. Developers are encouraged to use tools such as [Matchstick](https://github.com/LimeChain/matchstick) for unit testing to improve the robustness of their subgraphs.
+### Build locally

-> There are certain constraints on The Graph Network, in terms of feature and network support. Only subgraphs on [supported networks](/developing/supported-networks) will earn indexing rewards, and subgraphs which fetch data from IPFS are also not eligible.
+Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs.

### Deploy to Subgraph Studio

-Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected.
-
-### Publish to the Network
+Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:

-When the developer is happy with their subgraph, they can publish it to The Graph Network. This is an on-chain action, which registers the subgraph so that it is discoverable by Indexers. Published subgraphs have a corresponding NFT, which is then easily transferable. The published subgraph has associated metadata, which provides other network participants with useful context and information.
+- Use its staging environment to index the deployed subgraph and make it available for review.
+- Verify that your subgraph doesn't have any indexing errors and works as expected.
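As a small illustration of the `mappings` file described under "What are subgraphs?" above, here is a hypothetical AssemblyScript event handler. The `Transfer` event, `Token` entity, and import paths are placeholders: the event class is generated from your own contract ABI and the entity class from your own schema, so the names will differ in a real subgraph.

```typescript
// Hypothetical mapping handler: the event class comes from code generated from your
// contract ABI, and the entity class from code generated from your GraphQL schema.
import { Transfer } from "../generated/MyContract/MyContract"
import { Token } from "../generated/schema"

export function handleTransfer(event: Transfer): void {
  let id = event.params.tokenId.toString()

  // Load the entity if it already exists, otherwise create it
  let token = Token.load(id)
  if (token == null) {
    token = new Token(id)
  }

  // Map event data onto the fields defined in the schema
  token.owner = event.params.to
  token.updatedAtBlock = event.block.number
  token.save()
}
```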
-### Signal to Encourage Indexing +### Publish to the Network -Published subgraphs are unlikely to be picked up by Indexers without the addition of signal. Signal is locked GRT associated with a given subgraph, which indicates to Indexers that a given subgraph will receive query volume, and also contributes to the indexing rewards available for processing it. Subgraph developers will generally add signal to their subgraph, in order to encourage indexing. Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### Querying & Application Development +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -Once a subgraph has been processed by Indexers and is available for querying, developers can start to use the subgraph in their applications. Developers query subgraphs via a gateway, which forwards their queries to an Indexer who has processed the subgraph, paying query fees in GRT. +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### Updating Subgraphs +- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### Querying & Application Development -Once the Subgraph Developer is ready to update, they can initiate a transaction to point their subgraph at the new version. Updating the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. 
+Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### Deprecating Subgraphs +Learn more about [querying subgraphs](/querying/querying-the-graph/). -At some point a developer may decide that they no longer need a published subgraph. At that point they may deprecate the subgraph, which returns any signalled GRT to the Curators. +### Updating Subgraphs -### Diverse Developer Roles +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -Some developers will engage with the full subgraph lifecycle on the network, publishing, querying and iterating on their own subgraphs. Some may be focused on subgraph development, building open APIs which others can build on. Some may be application focused, querying subgraphs deployed by others. +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### Developers and Network Economics +### Deprecating & Transferring Subgraphs -Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is updated. +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/yo/network/explorer.mdx b/website/pages/yo/network/explorer.mdx index bca2993eb0b3..02dca6ed2f9f 100644 --- a/website/pages/yo/network/explorer.mdx +++ b/website/pages/yo/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph Explorer --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## Subgraphs -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. 
+After you finish deploying and publishing your subgraph in Subgraph Studio, click on the "Subgraphs" tab at the top of the navigation bar to access the following:
+
+- Your own finished subgraphs
+- Subgraphs published by others
+- The exact subgraph you want (based on the date created, signal amount, or name).

![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png)

-When you click into a subgraph, you’ll be able to test queries in the playground and be able to leverage network details to make informed decisions. You’ll also be able to signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. This is critical because signaling on a subgraph incentivizes it to be indexed, which means that it’ll surface on the network to eventually serve queries.
+When you click into a subgraph, you will be able to do the following:
+
+- Test queries in the playground and be able to leverage network details to make informed decisions.
+- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality.
+- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.

![Explorer Image 2](/img/Subgraph-Details.png)

-On each subgraph’s dedicated page, several details are surfaced. These include:
+On each subgraph’s dedicated page, you can do the following:

- Signal/Un-signal on subgraphs
- View more details such as charts, current deployment ID, and other metadata
@@ -31,26 +45,32 @@ On each subgraph’s dedicated page, several details are surfaced. These include:

## Participants

-Within this tab, you’ll get a bird’s eye view of all the people that are participating in the network activities, such as Indexers, Delegators, and Curators. Below, we’ll go into an in-depth review of what each tab means for you.
+This section provides a bird’s-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators.

### 1. Indexers

![Explorer Image 4](/img/Indexer-Pane.png)

-Let’s start with the Indexers. Indexers are the backbone of the protocol, being the ones that stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. In the Indexers table, you’ll be able to see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made off of query fees and indexing rewards. Deep dives below:
+Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.

-- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators
-- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards
-- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters.
Cooldown periods are set up by Indexers when they update their delegation parameters -- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior -- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed -- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing -- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations. -- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. - Indexer Rewards - this is the total indexer rewards earned by the Indexer and their Delegators over all time. Indexer rewards are paid through GRT issuance. -Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. Indexing parameters are set by clicking on the right-hand side of the table, or by going into an Indexer’s profile and clicking the “Delegate” button. +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. To learn more about how to become an Indexer, you can take a look at the [official documentation](/network/indexing) or [The Graph Academy Indexer guides.](https://thegraph.academy/delegators/choosing-indexers/) @@ -58,9 +78,13 @@ To learn more about how to become an Indexer, you can take a look at the [offici ### 2. Curators -Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially attractive subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. +Curators analyze subgraphs to identify which subgraphs are of the highest quality. 
Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+
+- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
+  - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+  - The bonding curve incentivizes Curators to curate the highest quality data sources.

-Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. By depositing GRT, Curators mint curation shares of a subgraph. As a result, Curators are eligible to earn a portion of the query fees that the subgraph they have signaled on generates. The bonding curve incentivizes Curators to curate the highest quality data sources. The Curator table in this section will allow you to see:
+In the Curator table listed below, you can see:

- The date the Curator started curating
- The number of GRT that was deposited
@@ -68,34 +92,36 @@ Curators can be community members, data consumers, or even subgraph developers w

![Explorer Image 6](/img/Curation-Overview.png)

-If you want to learn more about the Curator role, you can do so by visiting the following links of [The Graph Academy](https://thegraph.academy/curators/) or [official documentation.](/network/curating)
+If you want to learn more about the Curator role, you can do so by visiting the [official documentation](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/).

### 3. Delegators

-Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers seek to attract Delegators by offering them a portion of the indexing rewards and query fees that they earn.
+Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers.

-Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)!
+- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees.
+- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts (see the sketch after this list for pulling these parameters programmatically).
+- Reputation within the community can also play a factor in the selection process. It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)!
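+
+The parameters Delegators compare in Explorer are also exposed by The Graph Network subgraph, so they can be fetched programmatically. The snippet below is only a minimal sketch: `NETWORK_SUBGRAPH_URL` is a placeholder for a gateway query URL, and the entity and field names (`indexers`, `indexingRewardCut`, `queryFeeCut`, `stakedTokens`, `delegatedTokens`) are assumptions that may differ from the live schema.
+
+```sh
+# Hypothetical sketch: list the largest Indexers with their delegation parameters.
+# NETWORK_SUBGRAPH_URL is a placeholder; entity and field names are assumptions.
+NETWORK_SUBGRAPH_URL="https://example.com/network-subgraph"
+
+curl -s -X POST "$NETWORK_SUBGRAPH_URL" \
+  -H 'Content-Type: application/json' \
+  -d '{"query": "{ indexers(first: 5, orderBy: stakedTokens, orderDirection: desc) { id indexingRewardCut queryFeeCut stakedTokens delegatedTokens } }"}'
+```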
![Explorer Image 7](/img/Delegation-Overview.png) -The Delegators table will allow you to see the active Delegators in the community, as well as metrics such as: +In the Delegators table you can see the active Delegators in the community and important metrics: - The number of Indexers a Delegator is delegating towards - A Delegator’s original delegation - The rewards they have accumulated but have not withdrawn from the protocol - The realized rewards they withdrew from the protocol - Total amount of GRT they have currently in the protocol -- The date they last delegated at +- The date they last delegated -If you want to learn more about how to become a Delegator, look no further! All you have to do is to head over to the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## Network -In the Network section, you will see global KPIs as well as the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. ### Overview -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - The current total network stake - The stake split between the Indexers and their Delegators @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - Protocol parameters such as curation reward, inflation rate, and more - Current epoch rewards and fees -A few key details that are worth mentioning: +A few key details to note: -- **Query fees represent the fees generated by the consumers**, and they can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. -- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once the Indexers close their allocations towards the subgraphs they’ve been indexing. Thus the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. 
during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![Explorer Image 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ In the Epochs section, you can analyze on a per-epoch basis, metrics such as: - The active epoch is the one in which Indexers are currently allocating stake and collecting query fees - The settling epochs are the ones in which the state channels are being settled. This means that the Indexers are subject to slashing if the consumers open disputes against them. - The distributing epochs are the epochs in which the state channels for the epochs are being settled and Indexers can claim their query fee rebates. - - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers, thus being finalized. + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![Explorer Image 9](/img/Epoch-Stats.png) ## Your User Profile -Now that we’ve talked about the network stats, let’s move on to your personal profile. Your personal profile is the place for you to see your network activity, no matter how you’re participating on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### Profile Overview -This is where you can see any current actions you took. This is also where you can find your profile information, description, and website (if you added one). +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). ![Explorer Image 10](/img/Profile-Overview.png) ### Subgraphs Tab -If you click into the Subgraphs tab, you’ll see your published subgraphs. This will not include any subgraphs deployed with the CLI for testing purposes – subgraphs will only show up when they are published to the decentralized network. +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![Explorer Image 11](/img/Subgraphs-Overview.png) ### Indexing Tab -If you click into the Indexing tab, you’ll find a table with all the active and historical allocations towards the subgraphs, as well as charts that you can analyze and see your past performance as an Indexer. +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics: @@ -158,7 +189,9 @@ This section will also include details about your net Indexer rewards and net qu ### Delegating Tab -Delegators are important to the Graph Network. A Delegator must use their knowledge to choose an Indexer that will provide a healthy return on rewards. Here you can find details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. +Delegators are important to the Graph Network. 
They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. In the first half of the page, you can see your delegation chart, as well as the rewards-only chart. To the left, you can see the KPIs that reflect your current delegation metrics. diff --git a/website/pages/yo/network/indexing.mdx b/website/pages/yo/network/indexing.mdx index 77013e86a790..ea382714aeff 100644 --- a/website/pages/yo/network/indexing.mdx +++ b/website/pages/yo/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps: -1. Query the [mainnet subgraph](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) to get the IDs for all active allocations: +1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -112,12 +112,12 @@ Indexers may differentiate themselves by applying advanced techniques for making - **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second. - **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic. -| Setup | Postgres
    (CPUs) | Postgres
    (memory in GBs) | Postgres
    (disk in TBs) | VMs
    (CPUs) | VMs
    (memory in GBs) | -| --- | :-: | :-: | :-: | :-: | :-: | -| Small | 4 | 8 | 1 | 4 | 16 | -| Standard | 8 | 30 | 1 | 12 | 48 | -| Medium | 16 | 64 | 2 | 32 | 64 | -| Large | 72 | 468 | 3.5 | 48 | 184 | +| Setup | Postgres
    (CPUs) | Postgres
    (memory in GBs) | Postgres
    (disk in TBs) | VMs
    (CPUs) | VMs
    (memory in GBs) | +| -------- |:--------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:| +| Small | 4 | 8 | 1 | 4 | 16 | +| Standard | 8 | 30 | 1 | 12 | 48 | +| Medium | 16 | 64 | 2 | 32 | 64 | +| Large | 72 | 468 | 3.5 | 48 | 184 | ### What are some basic security precautions an Indexer should take? @@ -149,20 +149,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer #### Graph Node -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Service -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP server
    (for paid subgraph queries) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ---------------------------------------------------------- | ----------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP server
    (for paid subgraph queries) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus metrics | /metrics | --metrics-port | - | #### Indexer Agent @@ -545,7 +545,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar - `graph indexer rules maybe [options] ` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment. -- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additonal argument `--status` can be used to print out all actions of a certain status. +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` - Queue allocation action diff --git a/website/pages/yo/network/overview.mdx b/website/pages/yo/network/overview.mdx index 16214028dbc9..0779d9a6cb00 100644 --- a/website/pages/yo/network/overview.mdx +++ b/website/pages/yo/network/overview.mdx @@ -2,14 +2,20 @@ title: Network Overview --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## Overview +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![Token Economics](/img/Network-roles@2x.png) -To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20 used to allocate resources in the network. +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/yo/new-chain-integration.mdx b/website/pages/yo/new-chain-integration.mdx index 35b2bc7c8b4a..534d6701efdb 100644 --- a/website/pages/yo/new-chain-integration.mdx +++ b/website/pages/yo/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: Integrating New Networks +title: New Chain Integration --- -Graph Node can currently index data from the following chain types: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- Ethereum, via EVM JSON-RPC and [Ethereum Firehose](https://github.com/streamingfast/firehose-ethereum) -- NEAR, via a [NEAR Firehose](https://github.com/streamingfast/near-firehose-indexer) -- Cosmos, via a [Cosmos Firehose](https://github.com/graphprotocol/firehose-cosmos) -- Arweave, via an [Arweave Firehose](https://github.com/graphprotocol/firehose-arweave) +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -If you are interested in any of those chains, integration is a matter of Graph Node configuration and testing. +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. For more information, refer to [Testing an EVM JSON-RPC](new-chain-integration#testing-an-evm-json-rpc). +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -**2. Firehose** +#### Testing an EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## Difference between EVM JSON-RPC & Firehose +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, in a JSON-RPC batch request +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -While the two are suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](substreams/), like building [Substreams-powered subgraphs](cookbook/substreams-powered-subgraphs/). 
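+A quick way to smoke-test an endpoint against the list above is to call a couple of the methods directly over JSON-RPC. This is an informal sketch rather than an official conformance test; `RPC_ENDPOINT` is a placeholder for the chain's JSON-RPC URL.
+
+```sh
+# Informal smoke test: confirm the node answers JSON-RPC methods Graph Node relies on.
+# RPC_ENDPOINT is a placeholder for the chain's JSON-RPC URL.
+RPC_ENDPOINT="http://localhost:8545"
+
+curl -s -X POST "$RPC_ENDPOINT" \
+  -H 'Content-Type: application/json' \
+  -d '{"jsonrpc":"2.0","id":1,"method":"eth_getBlockByNumber","params":["latest", false]}'
+
+curl -s -X POST "$RPC_ENDPOINT" \
+  -H 'Content-Type: application/json' \
+  -d '{"jsonrpc":"2.0","id":2,"method":"net_version","params":[]}'
+```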
In addition, Firehose allows for improved indexing speeds when compared to JSON-RPC. +### 2. Firehose Integration -New EVM chain integrators may also consider the Firehose-based approach, given the benefits of substreams and its massive parallelized indexing capabilities. Supporting both allows developers to choose between building substreams or subgraphs for the new chain. +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. This helps increase the speed of syncing and indexing. -> **NOTE**: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that eth_calls are [not a good practice for developers](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## Testing an EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON RPC methods: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. 
-- `eth_getLogs` -- `eth_call` \_(for historical blocks, with EIP-1898 - requires archive node): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, in a JSON-RPC batch request -- _`trace_filter`_ _(optionally required for Graph Node to support call handlers)_ +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graph Node Configuration +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -**Start by preparing your local environment** +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. + +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graph Node Configuration + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON RPC compliant URL - > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ -**Test the integration by locally deploying a subgraph** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. Create a simple example subgraph. Some options are below: - 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point - 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` -5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + > Do not change the env var name itself. 
It must remain `ethereum` even if the network name is different. -Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## Integrating a new Firehose-enabled chain +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. Create a simple example subgraph. Some options are below: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +Graph Node should be syncing the deployed subgraph if there are no errors. Give it time to sync, then send some GraphQL queries to the API endpoint printed in the logs. -Integrating a new chain is also possible using the Firehose approach. This is currently the best option for non-EVM chains and a requirement for substreams support. Additional documentation focuses on how Firehose works, adding Firehose support for a new chain and integrating it with Graph Node. Recommended docs for integrators: +## Substreams-powered Subgraphs -1. [General docs on Firehose](firehose/) -2. [Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. [Integrating Graph Node with a new chain via Firehose](https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/yo/operating-graph-node.mdx b/website/pages/yo/operating-graph-node.mdx index dbbfcd5fc545..fb3d538f952a 100644 --- a/website/pages/yo/operating-graph-node.mdx +++ b/website/pages/yo/operating-graph-node.mdx @@ -77,13 +77,13 @@ A complete Kubernetes example configuration can be found in the [indexer reposit When it is running Graph Node exposes the following ports: -| Port | Purpose | Routes | CLI Argument | Environment Variable | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | -| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | -| 8040 | Prometheus metrics | /metrics | --metrics-port | - | +| Port | Purpose | Routes | CLI Argument | Environment Variable | +| ---- | ----------------------------------------------------- | ---------------------------------------------------- | ----------------- | -------------------- | +| 8000 | GraphQL HTTP server
    (for subgraph queries) | /subgraphs/id/...
    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (for subgraph subscriptions) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (for managing deployments) | / | --admin-port | - | +| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - | +| 8040 | Prometheus metrics | /metrics | --metrics-port | - | > **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint. diff --git a/website/pages/yo/querying/graphql-api.mdx b/website/pages/yo/querying/graphql-api.mdx index 2bbc71b5bb9c..d8671e53a77c 100644 --- a/website/pages/yo/querying/graphql-api.mdx +++ b/website/pages/yo/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## Queries +## What is GraphQL? -In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph. +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### Examples @@ -21,7 +29,7 @@ Query for a single `Token` entity defined in your schema: } ``` -> **Note:** When querying for a single entity, the `id` field is required, and it must be a string. +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. Query all `Token` entities: @@ -36,7 +44,10 @@ Query all `Token` entities: ### Sorting -When querying a collection, the `orderBy` parameter may be used to sort by a specific attribute. Additionally, the `orderDirection` can be used to specify the sort direction, `asc` for ascending or `desc` for descending. +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### Example @@ -53,7 +64,7 @@ When querying a collection, the `orderBy` parameter may be used to sort by a spe As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0) entities can be sorted on the basis of nested entities. -In the following example, we sort the tokens by the name of their owner: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ In the following example, we sort the tokens by the name of their owner: ### Pagination -When querying a collection, the `first` parameter can be used to paginate from the beginning of the collection. It is worth noting that the default sort order is by ID in ascending alphanumeric order, not by creation time. - -Further, the `skip` parameter can be used to skip entities and paginate. e.g. `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. 
+When querying a collection, it's best to: -Queries should avoid using very large `skip` values since they generally perform poorly. For retrieving a large number of items, it is much better to page through entities based on an attribute as shown in the last example. +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### Example using `first` @@ -106,7 +118,7 @@ Query 10 `Token` entities, offset by 10 places from the beginning of the collect #### Example using `first` and `id_ge` -If a client needs to retrieve a large number of entities, it is much more performant to base queries on an attribute and filter by that attribute. For example, a client would retrieve a large number of tokens using this query: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { } ``` -The first time, it would send the query with `lastID = ""`, and for subsequent requests would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### Filtering -You can use the `where` parameter in your queries to filter for different properties. You can filter on mulltiple values within the `where` parameter. +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### Example using `where` @@ -155,7 +168,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison: #### Example for block filtering -You can also filter entities by the `_change_block(number_gte: Int)` - this filters entities which were updated in or after the specified block. +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block). @@ -193,7 +206,7 @@ As of Graph Node [`v0.30.0`](https://github.com/graphprotocol/graph-node/release ##### `AND` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. 
```graphql { @@ -208,7 +221,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ``` > **Syntactic sugar:** You can simplify the above query by removing the `and` operator by passing a sub-expression separated by commas. -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ In the following example, we are filtering for challenges with `outcome` `succee ##### `OR` Operator -In the following example, we are filtering for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) You can query the state of your entities not just for the latest block, which is the default, but also for an arbitrary block in the past. The block at which a query should happen can be specified either by its block number or its block hash by including a `block` argument in the toplevel fields of queries. -The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to not be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -Note that the current implementation is still subject to certain limitations that might violate these gurantees. The implementation can not always tell that a given block hash is not on the main chain at all, or that the result of a query by block hash for a block that can not be considered final yet might be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. 
#### Example @@ -322,12 +335,12 @@ Fulltext search queries have one required field, `text`, for supplying search te Fulltext search operators: -| Symbol | Operator | Description | -| --- | --- | --- | -| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | -| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | -| `<->` | `Follow by` | Specify the distance between two words. | -| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | +| Symbol | Operator | Description | +| ----------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------ | +| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms | +| | | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms | +| `<->` | `Follow by` | Specify the distance between two words. | +| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) | #### Examples @@ -376,11 +389,11 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021 ## Schema -The schema of your data source--that is, the entity types, values, and relationships that are available to query--are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your subgraph manifest. +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **Note:** Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### Entities diff --git a/website/pages/yo/querying/querying-best-practices.mdx b/website/pages/yo/querying/querying-best-practices.mdx index 32d1415b20fa..5654cf9e23a5 100644 --- a/website/pages/yo/querying/querying-best-practices.mdx +++ b/website/pages/yo/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: Querying Best Practices --- -The Graph provides a decentralized way to query data from blockchains. +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. -The Graph network's data is exposed through a GraphQL API, making it easier to query data with the GraphQL language. 
- -This page will guide you through the essential GraphQL language rules and GraphQL queries best practices. +Learn the essential GraphQL language rules and GraphQL querying best practices. --- @@ -71,7 +69,7 @@ GraphQL is a language and set of conventions that transport over HTTP. It means that you can query a GraphQL API using standard `fetch` (natively or via `@whatwg-node/fetch` or `isomorphic-fetch`). -However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), we recommend you to use our `graph-client` that supports unique features such as: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - Cross-chain Subgraph Handling: Querying from multiple subgraphs in a single query - [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() More GraphQL client alternatives are covered in ["Querying from an Application"](/querying/querying-from-an-application). -Now that we covered the basic rules of GraphQL queries syntax, let's now look at the best practices of GraphQL query writing. - --- ## Best Practices @@ -164,11 +160,11 @@ Doing so brings **many advantages**: - **Variables can be cached** at server-level - **Queries can be statically analyzed by tools** (more on this in the following sections) -**Note: How to include fields conditionally in static queries** +### How to include fields conditionally in static queries -We might want to include the `owner` field only on a particular condition. +You might want to include the `owner` field only on a particular condition. -For this, we can leverage the `@include(if:...)` directive as follows: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -Note: The opposite directive is `@skip(if: ...)`. +> Note: The opposite directive is `@skip(if: ...)`. ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL became famous for its "Ask for what you want" tagline. For this reason, there is no way, in GraphQL, to get all available fields without having to list them individually. -When querying GraphQL APIs, always think of querying only the fields that will be actually used. - -A common cause of over-fetching is collections of entities. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. Queries should therefore almost always set first explicitly, and make sure they only fetch as many entities as they actually need. This applies not just to top-level collections in a query, but even more so to nested collections of entities. +- When querying GraphQL APIs, always think of querying only the fields that will be actually used. +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. 
For example, in the following query: @@ -337,8 +332,8 @@ query { Such repeated fields (`id`, `active`, `status`) bring many issues: -- harder to read for more extensive queries -- when using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. A refactored version of the query would be the following: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -Using GraphQL `fragment` will improve readability (especially at scale) but also will result in better TypeScript types generation. +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. When using the types generation tool, the above query will generate a proper `DelegateItemFragment` type (_see last "Tools" section_). ### GraphQL Fragment do's and don'ts -**Fragment base must be a type** +### Fragment base must be a type A Fragment cannot be based on a non-applicable type, in short, **on type not having fields**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt` is a **scalar** (native "plain" type) that cannot be used as a fragment's base. -**How to spread a Fragment** +#### How to spread a Fragment Fragments are defined on specific types and should be used accordingly in queries. @@ -411,16 +406,16 @@ fragment VoteItem on Vote { It is not possible to spread a fragment of type `Vote` here. -**Define Fragment as an atomic business unit of data** +#### Define Fragment as an atomic business unit of data -GraphQL Fragment must be defined based on their usage. +GraphQL `Fragment`s must be defined based on their usage. For most use-case, defining one fragment per type (in the case of repeated fields usage or type generation) is sufficient. -Here is a rule of thumb for using Fragment: +Here is a rule of thumb for using fragments: -- when fields of the same type are repeated in a query, group them in a Fragment -- when similar but not the same fields are repeated, create multiple fragments, ex: +- When fields of the same type are repeated in a query, group them in a `Fragment`. +- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (mostly used in listing) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## The essential tools +## The Essential Tools ### GraphQL web-based explorers @@ -473,11 +468,11 @@ This will allow you to **catch errors without even testing queries** on the play The [GraphQL VSCode extension](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql) is an excellent addition to your development workflow to get: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets -- go to definition for fragments and input types +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint) is a must-have to visualize errors and warnings inlined in your code correctly. 
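+
+If you have not added `graphql-eslint` to your project yet, a minimal install sketch (assuming an npm-based project; adjust for yarn or pnpm):
+
+```sh
+# Minimal sketch: add graphql-eslint and its peers as dev dependencies.
+npm install --save-dev eslint graphql @graphql-eslint/eslint-plugin
+```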
@@ -485,9 +480,9 @@ If you are using `graphql-eslint`, the [ESLint VSCode extension](https://marketp The [JS GraphQL plugin](https://plugins.jetbrains.com/plugin/8097-graphql/) will significantly improve your experience while working with GraphQL by providing: -- syntax highlighting -- autocomplete suggestions -- validation against schema -- snippets +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -More information on this [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) that showcases all the plugin's main features. +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/yo/quick-start.mdx b/website/pages/yo/quick-start.mdx index cf7facbdc32e..3856aeb9e264 100644 --- a/website/pages/yo/quick-start.mdx +++ b/website/pages/yo/quick-start.mdx @@ -2,24 +2,18 @@ title: Ibẹrẹ kiakia --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -Ensure that your subgraph will be indexing data from a [supported network](/developing/supported-networks). - -This guide is written assuming that you have: +## Prerequisites for this guide - A crypto wallet -- A smart contract address on the network of your choice - -## 1. Create a subgraph on Subgraph Studio - -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. Install the Graph CLI +### 1. Install the Graph CLI -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. On your local machine, run one of the following commands: @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. -Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. 
Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +You can find commands for your specific subgraph on the subgraph page in [Subgraph Studio](https://thegraph.com/studio/). -When you initialize your subgraph, the CLI tool will ask you for the following information: +When you initialize your subgraph, the CLI will ask you for the following information: -- Protocol: choose the protocol your subgraph will be indexing data from -- Subgraph slug: create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. -- Directory to create the subgraph in: choose your local directory -- Ethereum network(optional): you may need to specify which EVM-compatible network your subgraph will be indexing data from -- Contract address: Locate the smart contract address you’d like to query data from -- ABI: If the ABI is not autopopulated, you will need to input it manually as a JSON file -- Start Block: it is suggested that you input the start block to save time while your subgraph indexes blockchain data. You can locate the start block by finding the block where your contract was deployed. -- Contract Name: input the name of your contract -- Index contract events as entities: it is suggested that you set this to true as it will automatically add mappings to your subgraph for every emitted event -- Add another contract(optional): you can add another contract +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. See the following screenshot for an example for what to expect when initializing your subgraph: ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -The previous commands create a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. 
-- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -Once your subgraph is written, run the following commands: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. Once your subgraph is written, run the following commands: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- Authenticate and deploy your subgraph. The deploy key can be found on the Subgraph page in Subgraph Studio. +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -You will be asked for a version label. It's strongly recommended to use [semver](https://semver.org/) for versioning like `0.0.1`. That said, you are free to choose any string as version such as:`v1`, `version1`, `asdf`. - -## 6. Test your subgraph - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -The logs will tell you if there are any errors with your subgraph. The logs of an operational subgraph will look like this: - -![Subgraph logs](/img/subgraph-logs-image.png) - -If your subgraph is failing, you can query the subgraph health by using the GraphiQL Playground. Note that you can leverage the query below and input your deployment ID for your subgraph. In this case, `Qm...` is the deployment ID (which can be located on the Subgraph page under **Details**). The query below will tell you when a subgraph fails, so you can debug accordingly: - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. 
Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -Select the network you would like to publish your subgraph to. It is recommended to publish subgraphs to Arbitrum One to take advantage of the [faster transaction speeds and lower gas costs](/arbitrum/arbitrum-faq). +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. Publish your subgraph to The Graph Network -To save on gas costs, you can curate your subgraph in the same transaction that you published it by selecting this button when you publish your subgraph to The Graph’s decentralized network: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -Now, you can query your subgraph by sending GraphQL queries to your subgraph’s Query URL, which you can find by clicking on the query button. +### 8. 
Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/yo/release-notes/assemblyscript-migration-guide.mdx b/website/pages/yo/release-notes/assemblyscript-migration-guide.mdx index 85f6903a6c69..17224699570d 100644 --- a/website/pages/yo/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/yo/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - You'll need to rename your duplicate variables if you had variable shadowing. - ### Null Comparisons - By doing the upgrade on your subgraph, sometimes you might get errors like these: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - To solve you can simply change the `if` statement to something like this: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - To fix this issue, you can create a variable for that property access so that the compiler can do the nullability check magic: ```typescript diff --git a/website/pages/yo/sps/introduction.mdx b/website/pages/yo/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/yo/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. 
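Before jumping to the guides below, here is a minimal sketch of what the trigger approach looks like in practice. The `Transfers` Protobuf type, the `Transfer` entity, and the field names are placeholder assumptions; the triggers pages that follow walk through a complete, real configuration.

```ts
import { Protobuf } from 'as-proto/assembly'
// Placeholder names: the Protobuf type generated from your Substreams module's
// output and the entity defined in your schema.graphql.
import { Transfers } from './pb/example/v1/Transfers'
import { Transfer } from '../generated/schema'

// Receives the raw bytes emitted by the Substreams module, decodes them, and
// writes one entity per transfer.
export function handleTriggers(bytes: Uint8Array): void {
  const input = Protobuf.decode<Transfers>(bytes, Transfers.decode)

  for (let i = 0; i < input.transfers.length; i++) {
    const transfer = input.transfers[i]
    const entity = new Transfer(`${transfer.txHash}-${i}`)
    entity.amount = transfer.amount
    entity.save()
  }
}
```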
+ +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/yo/sps/triggers-example.mdx b/website/pages/yo/sps/triggers-example.mdx new file mode 100644 index 000000000000..8e4f96eba14a --- /dev/null +++ b/website/pages/yo/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## Prerequisites + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into subgraph entities:
+
+```ts
+import { Protobuf } from 'as-proto/assembly'
+import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events'
+import { MyTransfer } from '../generated/schema'
+
+export function handleTriggers(bytes: Uint8Array): void {
+  const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode)
+
+  for (let i = 0; i < input.data.length; i++) {
+    const event = input.data[i]
+
+    if (event.transfer != null) {
+      let entity_id: string = `${event.txnId}-${i}`
+      const entity = new MyTransfer(entity_id)
+      entity.amount = event.transfer!.instruction!.amount.toString()
+      entity.source = event.transfer!.accounts!.source
+      entity.designation = event.transfer!.accounts!.destination
+
+      if (event.transfer!.accounts!.signer!.single != null) {
+        entity.signers = [event.transfer!.accounts!.signer!.single!.signer]
+      } else if (event.transfer!.accounts!.signer!.multisig != null) {
+        entity.signers = event.transfer!.accounts!.signer!.multisig!.signers
+      }
+      entity.save()
+    }
+  }
+}
+```
+
+## Step 5: Generate Protobuf Files
+
+To generate Protobuf objects in AssemblyScript, run the following command:
+
+```bash
+npm run protogen
+```
+
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+
+## Conclusion
+
+You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case.
+
+For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana).
diff --git a/website/pages/yo/sps/triggers.mdx b/website/pages/yo/sps/triggers.mdx
new file mode 100644
index 000000000000..ed19635d4768
--- /dev/null
+++ b/website/pages/yo/sps/triggers.mdx
@@ -0,0 +1,37 @@
+---
+title: Substreams Triggers
+---
+
+Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework.
+
+> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container.
+
+The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+
+```tsx
+export function handleTransactions(bytes: Uint8Array): void {
+  let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).transactions // 1.
+  if (transactions.length == 0) {
+    log.info('No transactions found', [])
+    return
+  }
+
+  for (let i = 0; i < transactions.length; i++) {
+    // 2.
+    let transaction = transactions[i]
+
+    let entity = new Transaction(transaction.hash) // 3.
+    entity.from = transaction.from
+    entity.to = transaction.to
+    entity.save()
+  }
+}
+```
+
+Here's what you’re seeing in the `mappings.ts` file:
+
+1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object.
+2. 
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/yo/substreams.mdx b/website/pages/yo/substreams.mdx index 710e110012cc..a838a6924e2f 100644 --- a/website/pages/yo/substreams.mdx +++ b/website/pages/yo/substreams.mdx @@ -4,9 +4,11 @@ title: Substreams ![Substreams Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## How Substreams Works in 4 Steps @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### Expand Your Knowledge - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/yo/sunrise.mdx b/website/pages/yo/sunrise.mdx index 32bf6c6d26d4..14d1444cf8cd 100644 --- a/website/pages/yo/sunrise.mdx +++ b/website/pages/yo/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. 
This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. -This plan draws on many previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs, and the ability to integrate new blockchain networks to The Graph. +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. 
Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. 
The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? - -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? 
- -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### Why is Edge & Node running the upgrade Indexer? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? 
-Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -The upgrade Indexer also provides the Indexer community with information about potential demand for subgraphs and new chains on The Graph Network. +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### What does this mean for Delegators? -The upgrade Indexer offers a powerful opportunity for Delegators. As more subgraphs are upgraded from the hosted service to The Graph Network, Delegators stand to benefit from the increased network activity. +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### Will the upgrade Indexer compete with existing Indexers for rewards? +### Did the upgrade Indexer compete with existing Indexers for rewards? -No, the upgrade Indexer will only allocate the minimum amount per subgraph and will not collect indexing rewards. +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### How will this affect subgraph developers? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### How does this benefit data consumers? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### How will the upgrade Indexer price queries? 
- -The upgrade Indexer will price queries at the market rate so as not to influence the query fee market. - -### What are the criteria for the upgrade Indexer to stop supporting a subgraph? - -The upgrade Indexer will serve a subgraph until it is sufficiently and successfully served with consistent queries served by at least 3 other Indexers. - -Furthermore, the upgrade Indexer will stop supporting a subgraph if it has not been queried in the last 30 days. - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### Do I need to run my own infrastructure? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -Once your subgraph has reached adequate curation signal and other Indexers begin supporting it, the upgrade Indexer will gradually taper off, allowing other Indexers to collect indexing rewards and query fees. - -### Should I host my own indexing infrastructure? - -Running infrastructure for your own project is [significantly more resource intensive](/network/benefits/) when compared to using The Graph Network. - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -That being said, if you’re still interested in running a [Graph Node](https://github.com/graphprotocol/graph-node), consider joining The Graph Network [as an Indexer](https://thegraph.com/blog/how-to-become-indexer/) to earn indexing rewards and query fees by serving data on your subgraph and others. - -### Should I use a centralized indexing provider? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. 
The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -Here's a detailed breakdown of the benefits of The Graph over centralized hosting: +### How does the upgrade Indexer price queries? -- **Resilience and Redundancy**: Decentralized systems are inherently more robust and resilient due to their distributed nature. Data isn't stored on a single server or location. Instead, it's served by hundreds of independent Indexers around the globe. This reduces the risk of data loss or service interruptions if one node fails, leading to exceptional uptime (99.99%). +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **Quality of Service**: In addition to the impressive uptime, The Graph Network features a ~106ms median query speed (latency), and higher query success rates compared to hosted alternatives. Read more in [this blog](https://thegraph.com/blog/qos-the-graph-network/). +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. -- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -Just as you've chosen your blockchain network for its decentralized nature, security, and transparency, opting for The Graph Network is an extension of those same principles. By aligning your data infrastructure with these values, you ensure a cohesive, resilient, and trust-driven development environment. +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/yo/supported-network-requirements.mdx b/website/pages/yo/supported-network-requirements.mdx index df15ef48d762..9662552e4e6a 100644 --- a/website/pages/yo/supported-network-requirements.mdx +++ b/website/pages/yo/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| Network | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| Network | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVMe preferred)
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/yo/tap.mdx b/website/pages/yo/tap.mdx new file mode 100644 index 000000000000..872ad6231e9c --- /dev/null +++ b/website/pages/yo/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## Overview + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
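To make the receipt and RAV flow above concrete, here is a purely illustrative TypeScript sketch of the accounting involved. The field names and shapes are assumptions for the example and do not reflect the actual `tap_core` data structures or signatures.

```ts
// Illustrative only: models how many small receipts collapse into a single RAV.
interface Receipt {
  allocationId: string
  value: bigint // fee for a single query
}

interface Rav {
  allocationId: string
  valueAggregate: bigint // running total of every receipt aggregated so far
}

// Each aggregation round carries the previous RAV's total forward, so a new RAV
// always has a value greater than or equal to the one it replaces.
function aggregate(previous: Rav | null, pending: Receipt[], allocationId: string): Rav {
  const base = previous ? previous.valueAggregate : 0n
  const added = pending.reduce((sum, receipt) => sum + receipt.value, 0n)
  return { allocationId, valueAggregate: base + added }
}

const round1 = aggregate(null, [{ allocationId: 'alloc-1', value: 100n }, { allocationId: 'alloc-1', value: 250n }], 'alloc-1')
const round2 = aggregate(round1, [{ allocationId: 'alloc-1', value: 50n }], 'alloc-1')
// round1.valueAggregate is 350n and round2.valueAggregate is 400n; only the latest
// RAV needs to be redeemed on-chain once the allocation is closed.
```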
+ +## Blockchain Addresses + +### Contracts + +| Contract | Arbitrum Sepolia (421614) | Arbitrum Mainnet (42161) | +| ------------------- | -------------------------------------------- | -------------------------------------------- | +| TAP Verifier | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` | +| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` | +| Escrow | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` | + +### Gateway + +| Component | Edge and Node Mainnet (Arbitrum Sepolia) | Edge and Node Testnet (Aribtrum Mainnet) | +| ---------- | --------------------------------------------- | --------------------------------------------- | +| Sender | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467` | `0xC3dDf37906724732FfD748057FEBe23379b0710D` | +| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` | +| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` | + +### Requirements + +In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`. + +- [Graph TAP Aribtrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD) +- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1) + +> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployement. As a result, you have to index it manually. + +## Migration Guide + +### Software versions + +| Component | Version | Image Link | +| --------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- | +| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) | +| indexer-agent | PR #995 | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80) | +| tap-agent | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6) | + +### Steps + +1. **Indexer Agent** + + - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components). + - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs. + +2. **Indexer Service** + + - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + - Like the older version, you can scale Indexer Service horizontally easily. It is still stateless. + +3. **TAP Agent** + + - Run _one_ single instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It's recommend that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs). + +4. 
**Configure Indexer Service and TAP Agent** + + Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`. + + Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml) + +For minimal configuration, use the following template: + +```bash +# You will have to change *all* the values below to match your setup. +# +# Some of the config below are global graph network values, which you can find here: +# +# +# Pro tip: if you need to load some values from the environment into this config, you +# can overwrite with environment variables. For example, the following can be replaced +# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`: +# +# [database] +# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0" + +[indexer] +indexer_address = "0x1111111111111111111111111111111111111111" +operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane" + +[database] +# The URL of the Postgres database used for the indexer components. The same database +# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create +# the necessary tables. +postgres_url = "postgres://postgres@postgres:5432/postgres" + +[graph_node] +# URL to your graph-node's query endpoint +query_url = "" +# URL to your graph-node's status endpoint +status_url = "" + +[subgraphs.network] +# Query URL for the Graph Network subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[subgraphs.escrow] +# Query URL for the Escrow subgraph. +query_url = "" +# Optional, deployment to look for in the local `graph-node`, if locally indexed. +# Locally indexing the subgraph is recommended. +# NOTE: Use `query_url` or `deployment_id` only +deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + +[blockchain] +# The chain ID of the network that the graph network is running on +chain_id = 1337 +# Contract address of TAP's receipt aggregate voucher (RAV) verifier. +receipts_verifier_address = "0x2222222222222222222222222222222222222222" + +######################################## +# Specific configurations to tap-agent # +######################################## +[tap] +# This is the amount of fees you are willing to risk at any given time. For ex. +# if the sender stops supplying RAVs for long enough and the fees exceed this +# amount, the indexer-service will stop accepting queries from the sender +# until the fees are aggregated. +# NOTE: Use strings for decimal values to prevent rounding errors +# e.g: +# max_amount_willing_to_lose_grt = "0.1" +max_amount_willing_to_lose_grt = 20 + +[tap.sender_aggregator_endpoints] +# Key-Value of all senders and their aggregator endpoints +# This one below is for the E&N testnet gateway for example. +0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com" +``` + +Notes: + +- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway). 
+- Values for `blockchain.receipts_verifier_address` must be used accordingly to the [Blockchain addresses section](/tap/#contracts) using the appropriate chain id. + +**Log Level** + +- You can set the log level by using the `RUST_LOG` environment variable. +- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`. + +## Monitoring + +### Metrics + +All components expose the port 7300 to be queried by prometheus. + +### Grafana Dashboard + +You can download [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import. + +### Launchpad + +Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer) diff --git a/website/pages/zh/about.mdx b/website/pages/zh/about.mdx index c7f09a215cfd..d057befe67cf 100644 --- a/website/pages/zh/about.mdx +++ b/website/pages/zh/about.mdx @@ -2,46 +2,66 @@ title: 关于 Graph --- -本页将解释什么是 Graph,以及你将如何开始。 - ## 什么是Graph? -Graph 是一个去中心化的协议,用于索引和查询区块链的数据。 它使查询那些难以直接查询的数据成为可能。 +The Graph is a powerful decentralized protocol that enables seamless querying and indexing of blockchain data. It simplifies the complex process of querying blockchain data, making dapp development faster and easier. + +## Understanding the Basics + +Projects with complex smart contracts such as [Uniswap](https://uniswap.org/) and NFTs initiatives like [Bored Ape Yacht Club](https://boredapeyachtclub.com/) store data on the Ethereum blockchain, making it very difficult to read anything other than basic data directly from the blockchain. + +### Challenges Without The Graph + +In the case of the example listed above, Bored Ape Yacht Club, you can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code). You can read the owner of a certain Ape, read the content URI of an Ape based on their ID, or read the total supply. + +- This can be done because these read operations are programmed directly into the smart contract itself. However, more advanced, specific, and real-world queries and operations like aggregation, search, relationships, and non-trivial filtering, **are not possible**. + +- For instance, if you want to inquire about Apes owned by a specific address and refine your search based on a particular characteristic, you would not be able to obtain that information by directly interacting with the contract itself. + +- To get more data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. + +### Why is this a problem? + +It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. + +Alternatively, you have the option to set up your own server, process the transactions, store them in a database, and create an API endpoint to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization. + +Blockchain properties, such as finality, chain reorganizations, and uncled blocks, add complexity to the process, making it time-consuming and conceptually challenging to retrieve accurate query results from blockchain data. 
-像 [Uniswap](https://uniswap.org/)这样具有复杂智能合约的项目,以及像 [Bored Ape Yacht Club](https://boredapeyachtclub.com/) 这样的 NFTs 倡议,都在以太坊区块链上存储数据,因此,除了直接从区块链上读取基本数据外,真的很难。 +## The Graph Provides a Solution -In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself. +The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. -To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions. +Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process. -你也可以建立你自己的服务器,在那里处理交易,把它们保存到数据库,并在上面建立一个 API 终端,以便查询数据。 然而,这种选择是[资源密集型的](/network/benefits/),需要维护,会出现单点故障,并破坏了去中心化化所需的重要安全属性。 +### How The Graph Functions -**为区块链数据编制索引真的非常非常难。** +Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL. -Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data. +#### Specifics -The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node). +- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph. -## Graph 是如何工作的 +- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database. -Graph基于子图描述(称为子图清单)学习如何索引以太坊数据。子图描述定义了子图感兴趣的智能合约、要注意的合约中的事件,以及如何将事件数据映射到Graph将存储在其数据库中的数据。 +- When creating a subgraph, you need to write a subgraph manifest. 
-一旦编写了`子图清单`,就可以使用Graph CLI将定义存储在IPFS中,并告诉索引人开始为该子图的数据编制索引。 +- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph. -此图提供了部署子图清单后用于处理以太坊交易的数据流的更多细节 +The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions. ![一图解释Graph如何使用Graph节点向数据消费者提供查询的图形](/img/graph-dataflow.png) 流程遵循这些步骤: -1. 一个去中心化的应用程序通过智能合约上的交易向以太坊添加数据。 -2. 智能合约在处理交易时,会发出一个或多个事件。 -3. Graph 节点不断扫描以太坊的新区块和它们可能包含的子图的数据。 -4. Graph 节点在这些区块中为你的子图找到以太坊事件并运行你提供的映射处理程序。 映射是一个 WASM 模块,它创建或更新 Graph 节点存储的数据实体,以响应以太坊事件。 -5. 去中心化的应用程序使用Graph节点的[GraphQL 端点](https://graphql.org/learn/),从区块链的索引中查询 Graph 节点的数据。 Graph 节点反过来将 GraphQL 查询转化为对其底层数据存储的查询,以便利用存储的索引功能来获取这些数据。 去中心化的应用程序在一个丰富的用户界面中为终端用户显示这些数据,他们用这些数据在以太坊上发行新的交易。 就这样周而复始。 +1. 一个去中心化的应用程序通过智能合约上的交易向以太坊添加数据。 +2. 智能合约在处理交易时,会发出一个或多个事件。 +3. Graph 节点不断扫描以太坊的新区块和它们可能包含的子图的数据。 +4. Graph 节点在这些区块中为你的子图找到以太坊事件并运行你提供的映射处理程序。 映射是一个 WASM 模块,它创建或更新 Graph 节点存储的数据实体,以响应以太坊事件。 +5. 去中心化的应用程序使用Graph节点的[GraphQL 端点](https://graphql.org/learn/),从区块链的索引中查询 Graph 节点的数据。 Graph 节点反过来将 GraphQL 查询转化为对其底层数据存储的查询,以便利用存储的索引功能来获取这些数据。 去中心化的应用程序在一个丰富的用户界面中为终端用户显示这些数据,他们用这些数据在以太坊上发行新的交易。 就这样周而复始。 ## 下一步 -The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds. +The following sections provide a more in-depth look at subgraphs, their deployment and data querying. -Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL. +Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data. diff --git a/website/pages/zh/arbitrum/arbitrum-faq.mdx b/website/pages/zh/arbitrum/arbitrum-faq.mdx index afdba606c23e..6ae6a6ccd818 100644 --- a/website/pages/zh/arbitrum/arbitrum-faq.mdx +++ b/website/pages/zh/arbitrum/arbitrum-faq.mdx @@ -4,9 +4,9 @@ title: Arbitrum网络常见问题解答 如果您想跳到Arbitrum计费常见问题解答,请单击[here](#billing-on-arbitrum-faqs)。 -## 为什么Graph实施L2解决方案? +## Why did The Graph implement an L2 Solution? -通过缩放L2上的Graph,网络参与者可以预期: +By scaling The Graph on L2, network participants can now benefit from: - 燃气费节省26倍以上 @@ -14,7 +14,7 @@ title: Arbitrum网络常见问题解答 - 从以太坊继承的安全性 -将协议智能合约扩展到L2允许网络参与者更频繁地交互,同时降低了gas费用成本。例如,索引人可以打开和关闭已有分工,以更高的频率索引更多的子图,开发人员可以更轻松地部署和升级子图,委托人可以更频繁委托GRT,策展人可以添加或删除大量子图的信号,这些操作以前被认为成本太高,无法频繁执行。 +Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas. 去年,The Graph社区在[GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) 讨论的结果之后,决定推进Arbitrum。 @@ -41,27 +41,21 @@ Once you have GRT on Arbitrum, you can add it to your billing balance. ## 作为子图开发人员、数据消费者、索引人、策展人或授权者,我现在需要做什么? 
-There is no immediate action required, however, network participants are encouraged to begin moving to Arbitrum to take advantage of the benefits of L2. +Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) for additional support. -核心开发团队正在努力创建迁移工具,这将大大简化将委托、策展和子图移动到Arbitrum的过程。网络参与者预计2023年4月将提供迁移工具。 +All indexing rewards are now entirely on Arbitrum. -截至2023年4月10日,所有索引奖励的5%都在Arbitrum上铸造。随着网络参与度的增加,以及理事会的批准,索引奖励将逐渐从以太坊转移到Arbitrum,最终完全转移到Arbitrum。 - -## 如果我想参加L2上的Graph网络,该怎么办? - -Please help [test the network](https://testnet.thegraph.com/explorer) on L2 and report feedback about your experience in [Discord](https://discord.gg/graphprotocol). - -## 将网络扩展到L2是否存在任何风险? +## Were there any risks associated with scaling the network to L2? All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf). 所有事项已经经过了彻底测试,并制定了应急计划,以确保安全和无缝过渡。详细信息可以在这里 [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20)找到。 -## 以太坊上现有的子图会继续工作吗? +## Are existing subgraphs on Ethereum working? -是的,Graph网络合约将在以太坊和Arbitrum上并行运行,直到稍后完全迁移到Arbitrum。 +All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly. -## GRT会在Arbitrum上部署新的智能合约吗? +## Does GRT have a new smart contract deployed on Arbitrum? 是的,GRT在Arbitrum上有一个额外的智能合约[smart contract on Arbitrum](https://arbiscan.io/address/0x9623063377ad1b27544c965ccd7342f7ea7e88c7)。然而,以太坊主网上的GRT合约 [GRT contract](https://etherscan.io/token/0xc944e90c64b2c07662a292be6244bdf05cda44a7)将继续保持运营。 diff --git a/website/pages/zh/arbitrum/l2-transfer-tools-guide.mdx b/website/pages/zh/arbitrum/l2-transfer-tools-guide.mdx index bfdbc9013821..71601a39c69d 100644 --- a/website/pages/zh/arbitrum/l2-transfer-tools-guide.mdx +++ b/website/pages/zh/arbitrum/l2-transfer-tools-guide.mdx @@ -2,7 +2,7 @@ title: L2转移工具指南 --- -The Graph 使迁移到 Arbitrum One(L2) 上变得非常容易。对于每个协议参与者,都有一组 L2 转账工具,使所有网络参与者无缝地迁移到 L2。根据你要转移的内容,这些工具会要求你按照特定的步骤操作。 +The Graph has made it easy to move to L2 on Arbitrum One. For each protocol participant, there are a set of L2 Transfer Tools to make transferring to L2 seamless for all network participants. These tools will require you to follow a specific set of steps depending on what you are transferring. 关于这些工具的一些常见问题在 L2 Transfer Tools FAQ(/arbitrum/l2-transfer-tools-faq) 中有详细解答。FAQ 中深入解释了如何使用这些工具、它们的工作原理以及在使用过程中需要注意的事项。 diff --git a/website/pages/zh/billing.mdx b/website/pages/zh/billing.mdx index 894302b4aa9b..8d63ddec5ba1 100644 --- a/website/pages/zh/billing.mdx +++ b/website/pages/zh/billing.mdx @@ -14,7 +14,7 @@ There are two plans to use when querying subgraphs on The Graph Network. ## Query Payments with credit card -- To set up billing with credit/debit cards, users will access Subgraph Studio (https://thegraph.com/studio/) +- To set up billing with credit/debit cards, users should access Subgraph Studio (https://thegraph.com/studio/) 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. 单击页面右上角的“Connect Wallet”(连接钱包)按钮。您将被重定向到钱包选择页面。选择您的钱包,然后单击“Connect”(连接)。 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. 
Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step. @@ -69,7 +69,7 @@ Once you bridge GRT, you can add it to your billing balance. 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/). 2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". -3. Click the "Manage"" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. +3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear. 4. Enter the amount of GRT you would like to withdraw. 5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet. 6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet. @@ -83,7 +83,7 @@ Once you bridge GRT, you can add it to your billing balance. - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page. 5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network. 6. Select the number of months you would like to prepay. - - Paying in advance does not committing you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. + - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time. 7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable. 8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from you wallet. - If you are prepaying for multiple months, you must allow access to the amount that corresponds with that amount. This interaction will not cost any gas. 8. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs. @@ -127,7 +127,7 @@ This will be a step by step guide for purchasing GRT on Binance. 7. Review your purchase and click "Buy GRT". 8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet. 9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/). - - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawel whitelist. + - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist. - Click on the "wallet" button, click withdraw, and select GRT. - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to. - Click "Continue" and confirm your transaction. @@ -198,7 +198,7 @@ You can learn more about getting ETH on Binance [here](https://www.binance.com/e ### How many queries will I need? -You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdrawal GRT from your account at any time. +You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time. 
We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening. @@ -208,6 +208,6 @@ Of course, both new and existing users can reach out to Edge & Node's BD team fo Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161). -### What happens when my billing balance runs? Will I get a warning? +### What happens when my billing balance runs out? Will I get a warning? You will receive several email notifications before your billing balance runs out. diff --git a/website/pages/zh/chain-integration-overview.mdx b/website/pages/zh/chain-integration-overview.mdx index d468efc7653f..e8ee7d47ad02 100644 --- a/website/pages/zh/chain-integration-overview.mdx +++ b/website/pages/zh/chain-integration-overview.mdx @@ -6,12 +6,12 @@ title: 链集成过程概述 ## 阶段1:技术集成 -- 团队致力于进行Graph Node和非EVM基础链的Firehose技术集成。[详细信息请参阅此处](/new-chain-integration/)。 +- Please visit [New Chain Integration](/new-chain-integration) for information on `graph-node` support for new chains. - 团队通过在[here](https://forum.thegraph.com/c/governance-gips/new-chain-support/71)(治理与GIPs下的新数据源子类别)创建一个论坛帖子来启动协议集成过程。强制使用默认的论坛模板。 ## 阶段2:集成验证 -- 团队与核心开发者、Graph Foundation以及GUI和网络网关的运营者合作,例如[Subgraph Studio](https://thegraph.com/studio/),以确保顺利的集成过程。这包括提供必要的后端基础设施,如集成链的JSON RPC或Firehose端点。希望避免自行托管这种基础设施的团队可以利用The Graph节点运营者(Indexers)社区来实现,而Foundation可以提供帮助。 +- Teams collaborate with core developers, Graph Foundation and operators of GUIs and network gateways, such as [Subgraph Studio](https://thegraph.com/studio/), to ensure a smooth integration process. This involves providing the necessary backend infrastructure, such as the integrating chain's JSON-RPC, Firehose or Substreams endpoints. Teams wanting to avoid self-hosting such infrastructure can leverage The Graph's community of node operators (Indexers) to do so, which the Foundation can help with. - Graph索引人在The Graph的测试网上测试集成。 - 核心开发者和索引人监控稳定性、性能和数据确定性。 @@ -38,7 +38,7 @@ title: 链集成过程概述 这只会影响 Substreams 驱动的子图上的索引奖励的协议支持。新的 Firehose 实现需要在测试网上进行测试,遵循了本 GIP 中第二阶段所概述的方法论。同样地,假设实现是高性能且可靠的,那么需要在 [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) 上提出 PR(`Substreams 数据源` 子图特性),以及一个新的 GIP 来支持索引奖励的协议。任何人都可以创建这个 PR 和 GIP;基金会将协助获得理事会的批准。 -### 3. 这个过程需要多长时间? +### 3. How much time will the process of reaching full protocol support take? 主网上线预计还有数周时间,具体取决于集成开发的时间、是否需要额外的研究、测试和漏洞修复,以及始终如一地需要社区反馈的治理过程的时间。 @@ -46,4 +46,4 @@ title: 链集成过程概述 ### 4. 如何处理优先事项? -Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. This is especially true for chains previously supported on the [hosted service](https://thegraph.com/hosted-service) or those relying on already tested stacks. 
+Similar to #3, it will depend on overall readiness and involved stakeholders' bandwidth. For example, a new chain with a brand new Firehose implementation may take longer than integrations that have already been battle-tested or are farther along in the governance process. diff --git a/website/pages/zh/cookbook/arweave.mdx b/website/pages/zh/cookbook/arweave.mdx index db502118f96b..7833811f469c 100644 --- a/website/pages/zh/cookbook/arweave.mdx +++ b/website/pages/zh/cookbook/arweave.mdx @@ -105,7 +105,7 @@ Arweave 数据源支持两种类型的处理程序: 处理事件的处理程序是用 [AssemblyScript](https://www.assemblyscript.org/) 编写的。 -Arweave 索引向 [AssemblyScriptAPI](/developing/assemblyscript-api/) 引入了特定于 Arweave 的数据类型。 +Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/zh/cookbook/base-testnet.mdx b/website/pages/zh/cookbook/base-testnet.mdx index 0158cdd95f3f..26570ef005a9 100644 --- a/website/pages/zh/cookbook/base-testnet.mdx +++ b/website/pages/zh/cookbook/base-testnet.mdx @@ -63,10 +63,10 @@ graph init --studio The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. +- 模式(schema.graphql)--GraphQL 模式定义从子图中检索到的数据。 - AssemblyScript 映射(mapping.ts)--将数据源中的数据转换为模式中定义的实体的代码。 -If you want to index additional data, you will need extend the manifest, schema and mappings. +If you want to index additional data, you will need to extend the manifest, schema and mappings. For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). diff --git a/website/pages/zh/cookbook/cosmos.mdx b/website/pages/zh/cookbook/cosmos.mdx index 958ecc21706a..532663706111 100644 --- a/website/pages/zh/cookbook/cosmos.mdx +++ b/website/pages/zh/cookbook/cosmos.mdx @@ -85,7 +85,7 @@ Schema definition describes the structure of the resulting subgraph database and 处理事件的处理程序是用 [AssemblyScript](https://www.assemblyscript.org/) 编写的。 -Cosmos 索引向 [AssemblyScriptAPI ](/developing/assemblyscript-api/)引入了特定于 Cosmos 的数据类型。 +Cosmos indexing introduces Cosmos-specific data types to the [AssemblyScript API](/developing/graph-ts/api/). ```tsx class Block { diff --git a/website/pages/zh/cookbook/grafting.mdx b/website/pages/zh/cookbook/grafting.mdx index c570b3abbd93..c98e74e7d670 100644 --- a/website/pages/zh/cookbook/grafting.mdx +++ b/website/pages/zh/cookbook/grafting.mdx @@ -22,7 +22,7 @@ title: 用嫁接替换合约并保持合约的历史 - [嫁接](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs) -在本教程中,我们将介绍一个基本用例。我们将用一个相同的合约(用一个新的地址,但相同的代码) 替换现有的合约。然后,将现有的子图移植到跟踪新合约的基本子图上。 +In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract. ## Important Note on Grafting When Upgrading to the Network @@ -30,7 +30,7 @@ title: 用嫁接替换合约并保持合约的历史 ### Why Is This Important? 
-Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. While this is an effective way to preserve data and save time on indexing, grafting may introduce complexities and potential issues when migrating from a hosted environment to the decentralized network. It is not possible to graft a subgraph from The Graph Network back to the hosted service or Subgraph Studio. +Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio. ### Best Practices @@ -80,7 +80,7 @@ dataSources: ``` - `Lock`数据源是我们在编译和部署合约时获得的abi和合约地址 -- The network should correspond to a indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` +- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia` - `mapping`部分定义了感兴趣的触发器以及应该响应这些触发器而运行的函数。在这种情况下,我们正在监听`Withdrawl`事件,并在发出该事件时调用`处理Withdrawal`函数。 ## 嫁接清单定义 @@ -191,7 +191,7 @@ Congrats! You have successfully grafted a subgraph onto another subgraph. ## 其他资源 -如果你想要更多的嫁接经验,这里有一些流行合约的例子: +If you want more experience with grafting, here are a few examples for popular contracts: - [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml) - [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml) diff --git a/website/pages/zh/cookbook/near.mdx b/website/pages/zh/cookbook/near.mdx index 15d35a251427..f5092731660d 100644 --- a/website/pages/zh/cookbook/near.mdx +++ b/website/pages/zh/cookbook/near.mdx @@ -37,7 +37,7 @@ NEAR 子图开发需要`0.23.0`以上版本的`graph-cli`,以及 `0.23.0`以 **schema.graphql:** 一个模式文件,定义子图存储的数据以及如何通过 GraphQL 查询数据。NEAR 子图的要求已经在[现有的文档](/developing/creating-a-subgraph#the-graphql-schema)中介绍了。 -**AssemblyScript 映射**: 从事件数据转换为模式中定义的实体的[ AssemblyScript 代码](/developing/assemblyscript-api)。NEAR 支持引入了特定于 NEAR 的数据类型和新的 JSON 解析功能。 +**AssemblyScript Mappings:** [AssemblyScript code](/developing/graph-ts/api) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality. 在子图开发过程中,有两个关键命令: @@ -98,7 +98,7 @@ NEAR 数据源支持两种类型的处理程序: 处理事件的处理程序是用 [AssemblyScript](https://www.assemblyscript.org/) 编写的。 -NEAR 索引向 [AssemblyScriptAPI](/developing/assemblyscript-api) 引入了特定于 NEAR 的数据类型。 +NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/developing/graph-ts/api). ```typescript @@ -165,9 +165,9 @@ class ReceiptWithOutcome { - 块处理程序将收到 `Block` - 收据处理程序将收到 `ReceiptWithOutcome` -否则,在映射执行期间,NEAR 子图开发人员可以使用 [AssemblyScriptAPI](/developing/assemblyscript-api) 的其余部分。 +Otherwise, the rest of the [AssemblyScript API](/developing/graph-ts/api) is available to NEAR subgraph developers during mapping execution. -这包括一个新的 JSON 解析函数—— NEAR 上的日志经常作为带字符串的 JSONs 发出。作为[JSON API](/developing/assemblyscript-api#json-api)的一部分,可以使用一个新的 `json.fromString(...)`函数来允许开发人员轻松地处理这些日志。 +This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/developing/graph-ts/api#json-api) to allow developers to easily process these logs. 
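As a rough sketch of how that JSON workflow is typically used, the handler below parses stringified JSON logs from a receipt's execution outcome. The `Deposit` entity and the `amount` field are hypothetical and exist only for this example — adapt them to whatever your contract actually emits:

```typescript
import { near, json, JSONValueKind } from '@graphprotocol/graph-ts'
import { Deposit } from '../generated/schema' // hypothetical entity for this sketch

export function handleReceipt(receipt: near.ReceiptWithOutcome): void {
  const logs = receipt.outcome.logs
  for (let i = 0; i < logs.length; i++) {
    // NEAR logs are plain strings; many contracts emit stringified JSON.
    // json.try_fromString can be used instead if some logs may not be valid JSON.
    const value = json.fromString(logs[i])
    if (value.kind != JSONValueKind.OBJECT) {
      continue
    }

    const obj = value.toObject()
    const amount = obj.get('amount')
    if (amount == null || amount.kind != JSONValueKind.STRING) {
      continue
    }

    // Use the receipt id plus the log index as a unique entity ID.
    const deposit = new Deposit(receipt.receipt.id.toBase58() + '-' + i.toString())
    deposit.amount = amount.toString()
    deposit.save()
  }
}
```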
## 部署 NEAR 子图 diff --git a/website/pages/zh/cookbook/subgraph-debug-forking.mdx b/website/pages/zh/cookbook/subgraph-debug-forking.mdx index 9407cd0d670a..96fdd046a87f 100644 --- a/website/pages/zh/cookbook/subgraph-debug-forking.mdx +++ b/website/pages/zh/cookbook/subgraph-debug-forking.mdx @@ -6,7 +6,7 @@ As with many systems processing large amounts of data, The Graph's Indexers (Gra ## 首先,让我们来看什么是子图分叉 -**子图分叉** 是从*另一个* 子图的存储(通常是远程存储)中缓慢获取实体的过程。 +**子图分叉** 是从_另一个_ 子图的存储(通常是远程存储)中缓慢获取实体的过程。 在调试时,**subgraph forking** 允许您在固定的区块 _X_ 中调试失败的子图,而无需等待区块同步 _X_。 diff --git a/website/pages/zh/cookbook/subgraph-uncrashable.mdx b/website/pages/zh/cookbook/subgraph-uncrashable.mdx index b0c1d6607410..5726f6409b80 100644 --- a/website/pages/zh/cookbook/subgraph-uncrashable.mdx +++ b/website/pages/zh/cookbook/subgraph-uncrashable.mdx @@ -18,7 +18,7 @@ title: 安全子图代码生成器 - 该框架还包括一种方法(通过配置文件) 为实体变量组创建自定义但安全的 setter 函数。这样,用户就不可能加载/使用过时的图形实体,也不可能忘记保存或设置函数所需的变量。 -- 警告日志被记录为日志,指示哪里存在子图逻辑中断以帮助修补问题,从而确保数据的准确性。这些日志可以在 Graph 的托管服务中的“ Logs”部分中查看。 +- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy. 使用 Graph CLI codegen 命令,Subgraph Uncrashable 可以作为一个可选标志运行。 diff --git a/website/pages/zh/cookbook/upgrading-a-subgraph.mdx b/website/pages/zh/cookbook/upgrading-a-subgraph.mdx index 2f6f04d065e4..6f9ef2dd503e 100644 --- a/website/pages/zh/cookbook/upgrading-a-subgraph.mdx +++ b/website/pages/zh/cookbook/upgrading-a-subgraph.mdx @@ -136,7 +136,7 @@ You can update the metadata of your subgraphs without having to publish a new ve ## 弃用Graph网络上的子图 -按照这里的步骤废弃您的子图并将其从Graph 网络中删除。 +Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. ## 在Graph网络上查询子图 + 计费 @@ -151,6 +151,6 @@ On The Graph Network, query fees have to be paid as a core part of the protocol' - [The Graph Network Contracts](https://github.com/graphprotocol/contracts) -- 策展合约 - GNS - 包裹的底层合约 - 地址 - 0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538\` +- 策展合约 - GNS 包裹的底层合约 + - 地址 - 0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538\` - [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/zh/deploying/multiple-networks.mdx b/website/pages/zh/deploying/multiple-networks.mdx new file mode 100644 index 000000000000..77292d1df580 --- /dev/null +++ b/website/pages/zh/deploying/multiple-networks.mdx @@ -0,0 +1,241 @@ +--- +title: Deploying a Subgraph to Multiple Networks +--- + +This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph). + +## 将子图部署到多个网络 + +在某些情况下,您需要将相同的子图部署到多个网络,而不复制其所有代码。随之而来的主要挑战是这些网络上的合约地址不同。 + +### Using `graph-cli` + +Both `graph build` (since `v0.29.0`) and `graph deploy` (since `v0.32.0`) accept two new options: + +```sh +Options: + + ... + --network Network configuration to use from the networks config file + --network-file Networks config file path (default: "./networks.json") +``` + +You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development. + +> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. 
You will then be able to update existing or add additional networks. + +If you don't have a `networks.json` file, you'll need to manually create one with the following structure: + +```json +{ + "network1": { // the network name + "dataSource1": { // the dataSource name + "address": "0xabc...", // the contract address (optional) + "startBlock": 123456 // the startBlock (optional) + }, + "dataSource2": { + "address": "0x123...", + "startBlock": 123444 + } + }, + "network2": { + "dataSource1": { + "address": "0x987...", + "startBlock": 123 + }, + "dataSource2": { + "address": "0xxyz..", + "startBlock": 456 + } + }, + ... +} +``` + +> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option. + +Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + source: + address: '0x123...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +您的网络配置文件应该是这样的: + +```json +{ + "mainnet": { + "Gravity": { + "address": "0x123..." + } + }, + "sepolia": { + "Gravity": { + "address": "0xabc..." + } + } +} +``` + +现在我们可以运行以下命令之一: + +```sh +# Using default networks.json file +yarn build --network sepolia + +# Using custom named file +yarn build --network sepolia --network-file path/to/config +``` + +The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: sepolia + source: + address: '0xabc...' + abi: Gravity + mapping: + kind: ethereum/events +``` + +Now you are ready to `yarn deploy`. + +> Note: As mentioned earlier, since `graph-cli 0.32.0` you can directly run `yarn deploy` with the `--network` option: + +```sh +# Using default networks.json file +yarn deploy --network sepolia + +# Using custom named file +yarn deploy --network sepolia --network-file path/to/config +``` + +### 使用 subgraph.yaml 模板 + +One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/). + +To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network: + +```json +{ + "network": "mainnet", + "address": "0x123..." +} +``` + +和 + +```json +{ + "network": "sepolia", + "address": "0xabc..." +} +``` + +Along with that, you would substitute the network name and addresses in the manifest with variable placeholders `{{network}}` and `{{address}}` and rename the manifest to e.g. `subgraph.template.yaml`: + +```yaml +# ... +dataSources: + - kind: ethereum/contract + name: Gravity + network: mainnet + network: {{network}} + source: + address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' + address: '{{address}}' + abi: Gravity + mapping: + kind: ethereum/events +``` + +In order to generate a manifest to either network, you could add two additional commands to `package.json` along with a dependency on `mustache`: + +```json +{ + ... 
+ "scripts": { + ... + "prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml", + "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml" + }, + "devDependencies": { + ... + "mustache": "^3.1.0" + } +} +``` + +To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands: + +```sh +# Mainnet: +yarn prepare:mainnet && yarn deploy + +# Sepolia: +yarn prepare:sepolia && yarn deploy +``` + +A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759). + +**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well. + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. + +## 子图工作室子图封存策略 + +A subgraph version in Studio is archived if and only if it meets the following criteria: + +- The version is not published to the network (or pending publish) +- The version was created 45 or more days ago +- The subgraph hasn't been queried in 30 days + +In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived. + +受此策略影响的每个子图都有一个选项,可以回复有问题的版本。 + +## 检查子图状态 + +如果子图成功同步,这是一个好信号,表明它将永远运行良好。然而,网络上的新触发器可能会导致子图遇到未经测试的错误条件,或者由于性能问题或节点操作符的问题,子图开始落后。 + +Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph: + +```graphql +{ + indexingStatusForCurrentVersion(subgraphName: "org/subgraph") { + synced + health + fatalError { + message + block { + number + hash + } + handler + } + chains { + chainHeadBlock { + number + } + latestBlock { + number + } + } + } +} +``` + +This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error. diff --git a/website/pages/zh/developing/creating-a-subgraph.mdx b/website/pages/zh/developing/creating-a-subgraph.mdx index f73ffce3a3e9..755ad7f01a3c 100644 --- a/website/pages/zh/developing/creating-a-subgraph.mdx +++ b/website/pages/zh/developing/creating-a-subgraph.mdx @@ -2,45 +2,47 @@ title: 创建子图 --- -子图从区块链中提取数据,对其进行处理并存储,以便通过 GraphQL 轻松查询。 +This detailed guide provides instructions to successfully create a subgraph. 
-![定义子图](/img/defining-a-subgraph.png) +A subgraph extracts data from a blockchain, processes it, and stores it for efficient querying via GraphQL. -子图定义由几个文件组成: +![定义子图](/img/defining-a-subgraph.png) -- `subgraph.yaml`: 包含子图清单的 YAML 文件 +> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. -- `schema.graphql`: 一个 GraphQL 模式文件,它定义了为您的子图存储哪些数据,以及如何通过 GraphQL 查询这些数据 +## 开始 -- `AssemblyScript映射`: 将事件数据转换为模式中定义的实体(例如本教程中的`mapping.ts`)的 [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) 代码 +### 安装 Graph CLI -> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network). +To build and deploy a subgraph, you will need the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). -Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph. +The Graph CLI is written in TypeScript, and you must have `node` and either `npm` or `yarn` installed to use it. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. -## 安装 Graph CLI +在本地计算机上,运行以下命令之一: -Graph CLI 是使用 JavaScript 编写的,您需要安装`yarn`或 `npm`才能使用它;以下教程中假设您已经安装了 yarn。 +#### Using [npm](https://www.npmjs.com/) -一旦您安装了`yarn`,可以通过运行以下命令安装 Graph CLI +```bash +npm install -g @graphprotocol/graph-cli@latest +``` -**用 yarn 安装:** +#### Using [yarn](https://yarnpkg.com/) ```bash yarn global add @graphprotocol/graph-cli ``` -**用 npm 安装:** +- The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. -```bash -npm install -g @graphprotocol/graph-cli -``` +- This `graph init` command can also create a subgraph in Subgraph Studio by passing in `--product subgraph-studio`. + +- If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started. -Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started. +## Create a subgraph -## 基于现有合约 +### From an existing contract -以下命令创建一个索引现有合约的所有事件的子图。 它尝试从 Etherscan 获取合约 ABI 并回退到请求本地文件路径。 如果缺少任何可选参数,它会带您进入交互式表单。 +The following command creates a subgraph that indexes all events of an existing contract: ```sh graph init \ @@ -51,21 +53,29 @@ graph init \ [] ``` -`` 是您在 Subgraph Studio 中的子图 ID,可以在您的子图详细信息页面上找到。 +- The command tries to retrieve the contract ABI from Etherscan. + + - The Graph CLI relies on a public RPC endpoint. 
While occasional failures are expected, retries typically resolve this issue. If failures persist, consider using a local ABI. + +- If any of the optional arguments are missing, it guides you through an interactive form. -## 基于子图示例 +- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page. -`graph init` 支持的第二种模式是从示例子图创建新项目。 以下命令执行此操作: +### From an example subgraph + +The following command initializes a new project from an example subgraph: ```sh -graph init --studio +graph init --studio --from-example=example-subgraph ``` -The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example. +- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. + +- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. -## 将新数据源添加到现有子图 +## Add new `dataSources` to an existing subgraph -从`v0.31.0`开始,`graph cli`支持通过`graph add`命令向现有子图添加新的数据源。 +Since `v0.31.0`, the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command: ```sh graph add
    [] @@ -78,22 +88,45 @@ Options: --network-file Networks config file path (default: "./networks.json") ``` -`add` 命令将从 Etherscan 获取 ABI(除非使用 `--abi` 选项指定 ABI 路径),并创建一个新的 `dataSource` 与 `graph init` 命令创建 `dataSource` `--from-contract` 的方式相同,相应地更新架构和映射。 +### Specifics + +The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is specified with the `--abi` option) and creates a new `dataSource`, similar to how the `graph init` command creates a `dataSource` `--from-contract`, updating the schema and mappings accordingly. This allows you to index implementation contracts from their proxy contracts. + +- `--merge-实体`选项标识开发人员希望如何处理`实体`和`事件`名称冲突: + + - 如果为`true`:新的`数据源`应该使用现有的`事件处理程序`& 和`实体`。 + + - If `false`: a new `entity` & `event` handler should be created with `${dataSourceName}{EventName}`. + +- 合约`地址`将写入相关网络的`networks.json`。 + +> Note: When using the interactive CLI, after successfully running `graph init`, you'll be prompted to add a new `dataSource`. -`--merge-实体`选项标识开发人员希望如何处理`实体`和`事件`名称冲突: +## Components of a subgraph -- 如果为`true`:新的`数据源`应该使用现有的`事件处理程序`& 和`实体`。 -- 如果为`false`:应使用`${dataSourceName}{EventName}`创建新的实体& 和事件处理程序。 +### 子图清单文件 -合约`地址`将写入相关网络的`networks.json`。 +The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query. -> **注意:**使用交互式cli时,在成功运行`graph init`后,将提示您添加新的`dataSource`。 +The **subgraph definition** consists of the following files: -## 子图清单文件 +- `subgraph.yaml`: Contains the subgraph manifest -子图清单 `subgraph.yaml` 定义了您的子图索引的智能合约,这些合约中需要关注的事件,以及如何将事件数据映射到 Graph 节点存储并允许查询的实体。 子图清单的完整规范可以在[这里](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md)找到。 +- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL -对于示例子图,`subgraph.yaml` 的内容是: +- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide) + +A single subgraph can: + +- Index data from multiple smart contracts (but not multiple networks). + +- Index data from IPFS files using File Data Sources. + +- Add an entry for each contract that requires indexing to the `dataSources` array. + +The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md). + +For the example subgraph listed above, `subgraph.yaml` is: ```yaml specVersion: 0.0.4 @@ -180,9 +213,9 @@ dataSources: 区块内数据源的触发器使用以下流程进行排序: -1. 事件和调用触发器首先按区块内的交易索引排序。 -2. 同一交易中的事件和调用触发器使用约定进行排序:首先是事件触发器,然后是调用触发器,每种类型都遵循它们在清单中定义的顺序。 -3. 区块触发器按照它们在清单中定义的顺序,在事件和调用触发器之后运行。 +1. 事件和调用触发器首先按区块内的交易索引排序。 +2. 同一交易中的事件和调用触发器使用约定进行排序:首先是事件触发器,然后是调用触发器,每种类型都遵循它们在清单中定义的顺序。 +3. 区块触发器按照它们在清单中定义的顺序,在事件和调用触发器之后运行。 这些排序规则可能会发生变化。 @@ -305,9 +338,9 @@ Imagine you have a subgraph that needs to make three Ethereum calls to fetch dat Traditionally, these calls might be made sequentially: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. 
Call 3 (Token Holdings): Takes 4 seconds Total time taken = 3 + 2 + 4 = 9 seconds @@ -315,9 +348,9 @@ Total time taken = 3 + 2 + 4 = 9 seconds With this feature, you can declare these calls to be executed in parallel: -1. Call 1 (Transactions): Takes 3 seconds -2. Call 2 (Balance): Takes 2 seconds -3. Call 3 (Token Holdings): Takes 4 seconds +1. Call 1 (Transactions): Takes 3 seconds +2. Call 2 (Balance): Takes 2 seconds +3. Call 3 (Token Holdings): Takes 4 seconds Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call. @@ -325,9 +358,9 @@ Total time taken = max (3, 2, 4) = 4 seconds ### How it Works -1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. -2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. -3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. +1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel. +2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously. +3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing. ### Example Configuration in Subgraph Manifest @@ -347,7 +380,7 @@ calls: Details for the example above: - `global0X128` is the declared `eth_call`. -- The text before colon(`global0X128`) is the label for this `eth_call` which is used when logging errors. +- The text (`global0X128`) is the label for this `eth_call` which is used when logging errors. - The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)` - The `address` and `arguments` can be replaced with variables that will be available when the handler is executed. @@ -360,17 +393,17 @@ calls: ### SpecVersion Releases -| 版本 | Release 说明 | -| :-: | --- | -| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | -| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | -| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | -| 0.0.9 | Supports `endBlock` feature | +| 版本 | Release 说明 | +|:-----:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` | +| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. | +| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs | +| 0.0.9 | Supports `endBlock` feature | | 0.0.8 | Added support for polling [Block Handlers](developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](developing/creating-a-subgraph/#once-filter). | -| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). 
| -| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | -| 0.0.5 | Added support for event handlers having access to transaction receipts. | -| 0.0.4 | Added support for managing subgraph features. | +| 0.0.7 | Added support for [File Data Sources](developing/creating-a-subgraph/#file-data-sources). | +| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. | +| 0.0.5 | Added support for event handlers having access to transaction receipts. | +| 0.0.4 | Added support for managing subgraph features. | ### 获取 ABI @@ -442,16 +475,16 @@ Null value resolved for non-null field 'name' 我们在 GraphQL API 中支持以下标量: -| 类型 | 描述 | -| --- | --- | -| `字节` | 字节数组,表示为十六进制字符串。 通常用于以太坊hash和地址。 | -| `字符串` | `string` 值的标量。 不支持空字符,并会自动进行删除。 | -| `Boolean` | `boolean` 值的标量。 | -| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | -| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | -| `BigInt` | 大整数。 用于以太坊的 `uint32`、`int64`、`uint64`、...、`uint256` 类型。 注意:`uint32`以下的所有类型,例如`int32`、`uint24`或`int8`都表示为`i32`。 | -| `BigDecimal` | `BigDecimal` 表示为有效数字和指数的高精度小数。 指数范围是 -6143 到 +6144。 四舍五入到 34 位有效数字。 | -| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | +| 类型 | 描述 | +| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `字节` | 字节数组,表示为十六进制字符串。 通常用于以太坊hash和地址。 | +| `字符串` | `string` 值的标量。 不支持空字符,并会自动进行删除。 | +| `Boolean` | `boolean` 值的标量。 | +| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. | +| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. | +| `BigInt` | 大整数。 用于以太坊的 `uint32`、`int64`、`uint64`、...、`uint256` 类型。 注意:`uint32`以下的所有类型,例如`int32`、`uint24`或`int8`都表示为`i32`。 | +| `BigDecimal` | `BigDecimal` 表示为有效数字和指数的高精度小数。 指数范围是 -6143 到 +6144。 四舍五入到 34 位有效数字。 | +| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. | #### 枚举类型 @@ -593,7 +626,7 @@ query usersWithOrganizations { #### 向模式添加注释 -As per GraphQL spec, comments can be added above schema entity attributes using the hash symble `#`. This is illustrated in the example below: +As per GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. 
This is illustrated in the example below: ```graphql type MyFirstEntity @entity { @@ -653,33 +686,33 @@ query { 支持的语言词典: -| 代码 | 词典 | -| ------ | ---------- | -| simple | 通用 | -| da | 丹麦语 | -| nl | 荷兰语 | -| en | 英语 | -| fi | 芬兰语 | -| fr | 法语 | -| de | 德语 | -| hu | 匈牙利语 | -| it | 意大利语 | -| no | 挪威语 | -| pt | 葡萄牙语 | +| 代码 | 词典 | +| ------ | ----- | +| simple | 通用 | +| da | 丹麦语 | +| nl | 荷兰语 | +| en | 英语 | +| fi | 芬兰语 | +| fr | 法语 | +| de | 德语 | +| hu | 匈牙利语 | +| it | 意大利语 | +| no | 挪威语 | +| pt | 葡萄牙语 | | ro | 罗马尼亚语 | -| ru | 俄语 | -| es | 西班牙语 | -| sv | 瑞典语 | -| tr | 土耳其语 | +| ru | 俄语 | +| es | 西班牙语 | +| sv | 瑞典语 | +| tr | 土耳其语 | ### 排序算法 支持的排序结果算法: -| 算法 | 描述 | -| ------------- | --------------------------------------------- | +| 算法 | 描述 | +| ------------- | -------------------------- | | rank | 使用全文查询的匹配质量 (0-1) 对结果进行排序。 | -| proximityRank | 与 rank 类似,但也包括匹配的接近程度。 | +| proximityRank | 与 rank 类似,但也包括匹配的接近程度。 | ## 编写映射 @@ -873,7 +906,7 @@ export function handleNewExchange(event: NewExchange): void { ``` > **注意:** 新的数据源只会处理创建它的区块和所有后续区块的调用和事件,而不会处理历史数据,也就是包含在先前区块中的数据。 -> +> > 如果先前的区块包含与新数据源相关的数据,最好通过读取合约的当前状态,并在创建新数据源时创建表示该状态的实体来索引该数据。 ### 数据源背景 @@ -930,7 +963,7 @@ dataSources: ``` > **注意:** 合约创建区块可以在 Etherscan 上快速查找: -> +> > 1. 通过在搜索栏中输入合约地址来搜索合约。 > 2. 单击 `Contract Creator` 部分中的创建交易hash。 > 3. 加载交易详情页面,您将在其中找到该合约的起始区块。 @@ -945,9 +978,9 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde `indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include: -1. `"never"`: No pruning of historical data; retains the entire history. -2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. -3. A specific number: Sets a custom limit on the number of historical blocks to retain. +1. `"never"`: No pruning of historical data; retains the entire history. +2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance. +3. A specific number: Sets a custom limit on the number of historical blocks to retain. ``` indexerHints: @@ -982,29 +1015,6 @@ indexerHints: prune: never ``` -You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health): - -``` -{ - indexingStatuses(subgraphs: ["Qm..."]) { - subgraph - synced - health - chains { - earliestBlock { - number - } - latestBlock { - number - } - chainHeadBlock { number } - } - } -} -``` - -Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned. - ## Event Handlers Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic. 
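To make the paragraph above concrete, a minimal `eventHandlers` entry in `subgraph.yaml` might look like the sketch below. The `Transfer` event signature and the `handleTransfer` mapping function are illustrative placeholders, not part of any specific subgraph in this guide.

```yaml
eventHandlers:
  # The event signature as declared in the contract ABI, and the mapping
  # function (defined in mapping.ts) that runs whenever this event is emitted.
  - event: Transfer(indexed address,indexed address,uint256)
    handler: handleTransfer
```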
@@ -1224,11 +1234,11 @@ eventHandlers: 从 `specVersion` `0.0.4` 开始,子图特征必须使用它们的 `camelCase` 名称,在清单文件顶层的 `features` 部分中显式声明,如下表所列: -| 特征 | 名称 | -| ----------------------------- | ---------------- | -| [非致命错误](#非致命错误) | `nonFatalErrors` | +| 特征 | 名称 | +| ----------------- | ---------------- | +| [非致命错误](#非致命错误) | `nonFatalErrors` | | [全文搜索](#定义全文搜索字段) | `fullTextSearch` | -| [嫁接](#嫁接到现有子图) | `grafting` | +| [嫁接](#嫁接到现有子图) | `grafting` | 例如,如果子图使用 **Full-Text Search** 和 **Non-fatal Errors** 功能,则清单中的 `features` 字段应为: @@ -1355,7 +1365,7 @@ _meta { > **注意:** 在初次升级到The Graph Network时,不建议使用grafting。可以在[这里](/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network)了解更多信息。 -首次部署子图时,它会在相应链的启动区块(或每个数据源定义的 `startBlock` 处)开始索引事件。在某些情况下,可以使用现有子图已经索引的数据并在更晚的区块上开始索引。 这种索引模式称为*Grafting*。 例如,嫁接在开发过程中非常有用,可以快速克服映射中的简单错误,或者在现有子图失败后暂时恢复工作。 +首次部署子图时,它会在相应链的启动区块(或每个数据源定义的 `startBlock` 处)开始索引事件。在某些情况下,可以使用现有子图已经索引的数据并在更晚的区块上开始索引。 这种索引模式称为_Grafting_。 例如,嫁接在开发过程中非常有用,可以快速克服映射中的简单错误,或者在现有子图失败后暂时恢复工作。 当 `subgraph.yaml` 中的子图清单在顶层包含 `graft` 区块时,子图被嫁接到基础子图: @@ -1477,7 +1487,7 @@ The file data source must specifically mention all the entity types which it wil #### 创建新处理程序以处理文件 -This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)). +This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/graph-ts/api/#json-api)). 文件的CID作为可读字符串可通过`数据源访问`,如下所示: @@ -1528,7 +1538,7 @@ For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave b ```typescript import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates' -const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm' +const ipfshash = "QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm" //This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs. export function handleTransfer(event: TransferEvent): void { diff --git a/website/pages/zh/developing/developer-faqs.mdx b/website/pages/zh/developing/developer-faqs.mdx index e223135992e7..7849cefe83d3 100644 --- a/website/pages/zh/developing/developer-faqs.mdx +++ b/website/pages/zh/developing/developer-faqs.mdx @@ -2,72 +2,93 @@ title: 开发者常见问题 --- -## 什么是子图? +This page summarizes some of the most common questions for developers building on The Graph. -子图是基于区块链数据构建的自定义API。子图使用GraphQL查询语言进行查询,并使用Graph CLI部署到Graph节点。一旦部署并发布到Graph的去中心化网络,索引人就会处理子图,并使其可供子图消费者查询。 +## Subgraph Related -## 2. 我可以删除我的子图吗? +### 什么是子图? -子图一旦创建就无法删除。 +A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query. -## 3. 我可以更改我的子图名称吗? +### 2. What is the first step to create a subgraph? -不可以。一旦创建子图,就不能更改名称。 请务必在创建子图之前仔细考虑这一点,以便其他 dapp 可以轻松搜索和识别它。 +To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/). -## 4. 
我可以更改与我的子图关联的 GitHub 账户吗? +### 3. Can I still create a subgraph if my smart contracts don't have events? -不可以。一旦创建了子图,就不能更改关联的 GitHub 账户。 在创建子图之前,请务必仔细考虑这一点。 +It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data. -## 5. 如果我的智能合约没有事件,还能创建子图吗? +If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower. -强烈建议您构建智能合约,以使事件与您有兴趣查询的数据相关联。 子图中的事件处理程序由合约事件触发,是迄今为止检索有用数据的最快方式。 +### 4. 我可以更改与我的子图关联的 GitHub 账户吗? -如果您正在使用的合约不包含事件,您的子图可以使用调用和区块处理程序来触发索引。 因为这样做会严重影响性能,所以不建议。 +No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph. -## 6. 是否可以在多个网络上部署同名的子图? +### 5. How do I update a subgraph on mainnet? -在多个网络的情况下,您将需要不同的名称。 虽然您不能在同一个名称下拥有不同的子图,但有一些方便的方法可以为多个网络提供一个代码库。 请在我们的文档中找到更多相关信息:[重新部署子图](/deploying/deploying-a-subgraph-to-hosted#redeploying-a-subgraph) +You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. -## 7. 模板与数据源有何不同? +### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying? -模板允许您在子图索引时动态创建数据源。 当人们与之交互时,您的合约可能会产生新的合约,并且由于您预先知道这些合同的架构(ABI、事件等),您可以定义您希望如何在模板中索引它们,当这些合约创建您的子图时将通过提供合约地址来创建动态数据源。 +您必须重新部署子图,但如果子图 ID(IPFS hash)没有更改,则不必从头开始同步。 + +### 7. How do I call a contract function or access a public state variable from my subgraph mappings? + +Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). + +### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings? + +Not currently, as mappings are written in AssemblyScript. + +One possible alternative solution to this is to store raw data in entities and perform logic that requires JS libraries on the client. + +### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events? + +在子图中,无论是否跨多个合约,事件始终按照它们在区块中出现的顺序进行处理的。 + +### 10. How are templates different from data sources? + +Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address. 请查看“实例化数据源模板”部分:[数据源模板](/developing/creating-a-subgraph#data-source-templates)。 -## 8. 如何确保我使用最新版本的 graph-node 进行本地部署? +### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`? -您可以运行以下命令: +Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other. -```sh -docker pull graphprotocol/graph-node:latest -``` +You can also use `graph add` command to add a new dataSource. 
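As a rough sketch of that workflow, assuming the default `graph add <address> [<manifest>]` form and the `--merge-entities` option described earlier (the contract address below is a placeholder):

```sh
# Add a second contract to an existing subgraph manifest.
# The ABI is fetched from Etherscan unless a local path is passed with --abi.
graph add 0xYourSecondContractAddress ./subgraph.yaml --merge-entities
```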
-**注意:** docker / docker-compose 将始终使用您第一次运行时提取的任何 graph-node 版本,因此执行此操作非常重要,可以确保您使用的是最新版本的 graph-node。 +### 12. In what order are the event, block, and call handlers triggered for a data source? -## 9. 如何从我的子图映射中调用合约函数,或访问公共状态变量? +Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. -Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state). +When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. -## 10. 是否可以使用 `graph-cli` 中的 `graph init` 和两个合约来设置子图? 还是应该在运行 `graph init` 之后在 `subgraph.yaml` 中手动添加另一个数据源? +### 13. How do I make sure I'm using the latest version of graph-node for my local deployments? -Yes. On `graph init` command itself you can add multiple datasources by entering contracts one after the other. You can also use `graph add` command to add new datasource. +您可以运行以下命令: -## 11. 我想向 GitHub 贡献代码或者添加 issue,在哪里可以找到相关代码? +```sh +docker pull graphprotocol/graph-node:latest +``` -- [图节点](https://github.com/graphprotocol/graph-node) -- [graph-tooling](https://github.com/graphprotocol/graph-tooling) -- [graph-docs](https://github.com/graphprotocol/docs) -- [graph-client](https://github.com/graphprotocol/graph-client) +> Note: docker / docker-compose will always use whatever graph-node version was pulled the first time you ran it, so make sure you're up to date with the latest version of graph-node. -## 12. 在处理事件时,为实体构建“自动生成”id 的推荐方法是什么? +### 14. What is the recommended way to build "autogenerated" ids for an entity when handling events? 如果在事件期间只创建了一个实体并且没有更好的其他方法,那么交易hash + 日志索引的组合是唯一的。 您可以先将其转换为字节,然后将调用 `crypto.keccak256` 来混淆这些内容,但这不会使其更加独特。 -## 13、监听多个合约时,是否可以选择监听事件的合约顺序? +### 15. Can I delete my subgraph? -在子图中,无论是否跨多个合约,事件始终按照它们在区块中出现的顺序进行处理的。 +It is not possible to delete subgraphs once they are created. However, you can [transfer and deprecate your subgraph](/managing/transfer-and-deprecate-a-subgraph/). -## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers? +## Network Related + +### 16. What networks are supported by The Graph? + +您可以在[这里](/developing/supported-networks)找到支持的网络列表。 + +### 17. Is it possible to differentiate between networks (mainnet, Sepolia, local) within event handlers? 是的。 您可以按照以下示例通过导入 `graph-ts` 来做到这一点: @@ -78,23 +99,21 @@ dataSource.network() dataSource.address() ``` -## 15. Do you support block and call handlers on Sepolia? +### 18. Do you support block and call handlers on Sepolia? Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network. -## 16. 我可以将 ethers.js 或其他 JS 库导入我的子图映射吗? - -目前不能,因为映射是用 AssemblyScript 编写的。 一种可能的替代解决方案是将原始数据存储在实体中,并在客户端执行需要 JS 库的逻辑。 +## Indexing & Querying Related -## 17. 是否可以指定从哪个特定区块开始索引? +### 19. Is it possible to specify what block to start indexing on? -Yes. 
`dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) +Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 18. 有没有一些提高索引性能的技巧? 子图需要很长时间才能同步。 +### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync -是的,您应该看看可选的起始区块功能,以便从部署合约的区块开始索引:[起始区块](/developing/creating-a-subgraph#start-blocks) +Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks) -## 19. 有没有办法直接查询子图,来确定它索引的最新区块号是多少? +### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed? 是的! 请尝试以下命令,并将“organization/subgraphName”替换为发布的组织和子图名称: @@ -102,44 +121,27 @@ Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the n curl -X POST -d '{ "query": "{indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { hash number }}}}"}' https://api.thegraph.com/index-node/graphql ``` -## 20. Graph 支持哪些网络? - -您可以在[这里](/developing/supported-networks)找到支持的网络列表。 - -## 21. 是否可以在不重新部署的情况下,将子图复制到另一个账户或端点? - -您必须重新部署子图,但如果子图 ID(IPFS hash)没有更改,则不必从头开始同步。 - -## 22. 可以在 graph节点之上使用 Apollo Federation 吗? +### 22. Is there a limit to how many objects The Graph can return per query? -虽然我们确实希望在未来支持联合(Federation),但目前还不支持。 目前,您可以在客户端或通过代理服务使用模式拼接。 - -## 23. Graph 每次查询可以返回多少个对象有限制吗? - -默认情况下,每个集合的查询响应限制为 100 个项目。 如果您想收到更多,则每个收藏最多可以包含 1000 个项目,并且可以使用以下查询进行分页: +By default, query responses are limited to 100 items per collection. If you want to receive more, you can go up to 1000 items per collection and beyond that, you can paginate with: ```graphql someCollection(first: 1000, skip: ) { ... } ``` -## 24. 如果我的 dapp 前端使用Graph 进行查询,我是否需要将我的查询密钥直接写入前端? 如果我们为用户支付查询费用,恶意用户会不会导致我们的查询费用非常高? - -目前,推荐的 dapp 方法是将密钥添加到前端并将其公开给最终用户。 也就是说,您可以将该键限制为主机名,例如 _yourdapp.io_ 和子图。 网关目前由 Edge & Node 运营。 网关的部分职责是监控滥用行为,并阻止来自恶意客户端的流量。 - -## 25. Where do I go to find my current subgraph on the hosted service? - -请前往托管服务,查找您或其他人部署到托管服务的子图。 您可以在[这里](https://thegraph.com/hosted-service)找到托管服务。 - -## 26. Will the hosted service start charging query fees? +### 23. If my dapp frontend uses The Graph for querying, do I need to write my query key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high? -Graph 永远不会对托管服务收费。 Graph 是一个去中心化的协议,中心化服务的收费与 Graph 的价值观不一致。 托管服务始终是帮助进入去中心化网络的临时步骤。 开发人员将有足够的时间在他们适宜时迁移到去中心化网络。 +Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients. -## 27. How do I update a subgraph on mainnet? +## Miscellaneous -If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. 
It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on. +### 24. Is it possible to use Apollo Federation on top of graph-node? -## 28. In what order are the event, block, and call handlers triggered for a data source? +Federation is not supported yet. At the moment, you can use schema stitching, either on the client or via a proxy service. -Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first then call handlers, each type respecting the order they are defined in the manifest. Block handlers are run after event and call handlers, in the order they are defined in the manifest. Also these ordering rules are subject to change. +### 25. I want to contribute or add a GitHub issue. Where can I find the open source repositories? -When new dynamic data source are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered. +- [图节点](https://github.com/graphprotocol/graph-node) +- [graph-tooling](https://github.com/graphprotocol/graph-tooling) +- [graph-docs](https://github.com/graphprotocol/docs) +- [graph-client](https://github.com/graphprotocol/graph-client) diff --git a/website/pages/zh/developing/graph-ts/api.mdx b/website/pages/zh/developing/graph-ts/api.mdx index 31c2c0eaf4f5..0bb53153fae9 100644 --- a/website/pages/zh/developing/graph-ts/api.mdx +++ b/website/pages/zh/developing/graph-ts/api.mdx @@ -2,14 +2,16 @@ title: AssemblyScript API --- -> 注意:如果您在 `graph-cli`/`graph-ts` 版本 `0.22.0` 之前创建了子图,那么您正在使用较旧版本的 AssemblyScript,我们建议查看[`迁移指南`](/release-notes/assemblyscript-migration-guide)。 +> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/release-notes/assemblyscript-migration-guide). -此页面记录了编写子图映射时可以使用的内置 API。有两种开箱即用的 API: +Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box: -- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and -- 通过 `graph codegen` 生成的子图文件中的代码。 +- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) +- Code generated from subgraph files by `graph codegen` -也可以添加其他库作为依赖项,只要它们与[AssemblyScript](https://github.com/AssemblyScript/assemblyscript)兼容即可。由于这是语言映射所写的,因此[AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki)是语言和标准库特性的良好来源。 +You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). + +Since language mappings are written in AssemblyScript, it is useful to review the language and standard library features from the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki). 
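For example, a typical mapping file begins by importing from `graph-ts` and from the code generated by `graph codegen`. The `generated/` paths and the `Transfer` names below depend on your own manifest and schema, so treat them as placeholders:

```typescript
import { BigInt, log } from '@graphprotocol/graph-ts' // built-in types and APIs
import { Transfer as TransferEvent } from '../generated/Contract/Contract' // generated from the contract ABI
import { Transfer } from '../generated/schema' // generated from schema.graphql
```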
## API 参考 @@ -27,16 +29,16 @@ title: AssemblyScript API 子图清单中的 `apiVersion` 指定了由 Graph Node 运行的特定子图的映射 API 版本。 -| 版本 | Release 说明 | -| :-: | --- | +| 版本 | Release 说明 | +| :---: | ---------------------------------------------------------------------------------------------------------------------------- | | 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) | -| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | -| 0.0.7 | 添加了 `TransactionReceipt` 和 `Log` 类到以太坊类型。
    已将 `receipt` 字段添加到Ethereum Event对象。 | -| 0.0.6 | 向Ethereum Transaction对象添加了 nonce 字段 向 Etherum Block对象添加
    baseFeePerGas字段 | -| 0.0.5 | AssemblyScript 升级到版本 0.19.10(这包括重大更改,参阅
    迁移指南)ethereum.transaction.gasUsed 重命名为 ethereum.transaction.gasLimit | -| 0.0.4 | 已向 Ethereum SmartContractCall对象添加了 `functionSignature` 字段。 | -| 0.0.3 | 已向Ethereum Call 对象添加了 `from` 字段。
    `etherem.call.address` 被重命名为 `ethereum.call.to`。 | -| 0.0.2 | 已向Ethereum Transaction对象添加了 `input` 字段。 | +| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. | +| 0.0.7 | 添加了 `TransactionReceipt` 和 `Log` 类到以太坊类型。
    已将 `receipt` 字段添加到Ethereum Event对象。 | +| 0.0.6 | 向Ethereum Transaction对象添加了 `nonce` 字段;向 Ethereum Block对象添加
    `baseFeePerGas` 字段 | +| 0.0.5 | AssemblyScript 升级到版本 0.19.10(这包括重大更改,参阅
    迁移指南)`ethereum.transaction.gasUsed` 重命名为 `ethereum.transaction.gasLimit` | +| 0.0.4 | 已向 Ethereum SmartContractCall对象添加了 `functionSignature` 字段。 | +| 0.0.3 | 已向Ethereum Call 对象添加了 `from` 字段。
    `etherem.call.address` 被重命名为 `ethereum.call.to`。 | +| 0.0.2 | 已向Ethereum Transaction对象添加了 `input` 字段。 | ### 内置类型 @@ -252,7 +254,9 @@ export function handleTransfer(event: TransferEvent): void { 如果在处理链时遇到 Transfer 事件,它会使用生成的 Transfer 类型(别名为 TransferEvent 以避免与实体类型的命名冲突) 传递给 handleTransfer 事件处理器。 此类型允许访问事件的母交易及其参数等数据。 -每个实体都必须有一个唯一的 ID 以避免与其他实体发生冲突。 事件参数包含可以使用的唯一标识符是相当常见的。 注意:使用交易hash作为 ID 时, 假定同一交易中没有其他事件创建以该hash作为 ID 的实体。 +Each entity must have a unique ID to avoid collisions with other entities. It is fairly common for event parameters to include a unique identifier that can be used. + +> Note: Using the transaction hash as the ID assumes that no other events in the same transaction create entities with this hash as the ID. #### 从存储中加载实体 @@ -268,15 +272,18 @@ if (transfer == null) { // Use the Transfer entity as before ``` -由于实体可能尚未存在于存储中,因此 `load` 方法返回一个类型为 `Transfer | null` 的值。因此,在使用该值之前可能需要检查 `null` 情况。 +As the entity may not exist in the store yet, the `load` method returns a value of type `Transfer | null`. It may be necessary to check for the `null` case before using the value. -> **注意:** 只有在映射中的更改依赖于实体的先前数据时,加载实体才是必要的。请参阅下一节,了解更新现有实体的两种方法。 +> Note: Loading entities is only necessary if the changes made in the mapping depend on the previous data of an entity. See the next section for the two ways of updating existing entities. #### 查找在区块中创建的实体 截至 `graph-node` v0.31.0、`@graphprotocol/graph-ts` v0.30.0 和 `@graphprotocol/graph-cli` v0.49.0,所有实体类型上都提供了 `loadInBlock` 方法。 -存储API有助于检索在当前区块中创建或更新的实体。这方面的一种典型情况是,一个处理程序从某个链上事件创建一个,之交易后的处理程序希望访问该交易(如果存在)。在交易不存在的情况下,子图必须去数据库才能发现实体不存在;如果子图作者已经知道实体必须是在同一个区块中创建的,那么使用loadInBlock可以避免这种数据库往返。对于某些子图,这些遗漏的查找可能会显著增加索引时间。 +The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some on-chain event, and a later handler wants to access this transaction if it exists. + +- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip. +- For some subgraphs, these missed lookups can contribute significantly to the indexing time. ```typescript let id = event.transaction.hash // or however the ID is constructed @@ -502,7 +509,9 @@ export function handleTransfer(event: TransferEvent) { #### 处理重复调用 -如果您的合约的只读方法可能回滚,则应通过在生成的合约方法前加上 `try_` 来处理。例如,Gravity 合约暴露了 `gravatarToOwner` 方法。下面的代码可以处理该方法中的回滚: +If the read-only methods of your contract may revert, then you should handle that by calling the generated contract method prefixed with `try_`. + +- For example, the Gravity contract exposes the `gravatarToOwner` method. This code would be able to handle a revert in that method: ```typescript let gravity = Gravity.bind(event.address) @@ -514,7 +523,7 @@ if (callResult.reverted) { } ``` -请注意,连接到 Geth 或 Infura 客户端的 Graph 节点可能无法检测到所有重复使用,如果您依赖于此,我们建议使用连接到 Parity 客户端的 Graph 节点。 +> Note: A Graph node connected to a Geth or Infura client may not detect all reverts. If you rely on this, we recommend using a Graph Node connected to a Parity client. 
#### 编码/解码 ABI @@ -760,19 +769,19 @@ if (value.kind == JSONValueKind.BOOL) { ### 类型转换参考 -| 源类型 | 目标类型 | 转换函数 | +| 源类型 | 目标类型 | 转换函数 | | -------------------- | -------------------- | ---------------------------- | | Address | Bytes | none | | Address | String | s.toHexString() | | BigDecimal | String | s.toString() | | BigInt | BigDecimal | s.toBigDecimal() | -| BigInt | String (hexadecimal) | s.toHexString() 或 s.toHex() | +| BigInt | String (hexadecimal) | s.toHexString() 或 s.toHex() | | BigInt | String (unicode) | s.toString() | | BigInt | i32 | s.toI32() | | Boolean | Boolean | none | | Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) | | Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) | -| Bytes | String (hexadecimal) | s.toHexString() 或 s.toHex() | +| Bytes | String (hexadecimal) | s.toHexString() 或 s.toHex() | | Bytes | String (unicode) | s.toString() | | Bytes | String (base58) | s.toBase58() | | Bytes | i32 | s.toI32() | diff --git a/website/pages/zh/developing/supported-networks.mdx b/website/pages/zh/developing/supported-networks.mdx index 1d19cb36097b..487d607586af 100644 --- a/website/pages/zh/developing/supported-networks.mdx +++ b/website/pages/zh/developing/supported-networks.mdx @@ -13,7 +13,7 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename) \*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks). ⁠ Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs). - Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints. -- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs. +- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. - If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks. - 有关去中心化网络支持哪些功能的完整列表,请参阅[本页](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md)。 diff --git a/website/pages/zh/developing/unit-testing-framework.mdx b/website/pages/zh/developing/unit-testing-framework.mdx index e79f2b9c844f..2e8445250e70 100644 --- a/website/pages/zh/developing/unit-testing-framework.mdx +++ b/website/pages/zh/developing/unit-testing-framework.mdx @@ -24,7 +24,7 @@ Postgres 安装命令: brew install postgresql ``` -创建到最新 libpq.5. lib* 的符号链接,可能需要首先创建这个目录*`/usr/local/opt/postgreql/lib/` +创建到最新 libpq.5. 
lib_ 的符号链接,可能需要首先创建这个目录_`/usr/local/opt/postgreql/lib/` ```sh ln -sf /usr/local/opt/postgresql@14/lib/postgresql@14/libpq.5.dylib /usr/local/opt/postgresql/lib/libpq.5.dylib @@ -227,7 +227,7 @@ test("handleNewGravatar() should create a new entity", () => { 例子: -`beforeAll`中的代码将在文件中的*all*测试之前执行一次。 +`beforeAll`中的代码将在文件中的_all_测试之前执行一次。 ```typescript import { describe, test, beforeAll } from "matchstick-as/assembly/index" @@ -287,7 +287,7 @@ describe("handleUpdatedGravatar()", () => { 例子: -`afterAll`中的代码将在文件中的*all*测试之后执行一次。 +`afterAll`中的代码将在文件中的_all_测试之后执行一次。 ```typescript import { describe, test, afterAll } from "matchstick-as/assembly/index" @@ -1368,18 +1368,18 @@ Global test coverage: 22.2% (2/9 handlers). > 关键:无法从具有背景的有效模块创建WasmInstance:未知导入:wasi_snapshot_preview1::尚未定义fd_write -这意味着您在代码中使用了`console.log`,而AssemblyScript不支持此选项。请考虑使用[日志API](/developing/assemblyscript-api/#logging-api) +This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/graph-ts/api/#logging-api) > ERROR TS2554: Expected ? arguments, but got ?. -> +> > 返回ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(18,12) -> +> > ERROR TS2554: Expected ? arguments, but got ?. -> +> > 返回新ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt); -> +> > in ~lib/matchstick-as/assembly/defaults.ts(24,12) 参数不匹配是由`graph-ts` and `matchstick-as`不匹配造成的。解决此类问题的最佳方法是将所有内容更新到最新发布的版本。 diff --git a/website/pages/zh/glossary.mdx b/website/pages/zh/glossary.mdx index dea556fafe08..a7ab1a362f28 100644 --- a/website/pages/zh/glossary.mdx +++ b/website/pages/zh/glossary.mdx @@ -10,11 +10,9 @@ title: 术语汇编 - **Endpoint**: 可以用来查询子图的URL。Subgraph Studio的测试端点是`https://api.studio.thegraph.com/query///`,Graph浏览器端点`为https://gateway.thegraph.com/api//subgraphs/id/`。Graph浏览器端点用于查询Graph的去中心化网络上的子图。 -- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Then, Indexers can begin indexing subgraphs to make them available to be queried by anyone. +- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone. -- **Hosted Service**: 作为 The Graph 分布式网络成熟的一个临时支架服务,用于构建和查询子图,以提高服务成本、服务质量和开发者体验。 - -- **Indexers**:网络参与者运行索引节点,从区块链索引数据并提供 GraphQL 查询。 +- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries. - **Indexer Revenue Streams**:索引人在 GRT 中获得两个组成部分: 查询费用回扣和索引奖励。 @@ -24,17 +22,17 @@ title: 术语汇编 - **Indexer's Self Stake**: 索引人参与去中心化网络的 GRT 金额。最低为100000 GRT,并且没有上限。 -- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. 
It supports numerous blockchains that were previously only available on the hosted service. +- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers. -- **Delegators**: 拥有 GRT 并将其 GRT 委托给索引人的网络参与者。这使得索引人可以增加它们在网络子图中的份额。作为回报,委托方将获得索引方为处理子图而获得的索引奖励的一部分。 +- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs. - **Delegation Tax**: 委托人将 GRT 委托给索引人时支付的0.5% 的费用。用于支付费用的 GRT 将被消耗。 -- **Curators**: 网络参与者,识别高质量的子图,并“策展”他们(即,对他们发GRT信号) ,以换取策展份额。当索引人索赔子图上的查询费用时,10% 将分配给该子图的策展人。索引认获得与子图上的信号成比例的索引奖励。我们可以看到发出信号的 GRT 数量与索引子图的索引人数量之间的相关性。 +- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph. - **Curation Tax**: 当策展人在子图上显示 GRT 时,他们要支付1% 的费用。用于支付费用的 GRT 将被消耗。 -- **Subgraph Consumer**: 查询子图的任何应用程序或用户。 +- **Data Consumer**: Any application or user that queries a subgraph. - **Subgraph Developer**: 构建并部署子图到 Graph 去中心化网络的开发人员。 @@ -46,11 +44,11 @@ title: 术语汇编 1. **Active**: 分配在链上创建时被认为是活动的。这称为打开一个分配,并向网络表明索引人正在为特定子图建立索引并提供查询服务。主动分配的增值索引奖励与子图上的信号以及分配的 GRT 的数量成比例。 - 2. **Closed**: 索引人可以通过提交最近的、有效的索引证明(POI)来领取在给定子图上累积的索引奖励。这被称为关闭分配。在关闭之前,分配必须至少开放一个纪元。最大分配期为 28 个纪元。如果索引人在 28 个纪元之后仍然保持分配开放状态,则被称为过时分配。当分配处于 **Closed** 状态时,渔夫仍然可以提出异议,挑战索引人提供虚假数据。 + 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data. - **Subgraph Studio**: 用于构建、部署和发布子图的强大 dapp。 -- **Fishermen**: The Graph Network 中的一个角色,由监视索引人提供的数据的准确性和完整性的参与者担任。当渔夫发现他们认为是不正确的查询响应或 POI 时,他们可以对索引人提起争议。如果争议裁定有利于渔夫,索引人将被削减。具体而言,索引人将失去他们的 GRT 自有股份的 2.5%。其中,50%作为对渔夫的奖励,以表彰他们的警惕性,剩下的50%将被从流通中移除(销毁)。这个机制旨在鼓励渔夫通过确保索引人对他们提供的数据负责来帮助维护网络的可靠性。 +- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide. 
- **Arbitrators**: 仲裁员是通过治理设置的网络参与者。仲裁员的作用是决定索引和查询争议的结果。他们的目标是最大限度地提高Graph网络的效用和可靠性。 @@ -62,11 +60,11 @@ title: 术语汇编 - **GRT**: Graph的工作效用代币。 GRT 为网络参与者提供经济激励,鼓励他们为网络做出贡献。 -- **POI or Proof of Indexing**: 当一个索引人关闭他们的分配,并希望要求他们的累积索引人奖励在一个给定的子图,他们必须提供一个有效的和最近的索引证明(POI)。Fishermen可以对索引人提供的 POI 提出异议。Fisherman胜出的争端将导致索引人被惩罚。 +- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer. -- **Graph Node**: Graph节点是索引子图的组件,并使生成的数据可通过GraphQL API进行查询。因此,它是索引人堆栈的中心,Graph节点的正确操作对于运行成功的索引人至关重要。 +- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer. -- **Indexer agent**: 索引人代理是索引人堆栈的一部分。它促进了索引人在链上的交互,包括在网络上注册、管理到其 Graph节点的子图部署以及分配管理。 +- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions on-chain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations. - **The Graph Client**: 用于以去中心化方式构建基于 GraphQL 的 dapps 的库。 @@ -78,10 +76,6 @@ title: 术语汇编 - **L2 Transfer Tools**: 智能合约和UI,使网络参与者能够从以太坊主网转移到Arbitrum One。网络参与者可以转移委托的GRT、子图、策展股份和索引者自己的股份。 -- **_升级_ 子图到 Graph网络中**: 将子图从托管服务移动到Graph网络的过程. - -- **_更新_ 子图**: 发布新子图版本的过程,其中包含对子图的清单、模式或映射的更新。 +- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings. - **Migrating**: 策展份额从子图的旧版本移动到子图的新版本的过程(例如,从 v0.0.1 更新到 v0.0.2)。 - -- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network beginning on April 11th, and ending on June 12th 2024. diff --git a/website/pages/zh/index.json b/website/pages/zh/index.json index d9c9473eb5da..7be9a7ca1935 100644 --- a/website/pages/zh/index.json +++ b/website/pages/zh/index.json @@ -21,10 +21,6 @@ "createASubgraph": { "title": "创建子图", "description": "在子图工作室中创建子图" - }, - "migrateFromHostedService": { - "title": "Upgrade from the hosted service", - "description": "Upgrading subgraphs to The Graph Network" } }, "networkRoles": { @@ -60,10 +56,6 @@ "graphExplorer": { "title": "Graph 浏览器", "description": "探索子图并与协议互动" - }, - "hostedService": { - "title": "托管服务", - "description": "Create and explore subgraphs on the hosted service" } } }, diff --git a/website/pages/zh/managing/deprecating-a-subgraph.mdx b/website/pages/zh/managing/deprecating-a-subgraph.mdx index 62f56f2e52e6..edb95ddf0340 100644 --- a/website/pages/zh/managing/deprecating-a-subgraph.mdx +++ b/website/pages/zh/managing/deprecating-a-subgraph.mdx @@ -5,7 +5,7 @@ title: 弃用子图 So you'd like to deprecate your subgraph on Graph Explorer. You've come to the right place! Follow the steps below: 1. Visit the contract address for Mainnet subgraphs [here](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) and Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). -2. 使用`SubgraphID`作为参数调用`deprecateSubgraph`。 +2. 使用`SubgraphID`作为参数调用` deprecateSubgraph `。 3. Voilà! Your subgraph will no longer show up on searches on Graph Explorer. 
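If you prefer to script this call rather than use the block explorer UI, a rough sketch with ethers.js (v6) could look like the following. The contract address, RPC URL, private key, and subgraph ID are placeholders, and the one-line ABI fragment only assumes the `deprecateSubgraph(uint256)` signature described in step 2.

```typescript
import { ethers } from 'ethers'

// Placeholders — substitute the proxy contract address linked above,
// your own RPC endpoint, signer key, and subgraph ID.
const GNS_ADDRESS = '0xYourGnsProxyAddress'
const SUBGRAPH_ID = 1234n

async function main(): Promise<void> {
  const provider = new ethers.JsonRpcProvider('https://your-rpc-endpoint')
  const signer = new ethers.Wallet(process.env.PRIVATE_KEY!, provider)
  const gns = new ethers.Contract(
    GNS_ADDRESS,
    ['function deprecateSubgraph(uint256 subgraphID)'], // assumed signature
    signer,
  )
  const tx = await gns.deprecateSubgraph(SUBGRAPH_ID)
  await tx.wait()
}

main().catch(console.error)
```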
请注意以下事项: diff --git a/website/pages/zh/managing/transfer-and-deprecate-a-subgraph.mdx b/website/pages/zh/managing/transfer-and-deprecate-a-subgraph.mdx new file mode 100644 index 000000000000..3d20917235ac --- /dev/null +++ b/website/pages/zh/managing/transfer-and-deprecate-a-subgraph.mdx @@ -0,0 +1,65 @@ +--- +title: Transfer and Deprecate a Subgraph +--- + +## 子图所有权转移 + +Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network. + +**Please note the following:** + +- Whoever owns the NFT controls the subgraph. +- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network. +- You can easily move control of a subgraph to a multi-sig. +- A community member can create a subgraph on behalf of a DAO. + +### View your subgraph as an NFT + +To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**: + +``` +https://opensea.io/your-wallet-address +``` + +Or a wallet explorer like **Rainbow.me**: + +``` +https://rainbow.me/your-wallet-addres +``` + +### Step-by-Step + +To transfer ownership of a subgraph, do the following: + +1. Use the UI built into Subgraph Studio: + + ![Subgraph Ownership Transfer](/img/subgraph-ownership-transfer-1.png) + +2. Choose the address that you would like to transfer the subgraph to: + + ![Subgraph Ownership Trasfer](/img/subgraph-ownership-transfer-2.png) + +Optionally, you can also use the built-in UI of NFT marketplaces like OpenSea: + +![Subgraph Ownership Transfer from NFT marketplace](/img/subgraph-ownership-transfer-nft-marketplace.png) + +## Deprecating a subgraph + +Although you cannot delete a subgraph, you can deprecate it on Graph Explorer. + +### Step-by-Step + +To deprecate your subgraph, do the following: + +1. Visit the contract address for Arbitrum One subgraphs [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract). +2. Call `deprecateSubgraph` with your `SubgraphID` as your argument. +3. Your subgraph will no longer appear in searches on Graph Explorer. + +**Please note the following:** + +- The owner's wallet should call the `deprecateSubgraph` function. +- 策展人将无法再对该子图发出信号。 +- Curators that already signaled on the subgraph can withdraw their signal at an average share price. +- Deprecated subgraphs will show an error message. + +> If you interacted with the deprecated subgraph, you can find it in your user profile under the "Subgraphs", "Indexing", or "Curating" tab, respectively. diff --git a/website/pages/zh/mips-faqs.mdx b/website/pages/zh/mips-faqs.mdx index b99b7aabbfd2..c88077e15015 100644 --- a/website/pages/zh/mips-faqs.mdx +++ b/website/pages/zh/mips-faqs.mdx @@ -6,10 +6,6 @@ title: MIP常见问题解答 > 注意:自2023年5月起,MIPs项目已关闭。感谢所有参与的索引人! -这是一个可以参与Graph生态系统,激动人心的时刻!2022年[Graph日]期间(https://thegraph.com/graph-day/2022/)Yaniv Tal宣布[即将结束托管服务](https://thegraph.com/blog/sunsetting-hosted-service/),这是Graph生态系统多年来一直致力于的一刻。 - -To support the sunsetting of the hosted service and the migration of all of it's activity to the decentralized network, The Graph Foundation has announced the [Migration Infrastructure Providers (MIPs) program](https://thegraph.com/blog/mips-multi-chain-indexing-incentivized-program). 
- The MIPs program is an incentivization program for Indexers to support them with resources to index chains beyond Ethereum mainnet and help The Graph protocol expand the decentralized network into a multi-chain infrastructure layer. The MIPs program has allocated 0.75% of the GRT supply (75M GRT), with 0.5% to reward Indexers who contribute to bootstrapping the network and 0.25% allocated to Network Grants for subgraph developers using multi-chain subgraphs. diff --git a/website/pages/zh/network/benefits.mdx b/website/pages/zh/network/benefits.mdx index 4af9c67518ce..a408000402ff 100644 --- a/website/pages/zh/network/benefits.mdx +++ b/website/pages/zh/network/benefits.mdx @@ -27,49 +27,49 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar ## Low Volume User (less than 100,000 queries per month) -| 成本比较 | 自托管 | Graph网络 | -| :------------------: | :-------------------------------------: | :----------------------------------------: | -| 每月服务器费用 \* | 每月350美元 | 0美元 | -| 查询成本 | $0+ | $0 per month | -| 工程时间 | 400美元每月 | 没有,内置在具有全球去中心化索引者的网络中 | -| 每月查询 | 受限于基础设施能力 | 100,000 (Free Plan) | -| 每个查询的成本 | 0美元 | $0 | -| 基础设施 | 中心化 | 去中心化 | -| 异地备援 | 每个额外节点 $750 + | 包括在内 | -| 正常工作时间 | 变量 | 99.9%+ | -| 每月总成本 | $750+ | 0美元 | +| 成本比较 | 自托管 | Graph网络 | +|:----------------:|:---------------------------------------:|:---------------------:| +| 每月服务器费用 \* | 每月350美元 | 0美元 | +| 查询成本 | $0+ | $0 per month | +| 工程时间 | 400美元每月 | 没有,内置在具有全球去中心化索引者的网络中 | +| 每月查询 | 受限于基础设施能力 | 100,000 (Free Plan) | +| 每个查询的成本 | 0美元 | $0 | +| 基础设施 | 中心化 | 去中心化 | +| 异地备援 | 每个额外节点 $750 + | 包括在内 | +| 正常工作时间 | 变量 | 99.9%+ | +| 每月总成本 | $750+ | 0美元 | ## Medium Volume User (~3M queries per month) -| 成本比较 | 自托管 | Graph网络 | -| :------------------: | :---------------------------------------------: | :----------------------------------------: | -| 每月服务器费用 \* | 每月350美元 | 0美元 | -| 查询成本 | 每月500美元 | $120 per month | +| 成本比较 | 自托管 | Graph网络 | +|:----------------:|:-------------------------------------------:|:---------------------:| +| 每月服务器费用 \* | 每月350美元 | 0美元 | +| 查询成本 | 每月500美元 | $120 per month | | 工程时间 | 每月800美元 | 没有,内置在具有全球去中心化索引者的网络中 | -| 每月查询 | 受限于基础设施能力 | ~3,000,000 | -| 每个查询的成本 | 0美元 | $0.00004 | -| 基础设施 | 中心化 | 去中心化 | -| 工程费用 | 每小时200美元 | 包括在内 | -| 异地备援 | 每个额外节点的总成本为1200美元 | 包括在内 | -| 正常工作时间 | 变量 | 99.9%+ | -| 每月总成本 | 1650美元以上 | $120 | +| 每月查询 | 受限于基础设施能力 | ~3,000,000 | +| 每个查询的成本 | 0美元 | $0.00004 | +| 基础设施 | 中心化 | 去中心化 | +| 工程费用 | 每小时200美元 | 包括在内 | +| 异地备援 | 每个额外节点的总成本为1200美元 | 包括在内 | +| 正常工作时间 | 变量 | 99.9%+ | +| 每月总成本 | 1650美元以上 | $120 | ## High Volume User (~30M queries per month) -| 成本比较 | 自托管 | Graph网络 | -| :------------------: | :-------------------------------------------: | :----------------------------------------: | -| 每月服务器费用 \* | 1100美元每月每节点 | 0美元 | -| 查询成本 | 4000美元 | $1,200 per month | -| 需要的节点数量 | 10 | 不适用 | -| 工程时间 | 每月6000美元或以上 | 没有,内置在具有全球去中心化索引人的网络中 | -| 每月查询 | 受限于基础设施能力 | ~30,000,000 | -| 每个查询的成本 | 0美元 | $0.00004 | -| 基础设施 | 中心化 | 去中心化 | -| 异地备援 | 每个额外节点的总成本为1200美元 | 包括在内 | -| 正常工作时间 | 变量 | 99.9%+ | -| 每月总成本 | 11000+美元 | $1,200 | - -- 包括后备费用: 每月$50-$100美元 +| 成本比较 | 自托管 | Graph网络 | +|:----------------:|:-------------------------------------------:|:---------------------:| +| 每月服务器费用 \* | 1100美元每月每节点 | 0美元 | +| 查询成本 | 4000美元 | $1,200 per month | +| 需要的节点数量 | 10 | 不适用 | +| 工程时间 | 每月6000美元或以上 | 没有,内置在具有全球去中心化索引人的网络中 | +| 每月查询 | 受限于基础设施能力 | ~30,000,000 | +| 每个查询的成本 | 0美元 | $0.00004 | +| 基础设施 | 中心化 | 去中心化 | +| 异地备援 | 每个额外节点的总成本为1200美元 | 包括在内 | +| 正常工作时间 | 变量 | 99.9%+ | +| 每月总成本 | 11000+美元 | 
$1,200 | + +* 包括后备费用: 每月$50-$100美元 按每小时200美元的假设计算的工程时间 diff --git a/website/pages/zh/network/curating.mdx b/website/pages/zh/network/curating.mdx index cfeb33c6551b..be5fc20150df 100644 --- a/website/pages/zh/network/curating.mdx +++ b/website/pages/zh/network/curating.mdx @@ -10,7 +10,7 @@ Before consumers can query a subgraph, it must be indexed. This is where curatio Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives. -Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index; where GRT is added to a bonding curve for a subgraph. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. +Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them. @@ -18,7 +18,7 @@ The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version. -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. +If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below). 
@@ -34,7 +34,7 @@ Within the Curator tab in Graph Explorer, curators will be able to signal and un 让你的策展份额自动迁移到最新的生产构建,对确保你不断累积查询费用是有价值的。 每次你策展时,都会产生 1%的策展税。 每次迁移时,你也将支付 0.5%的策展税。 不鼓励子图开发人员频繁发布新版本--他们必须为所有自动迁移的策展份额支付 0.5%的策展税。 -> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy. +> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy. ## Withdrawing your GRT @@ -49,7 +49,7 @@ However, it is recommended that curators leave their signaled GRT in place not o ## 风险 1. 在Graph,查询市场本来就很年轻,由于市场动态刚刚开始,你的年收益率可能低于你的预期,这是有风险的。 -2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve. +2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned. 3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating). 4. 一个子图可能由于错误而失败。 一个失败的子图不会累积查询费用。 因此,你必须等待,直到开发人员修复错误并部署一个新的版本。 - 如果你订阅了一个子图的最新版本,你的份额将自动迁移到该新版本。 这将产生 0.5%的策展税。 @@ -65,7 +65,7 @@ By signalling on a subgraph, you will earn a share of all the query fees that th 寻找高质量的子图是一项复杂的任务,但它可以通过许多不同的方式来实现。 作为策展人,你要寻找那些推动查询量的值得信赖的子图。 这些值得信赖的子图是有价值的,因为它们是完整的,准确的,并支持 dApp 的数据需求。 一个架构不良的子图可能需要修改或重新发布,也可能最终失败。 策展人审查子图的架构或代码,以评估一个子图是否有价值,这是至关重要的。 因此: -- 策展人可以利用市场知识,尝试预测单个子图在未来可能产生更多或更少查询量 +- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future - Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on. ### 3. 升级一个子图的成本是多少? @@ -78,50 +78,14 @@ Migrating your curation shares to a new subgraph version incurs a curation tax o ### 5. 我可以出售我的策展份额吗? -Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve: +Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed). -- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited. 
-- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). +As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax). ### 6. Am I eligible for a curation grant? Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com. -## Curating on Ethereum vs Arbitrum - -The behavior of the curation mechanism differs depending on the protocol chain deployment, notably, how the price of a subgraph share is calculated. - -The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment. - -On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened", their effect is nullified meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk. - -If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects see [Bonding Curve 101](#bonding-curve-101). Please do your diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/) - -## 收益率曲线 101 - -> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum. - -Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted. - -![每份价格](/img/price-per-share.png) - -因此,价格是线性增长的,这意味着随着时间的推移,购买股票的成本会越来越高。 这里有一个例子说明我们的意思,请看下面的粘合曲线。 - -![收益率曲线](/img/bonding-curve.png) - -考虑到我们有两个策展人,他们为一个子图铸造了份额: - -- 策展人 A 是第一个对子图发出信号的人。 通过在曲线中加入 120,000 GRT,他们能够铸造出 2000 股。 -- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve. -- 由于两位策展人都持有策展人股份总数的一半,他们将获得同等数量的策展人使用费。 -- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT. -- 剩下的策展人现在将收到该子图的所有策展人使用费。 如果他们烧掉他们的股份来提取 GRT,他们将得到 12 万 GRT。 -- **TLDR:** 策展人股份的 GRT 估值是由粘合曲线决定的,可能会有波动。 有可能出现大的收益,也有可能出现大的损失。 提前发出信号意味着你为每只股票投入的 GRT 较少。 推而广之,这意味着在相同的子图上,你比后来的策展人在每个 GRT 上赚取更多的策展人使用费。 - -In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** - -在Graph 的案例中, [Bancor 对粘合曲线公式的实施](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) 被利用。 - 还有困惑吗? 
点击下面查看管理视频指导: diff --git a/website/pages/zh/network/delegating.mdx b/website/pages/zh/network/delegating.mdx index 572cd98d99bb..f665195476a9 100644 --- a/website/pages/zh/network/delegating.mdx +++ b/website/pages/zh/network/delegating.mdx @@ -2,13 +2,23 @@ title: 委托 --- -Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves. +Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. -Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process. +- They help secure the network without running a Graph Node themselves. + +- They earn a portion of an Indexer's query fees and rewards by delegating to them. + +## How does this work? + +The number of queries an Indexer can process depends on their own stake, **the delegated stake**, and the price the Indexer charges for each query. Therefore, the more stake allocated to an Indexer, the more potential queries an Indexer can process. ## 委托人指南 -This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet. +Learn how to be an effective Delegator in The Graph Network. + +Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. Therefore, they must use their best judgment to choose Indexers based on multiple factors. + +> Please note this guide does not cover steps such as setting up MetaMask properly, as that information is widely available online. There are three sections in this guide: @@ -24,61 +34,84 @@ There are three sections in this guide: 委托人不能因为不良行为而被取消,但对委托有税,以抑制可能损害网络完整性的不良决策。 -It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT. +As a Delegator, it's important to understand the following: -In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation. +- You will be charged 0.5% every time you delegate. This means that if you delegate 1,000 GRT, you will automatically burn 5 GRT. + +- In order to be safe, you should calculate your potential return when delegating to an Indexer. For example, you might calculate how many days it will take before you have earned back the 0.5% tax on your delegation. ### 委托解约期 Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days. -Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. 
+### Why is this important? + +If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT. As a result, it’s recommended that you choose an Indexer wisely. -
    请注意委托用户界面中的0.5%费用,以及28天的解约期。
    +
    + 请注意委托用户界面中的0.5%费用,以及28天的解约期。 +
    ### 选择一个为委托人提供公平的奖励分配的值得信赖的索引人 -This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters. +In order to understand how to choose a trustworthy Indexer, you need to understand the Delegation Parameters. + +#### Delegation Parameters -Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's rewards are set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards. +- **Indexing Reward Cut** - The portion of the rewards the Indexer will keep for themselves. + - If an Indexer's reward cut is set to 100%, as a Delegator, you will get 0 indexing rewards. + - If it is set to 80%, as a Delegator, you will receive 20%.
    - ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *上 面的索引人分给委托人 90% 的收益。 中间的给委托人 20%。
- 下面的给委托人约 83%。
+ ![Indexing Reward Cut](/img/Indexing-Reward-Cut.png) *上面的索引人分给委托人 90% 的收益。 中间的给委托人 20%。 下面的给委托人约 83%。*
-- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant.
+- **Query Fee Cut** - This is just like the Indexing Reward Cut, but it applies to returns on the query fees that the Indexer collects.

-As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
+As you can see, in order to choose the right Indexer, you must consider multiple things.

-### 计算委托人的预期收益
+- It is highly recommended that you explore [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which ones reward Delegators consistently.
+- Many Indexers are very active in Discord and will be happy to answer your questions.
+- Many of them have been Indexing for months, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.

-A Delegator must consider a lot of factors when determining the return. These include:
+## Calculating a Delegator's Expected Return

-- 有技术的委托人还可以查看索引人使用他们可用的委托代币的能力。 如果索引人没有分配所有可用的代币,他们就不会为自己或他们的委托人赚取最大利润。
-- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days.
+A Delegator must consider the following factors to determine a return:
+
+- Consider an Indexer's ability to use the Delegated tokens available to them.
+  - If an Indexer is not allocating all the tokens available, they are not earning the maximum profit they could be for themselves or their Delegators.
+- Pay attention to the first few days of delegating.
+  - An Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. It is possible that an Indexer might have a lot of rewards they still need to collect, so their total rewards are low.

 ### 考虑到查询费用的分成和索引费用的分成

-如上文所述,你应该选择一个在设置他们的查询费分成和索引奖励分成方面透明和诚实的索引人。 委托人还应该看一下参数冷却时间,看看他们有多少时间缓冲区。 做完这些之后,计算委托人会获得的奖励金额就相当简单了。 计算公式是:
+You should choose an Indexer that is transparent and honest about setting their Query Fee and Indexing Fee Cuts. You should also look at the Parameters Cooldown time to see how much of a time buffer you have. After that is done, it is simple to calculate the amount of rewards you are getting.
+
+The formula is:

 ![委托图片3](/img/Delegation-Reward-Formula.png)

 ### 考虑索引人委托池

-委托人必须考虑的另一件事是他们拥有的委托池的比例。 所有的委托奖励都是平均分配的,根据委托人存入池子的数额来决定池子的简单再平衡。 这使委托人就拥有了委托池的份额:
+Delegators should consider the proportion of the Delegation Pool they own.
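To make the pool-share consideration concrete, below is a rough, illustrative sketch of the calculation. The numbers, variable names, and simplifications are hypothetical (it ignores delegation tax, query fees, and allocation timing) and are only meant to show how a large existing pool can outweigh a generous reward cut.

```typescript
// Illustrative only: hypothetical figures, not protocol values.
interface Indexer {
  rewardsPerEpoch: number // GRT rewards the Indexer earns in an epoch
  sharedWithDelegators: number // fraction of rewards passed on to Delegators
  delegationPool: number // GRT already delegated to this Indexer
}

function delegatorReward(indexer: Indexer, myDelegation: number): number {
  // Your share of the Delegation Pool scales with your deposit...
  const poolShare = myDelegation / (indexer.delegationPool + myDelegation)
  // ...and the Indexer's cut determines how much of the rewards reach the pool.
  return indexer.rewardsPerEpoch * indexer.sharedWithDelegators * poolShare
}

// Indexer A shares only 20% of rewards, but its pool is small.
const indexerA: Indexer = { rewardsPerEpoch: 10_000, sharedWithDelegators: 0.2, delegationPool: 50_000 }
// Indexer B shares 90%, but its pool is already very large.
const indexerB: Indexer = { rewardsPerEpoch: 10_000, sharedWithDelegators: 0.9, delegationPool: 2_000_000 }

const myGRT = 10_000
console.log(delegatorReward(indexerA, myGRT)) // ≈ 333 GRT per epoch
console.log(delegatorReward(indexerB, myGRT)) // ≈ 45 GRT per epoch
```

In this made-up scenario the Indexer sharing only 20% pays the Delegator more, which is why both the advertised cut and the size of the Delegation Pool matter when doing the math.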
-![共享公式](/img/Share-Forumla.png) +All delegation rewards are shared evenly, with a pool rebalancing based on the amount the Delegator deposited into the pool. -Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%. +This gives the Delegator a share of the pool: + +![共享公式](/img/Share-Forumla.png) -Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return. +> The formula above shows that it is possible for an Indexer offering only 20% to Delegators to provide a better reward than an Indexer giving 90%. Simply do the math to determine the best reward. ### 考虑委托容量 -另一个需要考虑的是委托容量。 目前,委托比例被设置为 16。 这意味着,如果一个索引人质押了100万GRT,他们的委托容量是 16,00万GRT 的委托代币,他们可以在协议中使用。 任何超过这个数量的委托代币将稀释所有的委托人奖励。 +Finally, consider the delegation capacity. Currently, the Delegation Ratio is set to 16. -Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, all the Delegators and the Indexer, are earning way less rewards than they could be. +#### Why does this matter? + +This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards. + +Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. So, both the Delegators and the Indexers are earning less rewards than they could be. Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making. @@ -86,16 +119,21 @@ Therefore, a Delegator should always consider the Delegation Capacity of an Inde ### MetaMask“待定交易”错误 -**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?** +1. When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do? + +At times, attempts to delegate to Indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. + +#### 示例 -At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. +Let's say you attempt to delegate with an insufficient gas fee relative to the current prices. -For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will only be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. +- This action can cause the transaction attempt to display as "Pending" in your MetaMask wallet for 15+ minutes. 
When this happens, you can attempt subsequent transactions, but these will only be processed until the initial transaction is mined because transactions for an address must be processed in order. +- In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. -A simpler resolution to this bug is restarting the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. +A simple resolution to this bug is to restart the browser (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate. -## 网络界面视频指南 +## Video Guide -This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI. +This video guide reviews this page while interacting with the UI. diff --git a/website/pages/zh/network/developing.mdx b/website/pages/zh/network/developing.mdx index 97b241bcb081..ffe78c31b25c 100644 --- a/website/pages/zh/network/developing.mdx +++ b/website/pages/zh/network/developing.mdx @@ -2,52 +2,88 @@ title: 开发 --- -开发人员是 Graph 生态系统的需求方。开发人员构建子图并将其发布到图形Graph网络。然后,它们使用 GraphQL 查询实时子图,以便为应用程序助力。 +To start coding right away, go to [Developer Quick Start](/quick-start/). + +## 概述 + +As a developer, you need data to build and power your dapp. Querying and indexing blockchain data is challenging, but The Graph provides a solution to this issue. + +On The Graph, you can: + +1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/). +2. Use GraphQL to query existing subgraphs. + +### What is GraphQL? + +- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +### Developer Actions + +- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps. +- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers. +- Deploy, publish and signal your subgraphs within The Graph Network. + +## Subgraph Specifics + +### What are subgraphs? + +A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. + +A subgraph primarily consists of the following files: + +- `subgraph.yaml`: this YAML file contains the [subgraph manifest](/developing/creating-a-subgraph/#the-subgraph-manifest). +- `subgraph.graphql`: this GraphQL schema defines what data is stored for your subgraph, and how to query it via [GraphQL](/developing/creating-a-subgraph/#the-graphql-schema). +- `mappings`: this [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) mappings file translates data from the event data to the entities defined in your schema. + +Learn the detailed specifics to [create a subgraph](/developing/creating-a-subgraph/). 
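To make these files more concrete, here is a minimal, illustrative manifest. The network, contract address, ABI name, event signature, and file paths are placeholders and will differ for your subgraph; note that Graph CLI scaffolds conventionally name the schema file `schema.graphql`.

```yaml
# Illustrative subgraph.yaml: all names, addresses, and paths are placeholders.
specVersion: 0.0.5
schema:
  file: ./schema.graphql # the GraphQL schema describing your entities
dataSources:
  - kind: ethereum
    name: ExampleToken
    network: mainnet
    source:
      address: "0x0000000000000000000000000000000000000000" # contract to index
      abi: ExampleToken
      startBlock: 1000000
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: ExampleToken
          file: ./abis/ExampleToken.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts # AssemblyScript mappings for the events above
```

The schema file then declares the `Transfer` entity referenced under `entities`, and `src/mapping.ts` implements `handleTransfer` to turn each emitted event into stored entities that can be queried via GraphQL.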
## 子图生命周期 -部署到网络中的子图有一个确定的生命周期。 +Here is a general overview of a subgraph’s lifecycle: -### 本地建造 +![子图生命周期](/img/subgraph-lifecycle.png) -与所有子图开发一样,它从本地开发和测试开始。开发人员可以使用相同的本地设置,无论是为Graph 网络、托管服务还是本地 Graph 节点构建,都可以利用`graph-cli` 和 `graph-ts`构建子图。鼓励开发人员使用[Matchstick](https://github.com/LimeChain/matchstick)等工具进行单元测试,以提高子图的可靠性。 +### 本地建造 -> Graph网络在功能和网络支持方面存在一定限制。只有[支持的网络](/developing/supported-networks)上的子图才能获得索引奖励,从IPFS获取数据的子图也不符合条件。 +Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/developing/graph-ts/) and [Matchstick](/developing/unit-testing-framework/) to create robust subgraphs. ### Deploy to Subgraph Studio -Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected. - -### 发布到网络 +Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following: -当开发人员对他们的子图感到满意时,他们可以将其发布到 Graph网络。这是一个链上操作,它注册子图,以便索引人可以发现它。已发布的子图具有相应的 NFT,这样就很容易转移。已发布的子图具有关联的元数据,这些元数据为其他网络参与者提供有用的背景和信息。 +- Use its staging environment to index the deployed subgraph and make it available for review. +- Verify that your subgraph doesn't have any indexing errors and works as expected. -### 鼓励索引的信号 +### 发布到网络 -索引人不可能在不添加信号的情况下获取已发布的子图。信号被锁定与给定子图相关联的 GRT,它向索引人指示给定子图将接收查询量,并且还有助于处理它的索引奖励。子图开发人员通常会向他们的子图添加信号,以鼓励索引。如果第三方策展人认为某个子图可能驱动查询量,他们也可以在给定的子图上发出信号。 +When you're happy with your subgraph, you can [publish it](/publishing/publishing-a-subgraph/) to The Graph Network. -### 查询& 应用开发 +- This is an on-chain action, which registers the subgraph and makes it discoverable by Indexers. +- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/managing/transfer-and-deprecate-a-subgraph/) by sending the NFT. +- Published subgraphs have associated metadata, which provides other network participants with useful context and information. -一旦子图被索引者处理并用于查询,开发人员就可以开始在其应用程序中使用该子图。开发人员通过网关查询子图,该网关将他们的查询转发给处理子图的索引者,并以 GRT 支付查询费用。 +### Add Curation Signal for Indexing -In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time. +Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/network/curating/) on The Graph. -Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio. +#### What is signal? -### 升级子图 +- Signal is locked GRT associated with a given subgraph. 
It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it. +- Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume. -After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing. +### 查询& 应用开发 -一旦子图开发人员准备升级,他们就可以发起一个交易,将子图指向新版本。升级子图将任何信号迁移到新版本(假设应用该信号的用户选择了“自动迁移”) ,这也会带来迁移税。这种信号迁移应该会提示索引者开始为子图的新版本建立索引,因此它应该很快就可以用于查询。 +Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/billing/). -### 弃用子图 +Learn more about [querying subgraphs](/querying/querying-the-graph/). -在某种程度上,开发人员可能决定不再需要已发布的子图。在这一点上,他们可能不赞成子图,它将任何有信号的 GRT 返回给管理员。 +### 升级子图 -### 不同的开发人员角色 +To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing. -一些开发人员将参与网络上子图的整个生命周期,在他们自己的子图上发布、查询和迭代。有些可能专注于子图开发,构建其他人可以构建的开放 API。有些可能是专注于应用程序的,查询由其他人部署的子图。 +- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax. +- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying. -### 开发商与网络经济 +### Deprecating & Transferring Subgraphs -开发人员是网络中关键的经济参与者,锁定 GRT 以鼓励索引,关键是查询子图,这是网络的主要价值交换。每当子图升级时,子图开发人员也会销毁GRT。 +If you no longer need a published subgraph, you can [deprecate or transfer a subgraph](/managing/transfer-and-deprecate-a-subgraph/). Deprecating a subgraph returns any signaled GRT to [Curators](/network/curating/). diff --git a/website/pages/zh/network/explorer.mdx b/website/pages/zh/network/explorer.mdx index 2701151c884a..1c080a696a81 100644 --- a/website/pages/zh/network/explorer.mdx +++ b/website/pages/zh/network/explorer.mdx @@ -2,21 +2,35 @@ title: Graph浏览器 --- -Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽‍🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below): +Learn about The Graph Explorer and access the world of subgraphs and network data. + +Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. + +## Video Guide + +For a general overview of Graph Explorer, check out the video below: ## 子图 -First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name. 
+After you just finish deploying and publishing your subgraph in Subgraph Studio, click on the "subgraphs tab” at the top of the navigation bar to access the following: + +- Your own finished subgraphs +- Subgraphs published by others +- The exact subgraph you want (based on the date created, signal amount, or name). ![Explorer Image 1](/img/Subgraphs-Explorer-Landing.png) -当您单击子图时,您将能够在面板上测试查询,并能够利用网络详细信息做出明智决策。 您还可以在自己的子图或其他人的子图中发出 GRT 信号,以使索引人意识到其重要性和质量。 这很关键,因为子图上的信号会激励它被索引,这意味着它将出现在网络上,最终为查询提供服务。 +When you click into a subgraph, you will be able to do the following: + +- Test queries in the playground and be able to leverage network details to make informed decisions. +- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality. +- This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries. ![资源管理器图像 2](/img/Subgraph-Details.png) -在每个子图的专用页面上,会显示一些详细信息。 包括: +On each subgraph’s dedicated page, you can do the following: - 子图上的信号/非信号 - 查看详细信息,例如图表、当前部署 ID 和其他元数据 @@ -31,26 +45,32 @@ First things first, if you just finished deploying and publishing your subgraph ## 参与者 -在此选项卡中,您可以鸟瞰所有参与网络活动的人员,例如索引人、委托人和策展人。 下面,我们将深入了解每个选项卡的意义。 +This section provides a bird' s-eye view of all "participants," which includes everyone participating in the network, such as Indexers, Delegators, and Curators. ### 1. 索引人 ![资源管理器图片 4](/img/Indexer-Pane.png) -让我们从索引人开始。 索引人是协议的骨干,是那些质押子图、索引并向使用子图的任何人提供查询服务的人。 在 索引人选项中,您将能够看到索引人的委托参数、他们的权益、他们对每个子图的权益以及他们从查询费用和索引奖励中获得的收入。 细则如下: +Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs. + +In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards. -- 查询费用划分 - 索引人与委托人划分查询费用的百分比 -- 有效的奖励划分 - 应用于委托池的索引奖励削减。 如果是负数,则意味着索引人正在赠送部分奖励。 如果是正数,则意味着索引人保留了他们的一些奖励 -- 冷却时间剩余 - 索引人可以更改上述委托参数之前的剩余时间。 冷却时间由索引人在更新委托参数时设置 -- 已拥有 - 索引人的存入份额,可能会因恶意或不正确的行为被削减 -- 已委托 - 委托人的份额可以由索引人分配,但不能被削减 -- 已分配 - 索引人积极分配给他们正在索引的子图 -- 可用委托容量 - 索引人在过度委托之前仍然可以收到的委托数量 +**Specifics** + +- Query Fee Cut - the % of the query fee rebates that the Indexer keeps when splitting with Delegators. +- Effective Reward Cut - the indexing reward cut applied to the delegation pool. If it’s negative, it means that the Indexer is giving away part of their rewards. If it’s positive, it means that the Indexer is keeping some of their rewards. +- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters. +- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior. +- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed. +- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing. +- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated. - 最大委托容量 - 索引人可以有效接受的最大委托份额数量。 超出的委托权益不能用于分配或奖励计算。 -- 查询费用 - 这是最终用户一直以来为索引人的查询支付的费用 +- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time. 
- 索引人奖励 - 这是索引者及其委托者在所有时间获得的总索引人奖励。 索引人奖励通过 GRT 发行支付。 -索引人可以获得查询费用和索引奖励。 从功能上讲,当网络参与者将 GRT 委托给索引人时,就会发生这种情况。 这使索引人能够根据其索引参数接收查询费用和奖励。 索引参数可以通过点击表格的右侧来设置,或者通过进入索引人的配置文件并点击“委托”按钮来设置。 +Indexers can earn both query fees and indexing rewards. Functionally, this happens when network participants delegate GRT to an Indexer. This enables Indexers to receive query fees and rewards depending on their Indexer parameters. + +- Indexing parameters can be set by clicking on the right-hand side of the table or by going into an Indexer’s profile and clicking the “Delegate” button. 要了解有关如何成为 Indexer 的更多信息,您可以查看[官方文档](/network/indexing) 或 [Graph Academy 索引器指南。](https://thegraph.academy/delegators/选择索引器/) @@ -58,9 +78,13 @@ First things first, if you just finished deploying and publishing your subgraph ### 2. 策展人 -策展人分析子图以确定哪些子图质量最高。 一旦策展人发现一个潜在有吸引力的子图,他们就可以通过在其粘合曲线上发出信号来设计。 在这样做时,策展人让索引人知道哪些子图是高质量的且应该被索引。 +Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed. + +- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve. + - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on. + - The bonding curve incentivizes Curators to curate the highest quality data sources. -策展人可以是社区成员、数据消费者,甚至是子图开发者,他们通过将 GRT 代币放入粘合曲线来在自己的子图上发出信号。 通过存入 GRT,策展人铸造了子图的份额。 因此,策展人有资格获得他们发出信号的子图生成的一部分查询费用。 粘合曲线激励策展人设计最高质量的数据源。 本节中的策展人表将允许查看: +In the The Curator table listed below you can see: - 策展人开始策展的日期 - 已存入的 GRT 数量 @@ -68,34 +92,36 @@ First things first, if you just finished deploying and publishing your subgraph ![资源管理器图片 6](/img/Curation-Overview.png) -如果你想了解更多关于策展人的角色,可以访问 [Graph学院](https://thegraph.academy/curators/) 或[正式文档](/network/curating)。 +If you want to learn more about the Curator role, you can do so by visiting [official documentation.](/network/curating) or [The Graph Academy](https://thegraph.academy/curators/). ### 3. 委托人 -委托人在维护 The Graph 网络的安全性和去中心化方面发挥着关键作用。 他们通过将 GRT 代币委托给一个或多个索引人(即“质押”)来参与网络。 如果没有委托人,索引人不太可能获得可观的奖励和费用。 因此,索引人试图通过向委托人提供他们获得的一部分索引奖励和查询费用来吸引委托人。 +Delegators play a key role in maintaining the security and decentralization of The Graph Network. They participate in the network by delegating (i.e., “staking”) GRT tokens to one or multiple indexers. -Delegators, in turn, select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. Reputation within the community can also play a factor in this! It’s recommended to connect with the indexers selected via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! +- Without Delegators, Indexers are less likely to earn significant rewards and fees. Therefore, Indexers attract Delegators by offering them a portion of their indexing rewards and query fees. +- Delegators select Indexers based on a number of different variables, such as past performance, indexing reward rates, and query fee cuts. +- Reputation within the community can also play a factor in the selection process. 
It’s recommended to connect with the selected Indexers via [The Graph’s Discord](https://discord.gg/graphprotocol) or [The Graph Forum](https://forum.thegraph.com/)! ![资源管理器图像 7](/img/Delegation-Overview.png) -委托人表将允许您查看社区中的活跃委托人,以及以下指标: +In the Delegators table you can see the active Delegators in the community and important metrics: - 委托人委托给的索引人数量 - 委托人的原始委托 - 协议中已经产生但没有提现的奖励 - 从协议中提取的已实现奖励 - 目前在协议中的 GRT 总量 -- 上次授权的日期 +- The date they last delegated -如果您想了解更多有关如何成为委托人的信息,不要犹豫! 您可以前往 [正式文档](/network/delegating) 或者 [Graph学院](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). +If you want to learn more about how to become a Delegator, check out the [official documentation](/network/delegating) or [The Graph Academy](https://docs.thegraph.academy/official-docs/delegator/choosing-indexers). ## 网络 -在网络部分,您将看到全局 KPI 以及切换到每个时期的基础和更详细地分析网络指标的能力。 这些详细信息将让您了解网络随时间推移的表现。 +In this section, you can see global KPIs and view the ability to switch to a per-epoch basis and analyze network metrics in more detail. These details will give you a sense of how the network is performing over time. ### 概述 -The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like: +The overview section has both all the current network metrics and some cumulative metrics over time: - 当前网络总份额 - 索引人和他们的委托人之间的份额分配 @@ -104,10 +130,10 @@ The overview section has all the current network metrics as well as some cumulat - 协议参数,例如管理奖励、通货膨胀率等 - 当前时期奖励和费用 -一些值得一提的关键细节: +A few key details to note: -- **查询费用代表消费者产生的费用,**在他们对子图的分配已经关闭并且他们提供的数据已经被关闭后,在至少 7 个周期(见下文)之后,索引人可以要求(或不要求)它们得到消费者的认可。 -- **索引奖励表示索引器在该纪元内从网络发行中获得的奖励数额。**虽然协议发行是固定的,但只有在索引器关闭分配后才会产生奖励他们一直在索引的子图。因此,每个时期的奖励数量会有所不同(即,在某些时期,索引器可能会集体关闭已经开放很多天的分配)。 +- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers. +- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days). ![资源管理器图像 8](/img/Network-Stats.png) @@ -121,29 +147,34 @@ The overview section has all the current network metrics as well as some cumulat - 活跃时期是索引人目前正在分配权益并收取查询费用的时期 - 稳定时期是状态通道正在稳定的时期。 这意味着如果消费者对他们提出争议,索引人将受到严厉惩罚。 - 分发时期是时期的状态通道正在结算的时期,索引人可以要求他们的查询费用回扣。 - - 最终确定的时期是索引人没有留下查询费回扣的时期,因此被最终确定。 + - The finalized epochs are the epochs that have no query fee rebates left to claim by the Indexers. ![资源管理器图像 9](/img/Epoch-Stats.png) ## 您的用户资料 -既然我们已经讨论了网络统计信息,让我们继续讨论您的个人资料。 无论您以何种方式参与网络,您的个人资料都是您查看网络活动的地方。 您的加密钱包将作为您的用户资料,通过用户仪表板,您将能够看到: +Your personal profile is the place where you can see your network activity, regardless of your role on the network. Your crypto wallet will act as your user profile, and with the User Dashboard, you’ll be able to see the following tabs: ### 个人资料概览 -您可以在此处查看您当前采取的任何操作。 您也可以在这里找到您的个人资料信息、描述和网站(如果您添加了)。 +In this section, you can view the following: + +- Any of your current actions you've done. +- Your profile information, description, and website (if you added one). 
![资源管理器图像 10](/img/Profile-Overview.png) ### 子图标签 -如果单击子图选项卡,您将看到已发布的子图。 这将不包括为测试目的使用 CLI 部署的任何子图——子图只会在它们发布到去中心化网络时显示。 +In the Subgraphs tab, you’ll see your published subgraphs. + +> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network. ![资源管理器图像 11](/img/Subgraphs-Overview.png) ### 索引标签 -如果您单击“索引”选项卡,您将找到一个表格,其中包含对子图的所有活动和历史分配,以及您可以分析和查看过去作为索引人的表现的图表。 +In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer. 本节还将包括有关您的净索引人奖励和净查询费用的详细信息。 您将看到以下指标: @@ -158,7 +189,9 @@ The overview section has all the current network metrics as well as some cumulat ### 委托标签 -委托人对Graph 网络很重要。 委托人必须利用他们的知识来选择能够提供健康回报的索引人。 在这里,您可以找到您的活动和历史委托的详细信息,以及您委托给的索引人的指标。 +Delegators are important to the Graph Network. They must use their knowledge to choose an Indexer that will provide a healthy return on rewards. + +In the Delegators tab, you can find the details of your active and historical delegations, along with the metrics of the Indexers that you delegated towards. 在页面的前半部分,您可以看到您的委托图表,以及仅奖励图表。 在左侧,您可以看到反映您当前委托指标的 KPI。 diff --git a/website/pages/zh/network/indexing.mdx b/website/pages/zh/network/indexing.mdx index 36d5e4858150..8a1e9df310f4 100644 --- a/website/pages/zh/network/indexing.mdx +++ b/website/pages/zh/network/indexing.mdx @@ -42,7 +42,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap 许多社区制作的仪表板包含悬而未决的奖励值,通过以下步骤可以很容易地手动检查这些值: -1. 查询[主网子图](https://thegraph.com/hosted-service/subgraph/graphprotocol/graph-network-mainnet) 以获取所有活动分配的 ID: +1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations: ```graphql query indexerAllocations { @@ -113,11 +113,11 @@ Query fees are collected by the gateway and distributed to indexers according to - **大型** -准备对当前使用的所有子图进行索引,并为相关流量的请求提供服务。 | 设置 | (CPU 数量) | (内存 GB) | (硬盘 TB) | (CPU 数量) | (内存 GB) | -| ---- | :--------: | :-------: | :-------: | :--------: | :-------: | -| 小型 | 4 | 8 | 1 | 4 | 16 | -| 标准 | 8 | 30 | 1 | 12 | 48 | -| 中型 | 16 | 64 | 2 | 32 | 64 | -| 大型 | 72 | 468 | 3.5 | 48 | 184 | +| -- |:--------:|:-------:|:-------:|:--------:|:-------:| +| 小型 | 4 | 8 | 1 | 4 | 16 | +| 标准 | 8 | 30 | 1 | 12 | 48 | +| 中型 | 16 | 64 | 2 | 32 | 64 | +| 大型 | 72 | 468 | 3.5 | 48 | 184 | ### 索引人应该采取哪些基本的安全防范措施? @@ -149,26 +149,26 @@ Query fees are collected by the gateway and distributed to indexers according to #### Graph 节点 -| 端口 | 用途 | 路径 | CLI 参数 | 环境 变量 | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP 服务
    (用于子图查询) | /subgraphs/id/...

    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (用于子图订阅) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (用于管理部署) | / | --admin-port | - | -| 8030 | 子图索引状态 API | /graphql | --index-node-port | - | -| 8040 | Prometheus 指标 | /metrics | --metrics-port | - | +| 端口 | 用途 | 路径 | CLI 参数 | 环境 变量 | +| ---- | ------------------------------------ | ------------------------------------------------------------------- | ----------------- | ----- | +| 8000 | GraphQL HTTP 服务
    (用于子图查询) | /subgraphs/id/...

    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (用于子图订阅) | /subgraphs/id/...
    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (用于管理部署) | / | --admin-port | - | +| 8030 | 子图索引状态 API | /graphql | --index-node-port | - | +| 8040 | Prometheus 指标 | /metrics | --metrics-port | - | #### 索引人服务 -| 端口 | 用途 | 路径 | CLI 参数 | 环境 变量 | -| --- | --- | --- | --- | --- | -| 7600 | GraphQL HTTP 服务器
    (用于付费子图查询) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | -| 7300 | Prometheus 指标 | /metrics | --metrics-port | - | +| 端口 | 用途 | 路径 | CLI 参数 | 环境 变量 | +| ---- | --------------------------------------- | --------------------------------------------------------------------------- | -------------- | ---------------------- | +| 7600 | GraphQL HTTP 服务器
    (用于付费子图查询) | /subgraphs/id/...
    /status
    /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` | +| 7300 | Prometheus 指标 | /metrics | --metrics-port | - | #### 索引人代理 -| 端口 | 用途 | 路径 | CLI 参数 | 环境 变量 | -| ---- | -------------- | ---- | ------------------------- | --------------------------------------- | -| 8000 | 索引人管理 API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | +| 端口 | 用途 | 路径 | CLI 参数 | 环境 变量 | +| ---- | --------- | -- | ------------------------- | --------------------------------------- | +| 8000 | 索引人管理 API | / | --indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` | ### 在谷歌云上使用 Terraform 建立服务器基础设施 @@ -544,7 +544,7 @@ graph indexer status - `graph indexer rules maybe [options] ` —将部署的 `thedecisionBasis`设置为`规则`, 这样索引人代理将使用索引规则来决定是否对这个部署进行索引。 -- `graph indexer actions get [options] `使用 `all` 获取一个或多个操作,或者将 `action-id` 保持为空以获取所有操作。一个附加的参数—— `status` 可以用来打印出某个状态的所有操作。 +- `graph indexer actions get [options] ` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status. - `graph indexer action queue allocate ` -队列分配操作 @@ -730,7 +730,7 @@ default => 0.1 * $SYSTEM_LOAD; 使用上述模型的查询成本计算示例: -| 询问 | 价格 | +| 询问 | 价格 | | ---------------------------------------------------------------------------- | ------- | | { pairs(skip: 5000) { id } } | 0.5 GRT | | { tokens { symbol } } | 0.1 GRT | diff --git a/website/pages/zh/network/overview.mdx b/website/pages/zh/network/overview.mdx index 58d3359f6692..b2ab6df18b27 100644 --- a/website/pages/zh/network/overview.mdx +++ b/website/pages/zh/network/overview.mdx @@ -2,14 +2,20 @@ title: 网络概述 --- -The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. +The Graph Network is a decentralized indexing protocol for organizing blockchain data. -## 概述 +## How does it work? -The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data. +Applications use [GraphQL](/querying/graphql-api/) to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure. + +## Specifics + +The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to web3 applications. ![代币经济学](/img/Network-roles@2x.png) -为了确保Graph网络的经济安全和被查询数据的完整性,参与者持有并使用Graph代币([GRT](/tokenomics))。GRT是一种工作实用代币,是一种用于在网络中分配资源的ERC-20代币。 +### Economics + +To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20, which is used to allocate resources in the network. -Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake. +Active Indexers, Curators, and Delegators can provide services and earn income from the network. The income they earn is proportional to the amount of work they perform and their GRT stake. 
diff --git a/website/pages/zh/new-chain-integration.mdx b/website/pages/zh/new-chain-integration.mdx index 99a5e5bdb30a..a3074f0c8223 100644 --- a/website/pages/zh/new-chain-integration.mdx +++ b/website/pages/zh/new-chain-integration.mdx @@ -1,75 +1,80 @@ --- -title: 集成新网络 +title: New Chain Integration --- -Graph Node目前可以从以下链类型中索引数据: +Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies: -- 通过EVM JSON-RPC和Ethereum Firehose(https://github.com/streamingfast/firehose-ethereum)进行以太坊的索引 -- 通过NEAR Firehose(https://github.com/streamingfast/near-firehose-indexer)进行NEAR的索引 -- 通过Cosmos Firehose(https://github.com/graphprotocol/firehose-cosmos)进行Cosmos的索引 -- 通过Arweave Firehose(https://github.com/graphprotocol/firehose-arweave)进行Arweave的索引 +1. **EVM JSON-RPC** +2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms. -如果您对其中任何一种链感兴趣,集成是Graph Node配置和测试的事情。 +> Note that while the recommended approach is to develop a new Firehose for all new chains, it is only required for non-EVM chains. -If you are interested in a different chain type, a new integration with Graph Node must be built. Our recommended approach is developing a new Firehose for the chain in question and then the integration of that Firehose with Graph Node. More info below. +## Integration Strategies -**1. EVM JSON-RPC** +### 1. EVM JSON-RPC -如果区块链与EVM等效,并且客户端/节点公开标准的EVM JSON-RPC API,Graph Node应该能够索引新的链。有关更多信息,请参阅EVM JSON-RPC测试(new-chain-integration#testing-an-evm-json-rpc)。 +If the blockchain is EVM equivalent and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. -2. Firehose +#### 测试EVM JSON-RPC -For non-EVM-based chains, Graph Node must ingest blockchain data via gRPC and known type definitions. This can be done via [Firehose](firehose/), a new technology developed by [StreamingFast](https://www.streamingfast.io/) that provides a highly-scalable indexing blockchain solution using a files-based and streaming-first approach. Reach out to the [StreamingFast team](mailto:integrations@streamingfast.io/) if you need help with Firehose development. +For Graph Node to be able to ingest data from an EVM chain, the RPC node must expose the following EVM JSON-RPC methods: -## EVM JSON-RPC和Firehose之间的区别 +- `eth_getLogs` +- `eth_call` (for historical blocks, with EIP-1898 - requires archive node) +- `eth_getBlockByNumber` +- `eth_getBlockByHash` +- `net_version` +- `eth_getTransactionReceipt`, 在JSON-RPC批量请求中 +- `trace_filter` _(optionally required for Graph Node to support call handlers)_ -虽然这两者都适用于子图(Subgraph),但是对于想要构建[Substreams](substreams/) 的开发人员,始终需要一个Firehose,比如构建Substreams-powered子图(cookbook/substreams-powered-subgraphs/)。此外,与JSON-RPC相比,Firehose可以实现更快的索引速度。 +### 2. Firehose Integration -新的EVM链集成者也可以考虑基于Firehose的方法,考虑到substreams的好处和其大规模并行化的索引能力。支持这两种方法允许开发人员在新链上选择构建substreams或子图。 +[Firehose](https://firehose.streamingfast.io/firehose-setup/overview) is a next-generation extraction layer. It collects history in flat files and streams in real time. Firehose technology replaces those polling API calls with a stream of data utilizing a push model that sends data to the indexing node faster. 
This helps increase the speed of syncing and indexing. -> 注意:基于Firehose的EVM链集成仍需要索引器运行链的归档RPC节点,以正确索引子图。这是因为Firehose无法提供通常通过eth_call RPC方法访问的智能合约状态。 (值得提醒的是,eth_calls对于开发人员来说不是一个好的实践(https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)) +The primary method to integrate the Firehose into chains is to use an RPC polling strategy. Our polling algorithm will predict when a new block will arrive and increase the rate at which it checks for a new block near that time, making it a very low-latency and efficient solution. For help with the integration and maintenance of the Firehose, contact the [StreamingFast team](https://www.streamingfast.io/firehose-integration-program). New chains and their integrators will appreciate the [fork awareness](https://substreams.streamingfast.io/documentation/consume/reliability-guarantees) and massive parallelized indexing capabilities that Firehose and Substreams bring to their ecosystem. ---- +> NOTE: All integrations done by the StreamingFast team include maintenance for the Firehose replication protocol into the chain's codebase. StreamingFast tracks any changes and releases binaries when you change code and when StreamingFast changes code. This includes releasing Firehose/Substreams binaries for the protocol, maintaining Substreams modules for the block model of the chain, and releasing binaries for the blockchain node with instrumentation if need be. -## 测试EVM JSON-RPC +#### Specific Firehose Instrumentation for EVM (`geth`) chains -为了使Graph Node能够从EVM链中获取数据,RPC节点必须公开以下EVM JSON RPC方法: +For EVM chains, there exists a deeper level of data that can be achieved through the `geth` [live-tracer](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0), a collaboration between Go-Ethereum and StreamingFast, in building a high-throughput and rich transaction tracing system. The Live Tracer is the most comprehensive solution, resulting in [Extended](https://streamingfastio.medium.com/new-block-model-to-accelerate-chain-integration-9f65126e5425) block details. This enables new indexing paradigms, like pattern matching of events based on state changes, calls, parent call trees, or triggering of events based on changes to the actual variables in a smart contract. -- `eth_getLogs` -- `eth_call` \\(对于历史块,使用EIP-1898 - 需要归档节点): -- `eth_getBlockByNumber` -- `eth_getBlockByHash` -- `net_version` -- `eth_getTransactionReceipt`, 在JSON-RPC批量请求中 -- _`trace_filter`_ \_(可选择支持Graph Node的调用处理程序) +![Base block vs Extended block](/img/extended-vs-base-substreams-blocks.png) -### Graph Node配置 +> NOTE: This improvement upon the Firehose requires chains make use of the EVM engine `geth version 1.13.0` and up. -首先准备您的本地环境 +## EVM considerations - Difference between JSON-RPC & Firehose + +While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing. + +- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes. 
+ +> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers) + +## Graph Node配置 + +Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph. 1. [Clone Graph Node](https://github.com/graphprotocol/graph-node) -2. 修改此行(https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22),包括新网络名称和符合EVM JSON RPC的URL - > 不要更改环境变量名称本身。它必须保持为ethereum,即使网络名称不同也是如此。 -3. 运行IPFS节点,或使用The Graph使用的IPFS节点:https://api.thegraph.com/ipfs/ -**Test the integration by locally deploying a subgraph** +2. Modify [this line](https://github.com/graphprotocol/graph-node/blob/master/docker/docker-compose.yml#L22) to include the new network name and the EVM JSON-RPC compliant URL -1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) -2. 创建一个简单的示例子图。以下是一些选项: - 1. 预打包的[Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323)智能合约和子图是一个很好的起点 - 2. 使用任何现有的智能合约或solidity开发环境使用Hardhat和Graph插件引导本地子图(https://github.com/graphprotocol/hardhat-graph) -3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. -4. 在Graph Node中创建子图:graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT -5. 将子图发布到Graph Node:graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT + > Do not change the env var name itself. It must remain `ethereum` even if the network name is different. -如果没有错误,Graph Node应该正在同步部署的子图。给它一些时间来同步,然后向API端点发送一些GraphQL查询。 +3. Run an IPFS node or use the one used by The Graph: https://api.thegraph.com/ipfs/ ---- +### Testing an EVM JSON-RPC by locally deploying a subgraph -## 集成一个新的Firehose启用链 +1. Install [graph-cli](https://github.com/graphprotocol/graph-cli) +2. 创建一个简单的示例子图。以下是一些选项: + 1. The pre-packed [Gravitar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point + 2. Bootstrap a local subgraph from any existing smart contract or solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph) +3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node. +4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT` +5. Publish your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` + +如果没有错误,Graph Node应该正在同步部署的子图。给它一些时间来同步,然后向API端点发送一些GraphQL查询。 -使用Firehose方法也可以集成新的链。这是目前非EVM链的最佳选择,也是支持substreams的要求。有关如何使用Firehose,为新链添加Firehose支持以及如何将其与Graph Node集成的其他文档。集成者的推荐文档: +## Substreams-powered Subgraphs -1. Firehose的通用文档(firehose/) -2. [Adding Firehose support for a new chain](https://firehose.streamingfast.io/integrate-new-chains/integration-overview) -3. 通过Firehose将Graph Node与新链集成(https://github.com/graphprotocol/graph-node/blob/master/docs/implementation/add-chain.md) +For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. 
decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/sps/introduction). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself. diff --git a/website/pages/zh/operating-graph-node.mdx b/website/pages/zh/operating-graph-node.mdx index 0847fd1c03c6..ecda26fa9061 100644 --- a/website/pages/zh/operating-graph-node.mdx +++ b/website/pages/zh/operating-graph-node.mdx @@ -77,13 +77,13 @@ cargo run -p graph-node --release -- \ 当运行Graph Node时,会暴露以下端口: -| 端口 | 用途 | 路径 | CLI 参数 | 环境 变量 | -| --- | --- | --- | --- | --- | -| 8000 | GraphQL HTTP 服务
    (用于子图查询) | /subgraphs/id/...

    /subgraphs/name/.../... | --http-port | - | -| 8001 | GraphQL WS
    (用于子图订阅) | /subgraphs/id/...

    /subgraphs/name/.../... | --ws-port | - | -| 8020 | JSON-RPC
    (用于管理部署) | / | --admin-port | - | -| 8030 | 子图索引状态 API | /graphql | --index-node-port | - | -| 8040 | Prometheus 指标 | /metrics | --metrics-port | - | +| 端口 | 用途 | 路径 | CLI 参数 | 环境 变量 | +| ---- | ------------------------------------ | ------------------------------------------------------------------- | ----------------- | ----- | +| 8000 | GraphQL HTTP 服务
    (用于子图查询) | /subgraphs/id/...

    /subgraphs/name/.../... | --http-port | - | +| 8001 | GraphQL WS
    (用于子图订阅) | /subgraphs/id/...

    /subgraphs/name/.../... | --ws-port | - | +| 8020 | JSON-RPC
    (用于管理部署) | / | --admin-port | - | +| 8030 | 子图索引状态 API | /graphql | --index-node-port | - | +| 8040 | Prometheus 指标 | /metrics | --metrics-port | - | > **重要**: 公开暴露端口时要小心 - **管理端口** 应保持锁定。 这包括下面详述的 Graph 节点 JSON-RPC 和索引人管理端点。 diff --git a/website/pages/zh/querying/graphql-api.mdx b/website/pages/zh/querying/graphql-api.mdx index 341e78e4ad60..2e4b4a0e6f54 100644 --- a/website/pages/zh/querying/graphql-api.mdx +++ b/website/pages/zh/querying/graphql-api.mdx @@ -2,11 +2,19 @@ title: GraphQL API --- -This guide explains the GraphQL Query API that is used for The Graph Protocol. +Learn about the GraphQL Query API used in The Graph. -## 查询 +## What is GraphQL? -在您的子图模式中,定义了称为 `Entities` 的类型。 对于每个 `Entity` 类型,将在顶级 `Query` 类型上生成一个 `entity` 和 `entities` 字段。 请注意,使用Graph 时,`query` 不需要包含在 `graphql` 查询的顶部。 +[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs. + +To understand the larger role that GraphQL plays, review [developing](/network/developing/) and [creating a subgraph](/developing/creating-a-subgraph/). + +## Queries with GraphQL + +In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type. + +> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph. ### 例子 @@ -21,7 +29,7 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. } ``` -> **注意:**查询单个实体时,`id`字段为必填项,而且必须为字符串。 +> Note: When querying for a single entity, the `id` field is required, and it must be writen as a string. 查询所有 `Token` 实体: @@ -36,7 +44,10 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. ### 排序 -查询集合时,`orderBy` 参数可用于按特定属性排序。 此外,`orderDirection` 可用于指定排序方向,`asc` 用于升序,而`desc` 用于降序。 +When querying a collection, you may: + +- Use the `orderBy` parameter to sort by a specific attribute. +- Use the `orderDirection` to specify the sort direction, `asc` for ascending or `desc` for descending. #### 示例 @@ -53,7 +64,7 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. 从Graph 节点 [`v0.30.0`](https://github.com/graphprotocol/graph-node/releases/tag/v0.30.0)开始,可以根据嵌套实体对实体进行排序。 -在以下示例中,我们根据代币所有者的名称对其进行排序: +The following example shows tokens sorted by the name of their owner: ```graphql { @@ -70,11 +81,12 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. ### 分页 -查询集合时,可以使用 `first` 参数从集合的开头进行分页。 值得注意的是,默认排序顺序是按 ID 以字母数字升序排列,而不是按创建时间排列。 - -此外,`skip` 参数可用于跳过实体和分页。 例如 `first:100` 显示前 100 个实体,`first:100, skip:100` 显示接下来的 100 个实体。 +When querying a collection, it's best to: -查询应该避免使用非常大的 `skip` 值,因为它们通常性能表现不佳。 要检索大量项目,最好根据上一个示例中所示的属性对实体进行分页。 +- Use the `first` parameter to paginate from the beginning of the collection. + - The default sort order is by `ID` in ascending alphanumeric order, **not** by creation time. +- Use the `skip` parameter to skip entities and paginate. For instance, `first:100` shows the first 100 entities and `first:100, skip:100` shows the next 100 entities. +- Avoid using `skip` values in queries because they generally perform poorly. To retrieve a large number of items, it's best to page through entities based on an attribute as shown in the previous example above. #### 使用`first`的示例 @@ -106,7 +118,7 @@ This guide explains the GraphQL Query API that is used for The Graph Protocol. 
#### 使用`first`和`id_ge`的示例 -如果客户端需要检索大量实体,则基于属性进行查询和过滤会明显提高性能。 例如,客户端可以使用以下查询检索大量代币: +If a client needs to retrieve a large number of entities, it's more performant to base queries on an attribute and filter by that attribute. For example, a client could retrieve a large number of tokens using this query: ```graphql query manyTokens($lastID: String) { tokens(first: 1000, where: { id_gt: $lastID }) { @@ -117,11 +129,12 @@ query manyTokens($lastID: String) { tokens(first: 1000, where: { id_gt: $last } ``` -第一次,它会发送带有 `lastID = ""` 的查询,对于后续请求,会将 `lastID` 设置为上一个请求中的最后一个实体的`id` 属性。 与简单的提高 `skip` 值相比,这种方法的性能要好得多。 +The first time, it would send the query with `lastID = ""`, and for subsequent requests it would set `lastID` to the `id` attribute of the last entity in the previous request. This approach will perform significantly better than using increasing `skip` values. ### 过滤 -您可以在查询中使用 `where` 参数来过滤不同的属性。 您可以在 `where` 参数内过滤多个值。 +- You can use the `where` parameter in your queries to filter for different properties. +- You can filter on multiple values within the `where` parameter. #### 使用`where`的示例 @@ -155,7 +168,7 @@ query manyTokens($lastID: String) { tokens(first: 1000, where: { id_gt: $last #### 区块过滤示例 -您还可以通过` _ change _ block (number _ gte: Int)`过滤实体-这个过滤器在指定的区块中或之后更新的实体。 +You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`. 如果您只想获取已经更改的实体,例如自上次轮询以来改变的实体,那么这将非常有用。或者也可以调查或调试子图中实体的变化情况(如果与区块过滤器结合使用,则只能隔离在特定区块中发生变化的实体)。 @@ -193,7 +206,7 @@ query manyTokens($lastID: String) { tokens(first: 1000, where: { id_gt: $last ##### `AND`运算符 -在下面的示例中,我们正在筛选`outcome``succeeded`且`number`大于或等于`100`的挑战。 +The following example filters for challenges with `outcome` `succeeded` and `number` greater than or equal to `100`. ```graphql { @@ -208,7 +221,7 @@ query manyTokens($lastID: String) { tokens(first: 1000, where: { id_gt: $last ``` > **语法糖:**您可以通过传递一个用逗号分隔的子表达式来删除`and`运算符,从而简化上述查询。 -> +> > ```graphql > { > challenges(where: { number_gte: 100, outcome: "succeeded" }) { @@ -223,7 +236,7 @@ query manyTokens($lastID: String) { tokens(first: 1000, where: { id_gt: $last ##### `OR`运算符 -在下面的示例中,我们正在筛选`outcome``succeeded`或`number`大于或等于`100`的挑战。 +The following example filters for challenges with `outcome` `succeeded` or `number` greater than or equal to `100`. ```graphql { @@ -278,9 +291,9 @@ _change_block(number_gte: Int) 您可以查询实体的状态,不仅查询默认的最新区块,还可以查询过去的任意区块。通过在查询的顶级字段中包含`block`参数,可以通过区块号或区块哈希指定应该发生查询的区块。 -这种查询结果不会随着时间的推移而改变,即对过去某个区块的查询,无论何时执行,都将返回相同的结果。唯一的例外是,如果您在非常靠近链头的区块上进行查询,如果该区块不在主链上,并且链被重新组织,则结果可能会改变。 一旦一个区块被确认是最终的区块,那么查询的结果就不会改变。 +The result of such a query will not change over time, i.e., querying at a certain past block will return the same result no matter when it is executed, with the exception that if you query at a block very close to the head of the chain, the result might change if that block turns out to **not** be on the main chain and the chain gets reorganized. Once a block can be considered final, the result of the query will not change. -请注意,当前的实现仍然受到某些限制,这些限制可能会违反这些保证。该实现不能总是判断给定的区块哈希根本不在主链上,或者对于一个不能被认为是最终的区块,逐块哈希查询的结果可能会受到与查询同时运行的区块重组的影响。当区块是最终区块并且已知在主链上时,它们不会影响区块哈希查询的结果。[这个](https://github.com/graphprotocol/graph-node/issues/1405)问题详细解释了这些限制是什么。 +> Note: The current implementation is still subject to certain limitations that might violate these guarantees. 
The implementation can not always tell that a given block hash is not on the main chain at all, or if a query result by a block hash for a block that is not yet considered final could be influenced by a block reorganization running concurrently with the query. They do not affect the results of queries by block hash when the block is final and known to be on the main chain. [This issue](https://github.com/graphprotocol/graph-node/issues/1405) explains what these limitations are in detail. #### 示例 @@ -322,12 +335,12 @@ _change_block(number_gte: Int) 全文搜索运算符: -| 符号 | 运算符 | 描述 | -| ------ | ----------- | ---------------------------------------------------------------------- | -| `&` | `And` | 用于将多个搜索词组合到包含所有提供词条的实体的过滤器中 | -| | | `Or` | 由 or 运算符分隔的多个搜索词的查询,将返回与任何提供的词匹配的所有实体 | -| `<->` | `Follow by` | 指定两个单词之间的距离。 | -| `:*` | `Prefix` | 使用前缀搜索词查找前缀匹配的单词(需要 2 个字符) | +| 符号 | 运算符 | 描述 | +| ----------- | ----------- | ------------------------------------- | +| `&` | `And` | 用于将多个搜索词组合到包含所有提供词条的实体的过滤器中 | +| | | `Or` | 由 or 运算符分隔的多个搜索词的查询,将返回与任何提供的词匹配的所有实体 | +| `<->` | `Follow by` | 指定两个单词之间的距离。 | +| `:*` | `Prefix` | 使用前缀搜索词查找前缀匹配的单词(需要 2 个字符) | #### 例子 @@ -376,11 +389,11 @@ Graph Node使用[graphql-tools-rs](https://github.com/dotansimha/graphql-tools-r ## 模式 -数据源的模式,即可用于查询的实体类型、值和关系,是通过[GraphQL接口定义语言(IDL)定义](https://facebook.github.io/graphql/draft/#sec-Type-System)的。 +The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Langauge (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System). -GraphQL模式通常定义`查询`、`订阅`和`突变`的根类型。Graph仅支持`查询`。子图的根`查询`类型是从子图清单中包含的GraphQL模式自动生成的。 +GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph). -> **注意:**我们的 API 不提供对变种的支持,因为开发人员会从他们的应用程序中直接针对底层区块链发出交易。 +> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications. ### 实体 diff --git a/website/pages/zh/querying/querying-best-practices.mdx b/website/pages/zh/querying/querying-best-practices.mdx index 0f1af234da9e..36a5ab1aecba 100644 --- a/website/pages/zh/querying/querying-best-practices.mdx +++ b/website/pages/zh/querying/querying-best-practices.mdx @@ -2,11 +2,9 @@ title: 查询最佳实践 --- -Graph提供了一种从区块链查询数据的去中心化方式。 +The Graph provides a decentralized way to query data from blockchains via GraphQL APIs, making it easier to query data with the GraphQL language. -Graph网络的数据通过GraphQL API公开,使得使用GraphQL语言查询数据更加容易。 - -本页将指导您了解基本的GraphQL语言规则和GraphQL查询最佳实践。 +Learn the essential GraphQL language rules and GraphQL querying best practices. 
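Before diving into the rules, here is a minimal sketch of the style of query this page recommends: a static query string, values passed as GraphQL variables, and only the fields that are actually used. The endpoint URL and the `token` fields are placeholders based on the examples further down this page, not a specific deployment or client.

```ts
// A minimal sketch, assuming a placeholder query URL and a `token` entity with
// `id` and `owner` fields, as used in the examples later on this page.
const QUERY_URL = 'https://example.com/subgraphs/id/<SUBGRAPH_ID>' // hypothetical endpoint

// A static query: no string interpolation, values are passed as GraphQL variables,
// and only the fields that are actually used are requested.
const GET_TOKEN = /* GraphQL */ `
  query getToken($id: ID!) {
    token(id: $id) {
      id
      owner
    }
  }
`

async function getToken(id: string): Promise<unknown> {
  const response = await fetch(QUERY_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: GET_TOKEN, variables: { id } }),
  })
  const { data, errors } = await response.json()
  if (errors) {
    throw new Error(`GraphQL errors: ${JSON.stringify(errors)}`)
  }
  return data.token
}

getToken('0x...').then((token) => console.log(token))
```

The sections below explain each of these practices in more detail: writing static queries, using variables, and asking only for the fields you need.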
--- @@ -71,7 +69,7 @@ GraphQL 是一种通过 HTTP 传输的语言和一组协议。 这意味着您可以使用标准`fetch`(本机或通过`@whatwg-node/提取`或`isomorphic-fetch`) 查询 GraphQL API。 -但是,正如[“从应用程序查询”](/querying/querying-from-an-application)中所说,我们建议您使用我们的`graph-client`,该客户端支持以下独特功能: +However, as stated in ["Querying from an Application"](/querying/querying-from-an-application), it's recommend to use `graph-client` which supports unique features such as: - 跨链子图处理: 在一个查询中从多个子图进行查询 - [自动区块跟踪](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md) @@ -104,8 +102,6 @@ main() [“从应用程序查询”](/querying/querying-from-an-application)中介绍了更多的 GraphQL 客户端替代方案。 -现在我们已经介绍了 GraphQL 查询语法的基本规则,接下来让我们看看 GraphQL 查询编写的最佳实践。 - --- ## Best Practices @@ -164,11 +160,11 @@ const result = await execute(query, { - 可以在服务器级别**缓存变量** - **查询可以通过工具进行静态分析**(下面几节将详细介绍) -**注意: 如何在静态查询中有条件地包括字段** +### How to include fields conditionally in static queries -我们可能希望仅在特定条件下包括 `owner` 字段。 +You might want to include the `owner` field only on a particular condition. -为此,我们可以利用`@include (if:...)`,如下所示: +For this, you can leverage the `@include(if:...)` directive as follows: ```tsx import { execute } from 'your-favorite-graphql-client' @@ -191,7 +187,7 @@ const result = await execute(query, { }) ``` -注意:相反的指令是@skip(if:…)。 +> 注意:相反的指令是@skip(if:…)。 ### Ask for what you want @@ -199,9 +195,8 @@ GraphQL以其“问你所想”的口号而闻名。 因此,在GraphQL中,不单独列出所有可用字段,就无法获取所有可用字段。 -在查询GraphQL API时,请始终考虑只查询实际使用的字段。 - -过度获取的一个常见原因是实体集合。默认情况下,查询将获取集合中的100个实体,这通常比实际使用的实体多得多,例如,用于向用户显示的实体。因此,查询几乎总是首先显式设置,并确保它们只获取实际需要的实体。这不仅适用于查询中的一层集合,更适用于实体的嵌套集合。 +- 在查询GraphQL API时,请始终考虑只查询实际使用的字段。 +- Make sure queries only fetch as many entities as you actually need. By default, queries will fetch 100 entities in a collection, which is usually much more than what will actually be used, e.g., for display to the user. This applies not just to top-level collections in a query, but even more so to nested collections of entities. 例如,在以下查询中: @@ -337,8 +332,8 @@ query { 此类重复字段(`id`、`active`、`status`)会带来许多问题: -- 当查询更广泛时将更难阅读 -- 当使用基于查询生成TypeScript类型的工具时(_上一节将详细介绍_),`newDelegate`和`oldDelegate`将产生两个不同的内联接口。 +- More extensive queries become harder to read. +- When using tools that generate TypeScript types based on queries (_more on that in the last section_), `newDelegate` and `oldDelegate` will result in two distinct inline interfaces. 查询的重构版本如下: @@ -364,13 +359,13 @@ fragment DelegateItem on Transcoder { } ``` -使用GraphQL`fragment`将提高可读性(特别是在规模上),也将使更好的TypeScript类型生成。 +Using GraphQL `fragment` will improve readability (especially at scale) and result in better TypeScript types generation. 当使用类型生成工具时,上述查询将生成一个正确的`DelegateItemFragment`类型(_请参阅上一节“工具”_)。 ### GraphQL片段的注意事项 -**片段必须是一种类型** +### 片段必须是一种类型 片段不能基于不适用的类型,简而言之,**基于没有字段的类型**: @@ -382,7 +377,7 @@ fragment MyFragment on BigInt { `BigInt`是一个**标量**(原生“纯”类型),不能用作片段的基础类型。 -**如何传播片段** +#### 如何传播片段 片段是在特定类型上定义的,应该在查询中相应地使用。 @@ -411,16 +406,16 @@ fragment VoteItem on Vote { 无法在此处传播`Vote`类型的片段。 -**将片段定义为数据的原子业务单元。** +#### 将片段定义为数据的原子业务单元。 -GraphQL Fragment必须根据其用法进行定义。 +GraphQL `Fragment`s must be defined based on their usage. 对于大多数用例,为每个类型定义一个片段(在重复使用字段或生成类型的情况下)就足够。 -以下是使用Fragment的经验法则: +Here is a rule of thumb for using fragments: -- 当相同类型的字段在查询中重复时,将它们分组为片段 -- 当重复类似但不相同的字段时,创建多个片段,例如: +- When fields of the same type are repeated in a query, group them in a `Fragment`. 
+- When similar but different fields are repeated, create multiple fragments, for instance: ```graphql # base fragment (主要在上架中使用) @@ -443,7 +438,7 @@ fragment VoteWithPoll on Vote { --- -## 重要工具 +## The Essential Tools ### GraphQL基于web的浏览器 @@ -473,11 +468,11 @@ If you are looking for a more flexible way to debug/test your queries, other sim [GraphQL VSCode扩展](https://marketplace.visualstudio.com/items?itemName=GraphQL.vscode-graphql)是对开发工作流的一个极好补充,可以获得: -- 语法高亮显示 -- 自动完成建议 -- 根据模式验证 -- 片段 -- 转到片段和输入类型的定义 +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets +- Go to definition for fragments and input types 如果您使用的是`graphql-eslit`,[ESLintVSCode扩展](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint)是正确可视化代码中内联的错误和警告的必备工具。 @@ -485,9 +480,9 @@ If you are looking for a more flexible way to debug/test your queries, other sim [JS GraphQL插件](https://plugins.jetbrains.com/plugin/8097-graphql/)将通过提供以下功能显著改善您在使用GraphQL时的体验: -- 语法高亮显示 -- 自动完成建议 -- 根据模式验证 -- 片段 +- Syntax highlighting +- Autocomplete suggestions +- Validation against schema +- Snippets -有关这篇[WebStorm文章](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/)的更多信息,其中展示了插件的所有主要功能。 +For more information on this topic, check out the [WebStorm article](https://blog.jetbrains.com/webstorm/2019/04/featured-plugin-js-graphql/) which showcases all the plugin's main features. diff --git a/website/pages/zh/quick-start.mdx b/website/pages/zh/quick-start.mdx index dfd8e5f92aa2..e17142d58c90 100644 --- a/website/pages/zh/quick-start.mdx +++ b/website/pages/zh/quick-start.mdx @@ -2,24 +2,18 @@ title: 快速开始 --- -This guide will quickly take you through how to initialize, create, and deploy your subgraph to Subgraph Studio. +Learn how to easily publish and query a [subgraph](/developing/developer-faqs/#1-what-is-a-subgraph) on The Graph. -确保您的子图将从一个[受支持的网络](/developing/supported-networks) 中索引数据。 - -本指南是在假设您具备以下条件的情况下编写的: +## Prerequisites for this guide - 一个加密钱包 -- 您选择的网络上的智能合约地址 - -## 1. 在子图工作室中创建子图 - -Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +- A smart contract address on one of the [supported networks](/developing/supported-networks/) -Once your wallet is connected, you can begin by clicking “Create a Subgraph." It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name." +## Step-by-step -## 2. 安装 Graph CLI +### 1. 安装 Graph CLI -The Graph CLI is written in TypeScript and you will need to have `node` and either `npm` or `yarn` installed to use it. Check that you have the most recent CLI version installed. +You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm`, `yarn` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version. 在本地计算机上,运行以下命令之一: @@ -35,133 +29,161 @@ Using [yarn](https://yarnpkg.com/): yarn global add @graphprotocol/graph-cli ``` -## 3. Initialize your subgraph from existing contract +### 2. Create your subgraph + +If your contract has events, the `init` command will automatically create a scaffold of a subgraph. + +#### Create via Graph CLI + +Use the following command to create a subgraph in Subgraph Studio using the CLI: + +```sh +graph init --product subgraph-studio +``` + +#### Create via Subgraph Studio + +Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys. 
-Initialize your subgraph from an existing contract by running the initialize command: +1. Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet. +2. Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name". + +For additional information on subgraph creation and the Graph CLI, see [Creating a Subgraph](/developing/creating-a-subgraph). + +### 3. Initialize your subgraph + +#### From an existing contract + +The following command initializes your subgraph from an existing contract: ```sh graph init --studio ``` -> 您可以在[Subgraph Studio](https://thegraph.com/studio/)的子图页面找到针对您特定子图的命令。 +> Note: If your contract was verified on Etherscan, then the ABI will automatically be created in the CLI. + +您可以在[Subgraph Studio](https://thegraph.com/studio/)的子图页面找到针对您特定子图的命令。 -初始化子图时,CLI工具会要求您提供以下信息: +When you initialize your subgraph, the CLI will ask you for the following information: -- 协议:选择子图索引数据的协议 -- 子图段塞:为您的子图创建一个名称。您的子图段塞是子图的标识符。 -- 创建子图的目录:选择您的本地目录 -- 以太坊网络(可选):您可能需要指定子图将从哪个EVM兼容网络索引数据 -- 合约地址:找到要查询数据的智能合约地址 -- ABI:如果ABI不是自动填充的,则需要将其手动输入为JSON文件 -- 起始区块:建议您在子图索引区块链数据时输入起始区块以节省时间。您可以通过查找部署合约区块来定位起始区块。 -- 合约名称:输入您的合约名称 -- 将合约事件作为实体进行索引:建议您将其设置为true,因为它将自动为每个发出的事件向子图添加映射 -- 添加其他合约(可选):您可以添加其他合约 +- Protocol: Choose the protocol your subgraph will be indexing data from. +- Subgraph slug: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph. +- Directory to create the subgraph in: Choose your local directory. +- Ethereum network (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from. +- Contract address: Locate the smart contract address you’d like to query data from. +- ABI: If the ABI is not auto-populated, you will need to input it manually as a JSON file. +- Start Block: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed. +- Contract Name: Input the name of your contract. +- Index contract events as entities: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event. +- Add another contract (optional): You can add another contract. 请参阅下面的屏幕截图,以获取初始化子图时所需的示例: ![Subgraph command](/img/subgraph-init-example.png) -## 4. Write your subgraph +### 4. Write your subgraph -前面的命令创建了一个原始子图,可以将其用作构建子图的起点。当对子图进行更改时,将主要使用三个文件: +The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph. -- Manifest (`subgraph.yaml`) - The manifest defines what datasources your subgraphs will index. -- Schema (`schema.graphql`) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (`mapping.ts`) - This is the code that translates data from your datasources to the entities defined in the schema. +When making changes to the subgraph, you will mainly work with three files: -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). +- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index. +- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph. +- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema. -## 5. 
Deploy to Subgraph Studio +For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/). -一旦您的子图被编写好,请运行以下命令: +### 5. Deploy your subgraph -```sh -$ graph codegen -$ graph build -``` +Remember, deploying is not the same as publishing. + +- When you deploy a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. +- When you publish a subgraph, you are publishing it onchain to the decentralized network. + +1. 一旦您的子图被编写好,请运行以下命令: + + ```sh + graph codegen + graph build + ``` + +2. Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio. -- 认证并部署子图。部署密钥可以在子图工作室的子图页面上找到。 +![Deploy key](/img/subgraph-studio-deploy-key.jpg) +```` ```sh -$ graph auth --studio -$ graph deploy --studio -``` -您将被要求输入一个版本标签。强烈建议使用(语义化版本)[semver](https://semver.org/) 进行版本控制,如`0.0.1`。尽管如此,您可以自由选择任何字符串作为版本,比如:`v1`,`version1`,`asdf`。 - -## 6. 测试子图 - -In Subgraph Studio's playground environment, you can test your subgraph by making a sample query. - -日志会告诉你你的子图是否有任何错误。操作子图的日志如下所示: - -![Subgraph logs](/img/subgraph-logs-image.png) - -如果子图失败了,可以通过使用 GraphiQL Playground查询子图的健康状况。注意,你可以利用下面的查询,输入你的子图的部署 ID。在这种情况下,Qm... 是部署 ID(可以在子图页面的详细信息下找到)。下面的查询会提示,当一个子图失败时,则可进行相应调试。 - -```graphql -{ - indexingStatuses(subgraphs: ["Qm..."]) { - node - synced - health - fatalError { - message - block { - number - hash - } - handler - } - nonFatalErrors { - message - block { - number - hash - } - handler - } - chains { - network - chainHeadBlock { - number - } - earliestBlock { - number - } - latestBlock { - number - } - lastHealthyBlock { - number - } - } - entityCount - } -} +graph auth --studio + +graph deploy --studio ``` +```` -## 7. Publish your subgraph to The Graph’s Decentralized Network +- The CLI will ask for a version label. + - It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`. That said, you can choose any string for the version such as: `v1`, `version1`, `asdf`, etc. -Once your subgraph has been deployed to Subgraph Studio, you have tested it out, and you are ready to put it into production, you can then publish it to the decentralized network. +### 6. Review your subgraph -In Subgraph Studio, you will be able to click the publish button on the top right of your subgraph's page. +If you’d like to examine your subgraph before publishing it to the network, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following: -选择您希望发布子图的网络。建议将子图发布到 Arbitrum One,以利用[更快的交易速度和更低的Gas费用](/arbitrum/arbitrum-faq)。 +- Run a sample query. +- Analyze your subgraph in the dashboard to check information. +- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this: -The (upgrade Indexer)[/sunrise/#about-the-upgrade-indexer] will begin serving queries on your subgraph regardless of subgraph curation, and it will provide you with 100,000 free queries per month. + ![Subgraph logs](/img/subgraph-logs-image.png) -For a higher quality of service and stronger redundancy, you can curate your subgraph to attract more Indexers. At the time of writing, it is recommended that you curate your own subgraph with at least 3,000 GRT to ensure 3-5 additional Indexers begin serving queries on your subgraph. +### 7. 
Publish your subgraph to The Graph Network -为了节省gas成本,您可以在将子图发布到Graph的去中心化网络时选择此按钮,在发布子图的同一交易中策展子图: +Publishing a subgraph to the decentralized network makes it available for [Curators](/network/curating/) to begin curating it and [Indexers](/network/indexing/) to begin indexing it. -![Subgraph publish](/img/publish-and-signal-tx.png) +#### Publishing with Subgraph Studio + +1. To publish your subgraph, click the Publish button in the dashboard. +2. Select the network to which you would like to publish your subgraph. + +#### Publishing from the CLI + +As of version 0.73.0, you can also publish your subgraph with the Graph CLI. + +1. Open the `graph-cli`. + +2. Use the following commands: + + ```sh + graph codegen && graph build + ``` + + Then, + + ```sh + graph publish + ``` -## 8. Query your subgraph +3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice. + +![cli-ui](/img/cli-ui.png) + +To customize your deployment, see [Publishing a Subgraph](/publishing/publishing-a-subgraph/). + +#### Adding signal to your subgraph + +1. To attract indexers to query your subgraph, you should add GRT curation signal to it. + + - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph. + +2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount. + + - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks. + +To learn more about curation, read [Curating](/network/curating/). + +To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option: + +![Subgraph publish](/img/publish-and-signal-tx.png) -现在,您可以通过将GraphQL查询发送到子图的查询URL来查询子图,您可以单击查询按钮找到该查询URL。 +### 8. Query your subgraph -If you don't have your API key, you can query via the free, rate-limited development query URL, which can be used for development and staging. +Now, you can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button. -For more information about querying data from your subgraph, read more [here](/querying/querying-the-graph/). +For more information about querying data from your subgraph, read [Querying The Graph](/querying/querying-the-graph/). diff --git a/website/pages/zh/release-notes/assemblyscript-migration-guide.mdx b/website/pages/zh/release-notes/assemblyscript-migration-guide.mdx index 622bdeef307e..3ff2d318c88f 100644 --- a/website/pages/zh/release-notes/assemblyscript-migration-guide.mdx +++ b/website/pages/zh/release-notes/assemblyscript-migration-guide.mdx @@ -127,11 +127,8 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a' ~~~~~~~~~~~~~ in assembly/index.ts(4,3) ``` - 如果您有变量遮蔽的情况,则需要重命名重名变量。 - ### 空值比较 - 对子图进行升级后,有时您可能会遇到如下错误: ```typescript @@ -140,7 +137,6 @@ ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' i ~~~~ in src/mappings/file.ts(41,21) ``` - 要解决此问题,您只需将 `if` 语句更改为如下所示代码: ```typescript @@ -288,7 +284,6 @@ ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/s let somethingOrElse: string = container.data ? 
container.data : "else"; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``` - 要解决此问题,您可以为该属性访问创建一个变量,以便编译器可以执行可空性检查: ```typescript diff --git a/website/pages/zh/release-notes/graphql-validations-migration-guide.mdx b/website/pages/zh/release-notes/graphql-validations-migration-guide.mdx index bdd11ecfbb29..6316302a7142 100644 --- a/website/pages/zh/release-notes/graphql-validations-migration-guide.mdx +++ b/website/pages/zh/release-notes/graphql-validations-migration-guide.mdx @@ -62,7 +62,7 @@ npx@graphql验证/cli-shttps://api-npx @graphql-validate/cli -s https://api-next 或者 -- `https://api-next.thegraph.com/subgraphs/name`\/\ +- `https://api-next.thegraph.com/subgraphs/name`/ 要处理标记为存在验证错误的查询,可以使用您最喜欢的GraphQL查询工具,如Altair或[GraphiQL](https://cloud.hasura.io/public/graphiql),然后尝试您的查询。这些工具还会在用户界面中标记这些错误,甚至在您运行之前。 diff --git a/website/pages/zh/sps/introduction.mdx b/website/pages/zh/sps/introduction.mdx new file mode 100644 index 000000000000..12e3f81c6d53 --- /dev/null +++ b/website/pages/zh/sps/introduction.mdx @@ -0,0 +1,19 @@ +--- +title: Introduction to Substreams-powered Subgraphs +--- + +By using a Substreams package (`.spkg`) as a data source, your subgraph gains access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks. + +There are two methods of enabling this technology: + +Using Substreams [triggers](./triggers): Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph. + +Using [Entity Changes](https://substreams.streamingfast.io/documentation/consume/subgraph/graph-out): By writing more of the logic into Substreams, you can consume the module's output directly into graph-node. In graph-node, you can use the Substreams data to create your subgraph entities. + +It is really a matter of where you put your logic, in the subgraph or the Substreams. Keep in mind that having more of your logic in Substreams benefits from a parallelized model, whereas triggers will be linearly consumed in graph-node. + +Visit the following links for How-To Guides on using code-generation tooling to build your first end-to-end project quickly: + +- [Solana](https://substreams.streamingfast.io/documentation/how-to-guides/solana) +- [EVM](https://substreams.streamingfast.io/documentation/how-to-guides/evm) +- [Injective](https://substreams.streamingfast.io/documentation/how-to-guides/injective) diff --git a/website/pages/zh/sps/triggers-example.mdx b/website/pages/zh/sps/triggers-example.mdx new file mode 100644 index 000000000000..182ef008f110 --- /dev/null +++ b/website/pages/zh/sps/triggers-example.mdx @@ -0,0 +1,140 @@ +--- +title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana" +--- + +## 先决条件 + +Before starting, make sure to: + +- Complete the [Getting Started Guide](https://github.com/streamingfast/substreams-starter) to set up your development environment using a Dev Container. +- Be familiar with The Graph and basic blockchain concepts such as transactions and Protobufs. + +## Step 1: Initialize Your Project + + + +1. Open your Dev Container and run the following command to initialize your project: + + ```bash + substreams init + ``` + +2. Select the "minimal" project option. + +3. 
Replace the contents of the generated `substreams.yaml` file with the following configuration, which filters transactions for the Orca account on the SPL token program ID: + +```yaml +specVersion: v0.1.0 +package: + name: my_project_sol + version: v0.1.0 + +imports: # Pass your spkg of interest + solana: https://github.com/streamingfast/substreams-solana-spl-token/raw/master/tokens/solana-spl-token-v0.1.0.spkg + +modules: + - name: map_spl_transfers + use: solana:map_block # Select corresponding modules available within your spkg + initialBlock: 260000082 + + - name: map_transactions_by_programid + use: solana:solana:transactions_by_programid_without_votes + +network: solana-mainnet-beta + +params: # Modify the param fields to meet your needs + # For program_id: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA + map_spl_transfers: token_contract:orcaEKTdK7LKz57vaAYr9QeNsVEPfiu6QeMU1kektZE +``` + +## Step 2: Generate the Subgraph Manifest + +Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container: + +```bash +substreams codegen subgraph +``` + +You will generate a`subgraph.yaml` manifest which imports the Substreams package as a data source: + +```yaml +--- +dataSources: + - kind: substreams + name: my_project_sol + network: solana-mainnet-beta + source: + package: + moduleName: map_spl_transfers # Module defined in the substreams.yaml + file: ./my-project-sol-v0.1.0.spkg + mapping: + apiVersion: 0.0.7 + kind: substreams/graph-entities + file: ./src/mappings.ts + handler: handleTriggers +``` + +## Step 3: Define Entities in `schema.graphql` + +Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file. Here is an example: + +```graphql +type MyTransfer @entity { + id: ID! + amount: String! + source: String! + designation: String! + signers: [String!]! +} +``` + +This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `source`, `designation`, and `signers`. + +## Step 4: Handle Substreams Data in `mappings.ts` + +With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory. 
The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id: + +```ts +import { Protobuf } from 'as-proto/assembly' +import { Events as protoEvents } from './pb/sf/solana/spl/token/v1/Events' +import { MyTransfer } from '../generated/schema' + +export function handleTriggers(bytes: Uint8Array): void { + const input: protoEvents = Protobuf.decode(bytes, protoEvents.decode) + + for (let i = 0; i < input.data.length; i++) { + const event = input.data[i] + + if (event.transfer != null) { + let entity_id: string = `${event.txnId}-${i}` + const entity = new MyTransfer(entity_id) + entity.amount = event.transfer!.instruction!.amount.toString() + entity.source = event.transfer!.accounts!.source + entity.designation = event.transfer!.accounts!.destination + + if (event.transfer!.accounts!.signer!.single != null) { + entity.signers = [event.transfer!.accounts!.signer!.single!.signer] + } else if (event.transfer!.accounts!.signer!.multisig != null) { + entity.signers = event.transfer!.accounts!.signer!.multisig!.signers + } + entity.save() + } + } +} +``` + +## Step 5: Generate Protobuf Files + +To generate Protobuf objects in AssemblyScript, run the following command: + +```bash +npm run protogen +``` + +This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler. + +## Conclusion + +You’ve successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can now further customize your schema, mappings, and modules to suit your specific use case. + +For more advanced customization and optimizations, check out the official [Substreams documentation](https://substreams.streamingfast.io/tutorials/solana). diff --git a/website/pages/zh/sps/triggers.mdx b/website/pages/zh/sps/triggers.mdx new file mode 100644 index 000000000000..ed19635d4768 --- /dev/null +++ b/website/pages/zh/sps/triggers.mdx @@ -0,0 +1,37 @@ +--- +title: Substreams Triggers +--- + +Custom triggers allow you to send data directly into your subgraph mappings file and entities (similar to tables and fields), enabling full use of the GraphQL layer. By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data within your subgraph’s handler, ensuring efficient and streamlined data management within the subgraph framework. + +> Note: If you haven’t already, visit one of the How-To Guides found [here](./introduction) to scaffold your first project in the Development Container. + +The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created. + +```tsx +export function handleTransactions(bytes: Uint8Array): void { + let transactions = assembly.eth.transaction.v1.Transactions.decode(bytes.buffer).trasanctions // 1. + if (transactions.length == 0) { + log.info('No transactions found', []) + return + } + + for (let i = 0; i < transactions.length; i++) { + // 2. + let transaction = transactions[i] + + let entity = new Transaction(transaction.hash) // 3. + entity.from = transaction.from + entity.to = transaction.to + entity.save() + } +} +``` + +Here's what you’re seeing in the `mappings.ts` file: + +1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object +2. 
Looping over the transactions +3. Create a new subgraph entity for every transaction + +To go through a detailed example of a trigger-based subgraph, [click here](./triggers-example). diff --git a/website/pages/zh/substreams.mdx b/website/pages/zh/substreams.mdx index 359a610c10a2..6823792dec41 100644 --- a/website/pages/zh/substreams.mdx +++ b/website/pages/zh/substreams.mdx @@ -4,9 +4,11 @@ title: 子流 ![子流Logo](/img/substreams-logo.png) -Substreams is a powerful blockchain indexing technology developed for The Graph Network. It enables developers to write Rust modules, compose data streams alongside the community, and provide extremely high-performance indexing due to parallelization in a streaming-first approach. +Substreams is a powerful blockchain indexing technology designed to enhance performance and scalability within The Graph Network. It offers the following features: -With Substreams, developers can quickly extract data from different blockchains (Ethereum, BNB, Solana, ect.) and send it to various locations of their choice, such as a Postgres database, a Mongo database, or a Subgraph. Additionally, Substreams packages enable developers to specify which data they want to extract from the blockchain. +- **Accelerated Indexing**: Substreams reduce subgraph indexing time thanks to a parallelized engine, enabling faster data retrieval and processing. +- **Multi-Chain Support**: Substreams expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara. +- **Multi-Sink Support:** Subgraph, Postgres database, Clickhouse, Mongo database ## 子流的工作原理分为四个步骤 @@ -44,3 +46,7 @@ To learn about the latest version of Substreams CLI, which enables developers to ### 知识拓展 - Take a look at the [Ethereum Explorer Tutorial](https://substreams.streamingfast.io/tutorials/evm) to learn about the basic transformations you can create with Substreams. + +### Substreams Registry + +A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs. Visit [substreams.dev](https://substreams.dev/) to explore a growing collection of ready-to-use Substreams packages across various blockchain networks. diff --git a/website/pages/zh/sunrise.mdx b/website/pages/zh/sunrise.mdx index e38d91f89489..a39e1853cc94 100644 --- a/website/pages/zh/sunrise.mdx +++ b/website/pages/zh/sunrise.mdx @@ -1,233 +1,79 @@ --- -title: Sunrise + Upgrading to The Graph Network FAQ +title: Post-Sunrise + Upgrading to The Graph Network FAQ --- -> Note: This document is continually updated to ensure the most accurate and helpful information is provided. New questions and answers are added on a regular basis. If you can’t find the information you’re looking for, or if you require immediate assistance, [reach out on Discord](https://discord.gg/graphprotocol). If you are looking for billing information, then please refer to [billing](/billing/). +> Note: The Sunrise of Decentralized Data ended June 12th, 2024. -## What is the Sunrise of Decentralized Data? +## What was the Sunrise of Decentralized Data? -The Sunrise of Decentralized Data is an initiative spearheaded by Edge & Node. The goal is to enable subgraph developers to seamlessly upgrade to The Graph’s decentralized network. +The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly. 
-这一计划借鉴了 The Graph 生态系统的许多先前的发展,包括升级索引人以提供对新发布子图的查询服务,以及将新的区块链网络集成到 The Graph 中的能力。 +This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs. -### What are the phases of the Sunrise? +### What happened to the hosted service? -**Sunray**: Enable support for hosted service chains, introduce a seamless upgrade flow, offer a free plan on The Graph Network, and provide simple payment options.\ -**Sunbeam**: The upgrade window that subgraph developers will have to upgrade their hosted service subgraphs to The Graph Network. This window will end at 10 a.m. PT on June 12th 2024.\ -**Sunrise**: Hosted service endpoints will no longer be available after 10 a.m. PT on June 12th, 2024. +The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service. -## Upgrading subgraphs to The Graph Network +During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs. -### When will hosted service subgraphs no longer be available? +### Was Subgraph Studio impacted by this upgrade? -Hosted service query endpoints will remain active until 10 a.m. PT on June 12th. After June 12th at 10 a.m. PT, query endpoints will no longer be available, and developers will no longer be able to deploy new subgraph versions on the hosted service. +No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. -### I didn’t upgrade my subgraph by June 12th at 10 a.m. PT. What should I do if I still want to use it? +### Why were subgraphs published to Arbitrum, did it start indexing a different network? -The hosted service homepage is still accessible and can be used to search for legacy hosted service subgraphs. If your hosted service subgraph has already been auto-upgraded, you may claim its network equivalent as the original owner. If your subgraph was not [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam), you can still upgrade and publish it. - -Upgrading takes less than five minutes on average. Once your subgraph is up, simply set up an API key in Subgraph Studio, update your API query endpoint, and begin querying! - -### Will my hosted service subgraph be supported on The Graph Network? - -Yes, the upgrade Indexer will automatically support all hosted service subgraphs published to The Graph Network for a seamless upgrade experience. - -### How do I upgrade my hosted service subgraph? - -> Note: Upgrading a subgraph to The Graph Network cannot be undone. - - - -To upgrade a hosted service subgraph, you can visit the subgraph dashboard on the [hosted service](https://thegraph.com/hosted-service). - -1. Select the subgraph(s) you want to upgrade. -2. Select the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -Once your subgraph is published, the [upgrade Indexer](#what-is-the-upgrade-indexer) will begin serving queries on it. Once you have generated an API key, you can begin making queries immediately. [Learn more](/cookbook/upgrading-a-subgraph/#what-next). - -### How can I get support with the upgrade process? - -The Graph community is here to support developers as they move to The Graph Network. 
Join The Graph's [Discord server](https://discord.gg/vtvv7FP) and request help in the #upgrade-decentralized-network channel. - -### How can I ensure high quality of service and redundancy for subgraphs on The Graph Network? - -All subgraphs will be supported by the upgrade Indexer. For a higher quality of service and more robust redundancy, you can add a curation signal to subgraphs eligible for indexing rewards. It is recommended that you curate your subgraph with at least 3000 GRT (per subgraph) to attract about 3 Indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -Please note that this indexing incentive does not deplete over time; it has no depletion rate and is instantly withdrawable at any time. If you want to add 3000 GRT in signal, you will need to signal 3030 GRT (as 1% would be burned). Note that a 0.5% fee is also deducted from the subgraph’s signal every time a new version is published. - -Subgraphs which are not eligible for indexing rewards may struggle to attract further Indexers. For example, indexing rewards may not be available for subgraphs on certain chains (check support [here](/developing/supported-networks)). - -Members from these blockchain communities are encouraged to integrate their chain through the [Chain Integration Process](/chain-integration-overview/). - -### How do I publish new versions to the network? - -You can deploy new versions of your subgraph directly to Subgraph Studio, which provides a testing environment, before publishing to the network for production usage. Subgraph Studio has a different deployment command and requires a `version-label` for each new deployment. - -1. Upgrade to the latest version of [graph-cli](https://www.npmjs.com/package/@graphprotocol/graph-cli) -2. Update your deploy command - -```sh -# Authorize with Subgraph Studio, available on your subgraph page -## Alternativel pass this into the deploy command as --access-token (see below) -graph auth --studio - -# Deploy to Subgraph Studio -## Unlike the hosted service, the name is just the subgraph name (no github id) -## If using `--node` directly, you can pass in https://api.studio.thegraph.com/deploy/ -graph deploy --studio --version --access-token -``` - -This new version will then sync in Subgraph Studio, a testing and sandbox environment. When you are ready to move a new version to production, you can [publish the subgraph version](/publishing/publishing-a-subgraph). - -> Publishing requires Arbitrum ETH - upgrading your subgraph also airdrops a small amount to facilitate your first protocol interactions 🧑‍🚀 - -### I use a subgraph developed by someone else, how can I make sure that my service isn't interrupted? - -When the owner has upgraded their subgraph, you will be able to easily go from the subgraph's hosted service page to the corresponding subgraph on The Graph Network, and update your application to use the new subgraph's query URL. [Learn more](/querying/querying-the-graph). - -Around the start of June, Edge & Node will automatically upgrade actively queried subgraphs. This will give any third-party data consumers an opportunity to move subgraph endpoints to The Graph Network before 10 a.m. on June 12th. The subgraph owners will still be able to claim these subgraphs on the network using the hosted service upgrade flow. - -### My subgraph has been auto-upgraded, what does that mean? 
- -Subgraphs on the hosted service are open APIs, and many subgraphs are relied upon by third-party developers to build their applications. To give those developers sufficient time to move to The Graph Network, Edge & Node will be "auto-upgrading" highly used subgraphs. A link to the "auto-upgraded" subgraph will be visible on the original subgraph's page on the hosted service. - -Owners of "auto-upgraded" subgraphs can easily claim their upgraded subgraphs using the same [upgrade flow](/cookbook/upgrading-a-subgraph) - such subgraphs can be identified by their "auto-upgraded" tag. Ownership of the subgraph on The Graph Network will be transferred to the owner's wallet. - -### My subgraph has been auto-upgraded, but I need to deploy a new version - -You can use the [upgrade flow](/cookbook/upgrading-a-subgraph) to claim the auto-upgraded subgraph, and then you can deploy a new version in Subgraph Studio, using the same infrastructure that powers the hosted service. - -If you require an urgent fix, please contact support. - -### What happens if I don't upgrade my subgraph? - -Subgraphs will be queryable on the hosted service until 10 a.m. PT on June 12th. After this date, the hosted service homepage will still be accessible, however, query endpoints will no longer be available. Owners of hosted service subgraphs will still be able to upgrade their subgraphs to The Graph Network after June 12th, though earlier upgrades are entitled to [earn rewards](https://thegraph.com/sunrise-upgrade-program/). Developers will also be able to claim [auto-upgraded subgraphs](https://thegraph.com/blog/unveiling-updated-sunrise-decentralized-data/#phase-2-sunbeam). - -### What should I do with my subgraphs on the hosted service? Will they stop working and should I delete them? - -It is not possible to delete subgraphs. Query endpoints will remain active until 10 a.m. PT on June 12th, regardless of whether they have been upgraded or not. - -### Will Subgraph Studio be impacted by this upgrade? - -No, Subgraph Studio will not be impacted by Sunrise. - -### What will happen to the hosted service? - -After 10 a.m. PT on June 12th, query endpoints will no longer be available, and owners won't be able to deploy or query the hosted service. However, the hosted service UI will still show subgraph pages, and subgraph owners will be able to upgrade their subgraphs if they haven't already. The hosted service UI will be retired at a later date. - -### Will subgraphs need to be re-indexed again? - -No, rest assured that your subgraph will not need to be re-indexed when it is upgraded to The Graph Network. Subgraphs will be immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service. If your subgraph is indexing a network that is eligible for indexing rewards, you can add signal to attract indexers. [Learn more about adding signal to your subgraph](/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph). - -### I’m experiencing indexing issues. What should I do? - -Rest assured that network Indexers are prepared to provide support during this upgrade. If you experience issues with any of your subgraph queries, please reach out to support@thegraph.zendesk.com - -### Why is my subgraph being published to Arbitrum, is it indexing a different network? - -The Graph Network was originally deployed on mainnet Ethereum but moved to Arbitrum One to reduce gas costs for all users. 
As such any new subgraphs are published to The Graph Network on Arbitrum so that they can be supported by Indexers. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](https://thegraph.com/docs/en/developing/supported-networks/) - -### How can I get started querying subgraphs on The Graph Network? - -You can explore available subgraphs on [Graph Explorer](https://thegraph.com/explorer). [Learn more about querying subgraphs on The Graph](/querying/querying-the-graph). +The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/developing/supported-networks/) ## About the Upgrade Indexer -### What is the upgrade Indexer? - -The upgrade Indexer is designed to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and to support new versions of existing subgraphs that have not yet been indexed. - -The upgrade Indexer aims to bootstrap chains that don't have indexing rewards yet on The Graph Network and to serve as a fallback for new subgraph versions. The goal is to ensure that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +> The upgrade Indexer is currently active. -### What chains does the upgrade Indexer support? +The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed. -The upgrade Indexer supports chains that were previously only available on the hosted service. +### What does the upgrade Indexer do? -请在[此处](/developing/supported-networks/) 查找支持的链的综合列表。 +- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published. +- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/developing/supported-networks/). +- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them. ### 为什么 Edge & Node 运行升级索引人? -Edge & Node has historically maintained the hosted service and, as a result, has already synced data for hosted service subgraphs. - -All Indexers are encouraged to become upgrade Indexers as well. However, note that operating an upgrade Indexer is primarily a public service to support new subgraphs and additional chains that lack indexing rewards before they are approved by The Graph Council. +Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs. ### What does the upgrade indexer mean for existing Indexers? -Chains that were previously only supported on the hosted service will now be available to developers on The Graph Network without indexing rewards at first, but it will unlock query fees for any Indexer that is interested. 
This should lead to an increase in the number of subgraphs being published on The Graph Network, providing more opportunities for Indexers to index and serve these subgraphs in return for query fees, even before indexing rewards are enabled for a chain. +Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first. + +However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain. -升级索引人还向索引人社区提供关于 The Graph Network 上潜在的子图需求和新链的信息。 +The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network. ### 这对于委托人来说意味着什么? -升级索引人为代币委托人提供了强大的机会。随着越来越多的子图从托管服务迁移到 The Graph Network,委托人将从增加的网络活动中获益。 +The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity. -### 升级索引人会与现有的索引人竞争奖励吗? +### Did the upgrade Indexer compete with existing Indexers for rewards? -不,升级索引人只会为每个子图分配最低金额,并不会收集索引奖励。 +No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards. -It operates on an “as needed” basis and serves as a fallback until sufficient service quality is achieved by at least 3 other Indexers in the network for respective chains and subgraphs. +It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs. -### 这将如何影响子图开发者? +### How does this affect subgraph developers? -Subgraph developers will be able to query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or publishing from Subgraph Studio, as no lead time will be required for indexing. +Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph) was not impacted by this upgrade. -### 这如何使数据消费者受益? +### How does the upgrade Indexer benefit data consumers? The upgrade Indexer enables chains on the network that were previously only supported on the hosted service. Therefore, it widens the scope and availability of data that can be queried on the network. -### 升级的索引人将如何定价查询? - -升级的索引人将按市场价格定价查询,以不影响查询费用市场。 - -### 升级索引人停止支持一个子图的标准是什么? - -升级的索引人将为一个子图提供服务,直到它能够获得由至少3个其他索引人提供的持续且成功的查询服务。 - -此外,如果一个子图在过去的30天内没有被查询,升级的索引人将停止支持该子图。 - -Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it will have a small allocation size, and other Indexers will be chosen for queries ahead of it. - -## About The Graph Network - -### 我需要自己运行基础设施吗? - -No, all infrastructure is operated by independent Indexers on The Graph Network, including the upgrade Indexer ([read more below](#what-is-the-upgrade-indexer)). - -You can use [Subgraph Studio](https://thegraph.com/studio/) to create, test, and publish your subgraph. 
All hosted service users must upgrade their subgraph to The Graph Network before 10 a.m. PT on June 12th, 2024. - -The [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability. - -When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on that specific version. - -To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with. - -一旦您的子图达到足够的策展信号,并且其他索引人开始支持它,那么升级索引人将逐渐减少,这将允许其他索引人收集索引奖励和查询费用。 - -### 我应该自己托管索引基础设施吗? - -与使用The Graph Network相比,为您自己的项目运行基础设施需要的资源明显更多(/network/benefits/)。 - -Additionally, The Graph Network is significantly more robust, reliable, and cost-efficient than anything provided by a single organization or team. Hundreds of independent Indexers around the world power The Graph Network, ensuring safety, security, and redundancy. - -话虽如此,如果您仍有兴趣运行[Graph Node](https://github.com/graphprotocol/graph-node),考虑加入The Graph Network,作为索引人,通过为您的子图和其他子图提供数据来赚取索引奖励和查询费用。了解如何成为索引人的更多信息,请参阅[此链接](https://thegraph.com/blog/how-to-become-indexer/)。 - -### 我应该使用中心化的索引提供者吗? - -If you are building in web3, the moment you use a centralized indexing provider, you are giving them control of your dapp and data. The Graph’s decentralized network offers [superior quality of service](https://thegraph.com/blog/qos-the-graph-network/), reliability with unbeatable uptime thanks to node redundancy, significantly [lower costs](/network/benefits/), and keeps you from being hostage at the data layer. - -With The Graph Network, your subgraph is public and anyone can query it openly, which increases the usage and network effects of your dapp. - -Additionally, Subgraph Studio provides 100,000 free monthly queries on the Free Plan, before payment is needed for additional usage. - -以下是The Graph相对于中心化托管的好处的详细分析: +### How does the upgrade Indexer price queries? -- **弹性与冗余性**:分散系统因其分布式性质而本质上更加坚固和弹性十足。数据不存储在单一服务器或位置上,而是由全球数百个独立的索引人提供。这降低了数据丢失或服务中断的风险,即使一个节点失败,也能实现卓越的可用性(99.99%)。 +The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market. -- **服务质量**:除了令人印象深刻的可用性之外,The Graph Network具有约106毫秒的中位查询速度(延迟),以及相对于托管的替代方案,更高的查询成功率。更多信息请参阅[此博客](https://thegraph.com/blog/qos-the-graph-network/)。 +### When will the upgrade Indexer stop supporting a subgraph? -- **Censorship Resistance**: Centralized systems are targets for censorship, either through regulatory pressures or network attacks. In contrast, the dispersed architecture of decentralized systems makes them much harder to censor, which ensures continuous data availability. +The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it. 
-- **Transparency and Trust**: Decentralized systems operate openly, enabling anyone to independently verify the data. This transparency builds trust among network participants because they can verify the system's integrity without relying on a central authority. +Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days. -正如您选择区块链网络以获取分散性、安全性和透明性一样,选择The Graph Network是这些相同原则的延伸。通过将您的数据基础设施与这些价值观保持一致,您确保了一个有凝聚力、弹性十足且以信任为驱动的开发环境。 +Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it. diff --git a/website/pages/zh/supported-network-requirements.mdx b/website/pages/zh/supported-network-requirements.mdx index 62bb55c63746..d9e844cbbe43 100644 --- a/website/pages/zh/supported-network-requirements.mdx +++ b/website/pages/zh/supported-network-requirements.mdx @@ -2,17 +2,17 @@ title: Supported Network Requirements --- -| 网络 | Guides | System Requirements | Indexing Rewards | -| --- | --- | --- | :-: | -| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVME preffered)
    _last updated 14th May 2024_ | ✅ | -| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | -| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| 以太坊 | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | -| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | -| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | -| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | +| 网络 | Guides | System Requirements | Indexing Rewards | +| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | :--------------: | +| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 5 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
    Debian 12/Ubuntu 22.04
    16 GB RAM
    >= 4.5TB (NVMe preferred)<br />
    _last updated 14th May 2024_ | ✅ | +| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
    Ubuntu 22.04
    >=32 GB RAM
    >= 14 TiB NVMe SSD
    _last updated 22nd June 2024_ | ✅ | +| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 2 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| 以太坊 | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count
    Ubuntu 22.04
    16GB+ RAM
    >=3TB (NVMe recommended)
    _last updated August 2023_ | ✅ | +| Fantom | [Docker Guide](https://docs.infradao.com/archive-nodes-101/fantom/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 13 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Gnosis | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/gnosis/erigon/baremetal) | 6 core / 12 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 3 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Linea | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/linea/baremetal) | 4+ core CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 2nd April 2024_ | ✅ | +| Optimism | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/erigon/baremetal)

    [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal)
    [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU
    Ubuntu 22.04
    16GB+ RAM
    >= 8 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU
    Ubuntu 22.04
    32GB+ RAM
    >= 10 TiB NVMe SSD
    _last updated August 2023_ | ✅ | +| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal)
    [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU
    Debian 12
    16GB+ RAM
    >= 1 TiB NVMe SSD
    _last updated 3rd April 2024_ | ✅ | diff --git a/website/pages/zh/tap.mdx b/website/pages/zh/tap.mdx new file mode 100644 index 000000000000..b6bf97310363 --- /dev/null +++ b/website/pages/zh/tap.mdx @@ -0,0 +1,197 @@ +--- +title: TAP Migration Guide +--- + +Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust. + +## 概述 + +[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features: + +- Efficiently handles micropayments. +- Adds a layer of consolidations to on-chain transactions and costs. +- Allows Indexers control of receipts and payments, guaranteeing payment for queries. +- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders. + +## Specifics + +TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process. + +For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value. + +### RAV Details + +- It’s money that is waiting to be sent to the blockchain. + +- It will continue to send requests to aggregate and ensure that the total value of non-aggregated receipts does not exceed the `amount willing to lose`. + +- Each RAV can be redeemed once in the contracts, which is why they are sent after the allocation is closed. + +### Redeeming RAV + +As long as you run `tap-agent` and `indexer-agent`, everything will be executed automatically. The following provides a detailed breakdown of the process: + +1. An Indexer closes allocation. + +2. ` period, tap-agent` takes all pending receipts for that specific allocation and requests an aggregation into a RAV, marking it as `last`. + +3. `indexer-agent` takes all the last RAVS and sends redeem requests to the blockchain, which will update the value of `redeem_at`. + +4. During the `` period, `indexer-agent` monitors if the blockchain has any reorganizations that revert the transaction. + + - If it was reverted, the RAV is resent to the blockchain. If it was not reverted, it gets marked as `final`. 
## Blockchain Addresses

### Contracts

| Contract            | Arbitrum Sepolia (421614)                    | Arbitrum Mainnet (42161)                     |
| ------------------- | -------------------------------------------- | -------------------------------------------- |
| TAP Verifier        | `0xfC24cE7a4428A6B89B52645243662A02BA734ECF` | `0x33f9E93266ce0E108fc85DdE2f71dab555A0F05a` |
| AllocationIDTracker | `0xAaC28a10d707bbc6e02029f1bfDAEB5084b2aD11` | `0x5B2F33d7Ca6Ec88f5586f2528f58c20843D9FE7c` |
| Escrow              | `0x1e4dC4f9F95E102635D8F7ED71c5CdbFa20e2d02` | `0x8f477709eF277d4A880801D01A140a9CF88bA0d3` |

### Gateway

| Component  | Edge and Node Mainnet (Arbitrum Sepolia)      | Edge and Node Testnet (Arbitrum Mainnet)      |
| ---------- | --------------------------------------------- | --------------------------------------------- |
| Sender     | `0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467`   | `0xC3dDf37906724732FfD748057FEBe23379b0710D`   |
| Signers    | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211`   | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE`   |
| Aggregator | `https://tap-aggregator.network.thegraph.com`  | `https://tap-aggregator.testnet.thegraph.com`  |

### Requirements

In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query it, or host it yourself on your own `graph-node`.

- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)

> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.

## Migration Guide

### Software versions

| Component       | Version     | Image Link                                                                                                                 |
| --------------- | ----------- | -------------------------------------------------------------------------------------------------------------------------- |
| indexer-service | v1.0.0-rc.6 | [indexer-service](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-service-rs/264320627?tag=1.0.0-rc.6) |
| indexer-agent   | PR #995     | [indexer-agent](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent/266166026?tag=sha-d98cf80)          |
| tap-agent       | v1.0.0-rc.6 | [tap-agent](https://github.com/graphprotocol/indexer-rs/pkgs/container/indexer-tap-agent/264320547?tag=1.0.0-rc.6)        |

### Steps

1. **Indexer Agent**

   - Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components).
   - Pass the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs.

2. **Indexer Service**

   - Fully replace your current configuration with the [new Indexer Service rs](https://github.com/graphprotocol/indexer-rs). It’s recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).
   - Like the older version, Indexer Service can easily be scaled horizontally. It is still stateless.

3. **TAP Agent**

   - Run a _single_ instance of [TAP Agent](https://github.com/graphprotocol/indexer-rs) at all times. It’s recommended that you use the [container image](https://github.com/orgs/graphprotocol/packages?repo_name=indexer-rs).

4. **Configure Indexer Service and TAP Agent**
   Configuration is a TOML file shared between `indexer-service` and `tap-agent`, supplied with the argument `--config /path/to/config.toml`.

   Check out the full [configuration](https://github.com/graphprotocol/indexer-rs/blob/main/config/maximal-config-example.toml) and the [default values](https://github.com/graphprotocol/indexer-rs/blob/main/config/default_values.toml).

For minimal configuration, use the following template:

```bash
# You will have to change *all* the values below to match your setup.
#
# Some of the config values below are global Graph Network values, which you can find here:
#
#
# Pro tip: if you need to load some values from the environment into this config, you
# can override them with environment variables. For example, the following can be replaced
# by [PREFIX]_DATABASE_POSTGRESURL, where PREFIX can be `INDEXER_SERVICE` or `TAP_AGENT`:
#
# [database]
# postgres_url = "postgresql://indexer:${POSTGRES_PASSWORD}@postgres:5432/indexer_components_0"

[indexer]
indexer_address = "0x1111111111111111111111111111111111111111"
operator_mnemonic = "celery smart tip orange scare van steel radio dragon joy alarm crane"

[database]
# The URL of the Postgres database used for the indexer components. The same database
# that is used by the `indexer-agent`. It is expected that `indexer-agent` will create
# the necessary tables.
postgres_url = "postgres://postgres@postgres:5432/postgres"

[graph_node]
# URL to your graph-node's query endpoint
query_url = ""
# URL to your graph-node's status endpoint
status_url = ""

[subgraphs.network]
# Query URL for the Graph Network subgraph.
query_url = ""
# Optional, deployment to look for in the local `graph-node`, if locally indexed.
# Locally indexing the subgraph is recommended.
# NOTE: Use `query_url` or `deployment_id` only
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

[subgraphs.escrow]
# Query URL for the Escrow subgraph.
query_url = ""
# Optional, deployment to look for in the local `graph-node`, if locally indexed.
# Locally indexing the subgraph is recommended.
# NOTE: Use `query_url` or `deployment_id` only
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

[blockchain]
# The chain ID of the network that the graph network is running on
chain_id = 1337
# Contract address of TAP's receipt aggregate voucher (RAV) verifier.
receipts_verifier_address = "0x2222222222222222222222222222222222222222"

########################################
# Specific configurations to tap-agent #
########################################
[tap]
# This is the amount of fees you are willing to risk at any given time. For example,
# if the sender stops supplying RAVs for long enough and the fees exceed this
# amount, the indexer-service will stop accepting queries from the sender
# until the fees are aggregated.
# NOTE: Use strings for decimal values to prevent rounding errors
# e.g:
# max_amount_willing_to_lose_grt = "0.1"
max_amount_willing_to_lose_grt = 20

[tap.sender_aggregator_endpoints]
# Key-Value of all senders and their aggregator endpoints
# This one below is for the E&N testnet gateway for example.
0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com"
```

Note:

- Values for `tap.sender_aggregator_endpoints` can be found in the [gateway section](/tap/#gateway).
- Values for `blockchain.receipts_verifier_address` must be set according to the [Blockchain Addresses section](/tap/#contracts), using the appropriate chain ID.

**Log Level**

- You can set the log level by using the `RUST_LOG` environment variable.
- It’s recommended that you set it to `RUST_LOG=indexer_tap_agent=debug,info`.

## Monitoring

### Metrics

All components expose port 7300, which can be scraped by Prometheus.

### Grafana Dashboard

You can download the [Grafana Dashboard](https://github.com/graphprotocol/indexer-rs/blob/main/docs/dashboard.json) and import it.

### Launchpad

Currently, there is a WIP version of `indexer-rs` and `tap-agent` that can be found [here](https://github.com/graphops/launchpad-charts/tree/feat/indexer-rs/charts/graph-network-indexer).
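As a quick operational reference, the sketch below ties the pieces above together: the shared `--config` flag, the recommended `RUST_LOG` setting, and a check of the metrics port. The binary name and the `/metrics` path are assumptions — substitute the entrypoint your container image or build actually provides, and confirm the metrics path for your version.

```bash
# Minimal sketch, assuming a `tap-agent` binary is on the PATH; indexer-service
# follows the same pattern, since both read the same TOML file via --config.
export RUST_LOG=indexer_tap_agent=debug,info   # recommended log level from above

tap-agent --config /path/to/config.toml

# In another shell: each component exposes its metrics on port 7300.
# The /metrics path is the usual Prometheus convention and an assumption here.
curl -s http://localhost:7300/metrics | head
```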