diff --git a/.github/ISSUE_TEMPLATE/documentation_bug_report.md b/.github/ISSUE_TEMPLATE/documentation_bug_report.md
deleted file mode 100644
index f1c79e2b9a08f..0000000000000
--- a/.github/ISSUE_TEMPLATE/documentation_bug_report.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-name: "\U0001F41B Documentation/aptos.dev Bug report"
-about: Create a bug report to help improve the Aptos Developers' Website
-title: "[Docs]"
-labels: ["documentation"]
-assignees: 'clay-aptos'
-
----
-
-# Aptos Documentation Issue
-
-
-
-## Location
-
-
-
-
-
-## Description
-
-
-
-
-
-
-
-## Audience
-
-
-
-
-
-## Additional context
-
-
-
-
-
-
diff --git a/.github/actions/file-change-determinator/action.yaml b/.github/actions/file-change-determinator/action.yaml
index a8bd8f83ef25c..c2501baa9f34c 100644
--- a/.github/actions/file-change-determinator/action.yaml
+++ b/.github/actions/file-change-determinator/action.yaml
@@ -14,4 +14,4 @@ runs:
uses: fkirc/skip-duplicate-actions@v5
with:
skip_after_successful_duplicate: false # Don't skip if the action is a duplicate (this may cause false positives)
- paths_ignore: '["**/*.md", "developer-docs-site/**"]'
+ paths_ignore: '["**/*.md"]'
diff --git a/.github/workflows/lint-test.yaml b/.github/workflows/lint-test.yaml
index c8108e3115a1e..5f06e6884c713 100644
--- a/.github/workflows/lint-test.yaml
+++ b/.github/workflows/lint-test.yaml
@@ -51,21 +51,6 @@ jobs:
- run: echo "Skipping general lints! Unrelated changes detected."
if: needs.file_change_determinator.outputs.only_docs_changed == 'true'
- # Run the docs linter. This is a PR required job.
- docs-lint:
- runs-on: ubuntu-latest
- steps:
- - uses: actions/checkout@v3
- - uses: actions/setup-node@v3
- with:
- node-version-file: .node-version
- - uses: pnpm/action-setup@v2
- - run: pnpm lint
- working-directory: developer-docs-site
- - run: sudo apt update -y && sudo apt install -y aspell aspell-en
- - run: pnpm spellcheck
- working-directory: developer-docs-site
-
# Run the crypto hasher domain separation checks
rust-cryptohasher-domain-separation-check:
needs: file_change_determinator
diff --git a/developer-docs-site/.gitattributes b/developer-docs-site/.gitattributes
deleted file mode 100644
index d12d8a0491d22..0000000000000
--- a/developer-docs-site/.gitattributes
+++ /dev/null
@@ -1 +0,0 @@
-*.md -whitespace
diff --git a/developer-docs-site/.gitignore b/developer-docs-site/.gitignore
deleted file mode 100644
index b021b9473bc90..0000000000000
--- a/developer-docs-site/.gitignore
+++ /dev/null
@@ -1,20 +0,0 @@
-# OSX
-*.DS_Store
-venv
-
-# Installation
-package-lock.json
-
-# Docusaurus
-build/
-i18n/
-node_modules/
-
-# Generated files
-.docusaurus/
-.cache-loader
-.vercel
-.idea/
-
-# ignore autogenerated docs
-static/docs/rustdocs/
diff --git a/developer-docs-site/.prettierignore b/developer-docs-site/.prettierignore
deleted file mode 100644
index 0350ddb57ede6..0000000000000
--- a/developer-docs-site/.prettierignore
+++ /dev/null
@@ -1,13 +0,0 @@
-# Symlinked SDKs
-static/sdks
-
-# Docusaurus
-build/
-i18n/
-node_modules/
-
-# Generated files
-.docusaurus/
-
-# ignore autogenerated docs
-static/docs/rustdocs/
diff --git a/developer-docs-site/README.md b/developer-docs-site/README.md
index 3dc4e6a505c16..2d2afc9cd6bed 100644
--- a/developer-docs-site/README.md
+++ b/developer-docs-site/README.md
@@ -1,113 +1,3 @@
# Developer Documentation
- - [Installation](#installation)
- - [Requirements](#requirements)
- - [Clone the Aptos repo](#clone-the-aptos-repo)
- - [Build and serve the docs locally](#build-and-serve-the-docs-locally)
- - [Build static html files](#build-static-html-files)
- - [Debugging](#debugging)
-
-This Aptos Developer Documentation is built using [Docusaurus 2](https://docusaurus.io/) and displayed on https://aptos.dev/. Follow the below steps to build the docs locally and test your contribution.
-
-We now use [lychee-broken-link-checker](https://github.com/marketplace/actions/lychee-broken-link-checker) to check for broken links in the GitHub Markdown. We run a corresponding link checker for pages on Aptos.dev.
-
-With results visible at:
-https://github.com/aptos-labs/aptos-core/actions/workflows/links.yml
-
-
-## Installation
-
-**IMPORTANT**: These installation steps apply to a macOS environment.
-
-### Requirements
-
-Before you proceed, make sure you install the following tools.
-
-- Install [Node.js](https://nodejs.org/en/download/) by running the following command in your terminal:
-
-```
-brew install node
-```
-
-- Install the latest [pnpm](https://pnpm.io/installation) by running the following command in your terminal:
-
-```
-curl -fsSL https://get.pnpm.io/install.sh | sh -
-```
-
-## Clone the Aptos repo
-
- ```
- git clone https://github.com/aptos-labs/aptos-core.git
- ```
-
-## Build and serve the docs locally
-
-1. `cd` into the `developer-docs-site` directory in your clone.
-
- ```
- cd aptos-core/developer-docs-site
- ```
-2. Run `pnpm install`:
-
-   ```
-   pnpm install
-   ```
-
-   This step configures the Docusaurus static site generator.
-
-3. Start the server locally. This will also open the locally built docs in your default browser.
-
-> **NOTE**: This step will not generate static html files, but will render the docs dynamically.
-
- ```
- pnpm start
- ```
-
-4. View your changes at: http://localhost:3000/
-
-5. Create a pull request with your changes as described in our [Contributing](https://github.com/aptos-labs/aptos-core/blob/main/CONTRIBUTING.md) README.
-
-## (Optional) Build static html files
-
-Execute the below steps if you want to generate static html documentation files. A `build` directory will be created with the static html files and assets contained in it.
-
-1. Make sure you install dependencies.
-
- ```
- pnpm install
- ```
-2. Build static html files with pnpm.
-
- ```
- pnpm build
- ```
-
-This command generates static html content and places it in the `build` directory.
-
-3. Finally, use the below command to start the documentation server on your localhost.
-
- ```
- pnpm run serve
- ```
-
-## Debugging
-
-Fix formatting issues by running:
-
-```
-pnpm fmt
-```
-
-## Regenerating contributors
-The `src/contributors.json` file (which powers the list of authors at the bottom of doc pages) must be generated manually.
-
-To generate the contributor map, you must authenticate with GitHub. The best way to do that is using the GitHub CLI ([installation guide](https://github.com/cli/cli#installation)). Once you have the GitHub CLI installed, run the following command to authenticate:
-```
-gh auth login --scopes read:user,user:email
-```
-
-Once that is done, you can generate the map with this command:
-```
-pnpm contributors
-```
+This has been moved to https://github.com/aptos-labs/developer-docs
\ No newline at end of file
diff --git a/developer-docs-site/babel.config.js b/developer-docs-site/babel.config.js
deleted file mode 100644
index bfd75dbdfc72a..0000000000000
--- a/developer-docs-site/babel.config.js
+++ /dev/null
@@ -1,3 +0,0 @@
-module.exports = {
- presets: [require.resolve("@docusaurus/core/lib/babel/preset")],
-};
diff --git a/developer-docs-site/docs/CODEOWNERS b/developer-docs-site/docs/CODEOWNERS
deleted file mode 100644
index 3899b59e62cb8..0000000000000
--- a/developer-docs-site/docs/CODEOWNERS
+++ /dev/null
@@ -1,9 +0,0 @@
-# This is the overarching CODEOWNERS file for Aptos.dev documentation.
-# It exists to help route review requests and ensure proper review of changes.
-# We include each subdirectory and relevant owners below:
-# Global rule:
-* @davidiw @gregnazario @movekevin
-## Aptos White Paper
-/aptos-white-paper/** @aching @ShaikhAliMo
-## Nodes
-/nodes/** @rustielin @aptos-labs/prod-eng
diff --git a/developer-docs-site/docs/apis/aptos-labs-developer-portal.md b/developer-docs-site/docs/apis/aptos-labs-developer-portal.md
deleted file mode 100644
index 5fe8da5ad1204..0000000000000
--- a/developer-docs-site/docs/apis/aptos-labs-developer-portal.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: "Aptos Labs Developer Portal"
----
-
-import BetaNotice from '../../src/components/_dev_portal_beta_notice.mdx';
-
-
-
-The [Aptos Labs Developer Portal](https://developers.aptoslabs.com) is your gateway for quick and easy access to Aptos Labs-provided APIs to power your dApp.
-It consists of a Portal (UI) and a set of API Gateways operated by Aptos Labs.
-
-The Developer Portal aims to make it easier to build dApps by:
-
-1. Providing [unified domain names/URLs](../nodes/networks.md) for each API.
-2. Giving you personalized examples of how to use each API.
-3. Offering observability into your personal usage, error rates, and latency of APIs.
-4. Applying rate limits per API developer account/app instead of per origin IP.
-5. (Coming soon) Offering customizable rate limits for high-traffic apps.
-
-To create an Aptos Labs developer account, simply go to https://developers.aptoslabs.com/ and follow the instructions.
-
-### Default Rate Limits for Developer Portal accounts
-
-Currently, the following rate limits apply:
-
-1. gRPC Transaction Stream: 20 concurrent streams per user.
-2. Fullnode API: 5000 requests per 5-minute sliding window.
-3. GraphQL API: 5000 requests per 5-minute sliding window.
-
- Note that requests to the Fullnode API and GraphQL API are counted separately, so you can make 5000 Fullnode API requests AND 5000 GraphQL API requests in the same 5-minute window. The rate limit is applied as a continuous sliding window.
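As a sketch, a continuous sliding window of this kind can be modeled as follows. This is illustrative only; the gateway's actual implementation is not described here, and only the limit and window length come from the numbers above:

```python
from collections import deque

class SlidingWindowLimiter:
    """Illustrative sliding-window counter: allow at most `limit`
    requests in any trailing `window_seconds` interval."""

    def __init__(self, limit=5000, window_seconds=300):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = deque()  # request times, oldest first

    def allow(self, now):
        # Drop requests that have aged out of the trailing window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False

# Small limit for demonstration: the fourth request inside the window is rejected.
limiter = SlidingWindowLimiter(limit=3, window_seconds=300)
print([limiter.allow(t) for t in (0, 10, 20, 30)])  # [True, True, True, False]
```

Because the window slides continuously, capacity frees up as old requests age out rather than resetting at fixed interval boundaries.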
-
-Rate limits are customizable per user upon request. If you have a use case that requires higher rate limits than the default, please open a support case through one of the supported channels in the portal.
-
-### Known Limitations
-
-1. Only authenticated access is supported.
-
-   At the moment, the new URLs introduced by the Developer Portal / API Gateway only support requests with an API key (Bearer authentication).
-   Effectively, this means you can only use the new API gateway-provided URLs from backend apps that can securely hold credentials.
-   We plan to soon add support for anonymous authentication in combination with more sophisticated rate limit protections, which will make these new URLs usable in end-user, client-side-only apps such as browser wallets.
diff --git a/developer-docs-site/docs/apis/fullnode-rest-api.md b/developer-docs-site/docs/apis/fullnode-rest-api.md
deleted file mode 100644
index 4d6f00e3472e8..0000000000000
--- a/developer-docs-site/docs/apis/fullnode-rest-api.md
+++ /dev/null
@@ -1,83 +0,0 @@
----
-title: "Fullnode Rest API"
-slug: "fullnode-rest-api"
----
-
-# Use the Aptos Fullnode REST API
-
-If you wish to use the [Aptos API](https://aptos.dev/nodes/aptos-api-spec/#/), then this guide is for you. It walks you through all you need to integrate the Aptos blockchain into your platform with the Aptos API.
-
-:::tip
-Also see the [System Integrators Guide](../guides/system-integrators-guide.md) for a thorough walkthrough of Aptos integration.
-:::
-
-## Understanding rate limits
-
-As with the [Aptos Indexer](../indexer/api/labs-hosted.md#rate-limits), the Aptos REST API has a rate limit of 5000 requests per five minutes by IP address, whether submitting transactions or querying the API on Aptos-provided nodes. (As a node operator, you may raise those limits on your own node.) Note that this limit can change with or without prior notice.
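When you do hit the limit, back off and retry rather than hammering the node. The sketch below assumes the API signals throttling with HTTP status 429 (a common convention; check the actual response your node returns) and uses a stubbed transport instead of a real HTTP client:

```python
import time

def with_backoff(send, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry `send()` with exponential backoff while it reports throttling.

    `send` returns a (status, body) pair; 429 is treated as 'rate limited'."""
    for attempt in range(max_retries):
        status, body = send()
        if status != 429:
            return status, body
        sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...
    return send()  # one final attempt after the last wait

# Stubbed transport: throttled twice, then succeeds.
responses = iter([(429, ""), (429, ""), (200, "ok")])
status, body = with_backoff(lambda: next(responses), sleep=lambda _: None)
print(status, body)  # 200 ok
```

In a real client, `send` would perform the HTTP request; injecting `sleep` keeps the retry policy testable.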
-
-## Viewing current and historical state
-
-Most integrations into the Aptos blockchain benefit from a holistic and comprehensive overview of the current and historical state of the blockchain. Aptos provides historical transactions, state, and events, all the result of transaction execution.
-
-* Historical transactions specify the execution status, output, and tie to related events. Each transaction has a unique version number associated with it that dictates its global sequential ordering in the history of the blockchain ledger.
-* The state is the representation of all transaction outputs up to a specific version. In other words, a state version is the accumulation of all transactions inclusive of that transaction version.
-* As transactions execute, they may emit events. [Events](../concepts/events.md) are hints about changes in on-chain data.
-
-:::important
-Ensure the [fullnode](../nodes/networks.md) you are communicating with is up to date. The fullnode must reach the version containing your transaction to retrieve relevant data from it. There can be latency from the fullnodes retrieving state from [validator fullnodes](../concepts/fullnodes.md), which in turn rely upon [validator nodes](../concepts/validator-nodes.md) as the source of truth.
-:::
-
-The storage service on a node employs two forms of pruning that erase data from nodes:
-
-* state
-* events, transactions, and everything else
-
-While either of these may be disabled, retaining all historical state versions is not particularly sustainable.
-
-Events and transactions pruning can be disabled by setting [`enable_ledger_pruner`](https://github.com/aptos-labs/aptos-core/blob/cf0bc2e4031a843cdc0c04e70b3f7cd92666afcf/config/src/config/storage_config.rs#L141) to `false` in `storage_config.rs`. This is the default behavior on mainnet. In the near future, Aptos will provide indexers that mitigate the need to query a node directly.
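For reference, disabling the ledger pruner in a node's YAML configuration might look like the sketch below. The exact key names are an assumption based on the storage config linked above and may differ between releases; verify against `storage_config.rs` for your version:

```yaml
# Sketch only -- confirm key names against your release's storage_config.rs.
storage:
  storage_pruner_config:
    ledger_pruner_config:
      enable: false   # keep historical events/transactions instead of pruning
```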
-
-The REST API offers querying transactions and events in these ways:
-
-* [Transactions for an account](https://fullnode.devnet.aptoslabs.com/v1/spec#/operations/get_account_transactions)
-* [Transactions by version](https://fullnode.devnet.aptoslabs.com/v1/spec#/operations/get_transaction_by_version)
-* [Events by event handle](https://fullnode.devnet.aptoslabs.com/v1/spec#/operations/get_events_by_event_handle)
-
-## Reading state with the View function
-
-View functions do not modify blockchain state when called from the API. A [View](https://github.com/aptos-labs/aptos-core/blob/main/api/src/view_function.rs) function and its [input](https://github.com/aptos-labs/aptos-core/blob/main/api/types/src/view.rs) can be used to read potentially complex on-chain state using Move. For example, you can evaluate who has the highest bid in an auction contract. Here are related files:
-
-* [`view_function.rs`](https://github.com/aptos-labs/aptos-core/blob/main/api/src/tests/view_function.rs) for an example
-* related [Move](https://github.com/aptos-labs/aptos-core/blob/90c33dc7a18662839cd50f3b70baece0e2dbfc71/aptos-move/framework/aptos-framework/sources/coin.move#L226) code
-* [specification](https://github.com/aptos-labs/aptos-core/blob/90c33dc7a18662839cd50f3b70baece0e2dbfc71/api/doc/spec.yaml#L8513).
-
-The view function operates like the [Aptos Simulation API](../guides/system-integrators-guide.md#testing-transactions-or-transaction-pre-execution), though with no side effects and an accessible output path. View functions can be called via the `/view` endpoint. Calls to view functions require the module and function names along with input type parameters and values.
-
-A function does not have to be immutable to be tagged as `#[view]`, but if the function is mutable it will not result in state mutation when called from the API.
-If you want to tag a mutable function as `#[view]`, consider making it private so that it cannot be maliciously called during runtime.
-
-In order to use the View functions, you need to [publish the module](../move/move-on-aptos/cli.md#publishing-a-move-package-with-a-named-address) through the [Aptos CLI](../tools/aptos-cli/install-cli/index.md).
-
-In the Aptos CLI, a view function request would look like this:
-```
-aptos move view --function-id devnet::message::get_message --profile devnet --args address:devnet
-{
- "Result": [
- "View functions rock!"
- ]
-}
-```
-
-In the TypeScript SDK, a view function request would look like this:
-```
- const payload: Gen.ViewRequest = {
- function: "0x1::coin::balance",
- type_arguments: ["0x1::aptos_coin::AptosCoin"],
- arguments: [alice.address().hex()],
- };
-
- const balance = await client.view(payload);
-
- expect(balance[0]).toBe("100000000");
-```
-
-The view function returns a list of values as a vector. By default, the results are returned in JSON format; however, they can be optionally returned in Binary Canonical Serialization (BCS) encoded format.
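The same view call can also be issued against the REST API directly by POSTing to the `/view` endpoint described in the API spec. A minimal sketch using only the Python standard library (the devnet URL and the account address are placeholders, and no request is actually sent here):

```python
import json
from urllib import request

FULLNODE_URL = "https://fullnode.devnet.aptoslabs.com/v1"  # public devnet node

def build_view_request(function, type_arguments, arguments):
    """Build the POST request body for the fullnode /view endpoint."""
    body = json.dumps({
        "function": function,
        "type_arguments": type_arguments,
        "arguments": arguments,
    }).encode()
    return request.Request(
        f"{FULLNODE_URL}/view",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_view_request(
    "0x1::coin::balance",
    ["0x1::aptos_coin::AptosCoin"],
    ["0x123"],  # placeholder account address
)
print(req.full_url)  # https://fullnode.devnet.aptoslabs.com/v1/view
```

Sending the request (for example with `urllib.request.urlopen(req)`) would return the same list-of-values result shown in the SDK example above.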
diff --git a/developer-docs-site/docs/apis/index.md b/developer-docs-site/docs/apis/index.md
deleted file mode 100644
index 9dc1353c8cfdc..0000000000000
--- a/developer-docs-site/docs/apis/index.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: "Aptos APIs"
----
-
-The Aptos Blockchain network can be accessed through several APIs, depending on your use case.
-
-1. #### [Aptos Fullnode-embedded REST API](./fullnode-rest-api.md).
-
- This API - embedded into Fullnodes - provides a simple, low latency, yet low-level way of _reading_ state and _submitting_ transactions to the Aptos Blockchain. It also supports transaction simulation.
-
-2. #### [Aptos Indexer-powered GraphQL API](../indexer/indexer-landing.md).
-
- This API provides a high-level, opinionated GraphQL API to _read_ state from the Aptos Blockchain. If your app needs to interact with high-level constructs, such as NFTs, Aptos Objects, or custom Move contracts, you likely want to incorporate the Aptos GraphQL Indexer API in some fashion.
-
-3. #### [Aptos GRPC Transaction Stream API](../indexer/txn-stream/index.md)
-
- This API provides a way to stream historical and current transaction data in real time to an indexing processor. It is used by the Aptos core indexing infrastructure itself, but it can also be used to build app-specific custom indexing processors that process blockchain data in real time.
-
-4. #### Faucet API (Only Testnet/Devnet)
-
- This API lets you mint coins on the Aptos Labs-operated devnet and testnet. Its primary purpose is the development and testing of your apps and Move contracts before you deploy them to mainnet.
-
-
-The code for each of the above-mentioned APIs is open source on [GitHub](https://github.com/aptos-labs/aptos-core). As such, anyone can operate these APIs, and many independent operators and builders worldwide choose to do so.
-
-
-### Aptos Labs operated API Deployments
-
-[Aptos Labs](https://aptoslabs.com) operates a deployment of these APIs on behalf of [Aptos Foundation](https://aptosfoundation.org/) for each [Aptos Network](../nodes/networks.md) and makes them available for public consumption.
-
-At the moment, there are two sets of Aptos Labs API deployments:
-
-1. [APIs with anonymous access and IP-based rate-limiting](../nodes/networks.md)
-2. [(Beta) APIs with authentication and developer-account-based rate limiting through the Aptos Labs Developer Portal](./aptos-labs-developer-portal.md)
diff --git a/developer-docs-site/docs/aptos-white-paper/in-korean.md b/developer-docs-site/docs/aptos-white-paper/in-korean.md
deleted file mode 100644
index a43c8cbdf6689..0000000000000
--- a/developer-docs-site/docs/aptos-white-paper/in-korean.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-title: "한글 (In Korean)"
-slug: "aptos-white-paper-in-korean"
----
-
-# Aptos 블록체인
-## 안전성, 확장성, 향상성 있는 웹3 인프라
-
-초록
-
-블록체인이 새로운 인터넷 인프라로 부상하면서 수만개의 탈중앙 애플리케이션이 배포되어 왔다.
-불행히도, 블록체인은 잦은 중단, 높은 비용, 낮은 처리량 제한 및 수 많은 보안 문제들로 인해
-사용이 아직 보편화되지 않았다. 블록체인 인프라는 웹3 시대의 대중화를 위해서 신뢰할 수 있고
-확장 가능하고, 비용 효율적이며, 널리 사용되는 애플리케이션의 구축을 위해 지속적으로 발전하는
-플랫폼인 클라우드 인프라의 길을 따라야 한다.
-
-우리는 이러한 과제를 해결하기 위해 확장성, 안전성, 신뢰성 및 향상성을 핵심 원칙으로 설계된
-Aptos 블록체인을 제시한다. 지난 3년간 전 세계 350명 이상의 개발자들이 Aptos 블록체인을
-개발했다 [1]. Aptos는 합의, 스마트 컨트랙트 설계, 시스템 보안, 성능 및 탈중앙성 면에서 새롭고
-참신한 혁신을 제안한다. 이러한 기술들의 조합은 웹3 대중화를 위한 기본 구성 요소가 될 것이다:
-
-* 먼저, Aptos 블록체인은 빠르고 안전한 트랜잭션 실행을 위해 Move 언어를 자체적으로 통합하고
-내부적으로 사용한다 [2]. Move Prover는 Move 언어의 정형검증기로써 스마트 컨트랙트의
-불변속성 및 행위에 대한 추가적인 안전을 제공한다. 보안에 대한 이러한 집중을 통해
-개발자는 악성 개체로부터 소프트웨어를 더 잘 보호할 수 있다.
-* 둘째, Aptos 데이터 모델은 유연한 키 관리 및 하이브리드 수탁 옵션을 지원한다. 이는 서명
-전 트랜잭션 투명성과 함께 실제 라이트 클라이언트 프로토콜과 함께 보다 안전하고 신뢰할수
-있는 사용자 경험을 제공한다.
-* 셋째, 높은 처리량과 낮은 지연 시간을 달성하기 위해 Aptos 블록체인은 트랜잭션 처리의 주요
-단계에 모듈화된 파이프라인 방식을 사용한다. 구체적으로는 트랜잭션 전파, 블록 메타데이터
-정렬, 병렬 트랜잭션 실행, 배치(batch) 스토리지 및 원장 인증 등이 동시에 운영된다.
-이 접근 방식은 사용 가능한 모든 물리적 자원을 완벽하게 활용하고, 하드웨어 효율성을 향상
-시키며, 매우 병렬적인 실행을 가능하게 한다.
-* 넷째, 데이터에 대한 사전 지식을 읽고 쓰도록 요구함으로써 트랜잭션 원자성을 파괴하는 다른
-병렬 실행 엔진과 달리 Aptos 블록체인은 개발자에게 그러한 제한을 두지 않는다. 임의로
-복잡한 트랜잭션이 원자성을 효율적으로 지원하여 실제 애플리케이션의 처리량을 높이고 대기
-시간을 단축할 수 있으며 개발을 단순화할 수 있다.
-* 다섯째, Aptos의 모듈형으로 설계된 아키텍처는 클라이언트 유연성을 지원하고 빈번하고 즉각적인
-업그레이드를 위해 최적화되었다. 또한 Aptos 블록체인은 내장된 온체인 변경 관리
-프로토콜을 제공하여, 혁신적인 새로운 기술을 신속하게 배포하고 새로운 웹3 사용 사례를 지원할 수 있다.
-
-
-## 전체 PDF 버전
-
-:::tip 전체 PDF 버전
-
-- **초록**: Aptos 백서의 한국어 버전 전체 PDF를 보려면 [여기를 클릭하십시오](/papers/whitepaper-korean.pdf).
-- **English**: Get the [full PDF of the Aptos White Paper](/papers/Aptos-Whitepaper.pdf).
-:::
diff --git a/developer-docs-site/docs/aptos-white-paper/index.md b/developer-docs-site/docs/aptos-white-paper/index.md
deleted file mode 100644
index 28ca53dafd786..0000000000000
--- a/developer-docs-site/docs/aptos-white-paper/index.md
+++ /dev/null
@@ -1,55 +0,0 @@
----
-title: "Aptos White Paper"
----
-
-# The Aptos Blockchain
-## A Safe, Scalable, and Upgradeable Web3 Infrastructure
-
-Abstract
-
-The rise of blockchains as a new Internet infrastructure has led to developers deploying tens of
-thousands of decentralized applications at rapidly growing rates. Unfortunately, blockchain usage
-is not yet ubiquitous due to frequent outages, high costs, low throughput limits, and numerous
-security concerns. To enable mass adoption in the web3 era, blockchain infrastructure needs
-to follow the path of cloud infrastructure as a trusted, scalable, cost-efficient, and continually
-improving platform for building widely-used applications.
-
-We present the Aptos blockchain, designed with scalability, safety, reliability, and upgradeability
-as key principles, to address these challenges. The Aptos blockchain has been developed over the
-past three years by over 350+ developers across the globe. It offers new and novel innovations
-in consensus, smart contract design, system security, performance, and decentralization. The
-combination of these technologies will provide a fundamental building block to bring web3 to the
-masses:
-
-- First, the Aptos blockchain natively integrates and internally uses the Move language for fast
-and secure transaction execution. The Move prover, a formal verifier for smart contracts
-written in the Move language, provides additional safeguards for contract invariants and
-behavior. This focus on security allows developers to better protect their software from
-malicious entities.
-- Second, the Aptos data model enables flexible key management and hybrid custodial options.
-This, alongside transaction transparency prior to signing and practical light client protocols,
-provides a safer and more trustworthy user experience.
-- Third, to achieve high throughput and low latency, the Aptos blockchain leverages a pipelined
-and modular approach for the key stages of transaction processing. Specifically, transaction
-dissemination, block metadata ordering, parallel transaction execution, batch storage, and
-ledger certification all operate concurrently. This approach fully leverages all available physical resources, improves hardware efficiency, and enables highly parallel execution.
-- Fourth, unlike other parallel execution engines that break transaction atomicity by requiring
-upfront knowledge of the data to be read and written, the Aptos blockchain does not put
-such limitations on developers. It can efficiently support atomicity with arbitrarily complex
-transactions, enabling higher throughput and lower latency for real-world applications and
-simplifying development.
-- Fifth, the Aptos modular architecture design supports client flexibility and optimizes for
-frequent and instant upgrades. Moreover, to rapidly deploy new technology innovations
-and support new web3 use cases, the Aptos blockchain provides embedded on-chain change
-management protocols.
-- Finally, the Aptos blockchain is experimenting with future initiatives to scale beyond individual validator performance: its modular design and parallel execution engine support internal
-sharding of a validator and homogeneous state sharding provides the potential for horizontal
-throughput scalability without adding additional complexity for node operators.
-
-## Full PDF versions
-
-:::tip Full PDF versions
-
-- **English**: Get the [full PDF of the Aptos White Paper](/papers/Aptos-Whitepaper.pdf).
-- **Korean**: Get the [Korean version full PDF of the Aptos White Paper](/papers/whitepaper-korean.pdf).
-:::
diff --git a/developer-docs-site/docs/community/aptos-style.md b/developer-docs-site/docs/community/aptos-style.md
deleted file mode 100644
index 9dc129f1117ee..0000000000000
--- a/developer-docs-site/docs/community/aptos-style.md
+++ /dev/null
@@ -1,229 +0,0 @@
----
-title: "Follow Aptos Style"
-slug: "aptos-style"
----
-
-# Follow Aptos Writing Style
-
-When making [site updates](./site-updates.md), Aptos recommends adhering to this writing and formatting style guide for consistency with the rest of Aptos.dev, as well as accessibility directly in GitHub.com and source code editors.
-
-## Hold contributions to high standards
-
-All doc updates should be thorough and tested. This includes external contributions from the community.
-
-So when reviewing changes, do not merge them in unless all feedback has been addressed.
-
-## Single source in Markdown
-
-There should be one external upstream source of truth for Aptos development, and we aim for that to be Aptos.dev. Edit away in [Markdown](https://www.markdownguide.org/basic-syntax/) format using our instructions for making [site updates](./site-updates.md).
-
-Note, you can easily convert Google Docs to Markdown format using the [Docs to Markdown](https://workspace.google.com/marketplace/app/docs_to_markdown/700168918607) add-on.
-
-## Link from product to docs
-
-Whether you work on an external product or an internal tool, your work likely has an interface. From it, you should link to your user docs, along with bug queues and contact information.
-
-## Peer review docs
-
-Your users should not be the first people to use your documentation. Have your peers review your docs just as they review your code. Have them walk through the flow. If they cannot, your users cannot either.
-
-## Form links properly
-
-When linking to absolute files (code, reference) not on Aptos.dev, always use the fully qualified domain. Otherwise, use relative links. Always include the file extension (`.md` for Markdown).
-
-Correct:
-
-```markdown
-[How Base Gas Works](../../../../concepts/base-gas.md)
-```
-
-Incorrect:
-
-```markdown
-[How Base Gas Works](/concepts/base-gas)
-```
-
-The second example will work on [Aptos.dev](http://Aptos.dev) but not when navigating the docs via [GitHub.com](http://GitHub.com) or in a source viewer or editor. For links to files in the same directory, include the leading `./` like so:
-
-```markdown
-[proofs](./txns-states.md#proofs)
-```
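This convention can be spot-checked mechanically. The following is an illustrative sketch, not an official lint: the regex is a simplification that ignores images and other edge cases, and the function name is invented for this example:

```python
import re

LINK = re.compile(r"\[[^\]]*\]\(([^)]+)\)")  # [text](target)

def bad_relative_links(markdown):
    """Flag relative doc links that omit the .md extension or a leading ./ or ../."""
    bad = []
    for target in LINK.findall(markdown):
        path = target.split("#", 1)[0]  # strip any #anchor
        if path.startswith(("http://", "https://")) or not path:
            continue  # fully qualified links and pure anchors are out of scope
        if not path.endswith(".md") or not path.startswith((".", "/")):
            bad.append(target)
    return bad

print(bad_relative_links("[ok](./txns-states.md#proofs) [bad](/concepts/base-gas)"))
# ['/concepts/base-gas']
```

Running a check like this over changed files before review catches the most common link mistakes early.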
-
-## Use permanent links to code
-
-When linking to code files in GitHub, use a [permanent link](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-a-permanent-link-to-a-code-snippet) to the relative line or set of lines.
-
-## Link check your pages
-
-It never hurts to run a link check against your pages or entire site. Here are some freely available and useful tools for **public** site checking:
-
- * https://validator.w3.org/checklink
- * https://www.drlinkcheck.com/
-
-Set recursion depth accordingly to delve into sub-links.
-
-## Add images to `static` directory
-
-Place all images in the [`developer-docs-site/static/img`](https://github.com/aptos-labs/aptos-core/tree/main/developer-docs-site/static/img) directory and use relative links to include them. See the image paths in [Set up a React app](../tutorials/build-e2e-dapp/2-set-up-react-app.md) for examples.
-
-## Redirect moved pages
-
-Avoid losing users by adding redirects for moved and renamed [Aptos.dev](http://Aptos.dev) pages in:
-https://github.com/aptos-labs/aptos-core/blob/main/developer-docs-site/docusaurus.config.js
-
-## Name files succinctly
-
-Use short, detailed names with no spaces:
-
-* hyphenate rather than underscore
-* be descriptive
-* use noun (topic) first, with verb optional: e.g., accounts.md, site-updates.md
-
-## Use active tense
-
-Avoid the passive voice and gerunds when possible:
-
-- Good - Use Aptos API
-- Not-so-good - Using Aptos API
-- Needs improvement - Aptos API Use
-
-## Employ direct style and tone
-
-- Address the user directly. Use "you" instead of "user" or "they".
-- Avoid writing the way you speak, i.e., avoid contractions, jokes, and colloquial content.
-
- 💡 **Example**:
-
- - **Preferred**: “it will” or “we will” or “it would”.
- - **Avoid**: “it’ll” or “we’ll” or “it’d”.
-
-- Use the active voice.
-
- 💡 **Example**:
-
- - **Preferred**: Fork and clone the Aptos repo.
- - **Avoid**: The Aptos repo should be cloned.
- - **Preferred**: Copy the `Config path` information from the terminal.
- - **Avoid**: The `Config path` information should be copied from the terminal.
-
-- Avoid hypothetical future "would". Instead, write in present tense.
-
- 💡 **Example**:
-
- - **Preferred**: "The compiler sends".
- - **Avoid**: “The compiler would then send”.
-
-## Ensure readability
-
-- Break up walls of text (long passages of text) into smaller chunks for easier reading.
-- Use lists. When you use lists, keep each item as distinct as you can from another item.
-- Provide context. Your readers can be beginner developers or experts in specialized fields. They may not know what you are talking about without any context.
-- Use shorter sentences (26 words or fewer). They are easier to understand (and translate).
-- Define acronyms and abbreviations at the first usage in every topic.
-- Keep in mind our documentation is written in US English, but the audience will include people for whom English is not their primary language.
-- Avoid culturally specific references, humor, names.
-- Write dates and times in unambiguous and clear ways using the [international standard](https://en.wikipedia.org/wiki/Date_format_by_country). Write "27 November 2020" instead of either "11/27/2020" or "27/11/2020" or "November 27, 2020".
-- Avoid negative sentence construction.
-
- 💡 **Example**:
-
- - **Preferred**: It is common.
- - **Avoid**: It is not uncommon.
-
- Yes, there is a subtle difference between the two, but for technical writing this simplification works better.
-
-- Avoid directional language (below, left) in procedural documentation, **unless** you are pointing to an item that is immediately next to it.
-- Be consistent in capitalization and punctuation.
-- Avoid the `&` character in the descriptive text. Use the English word "and".
-
-## Avoid foreshadowing
-
-- Do not refer to future features or products.
-- Avoid making excessive or unsupported claims about future enhancements.
-
-## Use proper casing
-
-Use title case for page titles and sentence case for section headings. For example:
-
-- Page title - Integrate Aptos with Your Platform
-- Section title - Choose a network
-
-Of course, capitalize [proper nouns](https://www.scribbr.com/nouns-and-pronouns/proper-nouns/), such as “Aptos” in “Accounts on Aptos”.
-
-## Write clear titles and headings
-
-- Document titles and section headings should:
- - Explicitly state the purpose of the section.
- - Be a call to action, or intention.
-
-This approach makes it easier for readers to get their specific development task done.
-
-💡 **Examples**
-
-- **Preferred**: Running a fullnode (section heading)
-- **Avoid**: FullNode running fundamentals (title is not purpose-driven)
-- **Preferred**: Creating your first Move module
-- **Avoid**: Move module
-
-**Document titles (h1)**
-
-- Use title case. For example: "Running a Model"
-
-A document title is the main title of a document page. A document has only one document title.
-
-💡 **Example**: "Writing Style Guide" at the beginning of this page. The document title also appears at the top level in the navigation bar, so it must be short, preferably four to five words or less.
-
-
-**Section headings within a document (h2, h3, h4, h5)**
-
-- Use sentence case. **For example**: "Verify initial synchronization"
-
-A section heading is the title for an individual section within a document page.
-
-💡 **Example**: "Write clear titles and headings" at the top of this section. A document page can have multiple sections, and hence multiple section headings.
-
-- Use a heading hierarchy. Do not skip levels of the heading hierarchy. **For example**, put h3 only under h2.
-- To change the visual formatting of a heading, use CSS instead of using a heading level that does not fit the hierarchy.
-- Do not keep blank headings or headings with no associated content.
-- Avoid using question marks in document titles and section headings.
-
- 💡 **Example**:
-
- - **Preferred**: How it works
- - **Avoid**: How it works?
-
-- Avoid using emphasis or italics in document titles or section headings.
-- Avoid joining words using a slash.
-
- 💡 **Example**:
-
- - **Preferred**: Execute on your macOS or Linux system
- - **Avoid**: Execute on your macOS/Linux system
-
-## Avoid duplication
-
-We face too many challenges to tackle the same one from scratch again or split our efforts into silos. We must collaborate to make best use of our diverse and growing skillset.
-
-Search and navigate across this site to see if an existing document already serves your purpose and could simply be updated before you start anew. As with code, [don't repeat yourself](https://www.wikipedia.org/wiki/Don%27t_repeat_yourself).
-
-## Use these Aptos words and phrases consistently
-
-The table below lists the correct usage of Aptos words and phrases.
-
-| Recommended way to use in mid-sentence | Avoid these forms |
-| --- | --- |
-| First letter uppercase if appearing at the start of a sentence. | |
-| fullnode (FN) | FullNode, Fullnode |
-| validator or validator node (VN) | Validator Node, ValidatorNode |
-| validator fullnode (VFN) | Validator FullNode or ValidatorFullNode |
-| public fullnode | Public FullNode |
-| Aptos blockchain | Aptos Blockchain |
-| Move module | Move Module |
-| Move resource | Move Resource |
-| Aptos framework | Aptos Framework |
-| Faucet | faucet |
-| mempool | Mempool |
-| bytecode | bytecodes |
-| MoveVM | Move VM |
-| REST service | REST Service |
-| upgradeable | upgradable |
diff --git a/developer-docs-site/docs/community/contributions/remix-ide-plugin.md b/developer-docs-site/docs/community/contributions/remix-ide-plugin.md
deleted file mode 100644
index 64d8a89c9a950..0000000000000
--- a/developer-docs-site/docs/community/contributions/remix-ide-plugin.md
+++ /dev/null
@@ -1,159 +0,0 @@
----
-title: "Use Remix IDE Plugin"
-slug: "remix-ide-plugin"
----
-
-import ThemedImage from '@theme/ThemedImage';
-import useBaseUrl from '@docusaurus/useBaseUrl';
-
-# Use Remix IDE Plugin
-
-This tutorial explains how to deploy and run Move modules with the [WELLDONE Code Remix IDE](https://docs.welldonestudio.io/code) plugin. This tool offers a graphical interface for developing Move [modules](../../move/book/modules-and-scripts.md#modules).
-
-Here are the steps to use the Remix IDE plugin for Move (described in detail below):
-
-1. [Connect to Remix IDE](#step-1-connect-to-remix-ide).
-2. [Select a chain](#step-2-select-a-chain).
-3. [Install a browser extension wallet](#step-3-install-a-wallet).
-4. [Create the project](#step-4-create-the-project).
-5. [Compile and publish a Move module to the Aptos blockchain](#step-5-compile-and-publish-a-move-module-to-the-aptos-blockchain).
-6. [Interact with a Move module](#step-6-interact-with-a-move-module).
-
-## Step 1: Connect to Remix IDE
-
-1. Load the [Remix IDE](https://remix.ethereum.org/).
-
-2. Accept or decline the personal information agreement and dismiss any demonstrations.
-
-3. Click the **Plugin Manager** button near the bottom left, search for *CODE BY WELLDONE STUDIO*, and click **Activate**.
-
-
-
-
-
-## Step 2: Select a chain
-
-Click the newly created icon at the bottom of the left menu. Then, select **Aptos (MoveVM)** from the chain list.
-
-
-
-
-
-## Step 3: Install a wallet
-
-The WELLDONE Wallet can currently be used with the Remix IDE plugin, with support for [Petra wallet](https://petra.app/) coming soon. See the list of [Aptos wallets](https://github.com/aptos-foundation/ecosystem-projects#wallets) available in the ecosystem.
-
-This step assumes you are using the WELLDONE Wallet. Follow [the manual](https://docs.welldonestudio.io/wallet/manual/) to install the wallet and create an account for the Aptos blockchain. Once that is done, follow these steps:
-
-1. Choose a network (e.g. devnet) in the dropdown menu at the top of the main tab.
-1. Go into the **Settings** tab of your wallet and activate **Developer Mode**.
-
-Now in the Remix UI click the **Connect to WELLDONE** button to connect to the **WELLDONE Wallet**.
-
-Click the **Refresh** button in the upper right corner of the plugin to apply changes to your wallet.
-
-## Step 4: Create the project
-
-In Aptos, you can write smart contracts with the [Move programming language](../../move/move-on-aptos.md). **WELLDONE Code** provides two features to help developers new to Aptos and Move.
-
-### Select a template
-
-Create simple example contract code written in Move. You can create a sample contract by selecting the *template* option and clicking the **Create** button.
-
-
-
-
-
-### Create a new project
-
-Automatically generate the Move module structure. Write a name for the project, and click the **Create** button to create a Move module structure.
-
-:::info
-You can create your own Move projects without using the features above. However, for the Remix IDE plugin to build and deploy the Move module, it must be built within the directory `aptos/`. If you start a new project, the structure should resemble:
-:::
-
- ```
- aptos
- └──
- ├── Move.toml
- └── sources
- └── YOUR_CONTRACT_FILE.move
- ```
-
-## Step 5: Compile and publish a Move module to the Aptos blockchain
-
-1. Select the project you want to compile in the **PROJECT TO COMPILE** section.
-2. Add your address to the `Move.toml` file.
-3. Click the `Compile` button.
-
-```toml
-[package]
-name = "Examples"
-version = "0.0.0"
-
-[addresses]
-hello_blockchain = "your address"
-
-[dependencies]
-AptosFramework = { git = "https://github.com/aptos-labs/aptos-core.git", subdir = "aptos-move/framework/aptos-framework/", rev = "aptos-node-v1.2.0" }
-```
-
-4. When the compilation is complete, a compiled binary file is returned in the `aptos//out` directory.
-
-If you need to revise the contract and compile again, delete the `out` directory and click **Compile** once more.
-
-5. Once you have compiled contract code, the `Deploy` button will be activated.
-
-## Step 6: Interact with a Move module
-
-:::info
-There are two ways to import contracts.
-1. Automatically import contracts deployed through the above process.
-2. Import existing deployed contracts through the **At Address** button.
-:::
-
-1. Check the modules and resources owned by the current account and read the resources through the **Get Resource** button.
-2. You can select a function, enter any needed parameters, and click a button to run the function. For an entry function (unlike a view function), a signature from the WELLDONE Wallet is required because the transaction must be signed before it is submitted.
-
-
-
-
-
-
-
-## Get support
-
-Click the **Documentation** button to seek help with this Remix IDE plugin. To file requests, click the **Make an issue** button to go to the [welldonestudio](https://github.com/welldonestudio/welldonestudio.github.io) GitHub repository and [file an issue](https://github.com/welldonestudio/welldonestudio.github.io/issues/new/choose).
diff --git a/developer-docs-site/docs/community/contributors.md b/developer-docs-site/docs/community/contributors.md
deleted file mode 100644
index 5bd5a5cf9c6a3..0000000000000
--- a/developer-docs-site/docs/community/contributors.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: "Contributors"
-hide_table_of_contents: true
-hide_title: true
----
-import Contributors from "@site/src/components/Contributors";
-
-
diff --git a/developer-docs-site/docs/community/external-resources.md b/developer-docs-site/docs/community/external-resources.md
deleted file mode 100644
index 98b33008b8e43..0000000000000
--- a/developer-docs-site/docs/community/external-resources.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: "External Resources"
-slug: "external-resources"
----
-
-# Aptos External Resources
-
-:::caution Proceed with caution
-This page links to third-party content. Aptos neither endorses nor supports these contributions, nor can we guarantee their effects.
-:::
-
-This page contains links to external resources supplied by the Aptos community. These may be useful technical posts to the [Aptos Forum](https://forum.aptoslabs.com/) or links to Aptos-related technologies documented elsewhere.
-
-To add your own resource, click **Edit this page** at the bottom, add your resource in Markdown, and create a pull request for review.
-
-## Tools
-
-| Contribution | Description | Author | Date added/updated |
-| --- | --- | --- | --- |
-| [Aptos Staking Dashboard](https://dashboard.stakeaptos.com) · [Repo](https://github.com/pakaplace/swtb-frontend/) | A dashboard to monitor your Aptos validator performance, view network stats, or request delegation commissions. By [Paymagic Labs](https://paymagic.xyz/). | [pakaplace](https://github.com/pakaplace/) | 2023-03-10 |
-| [Aptos Validator/Staking Postman Collection](https://github.com/pakaplace/aptos-validator-staking-postman) | A Postman collection for querying staking pool, staking contract, and account resources/events. | [pakaplace](https://github.com/pakaplace/) | 2023-03-10 |
-| [One-stop solution for Aptos node monitoring](https://github.com/LavenderFive/aptos-monitoring) | A monitoring solution for Aptos nodes utilizing Docker containers with Prometheus, Grafana, cAdvisor, NodeExporter, and alerting with AlertManager. | [Lavender.Five Nodes](https://github.com/LavenderFive) | 2023-03-10 |
-| [Monitor Your Aptos validator and validator fullnodes with Prometheus and Grafana](https://github.com/RhinoStake/aptos_monitoring) | A full-featured Grafana/Prometheus dashboard to monitor key infrastructure, node, and chain-related metrics and data relationships. | [RHINO](https://rhinostake.com) | 2023-03-10 |
-
-## Tutorials
-
-| Contribution | Description | Author | Date added/updated |
-| --- | --- | --- | --- |
-| [Alerts integration on your validator/full node](https://forum.aptoslabs.com/t/alerts-integration-on-your-validator-full-node/196210) | Explains how to integrate alerts on your validator (fullnode). | [cryptomolot](https://forum.aptoslabs.com/u/unlimitedmolot) | 2023-06-11 |
-| [Tools to monitor your validator](https://forum.aptoslabs.com/t/tools-to-monitore-your-validator/197163) | Explains what tools to use to monitor your validator (fullnode). | [cryptomolot](https://forum.aptoslabs.com/u/unlimitedmolot) and [p1xel32](https://forum.aptoslabs.com/u/p1xel32) | 2023-06-11 |
-| [How to join validator set via snapshot](https://forum.aptoslabs.com/t/how-to-join-validator-set-via-snapshot/207568) | Demonstrates a method to join a validator set with a snapshot. | [cryptomolot](https://forum.aptoslabs.com/u/unlimitedmolot) | 2023-06-11 |
-| [Alerts for your validator via Telegram public](https://forum.aptoslabs.com/t/alerts-for-your-validator-via-telegram-public/201959) | Demonstrates a useful method for receiving alerts. | [cryptomolot](https://forum.aptoslabs.com/u/unlimitedmolot) | 2023-06-11 |
-| [Ansible playbook for Node Management (Bare Metal)](https://github.com/RhinoStake/ansible-aptos) | This Ansible Playbook is for the initialization, configuration, planned and hotfix upgrades of Aptos Validators, VFNs and PFNs on bare metal servers. | [RHINO](https://rhinostake.com) | 2023-03-14 |
-| [Ansible playbook for Node Management (Docker)](https://github.com/LavenderFive/aptos-ansible) | This Ansible Playbook is intended for node management, including initial launch and handling upgrades of nodes. | [Lavender.Five Nodes](https://github.com/LavenderFive) | 2023-03-13 |
-| [Write Your First Smart Contract On Aptos](https://medium.com/mokshyaprotocol/write-your-first-smart-contract-on-aptos-a-step-by-step-guide-e16a6f5c2be6) | This blog post helps you start writing smart contracts on the Aptos blockchain. | [Samundra Karki](https://medium.com/@samundrakarki56), [MokshyaProtocol](https://mokshya.io/) | 2023-02-27 |
-| [Transfer validator node to other server (no FN required)](https://forum.aptoslabs.com/t/transfer-validator-node-to-other-server-no-fn-required/194629/1) | Shows how to transfer a validator node to another server without using an intermediate full node server. | [p1xel32](https://forum.aptoslabs.com/u/p1xel32) | 2023-02-03 |
-| [Failover and migrate Validator Nodes for less downtime](https://forum.aptoslabs.com/t/failover-and-migrate-validator-nodes-for-less-downtime/144846) | Explains how to hot swap a validator node with a validator full node with Docker setup and inspired the generic [Update Aptos Validator Node via Failover](../nodes/validator-node/operator/update-validator-node.md). | [guguru](https://forum.aptoslabs.com/u/guguru) | 2022-11-22 |
diff --git a/developer-docs-site/docs/community/index.md b/developer-docs-site/docs/community/index.md
deleted file mode 100644
index f079aa3e38052..0000000000000
--- a/developer-docs-site/docs/community/index.md
+++ /dev/null
@@ -1,51 +0,0 @@
----
-title: "Contribute to the Aptos Ecosystem"
-slug: ./
----
-
-# Contribute to the Aptos Ecosystem
-
-We welcome your own [contributions](https://github.com/aptos-labs/aptos-core/blob/main/CONTRIBUTING.md) to the [Aptos](https://aptosfoundation.org/currents) blockchain and this site! Aptos exists to foster an inclusive ecosystem. This page describes ways you can help, while the other pages in this section highlight our community's contributions.
-
-As always, adhere to the [Aptos Code of Conduct](https://github.com/aptos-labs/aptos-core/blob/main/CODE_OF_CONDUCT.md) when taking part in our ecosystem.
-
-## Ask questions and offer answers
-
-Join [Aptos Discord](https://discord.gg/aptosnetwork) to speak with developers and hop into the Aptos community. It's the best way to keep up to date with news and developments in the Aptos universe. Be sure to check pinned messages in the channels - this is where we like to post topic-specific links, events, and more.
-
-For technical questions, we recommend [Stack Overflow](https://stackoverflow.com/questions/tagged/aptos) so anyone in the world may search for, benefit from, and upvote questions and answers in a persistent location. To offer your own advice and find tips from others, post to and use the [Aptos Forum](https://forum.aptoslabs.com/).
-
-Please remember, community managers will never message or DM you first, and they will never ask you to send them money or share any sensitive, private, or personal information. If this happens to you, please report it to us in [Aptos Discord](https://discord.gg/aptosnetwork).
-
-## Report issues, request enhancements
-
-Review and upvote [existing issues](https://github.com/aptos-labs/aptos-core/issues) in the Aptos blockchain.
-
-File [new issues](https://github.com/aptos-labs/aptos-core/issues/new/choose) to report problems or request enhancements. For security concerns, instead follow the [Aptos Core Bug Bounty](https://github.com/aptos-labs/aptos-core/blob/main/SECURITY.md) process.
-
-Here are the primary bug queues:
-
-* [Bug report](https://github.com/aptos-labs/aptos-core/issues/new?assignees=&labels=bug&template=bug_report.md&title=%5BBug%5D) - Create a bug report to help improve Aptos Core.
-* [DevEx RFC](https://github.com/aptos-labs/aptos-core/issues/new?assignees=&labels=DevEx&template=devex_rfc.md&title=%5BDevEx+RFC%5D+) - Open a Request for Comments (RFC) for DevEx improvements.
-* [Documentation/aptos.dev Bug report](https://github.com/aptos-labs/aptos-core/issues/new?assignees=clay-aptos&labels=bug%2Cdocumentation&template=documentation_bug_report.md&title=%5BDocs%5D) - Create a bug report to help improve the Aptos Developers website.
-* [Feature request](https://github.com/aptos-labs/aptos-core/issues/new?assignees=&labels=enhancement&template=feature_request.md&title=%5BFeature+Request%5D) - Suggest a new feature in Aptos Core.
-
-## Develop your own project
-
-Explore, use, and join the growing number of delightful [community-driven projects](https://github.com/aptos-foundation/ecosystem-projects) in the Aptos ecosystem.
-
-## Become an Aptos ambassador
-
-Help organize events, develop content, and more for the ecosystem by joining the [Aptos Collective](https://aptosfoundation.org/currents/join-the-aptos-collective), with plenty of perks in return!
-
-## Fix the source code
-
-We at Aptos love direct contributions in the form of [pull requests](https://github.com/aptos-labs/aptos-core/pulls). Help us make small fixes to code. Follow our coding conventions for:
-
-* [Move](../move/book/coding-conventions.md)
-* [Rust](./rust-coding-guidelines.md)
-
-## Update the documentation
-
-You may report problems and supply other input in the [#docs-feedback](https://discord.com/channels/945856774056083548/1034215378299133974) channel of [Aptos Discord](https://discord.gg/aptosnetwork). To help with our content, follow [Update Aptos.dev](./site-updates.md).
-
diff --git a/developer-docs-site/docs/community/rust-coding-guidelines.md b/developer-docs-site/docs/community/rust-coding-guidelines.md
deleted file mode 100644
index d6907b5a8821c..0000000000000
--- a/developer-docs-site/docs/community/rust-coding-guidelines.md
+++ /dev/null
@@ -1,367 +0,0 @@
----
-id: rust-coding-guidelines
-title: Rust Coding Guidelines
----
-
-This document describes the coding guidelines for the [Aptos Core](https://github.com/aptos-labs/aptos-core) Rust codebase. For the Move language, see the [Move Coding Conventions](../move/book/coding-conventions.md).
-
-## Code formatting
-
-All code formatting is enforced with [rustfmt](https://github.com/rust-lang/rustfmt) with a project-specific configuration. Below is an example command to adhere to the Aptos Core project conventions.
-
-```
-cargo fmt
-```
-
-## Code analysis
-
-[Clippy](https://github.com/rust-lang/rust-clippy) is used to catch common mistakes and is run as a part of continuous integration. Before submitting your code for review, you can run clippy with our configuration:
-
-```
-cargo xclippy --all-targets
-```
-
-In general, we follow the recommendations from [rust-lang](https://rust-lang.github.io/api-guidelines/about.html) and [The Rust Programming Language](https://doc.rust-lang.org/book/). The remainder of this guide provides detailed guidelines on specific topics in order to achieve uniformity of the codebase.
-
-## Code documentation
-
-Any public fields, functions, and methods should be documented with [Rustdoc](https://doc.rust-lang.org/book/ch14-02-publishing-to-crates-io.html#making-useful-documentation-comments).
-
- Please follow the conventions as detailed below for modules, structs, enums, and functions. The *single line* is used as a preview when navigating Rustdoc. As an example, see the 'Structs' and 'Enums' sections in the [collections](https://doc.rust-lang.org/std/collections/index.html) Rustdoc.
-
- ```
- /// [Single line] One line summary description
- ///
- /// [Longer description] Multiple lines, inline code
- /// examples, invariants, purpose, usage, etc.
- [Attributes] If attributes exist, add after Rustdoc
- ```
-
-Example below:
-
-```rust
-/// Represents (x, y) of a 2-dimensional grid
-///
-/// A line is defined by 2 instances.
-/// A plane is defined by 3 instances.
-#[repr(C)]
-struct Point {
- x: i32,
- y: i32,
-}
-```
-
-### Terminology
-
-The Aptos codebase uses inclusive terminology (similar to other projects such as [the Linux kernel](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=49decddd39e5f6132ccd7d9fdc3d7c470b0061bb)). The terms below are recommended when appropriate.
-* allowlist - a set of entities allowed access
-* blocklist - a set of entities that are blocked from access
-* primary/leader/main - a primary entity
-* secondary/replica/follower - a secondary entity
-
-### Constants and fields
-
-Describe the purpose and definition of this data. If the unit is a measurement of time, include it, e.g., `TIMEOUT_MS` for timeout in milliseconds.
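For instance, a documented constant carrying its unit in the name might look like the following (the constant and its value are hypothetical, for illustration only):

```rust
/// Maximum time to wait for a peer handshake to complete,
/// in milliseconds. Connections exceeding this are dropped.
/// (Hypothetical constant, for illustration only.)
const HANDSHAKE_TIMEOUT_MS: u64 = 5_000;

fn main() {
    // The `_MS` suffix makes the unit explicit at every call site.
    let timeout = std::time::Duration::from_millis(HANDSHAKE_TIMEOUT_MS);
    assert_eq!(timeout.as_secs(), 5);
}
```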
-
-### Functions and methods
-
-Document the following for each function:
-
-* The action the method performs - “This method *adds* a new transaction to the mempool.” Use *active voice* and *present tense* (i.e. adds/creates/checks/updates/deletes).
-* Describe how and why to use this method.
-* Any condition that must be met _before_ calling the method.
-* State conditions under which the function will `panic!()` or return an `Error`.
-* Brief description of return values.
-* Any special behavior that is not obvious.
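A hypothetical method documented along these lines (the `Mempool` type and its behavior are invented for illustration, not an Aptos API):

```rust
// Hypothetical example of the documentation conventions above.
struct Mempool {
    pending: Vec<u64>,
    capacity: usize,
}

#[derive(Debug, PartialEq)]
struct QueueFull;

impl Mempool {
    /// Adds a new transaction to the mempool.
    ///
    /// Call this after the transaction has passed signature verification.
    /// Precondition: the mempool was created with a nonzero capacity.
    /// Returns `Err(QueueFull)` once `capacity` is reached; never panics.
    fn add(&mut self, txn: u64) -> Result<(), QueueFull> {
        if self.pending.len() >= self.capacity {
            return Err(QueueFull);
        }
        self.pending.push(txn);
        Ok(())
    }
}

fn main() {
    let mut mempool = Mempool { pending: Vec::new(), capacity: 1 };
    assert_eq!(mempool.add(7), Ok(()));
    assert_eq!(mempool.add(8), Err(QueueFull));
}
```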
-
-### README.md for top-level directories and other major components
-
-Each major component of Aptos Core needs to have a `README.md` file. Major components are:
-* top-level directories (e.g. `aptos-core/network`, `aptos-core/language`)
-* the most important crates in the system (e.g. `vm-runtime`)
-
-This file should contain:
-
- * The *conceptual* *documentation* of the component.
- * A link to the external API documentation for the component.
- * A link to the main license of the project.
- * A link to the main contributing guide for the project.
-
-A template for readmes:
-
-```markdown
-# Component Name
-
-[Summary line: Start with one sentence about this component.]
-
-## Overview
-
-* Describe the purpose of this component and how the code in
-this directory works.
-* Describe the interaction of the code in this directory with
-the other components.
-* Describe the security model and assumptions about the crates
-in this directory. Examples of how to describe the security
-assumptions will be added in the future.
-
-## Implementation Details
-
-* Describe how the component is modeled. For example, why is the
- code organized the way it is?
-* Other relevant implementation details.
-
-## API Documentation
-
-For the external API of this crate refer to [Link to rustdoc API].
-
-[For a top-level directory, link to the most important APIs within.]
-
-## Contributing
-
-Refer to the Aptos Project contributing guide [LINK].
-
-## License
-
-Refer to the Aptos Project License [LINK].
-```
-
-A good example of README.md is `aptos-core/network/README.md` that describes the networking crate.
-
-## Binary, Argument, and Crate Naming
-
-Most tools that we use everyday (rustc, cargo, git, rg, etc.) use dashes `-` as
-a separator for binary names and arguments and the [GNU software
-manual](https://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html)
-dictates that long options should "consist of `--` followed by a name made of
-alphanumeric characters and dashes". As such dashes `-` should be used as
-separators in both binary names and command line arguments.
-
-In addition, it is generally accepted by many in the Rust community that dashes
-`-` should be used as separators in crate names, i.e. `x25519-dalek`.
-
-## Code suggestions
-
-In the following sections, we have suggested some best practices for a uniform codebase. We will investigate and identify the practices that can be enforced using Clippy. This information will evolve and improve over time.
-
-### Attributes
-
-Make sure to use the appropriate attributes for handling dead code:
-
-```
-// For code that is intended for production usage in the future
-#[allow(dead_code)]
-// For code that is only intended for testing and
-// has no intended production use
-#[cfg(test)]
-```
-
-### Avoid Deref polymorphism
-
-Don't abuse the Deref trait to emulate inheritance between structs, and thus reuse methods. For more information, read [Deref polymorphism](https://rust-unofficial.github.io/patterns/anti_patterns/deref.html).
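A minimal sketch of the preferred alternative, with hypothetical types: rather than implementing `Deref` to "inherit" methods from an inner struct, delegate explicitly (or extract a shared trait):

```rust
// Hypothetical types, for illustration only.
struct Engine {
    rpm: u32,
}

impl Engine {
    fn rpm(&self) -> u32 {
        self.rpm
    }
}

struct Car {
    engine: Engine,
}

impl Car {
    // Explicit delegation keeps the API surface intentional, unlike
    // `impl Deref for Car { type Target = Engine; ... }`, which would
    // silently expose every `Engine` method on `Car`.
    fn engine_rpm(&self) -> u32 {
        self.engine.rpm()
    }
}

fn main() {
    let car = Car { engine: Engine { rpm: 3_000 } };
    assert_eq!(car.engine_rpm(), 3_000);
}
```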
-
-### Comments
-
-We recommend that you use `//` and `///` comments rather than block comments `/* ... */` for uniformity and simpler grepping.
-
-### Concurrent types
-
-Concurrent types such as [`CHashMap`](https://docs.rs/crate/chashmap), [`AtomicUsize`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html), etc. take an immutable borrow on self (i.e. `fn foo_mut(&self, ...)`) in order to support concurrent access on interior mutating methods. Good practices (such as those in the examples mentioned) avoid exposing synchronization primitives externally (e.g. `Mutex`, `RwLock`) and document the method semantics and invariants clearly.
-
-*When to use channels vs concurrent types?*
-
-Listed below are high-level suggestions based on experience:
-
-* Channels are for ownership transfer, decoupling of types, and coarse-grained messages. They fit well for transferring ownership of data, distributing units of work, and communicating async results. Furthermore, they help break circular dependencies (e.g. `struct Foo` contains an `Arc<Bar>` and `struct Bar` contains an `Arc<Foo>`, which leads to complex initialization).
-
-* Concurrent types (e.g. such as [`CHashMap`](https://docs.rs/crate/chashmap) or structs that have interior mutability building on [`Mutex`](https://doc.rust-lang.org/std/sync/struct.Mutex.html), [`RwLock`](https://doc.rust-lang.org/std/sync/struct.RwLock.html), etc.) are better suited for caches and states.
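As a sketch (the `Cache` type is hypothetical, not an Aptos API), a concurrent type can take `&self` on its mutating methods while keeping the `Mutex` private; in Aptos code the lock would come from `aptos-infallible` rather than being `unwrap()`ed as below:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

/// Hypothetical cache: interior mutability behind `&self`,
/// with the `Mutex` kept private rather than exposed to callers.
struct Cache {
    inner: Mutex<HashMap<u32, String>>,
}

impl Cache {
    fn new() -> Self {
        Cache { inner: Mutex::new(HashMap::new()) }
    }

    /// Inserts a value. Note `&self`, not `&mut self`: callers can
    /// share the cache (e.g. behind an `Arc`) without outer locking.
    fn insert(&self, key: u32, value: String) {
        self.inner.lock().unwrap().insert(key, value);
    }

    fn get(&self, key: u32) -> Option<String> {
        self.inner.lock().unwrap().get(&key).cloned()
    }
}

fn main() {
    let cache = Cache::new();
    cache.insert(1, "genesis".to_string());
    assert_eq!(cache.get(1).as_deref(), Some("genesis"));
    assert_eq!(cache.get(2), None);
}
```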
-
-### Error handling
-
-Error handling suggestions follow the [Rust book guidance](https://doc.rust-lang.org/book/ch09-00-error-handling.html). Rust groups errors into two major categories: recoverable and unrecoverable errors. Recoverable errors should be handled with [Result](https://doc.rust-lang.org/std/result/). Our suggestions on unrecoverable errors are listed below:
-
-*Fallible functions*
-
-* `duration_since_epoch()` - to obtain the unix time, call the function provided by `aptos-infallible`.
-* `RwLock` and `Mutex` - Instead of calling `unwrap()` on the standard library implementations of these functions, use the infallible equivalent types that we provide in `aptos-infallible`.
-
-*Panic*
-
-* `unwrap()` - Unwrap should only be used for test code. For all other use cases, prefer `expect()`. The only exception is if the error message is custom-generated, in which case use `.unwrap_or_else(|| panic!("error: {}", foo))`.
-* `expect()` - Expect should be invoked when a system invariant is expected to be preserved. `expect()` is preferred over `unwrap()` and should contain a detailed error message on failure in most cases.
-* `assert!()` - This macro is kept in both debug/release and should be used to protect invariants of the system as necessary.
-* `unreachable!()` - This macro will panic on code that should not be reached (violating an invariant) and can be used where appropriate.
-
-In production (non-test) code, outside of lock management, all unrecoverable errors should be cleanly documented, describing why the event is unrecoverable. For example, if the system is now in a bad state, state what that state is, why a crash/restart is more effective than resolving it within a running system, and what steps, if any, an operator would need to take to resolve the issue.
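The `expect()` and `unwrap_or_else` conventions above can be sketched as follows (the account table is invented for illustration):

```rust
fn main() {
    // Hypothetical account table, for illustration only.
    let balances = vec![("alice", 100u64)];

    // `expect()` states the system invariant that justifies the panic.
    let (_, amount) = balances
        .iter()
        .find(|(name, _)| *name == "alice")
        .expect("account table must contain every registered account");
    assert_eq!(*amount, 100);

    // Custom-generated message: use `unwrap_or_else` + `panic!`.
    let key = "alice";
    let entry = balances
        .iter()
        .find(|(name, _)| *name == key)
        .unwrap_or_else(|| panic!("missing account: {}", key));
    assert_eq!(entry.1, 100);
}
```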
-
-### Generics
-
-Generics allow dynamic behavior (similar to [`trait`](https://doc.rust-lang.org/book/ch10-02-traits.html) methods) with static dispatch. As the number of generic type parameters increases, the difficulty of using the type/method also increases (e.g. consider the combination of trait bounds required for this type, duplicate trait bounds on related types, etc.). In order to avoid this complexity, we generally try to avoid using a large number of generic type parameters. We have found that converting code with a large number of generic objects to trait objects with dynamic dispatch often simplifies our code.
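A toy illustration of that conversion, using hypothetical names — the trait-object version keeps the type parameter out of every caller's signature:

```rust
// Hypothetical trait and implementation, for illustration only.
trait Storage {
    fn put(&mut self, value: u64);
    fn len(&self) -> usize;
}

struct VecStorage(Vec<u64>);

impl Storage for VecStorage {
    fn put(&mut self, value: u64) {
        self.0.push(value);
    }
    fn len(&self) -> usize {
        self.0.len()
    }
}

// Generic version: `fn fill<S: Storage>(store: &mut S, n: u64)`.
// The trait-object version below uses dynamic dispatch instead,
// so `S` need not propagate through every caller's bounds:
fn fill(store: &mut dyn Storage, n: u64) {
    for i in 0..n {
        store.put(i);
    }
}

fn main() {
    let mut store = VecStorage(Vec::new());
    fill(&mut store, 3);
    assert_eq!(store.len(), 3);
}
```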
-
-### Getters/setters
-
-In general, we follow naming recommendations for getters as specified [here](https://rust-lang.github.io/api-guidelines/naming.html#getter-names-follow-rust-convention-c-getter) and for setters as defined [here](https://github.com/rust-lang/rfcs/blob/master/text/0344-conventions-galore.md#gettersetter-apis).
-
-Getters/setters should be avoided for [`struct`](https://doc.rust-lang.org/book/ch05-00-structs.html) types in the C spirit: compound, passive data structures without internal invariants. Adding them only increases the complexity and number of lines of code without improving the developer experience.
-
-```rust
-struct Foo {
- size: usize,
- key_to_value: HashMap<u32, u32>
-}
-impl Foo {
- /// Simple getter follows xxx pattern
- fn size(&self) -> usize {
- self.size
- }
- /// Setter follows set_xxx pattern
- fn set_size(&mut self, size: usize) {
- self.size = size;
- }
- /// Complex getter follows get_xxx pattern
- fn get_value(&self, key: u32) -> Option<&u32> {
- self.key_to_value.get(&key)
- }
-}
-```
-
-### Integer Arithmetic
-
-As every integer operation (`+`, `-`, `/`, `*`, etc.) implies edge cases (e.g. overflows such as `u64::MAX + 1`, underflows such as `0u64 - 1`, division by zero, etc.),
-we use checked arithmetic instead of directly using math symbols.
-It forces us to think of edge cases, and handle them explicitly.
-This is a brief, simplified guide to the different functions that exist to handle integer arithmetic:
-
-* [checked_](https://doc.rust-lang.org/std/primitive.u32.html#method.checked_add): use this function if you want to handle overflows and underflows as a special edge-case. It returns `None` if an underflow or overflow has happened, and `Some(operation_result)` otherwise.
-* [overflowing_](https://doc.rust-lang.org/std/primitive.u32.html#method.overflowing_add): use this function if you want the result of an overflow to potentially wrap around (e.g. `u64::MAX.overflowing_add(10) == (9, true)`). It returns the underflowed or overflowed result as well as a flag indicating whether an overflow has occurred.
-* [wrapping_](https://doc.rust-lang.org/std/primitive.u32.html#method.wrapping_add): this is similar to overflowing operations, except that it returns the result directly. Use this function if you are sure that you want to handle underflows and overflows by wrapping around.
-* [saturating_](https://doc.rust-lang.org/std/primitive.u32.html#method.saturating_add): if an overflow occurs, the result is kept within the boundary of the type (e.g. `u64::MAX.saturating_add(1) == u64::MAX`).
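The differences between these families are easiest to see side by side; this sketch only uses standard-library methods:

```rust
fn main() {
    // checked_*: overflow becomes an explicit `None` case.
    assert_eq!(u64::MAX.checked_add(1), None);
    assert_eq!(2u64.checked_add(3), Some(5));

    // overflowing_*: the result wraps, plus a flag.
    assert_eq!(u64::MAX.overflowing_add(10), (9, true));

    // wrapping_*: wraps silently; only when wrap-around is intended.
    assert_eq!(u64::MAX.wrapping_add(1), 0);

    // saturating_*: the result is clamped at the type's boundary.
    assert_eq!(u64::MAX.saturating_add(1), u64::MAX);
    assert_eq!(0u64.saturating_sub(1), 0);
}
```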
-
-### Logging
-
-We currently use [log](https://docs.rs/log/) for logging.
-
-* [error!](https://docs.rs/log/latest/log/macro.error.html) - Error-level messages have the highest urgency in [log](https://docs.rs/log/). An unexpected error has occurred (e.g. exceeded the maximum number of retries to complete an RPC or inability to store data to local storage).
-* [warn!](https://docs.rs/log/latest/log/macro.warn.html) - Warn-level messages help notify admins about automatically handled issues (e.g. retrying a failed network connection or receiving the same message multiple times, etc.).
-* [info!](https://docs.rs/log/latest/log/macro.info.html) - Info-level messages are well suited for "one-time" events (such as logging state on one-time startup and shutdown) or periodic events that are not frequently occurring - e.g. changing the validator set every day.
-* [debug!](https://docs.rs/log/latest/log/macro.debug.html) - Debug-level messages can occur frequently (i.e. potentially > 1 message per second) and are not typically expected to be enabled in production.
-* [trace!](https://docs.rs/log/latest/log/macro.trace.html) - Trace-level logging is typically only used for function entry/exit.
-
-### Testing
-
-*Unit tests*
-
-We follow the general guidance provided [here](https://doc.rust-lang.org/book/ch11-03-test-organization.html). Ideally, all code should be unit tested. Unit tests should live in the same file as the code they test, in a distinct module, using the following syntax:
-
-```rust
-struct Foo {
-}
-impl Foo {
- pub fn magic_number() -> u8 {
- 42
- }
-}
-#[cfg(test)]
-mod tests {
- #[test]
- fn verify_magic_number() {
- assert_eq!(Foo::magic_number(), 42);
- }
-}
-```
-
-*Property-based tests*
-
-Aptos contains [property-based tests](https://blog.jessitron.com/2013/04/25/property-based-testing-what-is-it/) written in Rust using the [`proptest` framework](https://github.com/AltSysrq/proptest). Property-based tests generate random test cases and assert that invariants, also called *properties*, hold for the code under test.
-
-Some examples of properties tested in Aptos:
-
-* Every serializer and deserializer pair is tested for correctness with random inputs to the serializer. Any pair of functions that are inverses of each other can be tested this way.
-* The results of executing common transactions through the VM are tested using randomly generated scenarios and verified with an *Oracle*.
-
-A tutorial for `proptest` can be found in the [`proptest` book](https://altsysrq.github.io/proptest-book/proptest/getting-started.html).
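-
-The roundtrip property can be sketched without a framework; the hand-rolled loop below (a stand-in for a `proptest` strategy over random inputs) checks that a toy varint serializer/deserializer pair are inverses of each other:
-
-```rust
-// Toy ULEB128-style serializer pair; the property under test is
-// deserialize(serialize(x)) == x for all x.
-fn serialize(mut v: u64) -> Vec<u8> {
-    let mut out = Vec::new();
-    loop {
-        let byte = (v & 0x7f) as u8;
-        v >>= 7;
-        if v == 0 {
-            out.push(byte);
-            break;
-        }
-        out.push(byte | 0x80);
-    }
-    out
-}
-
-fn deserialize(bytes: &[u8]) -> u64 {
-    let mut v = 0u64;
-    for (i, b) in bytes.iter().enumerate() {
-        v |= ((b & 0x7f) as u64) << (7 * i);
-    }
-    v
-}
-
-fn main() {
-    for x in [0, 1, 127, 128, 300, u64::MAX] {
-        assert_eq!(deserialize(&serialize(x)), x);
-    }
-}
-```
-
-A real property-based test would let `proptest` generate the inputs and shrink any failing case to a minimal counterexample.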
-
-References:
-
-* [What is Property Based Testing?](https://hypothesis.works/articles/what-is-property-based-testing/) (includes a comparison with fuzzing)
-* [An introduction to property-based testing](https://fsharpforfunandprofit.com/posts/property-based-testing/)
-* [Choosing properties for property-based testing](https://fsharpforfunandprofit.com/posts/property-based-testing-2/)
-
-*Fuzzing*
-
-Aptos contains harnesses for fuzzing crash-prone code like deserializers, using [`libFuzzer`](https://llvm.org/docs/LibFuzzer.html) through [`cargo fuzz`](https://rust-fuzz.github.io/book/cargo-fuzz.html). For more examples, see the `testsuite/aptos_fuzzer` directory.
-
-### Conditional compilation of tests
-
-Aptos [conditionally
-compiles](https://doc.rust-lang.org/stable/reference/conditional-compilation.html)
-code that is *only relevant for tests, but does not consist of tests* (unit
-tests or otherwise). Examples of this include proptest strategies, implementations
-and derivations of specific traits (e.g. the occasional `Clone`), helper
-functions, etc. Since Cargo is [currently not equipped for automatically activating features
-in tests/benchmarks](https://github.com/rust-lang/cargo/issues/2911), we rely on two
-conditions to perform this conditional compilation:
-- the test flag, which is activated by dependent test code in the same crate
- as the conditional test-only code.
-- the `fuzzing` custom feature, which is used to enable fuzzing and testing
-related code in downstream crates. Note that this must be passed explicitly to
-`cargo xtest` and `cargo x bench`. Never use this in `[dependencies]` unless
-the crate is only for testing.
-
-As a consequence, it is recommended that you set up your test-only code in the following fashion.
-
-**For production crates:**
-
-Production crates are defined as the set of crates that create externally published artifacts, e.g. the Aptos validator,
-the Move compiler, and so on.
-
-For the sake of example, we'll consider you are defining a test-only helper function `foo` in `foo_crate`:
-
-1. Define the `fuzzing` flag in `foo_crate/Cargo.toml` and make it non-default:
- ```toml
- [features]
- default = []
- fuzzing = []
- ```
-2. Annotate your test-only helper `foo` with both the `test` flag (for in-crate callers) and the `"fuzzing"` custom feature (for out-of-crate callers):
- ```rust
- #[cfg(any(test, feature = "fuzzing"))]
- fn foo() { ... }
- ```
-3. (optional) Use `cfg_attr` to make test-only trait derivations conditional:
- ```rust
- #[cfg_attr(any(test, feature = "fuzzing"), derive(FooTrait))]
- #[derive(Debug, Display, ...)] // unconditional derivations
- struct Foo { ... }
- ```
-4. (optional) Set up feature transitivity for crates that call crates that have test-only members. Let's say it's the case of `bar_crate`, which, through its test helpers, calls into `foo_crate` to use your test-only `foo`. Here's how you would set up `bar_crate/Cargo.toml`:
- ```toml
- [features]
- default = []
- fuzzing = ["foo_crate/fuzzing"]
- ```
-
-**For test-only crates:**
-
-Test-only crates do not create published artifacts. They consist of tests, benchmarks or other code that verifies
-the correctness or performance of published artifacts. Test-only crates are
-explicitly listed in `x.toml` under `[workspace.test-only]`.
-
-These crates do not need to use the above setup. Instead, they can enable the `fuzzing` feature in production crates
-directly.
-
-```toml
-[dependencies]
-foo_crate = { path = "...", features = ["fuzzing"] }
-```
-
-*A final note on integration tests*: All tests that use conditional test-only
-elements in another crate need to activate the "fuzzing" feature through the
-`[features]` section in their `Cargo.toml`. [Integration
-tests](https://doc.rust-lang.org/rust-by-example/testing/integration_testing.html)
-can neither rely on the `test` flag nor use a proper `Cargo.toml` for
-feature activation. In the Aptos codebase, we therefore recommend that
-*integration tests which depend on test-only code in their tested crate* be
-extracted to their own test-only crate. See `language/move-binary-format/serializer_tests`
-for an example of such an extracted integration test.
-
-*Note for developers*: The reason we use a feature re-export (in the `[features]` section of the `Cargo.toml`) is that a profile is not enough to activate the `"fuzzing"` feature flag. See [cargo issue #2911](https://github.com/rust-lang/cargo/issues/2911) for details.
diff --git a/developer-docs-site/docs/community/site-updates.md b/developer-docs-site/docs/community/site-updates.md
deleted file mode 100644
index 60a640caac377..0000000000000
--- a/developer-docs-site/docs/community/site-updates.md
+++ /dev/null
@@ -1,89 +0,0 @@
----
-title: "Update Aptos.dev"
-slug: "site-updates"
----
-
-# Update Aptos.dev
-
-As an open source project, Aptos needs your knowledge to grow. Follow the instructions on this page to update [Aptos.dev](https://aptos.dev/), the developer website for the Aptos blockchain. Every contributor to Aptos.dev is listed as an *author* on the pages they edit and update. See the *Authors* list at the bottom of any page for an example.
-
-See the [Aptos Docs](https://github.com/orgs/aptos-labs/projects/14/views/1) project for open issues by status. See detailed instructions for making updates below.
-
-## tl;dr
-
-Simply click **Edit this page** at the bottom of any page to go to its source file and start editing. The contents are in [Markdown](https://www.markdownguide.org/basic-syntax/) format. You may then edit in the browser and use the *Preview* function to view your changes.
-
-Here are the basic steps for editing in your web browser:
-
-1. Click **Edit this page** at the bottom to get started.
-2. Modify and add source Markdown files in the [developer-docs-site](https://github.com/aptos-labs/aptos-core/tree/main/developer-docs-site) directory.
-3. See your changes in Netlify (by swapping `prnumber` in):
- [https://deploy-preview-prnumber--aptos-developer-docs.netlify.app/](https://deploy-preview-prnumber--aptos-developer-docs.netlify.app/)
-4. Have at least two verified reviewers examine and test the change.
-5. Merge in the change and see it go live.
-
-For more complex documentation updates, we recommend [forking the repository](https://github.com/aptos-labs/aptos-core/blob/main/CONTRIBUTING.md#developer-workflow) and using a local editor to make changes. To edit at the command line and preview your changes on your localhost, see our [Developer Documentation](https://github.com/aptos-labs/aptos-core/blob/main/developer-docs-site/README.md) README.
-
-When ready, [start a pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request) with your changes. We will get back to you shortly.
-
-
-## Supporting resources
-
-The Aptos Docs team recommends these materials for good documentation:
-
-- [Aptos Style](./aptos-style.md) - A brief set of guidance for contributions to Aptos.dev.
-- [Google Style Guide](https://developers.google.com/style) - A Google standard adopted by companies large and small.
-- [Technical writing courses](https://developers.google.com/tech-writing) - Google offers basic courses on tech writing for engineers and others.
-- [DITA](https://en.wikipedia.org/wiki/Darwin_Information_Typing_Architecture) - The Aptos Docs team adheres to the Darwin Information Typing Architecture, whereby all technical documentation is broken down into concepts (overviews), tasks (procedures), and references (lists) to best suit our audiences and their mindsets (learning, doing, finding) at the time of reading.
-- [Open source templates](https://gitlab.com/tgdp/templates) - The [Good Docs Project](https://thegooddocsproject.dev/) offers myriad Markdown templates for the various documentation types we should take advantage of in Aptos.dev.
-
-## Make updates directly
-
-Whenever possible, update [Aptos.dev](http://Aptos.dev) directly to reflect your changes to development. This might be as simple as changing a value or as complex as adding an entirely new page or set of pages.
-
-To update [Aptos.dev](http://Aptos.dev) directly:
-
-1. Trigger an edit to the source files in the [developer-docs-site](https://github.com/aptos-labs/aptos-core/tree/main/developer-docs-site) directory:
- 1. In web browser:
- * for simple, one-page changes, use the ***Edit this page*** link on the bottom of any page to access the source Markdown file in GitHub:
- ![v-fn-network.svg](../../static/img/docs/trigger-edits-aptosdev.png)
- Then click the pencil icon and select **Edit this file** to work in the GitHub web editor, and create a pull request to have it reviewed:
- ![v-fn-network.svg](../../static/img/docs/edit-file-in-GH.png)
- * To add a new page, navigate to the relevant subdirectory of the [developer-docs-site/docs/](https://github.com/aptos-labs/aptos-core/tree/main/developer-docs-site/docs/) directory, click **Add file**, give it a name, append the `.md` file extension, include your contents, and create a pull request to have it reviewed:
- ![v-fn-network.svg](../../static/img/docs/add-file-in-GH.png)
- 2. Via local editor - for more complex, multi-page changes, use your preferred source code editor to navigate to and update the source Markdown files in GitHub. See our [CONTRIBUTING](https://github.com/aptos-labs/aptos-core/blob/main/CONTRIBUTING.md) README for `git clone` instructions.
-2. For web edits, use the *Preview* function at top to see your updates in browser.
-3. For local edits, use the [local doc build instructions](https://github.com/aptos-labs/aptos-core/blob/main/developer-docs-site/README.md) to see your updates at: [http://localhost:3000](http://localhost:3000)
-4. After creating the pull request, use the *Deploy Preview* in Netlify to see your updates made in web browser or via local editor by replacing the *prnumber* with your own in:
-[https://deploy-preview-prnumber--aptos-developer-docs.netlify.app/](https://deploy-preview-prnumber--aptos-developer-docs.netlify.app/)
-5. Have at least two verified reviewers review and test your changes.
-6. Make direct commits during review.
-7. Request review from the Docs team (currently, clay-aptos in GitHub).
-8. Use the *Assignee* field in the PR to identify the reviewer the change is blocked on.
-9. Receive and address *all feedback*.
-10. Get approval from at least two verified reviewers.
-11. Merge in the change.
-12. Monitor builds at: [https://app.netlify.com/sites/aptos-developer-docs/overview](https://app.netlify.com/sites/aptos-developer-docs/overview)
-
-## Request docs changes
-
-If you are unable to make the update yourself or simply need Docs team help along the way:
-
-1. See the existing list of [open issues tagged as Documentation](https://github.com/aptos-labs/aptos-core/issues?q=is%3Aissue+is%3Aopen+label%3Adocumentation) in GitHub.
-2. If one does not exist, file a new [Documentation issue](https://github.com/aptos-labs/aptos-core/issues/new?assignees=clay-aptos&labels=bug%2Cdocumentation&template=documentation_bug_report.md&title=%5BDocs%5D).
-3. Answer all relevant questions/sections in the bug template (such as URL to the affected page).
-4. Set a priority for the doc issue:
- 1. [P0](https://github.com/aptos-labs/aptos-core/issues?q=is%3Aissue+is%3Aopen+label%3Adocumentation+label%3Ap0+) - critical and urgent
- 2. [P1](https://github.com/aptos-labs/aptos-core/issues?q=is%3Aissue+is%3Aopen+label%3Adocumentation+label%3Ap1+) - important and needed soon
- 3. [P2](https://github.com/aptos-labs/aptos-core/issues?q=is%3Aissue+is%3Aopen+label%3Adocumentation+label%3Ap2+) - can wait for this; still dependent on other work
- 4. [P3](https://github.com/aptos-labs/aptos-core/issues?q=is%3Aissue+is%3Aopen+label%3Adocumentation+label%3Ap3+) - back burner item; there is no urgency here
-5. Explain in the issue precisely what is expected in the doc; what requirements must it meet?
-6. Assign the issue to and work with the subject matter experts and the Docs team to generate new and updated materials.
-7. Associate all related pull requests with the issue by adding the issue number to the *Development* field of each PR.
-8. Re-open the issue when related PRs are merged and work is still needed.
-9. Close the issue only when all relevant parties are satisfied with the work.
-
-
-
-
-
diff --git a/developer-docs-site/docs/concepts/_category_.json b/developer-docs-site/docs/concepts/_category_.json
deleted file mode 100644
index daec3e16a89dc..0000000000000
--- a/developer-docs-site/docs/concepts/_category_.json
+++ /dev/null
@@ -1,4 +0,0 @@
-{
- "label": "Basics",
- "position": 1
-}
diff --git a/developer-docs-site/docs/concepts/accounts.md b/developer-docs-site/docs/concepts/accounts.md
deleted file mode 100755
index 4df214323f514..0000000000000
--- a/developer-docs-site/docs/concepts/accounts.md
+++ /dev/null
@@ -1,159 +0,0 @@
----
-title: "Accounts"
-id: "accounts"
----
-
-# Accounts
-
-An account on the Aptos blockchain represents access control over a set of assets including on-chain currency and NFTs. In Aptos, these assets are represented by a Move language primitive known as a **resource** that emphasizes both access control and scarcity.
-
-Each account on the Aptos blockchain is identified by a 32-byte account address. You can employ the [Aptos Name Service](../integration/aptos-name-service-connector.md) at [www.aptosnames.com](https://www.aptosnames.com/) to secure .apt domains for key accounts to make them memorable and unique.
-
-Unlike other blockchains, where accounts and addresses are implicit, accounts on Aptos are explicit and must be created before they can execute transactions. The account can be created explicitly or implicitly by transferring Aptos tokens (APT) there. See the [Creating an account](#creating-an-account) section for more details. In a way, this is similar to other chains where an address needs to be sent funds for gas before it can send transactions.
-
-Explicit accounts allow first-class features that are not available on other networks such as:
-* Rotating authentication key. The account's authentication key can be changed to be controlled via a different private key. This is similar to changing passwords in the web2 world.
-* Native multisig support. Accounts on Aptos support k-of-n multisig using both Ed25519 and Secp256k1 ECDSA signature schemes when constructing the authentication key.
-
-There are three types of accounts on Aptos:
- * *Standard account* - This is a typical account corresponding to an address with a corresponding pair of public/private keys.
- * [*Resource account*](../move/move-on-aptos/resource-accounts.md) - An autonomous account without a corresponding private key used by developers to store resources or publish modules on-chain.
- * [*Object*](../standards/aptos-object.md) - A complex set of resources stored within a single address representing a single entity.
-
-:::tip Account address example
-Account addresses are 32-bytes. They are usually shown as 64 hex characters, with each hex character a nibble.
-Sometimes the address is prefixed with a 0x. See [Your First Transaction](../tutorials/first-transaction.md) for an example
-of how an address appears, reproduced below:
-
-```text
-Alice: 0xeeff357ea5c1a4e7bc11b2b17ff2dc2dcca69750bfef1e1ebcaccf8c8018175b
-Bob: 0x19aadeca9388e009d136245b9a67423f3eee242b03142849eb4f81a4a409e59c
-```
-:::
-
-## Account address
-
-Currently, Aptos supports only a single, unified identifier for an account. Accounts on Aptos are universally represented as a 32-byte hex string. A hex string shorter than 32 bytes is also valid; in those scenarios, the hex string is padded with leading zeroes, e.g., `0x1` => `0x0000000000000...01`. While Aptos standards indicate leading zeroes may be removed from an address, most applications attempt to eschew that legacy behavior and only support the removal of zeroes for the special addresses ranging from `0x0` to `0xa`.
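-
-A sketch of this normalization using plain string handling (the SDKs provide a dedicated address type for this):
-
-```rust
-fn main() {
-    // Left-pad a short hex address to the canonical 64-character form.
-    let short = "1";
-    let full = format!("0x{:0>64}", short);
-    assert_eq!(full.len(), 2 + 64);
-    assert_eq!(full, format!("0x{}1", "0".repeat(63)));
-}
-```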
-
-## Creating an account
-
-When a user requests to create an account, for example by using the [Aptos SDK](https://aptos-labs.github.io/ts-sdk-doc/classes/AptosAccount.html), the following steps are executed:
-
-- Select the authentication scheme for managing the user's account, e.g., Ed25519 or Secp256k1 ECDSA.
-- Generate a new private key, public key pair.
-- Combine the public key with the public key's authentication scheme to generate a 32-byte authentication key and the account address.
-
-The user should use the private key for signing the transactions associated with this account.
-
-## Account sequence number
-
-The sequence number for an account indicates the number of transactions that have been submitted and committed on-chain from that account. Committed transactions either execute with the resulting state changes committed to the blockchain or abort wherein state changes are discarded and only the transaction is stored.
-
-Every transaction submitted must contain a unique sequence number for the given sender's account. When the Aptos blockchain processes the transaction, it looks at the sequence number in the transaction and compares it with the sequence number in the on-chain account. The transaction is processed only if the sequence number is equal to or larger than the current sequence number. Transactions are only forwarded to other mempools or executed if there is a contiguous series of transactions from the current sequence number. Execution rejects out of order sequence numbers preventing replay attacks of older transactions and guarantees ordering of future transactions.
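-
-A hypothetical sketch of this acceptance rule (not the actual mempool code):
-
-```rust
-// A transaction may be held in the mempool if its sequence number is not
-// stale, but it only executes when it exactly matches the account's
-// current on-chain sequence number.
-fn accept_into_mempool(txn_seq: u64, account_seq: u64) -> bool {
-    txn_seq >= account_seq
-}
-
-fn ready_to_execute(txn_seq: u64, account_seq: u64) -> bool {
-    txn_seq == account_seq
-}
-
-fn main() {
-    assert!(accept_into_mempool(5, 3));  // future txn: held for later
-    assert!(!ready_to_execute(5, 3));    // ...but not executed out of order
-    assert!(ready_to_execute(3, 3));     // contiguous: executes now
-    assert!(!accept_into_mempool(2, 3)); // old txn: rejected (replay)
-}
-```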
-
-## Authentication key
-
-The initial account address is set to the authentication key derived during account creation. However, the authentication key may subsequently change, for example when you generate a new public-private key pair to rotate the keys. An account address never changes.
-
-The Aptos blockchain supports the following authentication schemes:
-
-1. [Ed25519](https://ed25519.cr.yp.to/)
-2. [Secp256k1 ECDSA](https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-49.md)
-3. [K-of-N multi-signatures](https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-55.md)
-4. A dedicated, now legacy, MultiEd25519 scheme
-
-:::note
-The Aptos blockchain defaults to Ed25519 signature transactions.
-:::
-
-### Ed25519 authentication
-
-To generate an authentication key and the account address for an Ed25519 signature:
-
-1. **Generate a key-pair**: Generate a fresh key-pair (`privkey_A`, `pubkey_A`). The Aptos blockchain uses the PureEdDSA scheme over the Ed25519 curve, as defined in RFC 8032.
-2. **Derive a 32-byte authentication key**: Derive a 32-byte authentication key from the `pubkey_A`:
- ```
- auth_key = sha3-256(pubkey_A | 0x00)
- ```
- where `|` denotes concatenation. The `0x00` is the 1-byte single-signature scheme identifier.
-3. Use this initial authentication key as the permanent account address.
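-
-The preimage construction in step 2 can be sketched as byte concatenation (placeholder key bytes; the final SHA3-256 hash is elided, as it needs a crypto crate):
-
-```rust
-fn main() {
-    // 32-byte Ed25519 public key; placeholder bytes for illustration.
-    let pubkey_a = [0x11u8; 32];
-
-    // preimage = pubkey_A | 0x00, where 0x00 is the 1-byte
-    // single-signature scheme identifier; auth_key = sha3-256(preimage).
-    let mut preimage = pubkey_a.to_vec();
-    preimage.push(0x00);
-
-    assert_eq!(preimage.len(), 33);
-    assert_eq!(preimage[32], 0x00);
-}
-```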
-
-### MultiEd25519 authentication
-
-With K-of-N multisig authentication, there are a total of N signers for the account, and at least K of those N signatures
-must be used to authenticate a transaction.
-
-To generate a K-of-N multisig account's authentication key and the account address:
-
-1. **Generate key-pairs**: Generate `N` ed25519 public keys `p_1`, ..., `p_n`.
-2. Decide on the value of `K`, the threshold number of signatures needed for authenticating the transaction.
-3. **Derive a 32-byte authentication key**: Compute the authentication key as described below:
- ```
- auth_key = sha3-256(p_1 | . . . | p_n | K | 0x01)
- ```
- The `0x01` is the 1-byte multisig scheme identifier.
-4. Use this initial authentication key as the permanent account address.
-
-### Generalized authentication
-
-Generalized authentication supports both Ed25519 and Secp256k1 ECDSA. Like the previous authentication schemes, these schemes have a 1-byte scheme identifier, `0x02` for single-key and `0x03` for multi-key, respectively. In addition, each key carries a 1-byte prefix value indicating its key type:
-
-- **Ed25519 key prefix**: `0x00`,
-- **Secp256k1 ECDSA key prefix**: `0x01`.
-
-For a single key Secp256k1 ECDSA account, using public key `pubkey`, the authentication key would be derived as follows:
-```
-auth_key = sha3-256(0x01 | pubkey | 0x02)
-```
-Where
-* the first entry, `0x01`, represents the use of a Secp256k1 ECDSA key;
-* the last entry, `0x02`, represents the authentication scheme.
-
-For a multi-key account containing a single Secp256k1 ECDSA public key, `pubkey_0`, and a single Ed25519 public key, `pubkey_1`, where one signature suffices, the authentication key would be derived as follows:
-```
-auth_key = sha3-256(0x02 | 0x01 | pubkey_0 | 0x00 | pubkey_1 | 0x01 | 0x03)
-```
-Where
-* the first entry, `0x02`, represents the total number of keys as a single byte;
-* each public key is preceded by its 1-byte key-type prefix (`0x01` for Secp256k1 ECDSA, `0x00` for Ed25519);
-* the second to last entry, `0x01`, represents the required number of signatures as a single byte;
-* the last entry, `0x03`, represents the authentication scheme.
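-
-Again as a byte-layout sketch (placeholder key sizes: 65 bytes for an uncompressed Secp256k1 ECDSA key, 32 bytes for Ed25519; hashing elided):
-
-```rust
-fn main() {
-    let pubkey_0 = vec![0x22u8; 65]; // Secp256k1 ECDSA, prefix 0x01
-    let pubkey_1 = vec![0x33u8; 32]; // Ed25519, prefix 0x00
-
-    // preimage = n | prefix_0 | pubkey_0 | prefix_1 | pubkey_1 | k | scheme
-    let mut preimage = vec![0x02u8]; // total number of keys
-    preimage.push(0x01);
-    preimage.extend_from_slice(&pubkey_0);
-    preimage.push(0x00);
-    preimage.extend_from_slice(&pubkey_1);
-    preimage.push(0x01); // required signatures: 1-of-2
-    preimage.push(0x03); // multi-key authentication scheme
-
-    // auth_key = sha3-256(preimage)
-    assert_eq!(preimage.len(), 1 + 1 + 65 + 1 + 32 + 1 + 1);
-}
-```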
-
-## Rotating the keys
-An account on Aptos has the ability to rotate keys so that potentially compromised keys cannot be used to access the account. Keys can be rotated via the `account::rotate_authentication_key` function.
-
-Refreshing the keys is generally regarded as good hygiene in the security field. However, this presents a challenge for system integrators who are used to using a mnemonic to represent both a private key and its associated account. To simplify this for system integrators, Aptos provides an on-chain mapping via the `aptos account lookup-address` CLI command. The on-chain data maps an effective account address as defined by the current mnemonic to the actual account address.
-
-For more information, see [`account.move`](https://github.com/aptos-labs/aptos-core/blob/a676c1494e246c31c5e96d3363d99e2422e30f49/aptos-move/framework/aptos-framework/sources/account.move#L274).
-
-## State of an account
-
-The state of each account comprises both the code (Move modules) and the data (Move resources). An account may contain an arbitrary number of Move modules and Move resources:
-
-- **Move modules**: Move modules contain code, for example, type and procedure declarations; but they do not contain data. A Move module encodes the rules for updating the Aptos blockchain's global state.
-- **Move resources**: Move resources contain data but no code. Every resource value has a type that is declared in a module published on the Aptos blockchain.
-
-## Access control with signers
-
-The sender of a transaction is represented by a signer. When a function in a Move module takes `signer` as an argument, the Aptos Move VM translates the identity of the account that signed the transaction into a signer in a Move module entry point. See the below Move example code with `signer` in the `initialize` and `withdraw` functions. When a `signer` is not specified in a function, for example, the below `deposit` function, then no signer-based access controls will be provided for this function:
-
-```rust
-module Test::Coin {
-    use std::signer;
-
-    struct Coin has key { amount: u64 }
-
-    public fun initialize(account: &signer) {
-        move_to(account, Coin { amount: 1000 });
-    }
-
-    public fun withdraw(account: &signer, amount: u64): Coin acquires Coin {
-        let balance = &mut borrow_global_mut<Coin>(signer::address_of(account)).amount;
-        *balance = *balance - amount;
-        Coin { amount }
-    }
-
-    public fun deposit(account: address, coin: Coin) acquires Coin {
-        let balance = &mut borrow_global_mut<Coin>(account).amount;
-        *balance = *balance + coin.amount;
-        let Coin { amount: _ } = coin;
-    }
-}
-```
diff --git a/developer-docs-site/docs/concepts/base-gas.md b/developer-docs-site/docs/concepts/base-gas.md
deleted file mode 100644
index e1aa94452255d..0000000000000
--- a/developer-docs-site/docs/concepts/base-gas.md
+++ /dev/null
@@ -1,319 +0,0 @@
----
-title: "Computing Transaction Gas"
-id: "base-gas"
----
-
-# Computing Transaction Gas
-
-Aptos transactions by default charge a base gas fee, regardless of market conditions.
-For each transaction, this "base gas" amount is based on three conditions:
-
-1. Instructions.
-2. Storage.
-3. Payload.
-
-The more function calls, branching conditional statements, etc. that a transaction requires, the more instruction gas it will cost.
-Likewise, the more reads from and writes into global storage that a transaction requires, the more storage gas it will cost.
-Finally, the more bytes in a transaction payload, the more it will cost.
-
-As explained in the [optimization principles](#optimization-principles) section, storage gas has by far the largest effect on base gas. For background on the Aptos gas model, see [The Making of the Aptos Gas Schedule](https://aptoslabs.medium.com/the-making-of-the-aptos-gas-schedule-508d5686a350).
-
-
-## Instruction gas
-
-Basic instruction gas parameters are defined at [`instr.rs`] and include the following instruction types:
-
-### No-operation
-
-| Parameter | Meaning |
-|-----------|----------------|
-| `nop` | A no-operation |
-
-### Control flow
-
-| Parameter | Meaning |
-|------------|----------------------------------|
-| `ret` | Return |
-| `abort` | Abort |
-| `br_true` | Execute conditional true branch |
-| `br_false` | Execute conditional false branch |
-| `branch` | Branch |
-
-### Stack
-
-| Parameter | Meaning |
-|---------------------|----------------------------------|
-| `pop` | Pop from stack |
-| `ld_u8` | Load a `u8` |
-| `ld_u16` | Load a `u16` |
-| `ld_u32` | Load a `u32` |
-| `ld_u64` | Load a `u64` |
-| `ld_u128` | Load a `u128` |
-| `ld_u256`           | Load a `u256`                    |
-| `ld_true` | Load a `true` |
-| `ld_false` | Load a `false` |
-| `ld_const_base` | Base cost to load a constant |
-| `ld_const_per_byte` | Per-byte cost to load a constant |
-
-### Local scope
-
-| Parameter | Meaning |
-|-----------------------------|--------------------------|
-| `imm_borrow_loc` | Immutably borrow |
-| `mut_borrow_loc` | Mutably borrow |
-| `imm_borrow_field` | Immutably borrow a field |
-| `mut_borrow_field` | Mutably borrow a field |
-| `imm_borrow_field_generic`  | Immutably borrow a generic field |
-| `mut_borrow_field_generic`  | Mutably borrow a generic field |
-| `copy_loc_base`             | Base cost to copy        |
-| `copy_loc_per_abs_val_unit` | Copy cost per abstract value unit |
-| `move_loc_base`             | Move                     |
-| `st_loc_base`               | Base cost to store to a local |
-
-### Calling
-
-| Parameter | Meaning |
-|---------------------------|---------------------------------|
-| `call_base` | Base cost for a function call |
-| `call_per_arg` | Cost per function argument |
-| `call_per_local` | Cost per local argument |
-| `call_generic_base`       | Base cost for a generic function call |
-| `call_generic_per_ty_arg` | Cost per type argument          |
-| `call_generic_per_arg`    | Cost per generic function argument |
-| `call_generic_per_local`  | Cost per local, for generic calls |
-
-### Structs
-
-| Parameter | Meaning |
-|----------------------------|--------------------------------------|
-| `pack_base` | Base cost to pack a `struct` |
-| `pack_per_field` | Cost to pack a `struct`, per field |
-| `pack_generic_base`        | Base cost to pack a generic `struct` |
-| `pack_generic_per_field`   | Cost to pack a generic `struct`, per field |
-| `unpack_base`              | Base cost to unpack a `struct`       |
-| `unpack_per_field`         | Cost to unpack a `struct`, per field |
-| `unpack_generic_base`      | Base cost to unpack a generic `struct` |
-| `unpack_generic_per_field` | Cost to unpack a generic `struct`, per field |
-
-### References
-
-| Parameter | Meaning |
-|-----------------------------|------------------------------------|
-| `read_ref_base` | Base cost to read from a reference |
-| `read_ref_per_abs_val_unit` | Read cost per abstract value unit  |
-| `write_ref_base` | Base cost to write to a reference |
-| `freeze_ref` | Freeze a reference |
-
-### Casting
-
-| Parameter | Meaning |
-|-------------|------------------|
-| `cast_u8` | Cast to a `u8` |
-| `cast_u16` | Cast to a `u16` |
-| `cast_u32` | Cast to a `u32` |
-| `cast_u64` | Cast to a `u64` |
-| `cast_u128` | Cast to a `u128` |
-| `cast_u256` | Cast to a `u256` |
-
-### Arithmetic
-
-| Parameter | Meaning |
-|-----------|----------|
-| `add` | Add |
-| `sub` | Subtract |
-| `mul` | Multiply |
-| `mod_` | Modulo |
-| `div` | Divide |
-
-
-### Bitwise
-
-| Parameter | Meaning |
-|-----------|---------------------------|
-| `bit_or`  | `OR`: `\|`                |
-| `bit_and` | `AND`: `&` |
-| `xor` | `XOR`: `^` |
-| `shl` | Shift left: `<<` |
-| `shr` | Shift right: `>>` |
-
-### Boolean
-
-| Parameter | Meaning |
-|-----------|---------------------------------|
-| `or`      | `OR`: `\|\|`                    |
-| `and` | `AND`: `&&` |
-| `not` | `NOT`: `!` |
-
-
-### Comparison
-
-| Parameter | Meaning |
-|------------------------|--------------------------------|
-| `lt` | Less than: `<` |
-| `gt` | Greater than: `>` |
-| `le` | Less than or equal to: `<=` |
-| `ge` | Greater than or equal to: `>=` |
-| `eq_base`              | Base equality cost: `==`       |
-| `eq_per_abs_val_unit`  | Equality cost per abstract value unit |
-| `neq_base`             | Base not equal cost: `!=`      |
-| `neq_per_abs_val_unit` | Not-equal cost per abstract value unit |
-
-### Global storage
-
-| Parameter | Meaning |
-|----------------------------------|-------------------------------------------------------|
-| `imm_borrow_global_base`         | Base cost to immutably borrow: `borrow_global()`       |
-| `imm_borrow_global_generic_base` | As above, for a generic type                           |
-| `mut_borrow_global_base`         | Base cost to mutably borrow: `borrow_global_mut()`     |
-| `mut_borrow_global_generic_base` | As above, for a generic type                           |
-| `exists_base`                    | Base cost to check existence: `exists()`               |
-| `exists_generic_base`            | As above, for a generic type                           |
-| `move_from_base`                 | Base cost to move from: `move_from()`                  |
-| `move_from_generic_base`         | As above, for a generic type                           |
-| `move_to_base`                   | Base cost to move to: `move_to()`                      |
-| `move_to_generic_base`           | As above, for a generic type                           |
-
-### Vectors
-
-| Parameter | Meaning |
-|--------------------------------|------------------------------------------|
-| `vec_len_base` | Length of a vector |
-| `vec_imm_borrow_base` | Immutably borrow an element |
-| `vec_mut_borrow_base` | Mutably borrow an element |
-| `vec_push_back_base` | Push back |
-| `vec_pop_back_base` | Pop from the back |
-| `vec_swap_base` | Swap elements |
-| `vec_pack_base` | Base cost to pack a vector |
-| `vec_pack_per_elem` | Cost to pack a vector per element |
-| `vec_unpack_base` | Base cost to unpack a vector |
-| `vec_unpack_per_expected_elem` | Cost to unpack a vector, per expected element |
-
-Additional gas parameters are defined in [`table.rs`], [`move_stdlib.rs`], and other assorted source files in [`aptos-gas-schedule/src/`].
-
-## IO and Storage charges
-
-The following gas parameters are applied (i.e., charged) to represent the costs associated with transient storage device resources, including disk IOPS and bandwidth:
-
-| Parameter | Meaning |
-|---------------------------------|--------------------------------------------------------------------|
-| `storage_io_per_state_slot_write` | Charged per state write operation in the transaction output |
-| `storage_io_per_state_byte_write` | Charged per byte in all state write ops in the transaction output |
-| `storage_io_per_state_slot_read` | Charged per item loaded from global state |
-| `storage_io_per_state_byte_read` | Charged per byte loaded from global state |
-
-The following storage fee parameters are applied (i.e., charged in absolute APT values) to represent the disk space and structural costs associated with using the [Aptos authenticated data structure](../reference/glossary.md#merkle-trees) for storing items on the blockchain. This encompasses actions such as creating things in the global state, emitting events, and similar operations:
-
-| Parameter | Meaning |
-|-----------------------------------|----------------------------------------------------------------------------------------|
-| `free_write_bytes_quota` | 1KB (configurable) of free bytes per state slot. (*Subject to short-term change.*) |
-| `free_event_bytes_quota` | 1KB (configurable) of free event bytes per transaction. (*Subject to short-term change.*) |
-| `storage_fee_per_state_slot_create` | Charged for allocating a state slot, e.g. via `move_to()` or `table::add()` |
-| `storage_fee_per_excess_state_byte` | Charged per byte beyond `free_write_bytes_quota` per state slot. Note that this is charged every time the slot is written to, not only at allocation time. |
-| `storage_fee_per_event_byte` | Charged per byte beyond `free_event_bytes_quota` per transaction |
-| `storage_fee_per_transaction_byte` | Charged per transaction byte beyond `large_transaction_cutoff` (see [Payload gas](#payload-gas)) |
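-
-For a write to a state slot of total size $s$ bytes, the slot-related fees above combine, schematically, as:
-
-$$
-\text{fee} = \texttt{storage\_fee\_per\_state\_slot\_create} \cdot [\text{slot newly allocated}] + \max(0,\; s - \texttt{free\_write\_bytes\_quota}) \times \texttt{storage\_fee\_per\_excess\_state\_byte}
-$$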
-
-### Vectors
-
-Byte-wise fees are similarly assessed on vectors, which consume $\sum_{i = 0}^{n - 1} e_i + b(n)$ bytes, where:
-
-* $n$ is the number of elements in the vector
-* $e_i$ is the size of element $i$
-* $b(n)$ is a "base size" which is a function of $n$
-
-See the [BCS sequence specification] for more information on vector base size (technically a `ULEB128`), which typically occupies just one byte in practice, such that a vector of 100 `u8` elements accounts for $100 + 1 = 101$ bytes.
-Hence per the item-wise read methodology described above, reading the last element of such a vector is treated as a 101-byte read.
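-
-For a longer vector the base size grows: a `ULEB128` length prefix needs two bytes once $n \geq 128$, so a vector of 1,000 `u64` elements occupies
-
-$$
-\sum_{i = 0}^{999} 8 + b(1000) = 8000 + 2 = 8002 \text{ bytes,}
-$$
-
-and reading its last element is treated as an 8,002-byte read.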
-
-## Payload gas
-
-Payload gas is defined in [`transaction.rs`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/aptos-gas-schedule/src/gas_schedule/transaction.rs), which incorporates storage gas with several payload- and pricing-associated parameters:
-
-| Parameter | Meaning |
-|---------------------------------|----------------------------------------------------------------------------------------|
-| `min_transaction_gas_units` | Minimum internal gas units for a transaction, charged at the start of execution |
-| `large_transaction_cutoff` | Size, in bytes, above which transactions will be charged an additional amount per byte |
-| `intrinsic_gas_per_byte` | Internal gas units charged per byte for payloads above `large_transaction_cutoff` |
-| `maximum_number_of_gas_units` | Upper limit on external gas units for a transaction |
-| `min_price_per_gas_unit` | Minimum gas price allowed for a transaction |
-| `max_price_per_gas_unit` | Maximum gas price allowed for a transaction |
-| `max_transaction_size_in_bytes` | Maximum transaction payload size in bytes |
-| `gas_unit_scaling_factor` | Conversion factor between internal gas units and external gas units |
-
-Here, "internal gas units" are defined as constants in source files like [`instr.rs`] and [`storage_gas.move`], which are more granular than "external gas units" by a factor of `gas_unit_scaling_factor`:
-to convert from internal gas units to external gas units, divide by `gas_unit_scaling_factor`.
-Then, to convert from external gas units to octas, multiply by the "gas price", which denotes the number of octas per unit of external gas.
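-
-In symbols, a transaction that consumes $g$ internal gas units at gas price $p$ (octas per external gas unit) is charged
-
-$$
-\text{octas} = \frac{g}{\texttt{gas\_unit\_scaling\_factor}} \times p .
-$$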
-
-## Optimization principles
-
-### Unit and pricing constants
-
-As of the time of this writing, `min_price_per_gas_unit` in [`transaction.rs`] is defined as [`aptos_global_constants`]`::GAS_UNIT_PRICE` (which is itself defined as 100), with other noteworthy [`transaction.rs`] constants as follows:
-
-| Constant | Value |
-|---------------------------|--------|
-| `min_price_per_gas_unit` | 100 |
-| `max_price_per_gas_unit` | 10,000,000,000 |
-| `gas_unit_scaling_factor` | 1,000,000 |
-
-See [Payload gas](#payload-gas) for the meaning of these constants.
-
-### Storage Fee
-
-When the network load is low, the gas unit price is expected to be low, making most aspects of the transaction cost more affordable. However, the storage fee is an exception, as it is priced in absolute APT value. In most instances, the storage fee is the predominant component of the overall transaction cost. This is especially true when a transaction allocates state slots, writes to sizable state items, emits numerous or large events, or when the transaction itself is large. All of these factors consume disk space on Aptos nodes and are charged accordingly.
-
-On the other hand, the storage refund incentivizes releasing state slots by deleting state items. The state slot fee is fully refunded upon slot deallocation, while the excess state byte fee is non-refundable. This will soon change by differentiating between permanent bytes (those in the global state) and relatively ephemeral bytes (those that traverse the ledger history).
-
-Some cost optimization strategies concerning the storage fee:
-
-1. Minimize state item creation.
-2. Minimize event emissions.
-3. Avoid large state items, events, and transactions.
-4. Clean up state items that are no longer in use.
-5. If two fields are consistently updated together, group them into the same resource or resource group.
-6. If a struct is large and only a few fields are updated frequently, move those fields to a separate resource or resource group.
-
-
-### Instruction gas
-
-As of the time of this writing, all instruction gas operations are multiplied by the `EXECUTION_GAS_MULTIPLIER` defined in [`gas_meter.rs`], which is set to 20.
-Hence the following representative operations assume gas costs as follows (divide internal gas by scaling factor, then multiply by minimum gas price):
-
-| Operation | Minimum octas |
-|------------------------------|---------------|
-| Table add/borrow/remove box | 240 |
-| Function call | 200 |
-| Load constant | 130 |
-| Globally borrow | 100 |
-| Read/write reference | 40 |
-| Load `u128` on stack | 16 |
-| Table box operation per byte | 2 |
-
-(Note that per-byte table box operation instruction gas does not account for storage gas, which is assessed separately.)
-
-For comparison, reading a 100-byte item costs $r_i + 100 \times r_b = 3000 + 100 \times 3 = 3300$ octas at minimum, some 16.5 times as much as a function call; in general, gas costs are dominated by storage rather than by instruction gas.
-
-Technically there is still an incentive to reduce the number of function calls in a program, but in nearly all cases engineering effort is better spent writing modular, decomposed code that reduces storage gas costs than writing repetitive code blocks with fewer nested functions.
-
-In extreme cases instruction gas can far outweigh storage gas, for example if a loop-heavy mathematical function takes 10,000 iterations to converge; but this is an extreme case, and for most applications storage gas has a larger impact on base gas than instruction gas does.
-
-### Payload gas
-
-As of the time of this writing, [`transaction.rs`] defines the minimum amount of internal gas per transaction as 1,500,000 internal units (15,000 octas at minimum), an amount that increases by 2,000 internal gas units (20 octas minimum) per byte for payloads larger than 600 bytes, with the maximum number of bytes permitted in a transaction set at 65,536.
-Hence in practice, payload gas is unlikely to be a concern.
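-
-Equivalently, the intrinsic charge for a payload of $s$ bytes is
-
-$$
-\text{intrinsic gas} = \texttt{min\_transaction\_gas\_units} + \max(0,\; s - \texttt{large\_transaction\_cutoff}) \times \texttt{intrinsic\_gas\_per\_byte} ,
-$$
-
-in internal units, which remains small relative to typical execution and storage charges.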
-
-
-
-[#4540]: https://github.com/aptos-labs/aptos-core/pull/4540/files
-[`aptos-gas-schedule/src/`]: https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/aptos-gas-schedule/src
-[`aptos_global_constants`]: https://github.com/aptos-labs/aptos-core/blob/main/config/global-constants/src/lib.rs
-[`base_8192_exponential_curve()`]: https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/doc/storage_gas.md#0x1_storage_gas_base_8192_exponential_curve
-[BCS sequence specification]: https://github.com/diem/bcs#fixed-and-variable-length-sequences
-[`gas_meter.rs`]: https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/aptos-gas/src/gas_meter.rs
-[`initialize()`]: https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/doc/storage_gas.md#0x1_storage_gas_initialize
-[`instr.rs`]: https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/aptos-gas-schedule/src/gas_schedule/instr.rs
-[`move_stdlib.rs`]: https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/aptos-gas-schedule/src/gas_schedule/move_stdlib.rs
-[`on_reconfig()`]: https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/doc/storage_gas.md#@Specification_16_on_reconfig
-[`storage_gas.md`]: https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/doc/storage_gas.md
-[`storage_gas.move`]: https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/storage_gas.move
-[`StorageGas`]: https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/doc/storage_gas.md#resource-storagegas
-[`table.rs`]: https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/aptos-gas-schedule/src/gas_schedule/table.rs
-[`transaction.rs`]: https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/aptos-gas-schedule/src/gas_schedule/transaction.rs
diff --git a/developer-docs-site/docs/concepts/blockchain.md b/developer-docs-site/docs/concepts/blockchain.md
deleted file mode 100755
index 85286b12e1873..0000000000000
--- a/developer-docs-site/docs/concepts/blockchain.md
+++ /dev/null
@@ -1,351 +0,0 @@
----
-title: "Aptos Blockchain Deep Dive"
-slug: "blockchain"
----
-
-import ThemedImage from '@theme/ThemedImage';
-import useBaseUrl from '@docusaurus/useBaseUrl';
-
-# Aptos Blockchain Deep Dive
-
-For a deeper understanding of the lifecycle of an Aptos transaction (from an operational perspective), we will follow a transaction on its journey from submission to an Aptos fullnode through to its commitment on the Aptos blockchain. We will then focus on the logical components of Aptos nodes and look at how the transaction interacts with these components.
-
-## Life of a Transaction
-
-* Alice and Bob are two users who each have an [account](../reference/glossary.md#account) on the Aptos blockchain.
-* Alice's account has 110 Aptos Coins.
-* Alice is sending 10 Aptos Coins to Bob.
-* The current [sequence number](../reference/glossary.md#sequence-number) of Alice's account is 5 (which indicates that 5 transactions have already been sent from Alice's account).
-* There are a total of 100 validator nodes, V1 to V100, on the network.
-* An Aptos client submits Alice's transaction to a REST service on an Aptos Fullnode. The fullnode forwards this transaction to a validator fullnode which in turn forwards it to validator V1.
-* Validator V1 is a proposer/leader for the current round.
-
-### The Journey
-
-In this section, we will describe the lifecycle of transaction T5, from when the client submits it to when it is committed to the Aptos blockchain.
-
-For the relevant steps, we've included a link to the corresponding inter-component interactions of the validator node. After you are familiar with all the steps in the lifecycle of the transaction, you may want to refer to the information on the corresponding inter-component interactions for each step.
-
-
-
-
-
-:::tip Alert
-The arrows in all the visuals in this article originate on the component initiating an interaction/action and terminate on the component on which the action is being performed. The arrows do not represent data read, written, or returned.
-:::
-
-The lifecycle of a transaction has five stages:
-
-* **Accepting**: [Accepting the transaction](#accepting-the-transaction)
-* **Sharing**: [Sharing the transaction with other validator nodes](#sharing-the-transaction-with-other-validator-nodes)
-* **Proposing**: [Proposing the block](#proposing-the-block)
-* **Executing and Consensus**: [Executing the block and reaching consensus](#executing-the-block-and-reaching-consensus)
-* **Committing**: [Committing the block](#committing-the-block)
-
-We've described what happens in each stage below, along with links to the corresponding Aptos node component interactions.
-
-:::warning
-
-Transactions are validated upon entering a mempool and prior to execution by consensus. The client only learns of validation results returned during the initial submission via the REST service. Transactions may silently fail to execute, especially in the case where the account has run out of utility tokens or changed its authentication key in the midst of many transactions. While this happens infrequently, there are ongoing efforts to improve visibility in this space.
-
-:::
-
-### Client submits a transaction
-
-An Aptos **client constructs a raw transaction** (let's call it Traw5) to transfer 10 Aptos Coins from Alice’s account to Bob’s account. The Aptos client signs the transaction with Alice's private key. The signed transaction T5 includes the following:
-
-* The raw transaction.
-* Alice's public key.
-* Alice's signature.
-
-The raw transaction includes the following fields:
-
-| Fields | Description |
-| ------ | ----------- |
-| [Account address](../reference/glossary.md#account-address) | Alice's account address |
-| Payload | Indicates an action, or set of actions, to be performed on Alice's behalf. If this is a Move function, it directly calls into Move bytecode on the chain. Alternatively, it may be a peer-to-peer Move bytecode [transaction script](../reference/glossary.md#transaction-script). It also contains a list of inputs to the function or script. For this example, it is a function call to transfer an amount of Aptos Coins from Alice's account to Bob's account, where Alice's account is implied by sending the transaction and Bob's account and the amount are specified as transaction inputs. |
-| [Gas unit price](../reference/glossary.md#gas-unit-price) | The amount the sender is willing to pay per unit of gas to execute the transaction, denominated in Octas (units of 10<sup>-8</sup> Aptos utility tokens). |
-| [Maximum gas amount](../reference/glossary.md#maximum-gas-amount) | The maximum amount of gas Alice is willing to pay for this transaction. Gas charges are equal to the base gas cost, covering computation and IO, multiplied by the gas price. Gas costs also include storage, with an APT-fixed-price storage model. This is denominated in Octas (units of 10<sup>-8</sup> Aptos utility tokens). |
-| [Expiration time](../reference/glossary.md#expiration-time) | Expiration time of the transaction. |
-| [Sequence number](../reference/glossary.md#sequence-number) | The sequence number (5, in this example) for an account indicates the number of transactions that have previously been submitted and committed on-chain from that account. In this case, 5 transactions have been committed from Alice's account, and Traw5 carries sequence number 5. Note: a transaction with sequence number 5 can only be committed on-chain when the account sequence number is 5. |
-| [Chain ID](https://github.com/aptos-labs/aptos-core/blob/main/types/src/chain_id.rs) | An identifier that distinguishes the Aptos networks (to prevent cross-network attacks). |
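-
-Together, the gas fields above bound what Alice can be charged for execution and IO:
-
-$$
-\text{max charge (octas)} = \text{maximum gas amount} \times \text{gas unit price} .
-$$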
-
-### Accepting the transaction
-
-| Description | Aptos Node Component Interactions |
-| ------------------------------------------------------------ | ---------------------------------------------------------- |
-| 1. **Client → REST service**: The client submits transaction T5 to the REST service of an Aptos fullnode. The fullnode uses the REST service to forward the transaction to its own mempool, which then forwards the transaction to mempools running on other nodes in the network. The transaction will eventually be forwarded to a mempool running on a validator fullnode, which will send it to a validator node (V1 in this case). | [1. REST Service](#1-client--rest-service) |
-| 2. **REST service → Mempool**: The fullnode's mempool transmits transaction T5 to validator V1's mempool. | [2. REST Service](#2-rest-service--mempool), [1. Mempool](#1-rest-service--mempool) |
-| 3. **Mempool → Virtual Machine (VM)**: Mempool will use the virtual machine (VM) component to perform transaction validation, such as signature verification, account balance verification and replay resistance using the sequence number. | [4. Mempool](#4-mempool--vm), [3. Virtual Machine](#3-mempool--virtual-machine) |
-
-
-### Sharing the transaction with other validator nodes
-
-| Description | Aptos Node Component Interactions |
-| ------------------------------------------------------------ | -------------------------------- |
-| 4. **Mempool**: The mempool will hold T5 in an in-memory buffer. Mempool may already contain multiple transactions sent from Alice's address. | [Mempool](#mempool) |
-| 5. **Mempool → Other Validators**: Using the shared-mempool protocol, V1 will share the transactions (including T5) in its mempool with other validator nodes and place transactions received from them into its own (V1) mempool. | [2. Mempool](#2-mempool--other-validator-nodes) |
-
-### Proposing the block
-
-| Description | Aptos Node Component Interactions |
-| ------------------------------------------------------------ | ---------------------------------------- |
-| 6. **Consensus → Mempool**: As validator V1 is a proposer/leader for this transaction, it will pull a block of transactions from its mempool and replicate this block as a proposal to other validator nodes via its consensus component. | [1. Consensus](#1-consensus--mempool), [3. Mempool](#3-consensus--mempool) |
-| 7. **Consensus → Other Validators**: The consensus component of V1 is responsible for coordinating agreement among all validators on the order of transactions in the proposed block. | [2. Consensus](#2-consensus--other-validators) |
-
-### Executing the block and reaching consensus
-
-| Description | Aptos Node Component Interactions |
-| ------------------------------------------------------------ | ------------------------------------------------ |
-| 8. **Consensus → Execution**: As part of reaching agreement, the block of transactions (containing T5) is shared with the execution component. | [3. Consensus](#3-consensus--execution-consensus--other-validators), [1. Execution](#1-consensus--execution) |
-| 9. **Execution → Virtual Machine**: The execution component manages the execution of transactions in the VM. Note that this execution happens speculatively before the transactions in the block have been agreed upon. | [2. Execution](#2-execution--vm), [3. Virtual Machine](#3-mempool--virtual-machine) |
-| 10. **Consensus → Execution**: After executing the transactions in the block, the execution component appends the transactions in the block (including T5) to the [Merkle accumulator](../reference/glossary.md#merkle-accumulator) (of the ledger history). This is an in-memory/temporary version of the Merkle accumulator. The necessary part of the proposed/speculative result of executing these transactions is returned to the consensus component to agree on. The arrow from "consensus" to "execution" indicates that the request to execute transactions was made by the consensus component. | [3. Consensus](#3-consensus--execution-consensus--other-validators), [1. Execution](#1-consensus--execution) |
-| 11. **Consensus → Other Validators**: V1 (the consensus leader) attempts to reach consensus on the proposed block's execution result with the other validator nodes participating in consensus. | [3. Consensus](#3-consensus--execution-consensus--other-validators) |
-
-### Committing the block
-
-| Description | Aptos Node Component Interactions |
-| ------------------------------------------------------------ | ------------------------------------------------------------ |
-| 12. **Consensus → Execution**, **Execution → Storage**: If the proposed block's execution result is agreed upon and signed by a set of validators that have the quorum of votes, validator V1's execution component reads the full result of the proposed block execution from the speculative execution cache and commits all the transactions in the proposed block to persistent storage with their results. | [4. Consensus](#4-consensus--execution), [3. Execution](#3-consensus--execution), [4. Execution](#4-execution--storage), [3. Storage](#3-execution--storage) |
-
-Alice's account will now have 100 Aptos Coins, and its sequence number will be 6. If T5 is replayed by Bob, it will be rejected as the sequence number of Alice's account (6) is greater than the sequence number of the replayed transaction (5).
-
-## Aptos node component interactions
-
-In the [Life of a Transaction](#life-of-a-transaction) section, we described the typical lifecycle of a transaction (from transaction submission to transaction commit). Now let's look at the inter-component interactions of Aptos nodes as the blockchain processes transactions and responds to queries. This information will be most useful to those who:
-
-* Would like to get an idea of how the system works under the covers.
-* Are interested in eventually contributing to the Aptos blockchain.
-
-You can learn more about the different types of Aptos nodes here:
-* [Validator nodes](../concepts/validator-nodes.md)
-* [Fullnodes](../concepts/fullnodes.md)
-
-For our narrative, we will assume that a client submits a transaction TN to a validator VX. For each validator component, we will describe each of its inter-component interactions in subsections under the respective component's section. Note that subsections describing the inter-component interactions are not listed strictly in the order in which they are performed. Most of the interactions are relevant to the processing of a transaction, and some are relevant to clients querying the blockchain (queries for existing information on the blockchain).
-
-The following are the core components of an Aptos node used in the lifecycle of a transaction:
-
-**Fullnode**
-
-* [REST Service](#rest-service)
-
-**Validator node**
-
-* [Mempool](#mempool)
-* [Consensus](#consensus)
-* [Execution](#execution)
-* [Virtual Machine](#virtual-machine-vm)
-* [Storage](#storage)
-
-## REST Service
-
-
-
-
-
-Any request made by a client goes to the REST Service of a fullnode first. Then, the submitted transaction is forwarded to the validator fullnode, which then sends it to the validator node VX.
-
-### 1. Client → REST Service
-
-A client submits a transaction to the REST service of an Aptos fullnode.
-
-### 2. REST Service → Mempool
-
-The REST service of the fullnode transfers the transaction to its mempool. After mempool does some initial checks, the REST Service will return a status to the client indicating whether the transaction was accepted or rejected. For example, out-of-date transactions will be rejected: mempool will accept the transaction TN only if the sequence number of TN is greater than or equal to the current sequence number of the sender's account.
-
-### 3. Mempool → Mempool
-
-The mempool on the fullnode sends the transaction to the mempool of a validator fullnode, which then sends the transaction to validator node VX's mempool. Note that the transaction will not be sent to the next mempool (or passed to consensus) until the sequence number matches the sequence number of the sender's account. Furthermore, each mempool performs the same initial checks upon receiving a transaction, which may result in a transaction being discarded on its way to consensus. The current implementation of mempool does not provide any feedback if a transaction is discarded during this process.
-
-### 4. REST Service → Storage
-
-When a client performs a read query on the Aptos blockchain (for example, to get the balance of Alice's account), the REST service interacts with the storage component directly to obtain the requested information.
-
-## Virtual Machine (VM)
-
-
-
-
-
-The Move VM verifies and executes transaction scripts written in Move bytecode.
-
-### 1. Virtual Machine → Storage
-
-When mempool requests the VM to validate a transaction via `VMValidator::validate_transaction()`, the VM loads the transaction sender's account from storage and performs verifications, including those described in the list below.
-
-* Checks that the input signature on the signed transaction is correct (to reject incorrectly signed transactions).
-* Checks that the sender's account authentication key is the same as the hash of the public key (corresponding to the private key used to sign the transaction).
-* Verifies that the sequence number for the transaction is greater than or equal to the current sequence number for the sender's account. Completing this check prevents the replay of the same transaction against the sender's account.
-* Verifies that the program in the signed transaction is not malformed, as a malformed program cannot be executed by the VM.
-* Verifies that the sender's account balance contains at least the maximum gas amount multiplied by the gas price specified in the transaction, which ensures that the transaction can pay for the resources it uses.
-
-### 2. Execution → Virtual Machine
-
-The execution component utilizes the VM to execute a transaction via `ExecutorTask::execute_transaction()`.
-
-It is important to understand that executing a transaction is different from updating the state of the ledger and persisting the results in storage. A transaction TN is first executed as part of an attempt to reach agreement on blocks during consensus. If agreement is reached with the other validators on the ordering of transactions and their execution results, the results are persisted in storage and the state of the ledger is updated.
-
-### 3. Mempool → Virtual Machine
-
-When mempool receives a transaction from other validators via shared mempool or from the REST service, mempool invokes `VMValidator::validate_transaction()` on the VM to validate the transaction.
-
-For implementation details refer to the [Move Virtual Machine README](https://github.com/move-language/move/tree/main/language/move-vm).
-
-## Mempool
-
-
-
-
-
-Mempool is a shared buffer that holds the transactions that are “waiting” to be executed. When a new transaction is added to the mempool, the mempool shares this transaction with other validator nodes in the system. To reduce network consumption in the “shared mempool,” each validator is responsible for delivering its own transactions to other validators. When a validator receives a transaction from the mempool of another validator, the transaction is added to the mempool of the recipient validator.
-
-### 1. REST Service → Mempool
-
-* After receiving a transaction from the client, the REST service sends the transaction to its own mempool, which then shares the transaction with the mempool of a validator fullnode. The mempool on the validator fullnode then shares the transaction with the mempool of a validator.
-* The mempool for validator node VX accepts transaction TN for the sender's account only if the sequence number of TN is greater than or equal to the current sequence number of the sender's account.
-
-### 2. Mempool → Other validator nodes
-
-* The mempool of validator node VX shares transaction TN with the other validators on the same network.
-* Other validators share the transactions in their respective mempools with VX’s mempool.
-
-### 3. Consensus → Mempool
-
-* When the transaction is forwarded to a validator node and once the validator node becomes the leader, its consensus component will pull a block of transactions from its mempool and replicate the proposed block to other validators. It does this to arrive at a consensus on the ordering of transactions and the execution results of the transactions in the proposed block.
-* Note that just because a transaction TN was included in a proposed consensus block, it does not guarantee that TN will eventually be persisted in the distributed database of the Aptos blockchain.
-
-
-### 4. Mempool → VM
-
-When mempool receives a transaction from other validators, mempool invokes `VMValidator::validate_transaction()` on the VM to validate the transaction.
-
-## Consensus
-
-
-
-
-
-The consensus component is responsible for ordering blocks of transactions and agreeing on the results of execution by participating in the [consensus protocol](../reference/glossary.md#consensus-protocol) with other validators in the network.
-
-
-### 1. Consensus → Mempool
-
-When validator VX is a leader/proposer, the consensus component of VX pulls a block of transactions from its mempool via: `Mempool::get_batch()`, and forms a proposed block of transactions.
-
-### 2. Consensus → Other Validators
-
-If VX is a proposer/leader, its consensus component replicates the proposed block of transactions to other validators.
-
-### 3. Consensus → Execution, Consensus → Other Validators
-
-* To execute a block of transactions, consensus interacts with the execution component. Consensus executes a block of transactions via `BlockExecutorTrait::execute_block()` (refer to [Consensus → Execution](#1-consensus--execution)).
-* After executing the transactions in the proposed block, the execution component responds to the consensus component with the result of executing these transactions.
-* The consensus component signs the execution result and attempts to reach agreement on this result with other validators.
-
-### 4. Consensus → Execution
-
-If enough validators vote for the same execution result, the consensus component of VX informs execution via `BlockExecutorTrait::commit_blocks()` that this block is ready to be committed.
-
-## Execution
-
-
-
-
-
-The execution component coordinates the execution of a block of transactions and maintains a transient state that can be voted upon by consensus. If these transactions are successful, they are committed to storage.
-
-### 1. Consensus → Execution
-
-* Consensus requests execution to execute a block of transactions via: `BlockExecutorTrait::execute_block()`.
-* Execution maintains a “scratchpad,” which holds in-memory copies of the relevant portions of the [Merkle accumulator](../reference/glossary.md#merkle-accumulator). This information is used to calculate the root hash of the current state of the Aptos blockchain.
-* The root hash of the current state is combined with the information about the transactions in the proposed block to determine the new root hash of the accumulator. This is done prior to persisting any data, and to ensure that no state or transaction is stored until agreement is reached by a quorum of validators.
-* Execution computes the speculative root hash and then the consensus component of VX signs this root hash and attempts to reach agreement on this root hash with other validators.
-
-### 2. Execution → VM
-
-When consensus requests execution to execute a block of transactions via `BlockExecutorTrait::execute_block()`, execution uses the VM to determine the results of executing the block of transactions.
-
-### 3. Consensus → Execution
-
-If a quorum of validators agrees on the block execution results, the consensus component of each validator informs its execution component via `BlockExecutorTrait::commit_blocks()` that this block is ready to be committed. This call to the execution component will include the signatures of the validators to provide proof of their agreement.
-
-### 4. Execution → Storage
-
-Execution takes the values from its “scratchpad” and sends them to storage for persistence via `DbWriter::save_transactions()`. Execution then prunes the old values from the “scratchpad” that are no longer needed (for example, parallel blocks that cannot be committed).
-
-For implementation details refer to the [Execution README](https://github.com/aptos-labs/aptos-core/tree/main/execution).
-
-## Storage
-
-
-
-
-
-The storage component persists agreed-upon blocks of transactions and their execution results to the Aptos blockchain. A block of transactions (which includes transaction TN) will be saved via storage when a quorum (2f+1) of the validators participating in consensus have reached agreement. Agreement must include all of the following:
-* The transactions to include in the block
-* The order of the transactions
-* The execution results of the transactions in the block
-
-Refer to [Merkle accumulator](../reference/glossary.md#merkle-accumulator) for information on how a transaction is appended to the data structure representing the Aptos blockchain.
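The quorum requirement above (2f+1 of the validators) can be illustrated numerically. The helper below is not part of aptos-core and assumes a BFT setting of n = 3f + 1 validators with equal voting power; in practice Aptos counts voting power rather than node count:

```python
# Hypothetical helper illustrating BFT quorum sizing; not part of aptos-core.
def quorum_size(num_validators: int) -> int:
    """Smallest vote count that guarantees agreement when up to f of
    n = 3f + 1 validators are faulty: a quorum is 2f + 1 votes."""
    if num_validators < 4:
        raise ValueError("BFT requires at least 4 validators (f >= 1)")
    f = (num_validators - 1) // 3  # maximum tolerated faulty validators
    return 2 * f + 1

print(quorum_size(4))    # 3
print(quorum_size(100))  # 67
```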
-
-### 1. VM → Storage
-
-When mempool invokes `VMValidator::validate_transaction()` to validate a transaction, it loads the sender's account from storage and performs read-only validity checks on the transaction.
-
-### 2. Execution → Storage
-
-When the consensus component calls `BlockExecutorTrait::execute_block()`, execution reads the current state from storage combined with the in-memory “scratchpad” data to determine the execution results.
-
-### 3. Execution → Storage
-
-Once consensus is reached on a block of transactions, execution calls storage via `DbWriter::save_transactions()` to save the block of transactions and permanently record them. This call also stores the signatures of the validator nodes that agreed on the block. The in-memory data in the “scratchpad” for this block is passed to storage to persist the transactions. When storage is updated, every account modified by these transactions has its sequence number incremented by one.
-
-Note: The sequence number of an account on the Aptos blockchain increments by one for each committed transaction originating from that account.
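The sequence-number rule in the note above can be sketched as follows; the data shapes and names here are hypothetical illustrations, not the storage component's actual API:

```python
# Hypothetical sketch of the per-account sequence-number rule; names invented.
def commit_block(sequence_numbers: dict[str, int], transactions: list[dict]) -> None:
    """On commit, each transaction bumps its sender's sequence number by one."""
    for txn in transactions:
        sender = txn["sender"]
        expected = sequence_numbers.get(sender, 0)
        # A transaction only commits if it carries the next expected number.
        assert txn["sequence_number"] == expected, "out-of-order transaction"
        sequence_numbers[sender] = expected + 1

state = {"0xa": 0}
commit_block(state, [
    {"sender": "0xa", "sequence_number": 0},
    {"sender": "0xa", "sequence_number": 1},
    {"sender": "0xb", "sequence_number": 0},
])
print(state)  # {'0xa': 2, '0xb': 1}
```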
-
-### 4. REST Service → Storage
-
-For client queries that read information from the blockchain, the REST service directly interacts with storage to read the requested information.
-
-For implementation details refer to the [Storage README](https://github.com/aptos-labs/aptos-core/tree/main/storage).
diff --git a/developer-docs-site/docs/concepts/blocks.md b/developer-docs-site/docs/concepts/blocks.md
deleted file mode 100644
index 25c8cf5d59170..0000000000000
--- a/developer-docs-site/docs/concepts/blocks.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: "Blocks"
-id: "blocks"
----
-
-# Blocks
-
-Aptos is a per-transaction versioned database. When transactions are executed, the resulting state of each transaction is stored separately and thus allows for more granular data access. This is different from other blockchains where only the resulting state of a block (a group of transactions) is stored.
-
-Blocks are still a fundamental unit within Aptos. Transactions are batched and executed together in a block. In addition, the [proofs](./txns-states.md#proofs) within storage are at the block-level granularity. The number of transactions within a block varies depending on network activity and a configurable maximum block size limit. As the blockchain becomes busier, blocks will likely contain more transactions.
-
-## System transactions
-
-Each Aptos block contains both user transactions and special system transactions to *mark* the beginning and end of the transaction batch. Specifically, there are two system transactions:
-1. `BlockMetadataTransaction` is inserted at the beginning of the block. A `BlockMetadata` transaction can also mark the end of an [epoch](#epoch) and trigger reward distribution to validators.
-2. `StateCheckpointTransaction` is appended at the end of the block and is used as a checkpoint milestone.
-
-## Epochs
-
-In Aptos, epochs represent a longer period of time in order to safely synchronize major changes such as validator set additions/removals. An epoch is a fixed duration of time, currently defined as two hours on mainnet. The number of blocks in an epoch depends on how many blocks can execute within this period of time. Major changes, such as a validator joining the validator set, do not take effect immediately; they only take effect at the start of the next epoch.
diff --git a/developer-docs-site/docs/concepts/delegated-staking.md b/developer-docs-site/docs/concepts/delegated-staking.md
deleted file mode 100644
index 7fb7d6dcc9d44..0000000000000
--- a/developer-docs-site/docs/concepts/delegated-staking.md
+++ /dev/null
@@ -1,136 +0,0 @@
----
-title: "Delegated Staking"
----
-
-import ThemedImage from '@theme/ThemedImage';
-import useBaseUrl from '@docusaurus/useBaseUrl';
-
-# Delegated Staking
-
-## Delegated Staking on the Aptos Blockchain
-
-:::tip We strongly recommend that you read about [Staking](../concepts/staking.md) first.
-:::
-
-Delegated staking is an extension of the staking protocol. A delegation pool abstracts the stake owner into an entity capable of collecting stake from delegators and adding it on their behalf to the native stake pool attached to the validator. This allows multiple entities to form a stake pool that achieves the minimum requirements for the validator to join the validator set. While delegators can add stake to an inactive pool, the delegation pool will not earn rewards until it is active.
-
-:::danger Delegation pools are permissionless and anyone can add stake. A delegation pool cannot be converted to a stake pool once created, or vice versa, though it can be removed from the validator set and its assets withdrawn. For full details of the stake pool, see [Staking](../concepts/staking.md).
-:::
-
-For the full delegation pool smart contract, see [delegation_pool.move](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/delegation_pool.move).
-
-Unlike a stake pool, a delegation pool can be initialized with zero stake. When initialized, the delegation pool is owned indirectly via a resource account. This account manages the stake of the underlying stake pool on behalf of the delegators by forwarding their stake-management operations (add, unlock, reactivate, withdraw) to it, while the resource account itself cannot be directly accessed or externally owned.
-
-See the full list of [Delegation Pool Operations](../nodes/validator-node/operator/delegation-pool-operations.md).
-
-![image](https://user-images.githubusercontent.com/120680608/234953723-ae6cc89e-76d8-4014-89f3-ec8799c7b281.png)
-
-
-There are four entity types:
-
-- Owner
-- Operator
-- Voter
-- Delegator
-
-
-Using this model, the owner does not have to stake on the Aptos blockchain in order to run a validator.
-
-
-[How Validation on the Aptos blockchain works](../concepts/staking.md#validation-on-the-aptos-blockchain)
-
-
-### Owner
-
-The delegation pool owner has the following capabilities:
-
-1. Creates delegation pool
-2. Assigns operator for the delegation pool
-3. Sets operator commission percentage for the delegation pool
-4. Assigns voter for the delegation pool
-
-### Operator
-
-A node operator is assigned by the pool owner to run the validator node. The operator has the following capabilities:
-
-1. Join or leave the validator set once the delegation pool reaches 1M APT
-2. Perform validating functions
-3. Change the consensus key and network addresses. The consensus key is used to participate in the validator consensus process, i.e., to vote and propose a block. The operator is allowed to change ("rotate") this key in case this key is compromised.
-
-The operator receives commission that is distributed automatically at the end of each epoch as rewards.
-
-### Voter
-
-An owner can designate a voter. This enables the voter to participate in governance. The voter will use the voter key to sign the governance votes in the transactions.
-
-:::tip Governance
-This document describes staking. See [Governance](./governance.md) for how to participate in the Aptos on-chain governance using the owner-voter model.
-:::
-
-### Delegator
-
-A delegator is anyone who has stake in the delegation pool. Delegators earn rewards on their stake minus any commissions for the operator. Delegators can perform the following delegator operations:
-
-1. Add stake
-2. Unlock stake
-3. Reactivate stake
-4. Withdraw stake
-
-## Validator flow
-
-:::tip Delegation pool operations
-See [Delegation pool operations](../nodes/validator-node/operator/delegation-pool-operations.md) for the correct sequence of commands to run for the below flow.
-:::
-
-1. [Operator deploys validator node](../nodes/validator-node/operator/running-validator-node/index.md)
-2. [Run command to get delegation pool address](../nodes/validator-node/operator/delegation-pool-operations.md#connect-to-aptos-network)
-3. [Operator connects to the network using pool address derived in step 2](../nodes/validator-node/operator/connect-to-aptos-network.md)
-4. [Owner initializes the delegation pool and sets operator](../nodes/validator-node/operator/delegation-pool-operations.md#initialize-a-delegation-pool)
-5. Delegators can add stake at any time
-6. When the delegation pool reaches 1M APT, the operator can call `aptos node join-validator-set` to join the active validator set. Changes will be effective in the next epoch.
-7. Validator validates (proposes blocks as a leader-validator) and gains rewards. Rewards are distributed to delegators proportionally to stake amount. The stake will automatically be locked up for a fixed duration (set by governance) and automatically renewed at expiration.
-8. At any point, if the operator wants to update the consensus key or validator network addresses, they can call `aptos node update-consensus-key` or `aptos node update-validator-network-addresses`. As with changes to stake, changes to the consensus key or validator network addresses are only effective in the next epoch.
-9. Delegators can request to unlock their stake at any time. However, their stake will only become withdrawable when the delegation pool lockup expires.
-10. The validator can either explicitly leave the validator set by calling `aptos node leave-validator-set` or, if their stake drops below the minimum required, they will be removed at the end of the epoch.
-
-
-## Joining the validator set
-
-Participating as a delegation validator node on the Aptos network works like this:
-
-1. Operator runs a validator node and configures the on-chain validator network addresses and rotates the consensus key.
-2. Owner initializes the delegation pool.
-3. The validator node cannot sync until the delegation pool becomes active. The delegation pool becomes active when it reaches 1M APT.
-4. Operator validates and gains rewards.
-5. The stake pool is automatically locked up for a fixed duration (set by the Aptos governance) and will be automatically renewed at expiration. Commissions are automatically distributed to the operator as rewards. The operator can unlock stake at any time, but cannot withdraw until the delegation pool’s lockup period expires.
-6. Operator must wait until the new epoch starts before their validator becomes active.
-
-:::tip Joining the validator set
-For step-by-step instructions on how to join the validator set, see: [Joining Validator Set](../nodes/validator-node/operator/staking-pool-operations.md#joining-validator-set).
-:::
-
-### Automatic lockup duration
-
-When the operator joins the validator set, the delegation pool's stake will automatically be locked up for a fixed duration that is set by the Aptos governance. Delegators will follow the delegation pool's lockup cycle.
-
-### Automatic lockup renewal
-
-When the lockup period expires, it will be automatically renewed, so that the validator can continue to validate and receive the rewards.
-
-### Unlocking your stake
-
-Delegators can unlock stake at any time. However, the stake will only become withdrawable after the delegation pool's lockup period expires. Unlocked stake will continue earning rewards until the stake becomes withdrawable.
-
-### Resetting the lockup
-
-Lockup cannot be reset.
-
-## Rewards
-
-Rewards for delegated staking are calculated by using:
-
-1. The `rewards_rate`, an annual percentage yield (APY); i.e., rewards accrue as compound interest on your current staked amount.
-2. Delegator stake
-3. [Validator rewards performance](../concepts/staking.md#rewards-formula)
-
-See [Computing delegation pool rewards](../nodes/validator-node/operator/delegation-pool-operations.md#compute-delegation-pool-rewards-earned)
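The three factors above can be combined in a simplified model. This sketch distributes one epoch's pool rewards proportionally to delegator stake after deducting the operator's commission; the real on-chain accounting (pool shares, compounding, rounding in Octas) is more involved:

```python
# Simplified sketch: distribute one epoch's pool rewards to delegators
# proportionally to stake, after deducting the operator's commission.
def distribute_rewards(stakes: dict[str, int], pool_rewards: int,
                       commission_pct: int) -> dict[str, float]:
    commission = pool_rewards * commission_pct / 100  # operator's cut
    remainder = pool_rewards - commission
    total_stake = sum(stakes.values())
    payouts = {d: remainder * s / total_stake for d, s in stakes.items()}
    payouts["operator_commission"] = commission
    return payouts

payouts = distribute_rewards({"alice": 750_000, "bob": 250_000},
                             pool_rewards=10_000, commission_pct=10)
print(payouts)  # {'alice': 6750.0, 'bob': 2250.0, 'operator_commission': 1000.0}
```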
diff --git a/developer-docs-site/docs/concepts/events.md b/developer-docs-site/docs/concepts/events.md
deleted file mode 100755
index 4e1e7a4aeac25..0000000000000
--- a/developer-docs-site/docs/concepts/events.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-title: "Events"
-slug: "events"
----
-
-Events are emitted during the execution of a transaction. Each Move module can define its own events and choose when to emit them upon execution of the module. Aptos Move supports two forms of events: module events and EventHandle events. Module events are the modern event mechanism and shipped in framework release 1.7. EventHandle events are deprecated and shipped with the original framework. Because of how blockchains work, EventHandle events will likely never be fully removed from Aptos.
-
-# Module Events
-
-Module events are global event streams identified by a struct type. To define an event struct, add the attribute `#[event]` to a normal Move struct that has `drop` and `store` abilities. For example,
-
-```rust
-/// 0xcafe::my_module_name
-/// An example module event struct denotes a coin transfer.
-#[event]
-struct TransferEvent has drop, store {
- sender: address,
- receiver: address,
- amount: u64
-}
-```
-
-And then create and emit the event:
-
-```rust
-// Define an event.
-let event = TransferEvent {
- sender: 0xcafe,
- receiver: 0xface,
- amount: 100
-};
-// Emit the event just defined.
-0x1::event::emit(event);
-```
-
-Example module events are available [here](https://explorer.aptoslabs.com/txn/682252266/events?network=testnet). Indices 0, 1, 2 are three module events of
-type `0x66c34778730acbb120cefa57a3d98fd21e0c8b3a51e9baee530088b2e444e94c::event::MyEvent`. For API compatibility, module events contain the fields `Account Address`, `Creation Number`, and `Sequence Number`, all set to 0.
-
-![Module event example](../../static/img/module-event.png "Module event example")
-
-## Access in Tests
-
-Events are stored in a separate Merkle tree, called the event accumulator, for each transaction. Because it is ephemeral and hence independent from the state tree, the MoveVM does not have read access to events when executing transactions in production. In tests, however, Aptos Move supports two native functions that read emitted events for testing and debugging purposes:
-
-```rust
-/// Return all emitted module events with type T as a vector.
-#[test_only]
-public native fun emitted_events<T: drop + store>(): vector<T>;
-
-/// Return true iff `msg` was emitted.
-#[test_only]
-public fun was_event_emitted<T: drop + store>(msg: &T): bool
-```
-
-## API Access
-
-The API for querying module events is under construction. The [GraphQL API](https://aptos.dev/guides/system-integrators-guide/#production-network-access) remains available for querying both module events and EventHandle events.
-
-# Event-Handle Events (Deprecated)
-
-As part of its legacy, Aptos inherited the Libra/Diem event streams derived from EventHandles. Each EventHandle is identified by a globally unique value, a GUID, plus a per-event sequence number, and is stored within a resource. Each event within a stream has a unique sequence number derived from the EventHandle sequence number.
-
-For example, during a [coin transfer](../tutorials/first-transaction.md), both the sender and receiver's accounts will emit `SentEvent` and `ReceivedEvent`, respectively. This data is stored within the ledger and can be queried via the REST interface's [Get events by event handle](https://fullnode.devnet.aptoslabs.com/v1/spec#/operations/get_events_by_event_handle).
-
-Assuming that an account `0xc40f1c9b9fdc204cf77f68c9bb7029b0abbe8ad9e5561f7794964076a4fbdcfd` had sent coins to another account, the following query could be made to the REST interface: `https://fullnode.devnet.aptoslabs.com/v1/accounts/c40f1c9b9fdc204cf77f68c9bb7029b0abbe8ad9e5561f7794964076a4fbdcfd/events/0x1::coin::CoinStore<0x1::aptos_coin::AptosCoin>/withdraw_events`. The output would be all `WithdrawEvent`s stored on that account, and it would look like:
-
-```json
-[
- {
- "key": "0x0000000000000000caa60eb4a01756955ab9b2d1caca52ed",
- "sequence_number": "0",
- "type": "0x1::coin::WithdrawEvent",
- "data": {
- "amount": "1000"
- }
- }
-]
-```
-
-Each registered event has a unique `key`. The key `0x0000000000000000caa60eb4a01756955ab9b2d1caca52ed` maps to the event `0x1::coin::CoinStore<0x1::aptos_coin::AptosCoin>/withdraw_events` registered on account `0xc40f1c9b9fdc204cf77f68c9bb7029b0abbe8ad9e5561f7794964076a4fbdcfd`. This key can then be used to directly make event queries, e.g., `https://fullnode.devnet.aptoslabs.com/v1/events/0x0000000000000000caa60eb4a01756955ab9b2d1caca52ed`.
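The two query forms above can be assembled programmatically. The helpers below are hypothetical and only build URL strings; a real client would also URL-encode the generic type parameters before sending the request:

```python
# Hypothetical URL builders for the two event-query forms shown above.
BASE = "https://fullnode.devnet.aptoslabs.com/v1"

def events_by_handle_url(account: str, handle_struct: str, field: str) -> str:
    """Query an EventHandle stream by account, resource type, and field name."""
    return f"{BASE}/accounts/{account}/events/{handle_struct}/{field}"

def events_by_key_url(key: str) -> str:
    """Query the same stream directly by its unique event key."""
    return f"{BASE}/events/{key}"

account = "c40f1c9b9fdc204cf77f68c9bb7029b0abbe8ad9e5561f7794964076a4fbdcfd"
handle = "0x1::coin::CoinStore<0x1::aptos_coin::AptosCoin>"
url = events_by_handle_url(account, handle, "withdraw_events")
assert url.endswith("/withdraw_events")
```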
-
-These represent event streams, or a list of events with each entry containing a sequentially increasing `sequence_number` beginning at `0`, a `type`, and `data`. Each event must be defined by some `type`. There may be multiple events defined by the same or similar `type`s especially when using generics. Events have associated `data`. The general principle is to include all data necessary to understand the changes to the underlying resources before and after the execution of the transaction that changed the data and emitted the event.
-
-[coin_transfer]: https://github.com/aptos-labs/aptos-core/blob/bdd0a7fe82cd6aab4b47250e5eb6298986777cf7/aptos-move/framework/aptos-framework/sources/coin.move#L412
-
-[get_events]: https://fullnode.devnet.aptoslabs.com/v1/spec#/operations/get_events_by_event_handle
-
-## Migration to Module Events
-
-With the release of module events, EventHandle events are deprecated. To support migration to the module events, projects should emit a module event wherever they currently emit EventHandle events. Once external systems have sufficiently adopted module events, the legacy event may no longer need to be emitted.
-
-Note: EventHandle events cannot and will not be deleted; hence, projects that are unable to upgrade will continue to be able to leverage them.
diff --git a/developer-docs-site/docs/concepts/fullnodes.md b/developer-docs-site/docs/concepts/fullnodes.md
deleted file mode 100755
index be38fc274702f..0000000000000
--- a/developer-docs-site/docs/concepts/fullnodes.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: "Fullnodes Overview"
-slug: "fullnodes"
----
-An Aptos node is an entity of the Aptos ecosystem that tracks the [state](../reference/glossary.md#state) of the Aptos blockchain. Clients interact with the blockchain via Aptos nodes. There are two types of nodes:
-* [Validator nodes](./validator-nodes.md)
-* Fullnodes
-
-Each Aptos node comprises several logical components:
-* [REST service](../reference/glossary.md#rest-service)
-* [Mempool](./validator-nodes.md#mempool)
-* [Execution](./validator-nodes.md#execution)
-* [Virtual Machine](./validator-nodes.md#virtual-machine)
-* [Storage](./validator-nodes.md#storage)
-* [State synchronizer](./validator-nodes.md#state-synchronizer)
-
-The [Aptos-core](../reference/glossary.md#aptos-core) software can be configured to run as a validator node or as a fullnode.
-
-## Overview
-
-Fullnodes can be run by anyone. Fullnodes verify blockchain history by either re-executing all transactions in the history of the Aptos blockchain or replaying each transaction's output. Fullnodes replicate the entire state of the blockchain by synchronizing with upstream participants, e.g., other fullnodes or validator nodes. To verify blockchain state, fullnodes receive the set of transactions and the [accumulator hash root](../reference/glossary.md#accumulator-root-hash) of the ledger signed by the validators. In addition, fullnodes accept transactions submitted by Aptos clients and forward them directly (or indirectly) to validator nodes. While fullnodes and validators share the same code, fullnodes do not participate in consensus.
-
-Depending on its upstream, a fullnode is called either a validator fullnode or a public fullnode:
-* A **validator fullnode** state syncs directly from a validator node.
-* A **public fullnode** state syncs from other fullnodes.
-
-There's no difference in their functionality; the only difference is whether the upstream node is a validator or another fullnode. Read more details about network topology [here](./node-networks-sync.md).
-
-Third-party blockchain explorers, wallets, exchanges, and DApps may run a local fullnode to:
-* Leverage the REST interface for blockchain interactions.
-* Get a consistent view of the Aptos ledger.
-* Avoid rate limitations on read traffic.
-* Run custom analytics on historical data.
-* Get notifications about particular on-chain events.
diff --git a/developer-docs-site/docs/concepts/gas-txn-fee.md b/developer-docs-site/docs/concepts/gas-txn-fee.md
deleted file mode 100755
index e72d6cdc9a299..0000000000000
--- a/developer-docs-site/docs/concepts/gas-txn-fee.md
+++ /dev/null
@@ -1,190 +0,0 @@
----
-title: "Gas and Storage Fees"
-slug: "gas-txn-fee"
----
-
-# Gas and Storage Fees
-
-Any transaction execution on the Aptos blockchain requires a processing fee. As of today, this fee comprises two components:
-1. Execution & IO costs
- - This covers your usage of transient computation resources, such as processing your transactions and propagating the validated record throughout the distributed network of the mainnet.
- - It is measured in gas units, whose price may fluctuate according to the load of the network. This allows execution & IO costs to be low when the network is less busy.
- - This portion of gas is burned permanently upon the execution of a transaction.
-2. Storage fees
- - This covers the cost to persistently store the validated record in the distributed blockchain storage.
- - It is measured in fixed APT prices, so the permanent storage cost stays stable even as the gas unit price fluctuates with the network's transient load.
- - The storage fee can be refunded when the allocated storage space is deleted. The refund amount may be full or partial, based on the size and duration of the storage used.
- - To keep system implementation simple, this portion of gas is burned and minted again upon refund.
-
-:::tip
-Conceptually, this fee can be thought of as quite similar to how we pay for our home electric or water utilities.
-:::
-
-## Unit of gas
-
-Transactions can range from simple and inexpensive to complicated based upon what they do. In the Aptos blockchain, a **unit of gas** represents a basic unit of consumption for transient resources, such as doing computation or accessing the storage. The latter should not be conflated with the long-term storage aspect of such operations, as that is covered by the storage fees separately.
-
-See [How Base Gas Works](./base-gas.md) for a detailed description of gas fee types and available optimizations.
-
-:::tip Unit of gas
-👉 A **unit of gas** is a dimensionless number or a unit that is not associated with any one item such as a coin, expressed as an integer. The total gas units consumed by your transaction depend on the complexity of your transaction. The **gas price**, on the other hand, is expressed in terms of the Aptos blockchain’s native coin (Octas). Also see [Transactions and States](txns-states.md) for what a transaction submitted to the Aptos blockchain looks like.
-:::
-
-## The Fee Statement
-
-As of Aptos Framework release 1.7, the breakdown of fee charges and refunds is emitted as a module event represented by struct `0x1::transaction_fee::FeeStatement`.
-
-```Rust
- #[event]
- /// Breakdown of fee charge and refund for a transaction.
- /// The structure is:
- ///
- /// - Net charge or refund (not in the statement)
- /// - total charge: total_charge_gas_units, matches `gas_used` in the on-chain `TransactionInfo`.
- /// This is the sum of the sub-items below. Notice that there's potential precision loss when
- /// the conversion between internal and external gas units and between native token and gas
- /// units, so it's possible that the numbers don't add up exactly. -- This number is the final
- /// charge, while the break down is merely informational.
- /// - gas charge for execution (CPU time): `execution_gas_units`
- /// - gas charge for IO (storage random access): `io_gas_units`
- /// - storage fee charge (storage space): `storage_fee_octas`, to be included in
- /// `total_charge_gas_unit`, this number is converted to gas units according to the user
- /// specified `gas_unit_price` on the transaction.
- /// - storage deletion refund: `storage_fee_refund_octas`, this is not included in `gas_used` or
- /// `total_charge_gas_units`, the net charge / refund is calculated by
- /// `total_charge_gas_units` * `gas_unit_price` - `storage_fee_refund_octas`.
- ///
- /// This is meant to be emitted as a module event.
- struct FeeStatement has drop, store {
- /// Total gas charge.
- total_charge_gas_units: u64,
- /// Execution gas charge.
- execution_gas_units: u64,
- /// IO gas charge.
- io_gas_units: u64,
- /// Storage fee charge.
- storage_fee_octas: u64,
- /// Storage fee refund.
- storage_fee_refund_octas: u64,
- }
-```
-
-## Gas price and prioritizing transactions
-
-In the Aptos network, the Aptos governance sets the absolute minimum gas unit price. However, the market determines how quickly a transaction with a particular gas unit price is processed. See [Ethereum Gas Tracker](https://etherscan.io/gastracker), for example, which shows the market price movements of Ethereum gas price.
-
-By specifying a higher gas unit price than the current market price, you can **increase** the priority level for your transaction on the blockchain by paying a larger processing fee. As part of consensus, when the leader selects transactions from its mempool to propose as part of the next block, it will prioritize selecting transactions with a higher gas unit price. Please note that higher gas fees only prioritize transaction selection for the next block.
-
-However, within a block, the order of transaction execution is determined by the system. This order is based on [transaction shuffling](https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-27.md), which makes parallel execution more efficient by considering conflict patterns. While in most cases this is unnecessary, if the network is under load this measure can ensure your transaction is processed more quickly. See the `gas_unit_price` entry under [Estimating the gas units via simulation](#estimating-the-gas-units-via-simulation) for details.
-
-:::caution Increasing gas unit price with in-flight transactions
-👉 If you are increasing gas unit price, but have in-flight (uncommitted) transactions for the same account, you should resubmit all of those transactions with the higher gas unit price. This is because transactions within the same account always have to respect sequence number, so effectively the higher gas unit price transaction will increase priority only after the in-flight transactions are included in a block.
-:::
-
-## Specifying gas fees within a transaction
-
-When a transaction is submitted to the Aptos blockchain, the transaction must contain the following mandatory gas fields:
-
-- `max_gas_amount`: The maximum number of gas units that the transaction sender is willing to spend to execute the transaction. This determines the maximum computational resources that can be consumed by the transaction.
-- `gas_unit_price`: The gas price the transaction sender is willing to pay. It is expressed in Octas, where 1 Octa equals 10^-8 APT, the Aptos utility token.
-
- During the transaction execution, the total number of gas units consumed must not exceed `max_gas_amount`, or else the transaction execution will abort. The resulting fee is:
- ```
- (total gas units consumed) * (gas_unit_price)
- ```
-
-The transaction fee charged to the client will therefore be at most `gas_unit_price * max_gas_amount`.
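As a numeric illustration of the fee bound above (the values are chosen arbitrarily):

```python
OCTAS_PER_APT = 10**8  # 1 APT = 100,000,000 Octas

# Illustrative upper bound on the fee a transaction can be charged.
def max_fee_octas(max_gas_amount: int, gas_unit_price: int) -> int:
    return max_gas_amount * gas_unit_price

fee = max_fee_octas(max_gas_amount=2_000, gas_unit_price=100)
print(fee)                  # 200000 Octas
print(fee / OCTAS_PER_APT)  # 0.002 APT
```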
-
-## Gas parameters set by governance
-
-The following gas parameters are set by Aptos governance.
-
-:::tip On-chain gas schedule
-These on-chain gas parameters are published on the Aptos blockchain at `0x1::gas_schedule::GasScheduleV2`.
-:::
-
-- `txn.maximum_number_of_gas_units`: Maximum number of gas units that can be spent (this is the maximum allowed value for the `max_gas_amount` gas parameter in the transaction). This is to ensure that the dynamic pricing adjustments do not exceed how much you are willing to pay in total.
-- `txn.min_transaction_gas_units`: Minimum number of gas units that can be spent. The `max_gas_amount` value in the transaction must be set to a value greater than this parameter’s value.
-
-There also exist some global per-category limits:
-- `txn.max_execution_gas`: The maximum number of gas units a transaction can spend on execution.
-- `txn.max_io_gas`: The maximum number of gas units a transaction can spend on IO.
-- `txn.max_storage_fee`: The maximum amount of APT a transaction can spend on persistent storage.
-
-These limits help decouple one category from another, allowing us to set `txn.maximum_number_of_gas_units` generously without having to worry about abuse.
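A sketch of how such per-category caps could be checked against a transaction's usage; the limit values below are invented placeholders, while the real ones live on chain in `0x1::gas_schedule::GasScheduleV2`:

```python
# Illustrative per-category check; these limit values are invented
# placeholders, not the actual on-chain gas schedule values.
LIMITS = {
    "execution_gas": 920_000_000,     # cf. txn.max_execution_gas
    "io_gas": 10_000_000_000,         # cf. txn.max_io_gas
    "storage_fee_octas": 2 * 10**8,   # cf. txn.max_storage_fee
}

def within_category_limits(usage: dict[str, int]) -> bool:
    """True iff every category of usage stays under its own cap."""
    return all(usage.get(k, 0) <= v for k, v in LIMITS.items())

assert within_category_limits({"execution_gas": 500, "io_gas": 200,
                               "storage_fee_octas": 5_000})
assert not within_category_limits({"execution_gas": LIMITS["execution_gas"] + 1})
```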
-
-## Calculating Storage Fees
-
-The storage fee for a transaction is calculated based on the following factors:
-1. The size of the transaction itself
-2. The number of new storage slots used and bytes written
-3. The events emitted
-
-For details, see [How Base Gas Works](./base-gas.md).
-
-Note that, for backward compatibility reasons, the total storage fee of a transaction is currently presented to the client as part of the total `gas_used`. This means the amount could vary based on the gas unit price, even for the same transaction.
-
-Here is an example. Suppose we have a transaction that costs `100` gas units in execution & IO, and `5000` Octa in storage fees. The network will show that you have used
-- `100 + 5000 / 100 = 150` gas units if the gas unit price is `100`, or
-- `100 + 5000 / 200 = 125` gas units if the unit price is `200`.
-
-We are aware of the confusion this might create, and plan to present these as separate items in the future. However this will require some changes to the transaction output format and downstream clients, so please be patient while we work hard to make this happen.
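The arithmetic in the example above can be reproduced with a small helper (the function name is invented, and integer division is an assumption about the rounding behavior):

```python
# Reproduces the example above: the storage fee is folded into gas_used by
# converting Octas to gas units at the transaction's gas unit price.
def reported_gas_used(execution_io_gas: int, storage_fee_octas: int,
                      gas_unit_price: int) -> int:
    return execution_io_gas + storage_fee_octas // gas_unit_price

print(reported_gas_used(100, 5000, gas_unit_price=100))  # 150
print(reported_gas_used(100, 5000, gas_unit_price=200))  # 125
```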
-
-## Calculating Storage Deletion Refund
-
-If a transaction deletes state items, a refund is issued to the transaction payer for the released storage slots. Currently, a full refund is issued for the slot's fee, excluding any fees for excess bytes beyond a set quota (e.g., 1KB). However, fees for event emissions are not refundable.
-
-The refund amount is denominated in APT and is not converted to gas units or included in the total `gas_used`. Instead, this refund amount is specifically detailed in the `storage_fee_refund_octas` field of the [`FeeStatement`](#the-fee-statement). As a result, the transaction's net effect on the payer's APT balance is determined by `gas_used * gas_unit_price - storage_refund`. If the result is positive, there is a deduction from the account balance; if negative, there is a deposit.
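The net-effect formula above can be written as a one-liner, where a negative result indicates a net deposit to the payer:

```python
# Net effect on the fee payer's balance, in Octas (negative = net deposit).
def net_charge_octas(gas_used: int, gas_unit_price: int,
                     storage_fee_refund_octas: int) -> int:
    return gas_used * gas_unit_price - storage_fee_refund_octas

print(net_charge_octas(150, 100, 0))       # 15000: pure charge
print(net_charge_octas(150, 100, 20_000))  # -5000: net refund to the payer
```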
-
-## Examples
-
-### Example 1: Account balance vs transaction fee
-
-**The sender’s account must have sufficient funds to pay for the transaction fee.**
-
-Suppose, for example, you transfer all the funds out of your account, leaving no remaining balance to pay the transaction fee. In that case, the Aptos blockchain would let you know that the transaction will fail, and your transfer wouldn't succeed either.
-
-### Example 2: Transaction amounts vs transaction fee
-
-**Transaction fee is independent of transfer amounts in the transaction.**
-
-For example, in transaction A you transfer 1000 coins from one account to another. In a second transaction B, with the same gas field values as transaction A, you transfer 100,000 coins from one account to another. Assuming that transactions A and B are sent at roughly the same time, the gas costs for A and B would be near-identical.
-
-## Estimating gas consumption via simulation
-
-The gas used for a transaction can be estimated by simulating the transaction on chain (as described below) or locally via the gas profiling feature of the Aptos CLI. The results of the simulated transaction represent the **exact** amount needed at the **exact** state of the blockchain at the time of the simulation. Because the gas used may change with the state of the chain, any amount coming out of the simulation is only an estimate; when setting the max gas amount, include an appropriate amount of headroom based upon your comfort level and historical behavior. Setting the max gas amount too low will result in the transaction aborting and the account being charged for whatever gas was consumed.
-
-To simulate transactions on chain, use the [`SimulateTransaction`](https://fullnode.devnet.aptoslabs.com/v1/spec#/operations/simulate_transaction) API. This API runs the exact transaction that you plan to submit.
-
-To simulate the transaction locally, use the gas profiler, which is integrated into the Aptos CLI.
-This will generate a web-based report to help you understand the precise gas usage of your transaction.
-See [Gas Profiling](../move/move-on-aptos/gas-profiling) for more details.
-
-:::tip
-Note that the `Signature` provided on the transaction must be all zeros. This prevents the simulated transaction from being replayed with a valid signature.
-:::
-
-The simulation API supports two flags:
-
-1. `estimate_gas_unit_price`: This flag will estimate the gas unit price in the transaction using the same algorithm as the [`estimate_gas_price`](https://fullnode.devnet.aptoslabs.com/v1/spec#/operations/estimate_gas_price) API.
-2. `estimate_max_gas_amount`: This flag will find the maximum possible gas you can use, and it will simulate the transaction to tell you the actual `gas_used`.
-
-### Simulation steps
-
-The simulation steps for finding the correct amount of gas for a transaction are as follows:
-
-1. Estimate the gas via simulation with both `estimate_gas_unit_price` and `estimate_max_gas_amount` set to `true`.
-2. Use the `gas_unit_price` in the returned transaction as your new transaction’s `gas_unit_price`.
-3. View the `gas_used * gas_unit_price` values in the returned transaction as the **lower bound** for the cost of the transaction.
-4. To calculate the upper bound of the cost, take the **minimum** of the `max_gas_amount` in the returned transaction, and the `gas_used * safety factor`. In the CLI a value of `1.5` is used for `safety factor`. Use this value as `max_gas_amount` for the transaction you want to submit. Note that the **upper bound** for the cost of the transaction is `max_gas_amount * gas_unit_price`, i.e., this is the most the sender of the transaction is charged.
-5. At this point you now have your `gas_unit_price` and `max_gas_amount` to submit your transaction as follows:
- 1. `gas_unit_price` from the returned simulated transaction.
- 2. `max_gas_amount` as the minimum of `gas_used * safety factor` and the `max_gas_amount` from the returned transaction.
-6. If you feel the need to prioritize or deprioritize your transaction, adjust the `gas_unit_price` of the transaction. Increase the value for higher priority, and decrease the value for lower priority.
-
-:::tip
-Prioritization is based upon buckets of `gas_unit_price`. The buckets are defined in [`mempool_config.rs`](https://github.com/aptos-labs/aptos-core/blob/30b385bf38d3dc8c4e8ee0ff045bc5d0d2f67a85/config/src/config/mempool_config.rs#L8). The current buckets are `[0, 150, 300, 500, 1000, 3000, 5000, 10000, 100000, 1000000]`. Therefore, transactions with a `gas_unit_price` of 150 and of 299 would be prioritized nearly identically.
-:::
-
-:::tip
-Note that the `safety factor` only takes into consideration changes related to execution and IO. Unexpected creation of storage slots may not be sufficiently covered.
-:::
diff --git a/developer-docs-site/docs/concepts/governance.md b/developer-docs-site/docs/concepts/governance.md
deleted file mode 100644
index 619d81ad85ce8..0000000000000
--- a/developer-docs-site/docs/concepts/governance.md
+++ /dev/null
@@ -1,69 +0,0 @@
----
-title: "Governance"
-slug: "governance"
----
-import ThemedImage from '@theme/ThemedImage';
-import useBaseUrl from '@docusaurus/useBaseUrl';
-
-# Governance
-
-Aptos on-chain governance is a process by which Aptos community members can create and vote on proposals that minimize the cost of blockchain upgrades. The following describes the scope of these proposals for Aptos on-chain governance:
-
-- Changes to the blockchain parameters, for example, the epoch duration, and the minimum required and maximum allowed validator stake.
-- Changes to the core blockchain code.
-- Upgrades to the Aptos Framework modules for fixing bugs or for adding or enhancing the Aptos blockchain functionality.
-- Deploying new framework modules (at the address `0x1` - `0xa`).
-
-## How a proposal becomes ready to be resolved
-
-See below for a summary description of how a proposal comes to exist and when it becomes ready to be resolved:
-
-
-
-- The Aptos community can suggest an Aptos Improvement Proposal (AIP) in the [Aptos Foundation AIP GitHub](https://github.com/aptos-foundation/AIPs).
-- When appropriate, an on-chain proposal can be created for the AIP via the `aptos_governance` module.
-- Voters can then vote on this proposal on-chain via the `aptos_governance` module. If there is sufficient support for a proposal, then it can be resolved.
-- Governance requires a minimum number of votes to be cast before the proposal's expiration. However, if sufficient votes (more than 50% of the total supply) are accumulated before that deadline, the proposal can be executed **without waiting for the full voting period**.
-
-## Who can propose
-
-- To either propose or vote, you must stake, but you are not required to run a validator node. However, we recommend that you run a validator node with your stake as part of the validator set to earn rewards on your stake.
-- To create a proposal, the proposer's backing stake pool must have the minimum required proposer stake. The proposer's stake must be locked up for at least as long as the proposal's voting period. This is to avoid potential spammy proposals.
-- Proposers can create a proposal by calling [`aptos_governance::create_proposal`](https://github.com/aptos-labs/aptos-core/blob/27a255ebc662817944435349afc4ec33ea317e64/aptos-move/framework/aptos-framework/sources/aptos_governance.move#L183).
-
-## Who can vote
-
-- To vote, you must stake, though you are not required to run a validator node. Your voting power is derived from the backing stake pool.
-- Voting power is calculated based on the current epoch's active stake of the proposer or voter's backing stake pool. In addition, the stake pool's lockup must be at least as long as the proposal's duration.
-- Verify proposals before voting. Ensure each proposal is linked to its source code and, if there is a corresponding AIP, that the AIP is referenced in the title and description.
-
-:::tip
-Each stake pool can be used to vote on each proposal exactly once.
-:::
-
-## Who can resolve
-- Anyone can resolve an on-chain proposal that has passed voting requirements by using the `aptos governance execute-proposal` command from Aptos CLI.
-
-## Aptos Improvement Proposals (AIPs)
-
-AIPs are proposals created by the Aptos community or the Aptos Labs team to improve the operations and development of the Aptos chain.
-To submit an AIP, create an issue in [`Aptos Foundation's GitHub repository`](https://github.com/aptos-foundation/AIPs/issues) using the [template](https://github.com/aptos-foundation/AIPs/blob/main/TEMPLATE.md).
-To keep up with new AIPs, check the `#aip-announcements` channel on [Aptos' Discord server](https://discord.gg/aptosnetwork).
-To view and vote on on-chain proposals, go to [`Aptos' Governance website`](https://governance.aptosfoundation.org/).
-
-## Technical Implementation of Aptos Governance
-The majority of the governance logic is in [`aptos_governance.move` and `voting.move`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources).
-The `aptos_governance` module outlines how users can interact with Aptos Governance. It's the external-facing module of the Aptos on-chain governance process and contains logic and checks that are specific to Aptos Governance.
-The `voting` module is the Aptos governance standard that can be used by DAOs on the Aptos chain to create their own on-chain governance process.
-
-If you are thinking about creating a DAO on Aptos, you can refer to `aptos_governance`'s usage of the `voting` module as an example.
-In `aptos_governance`, we rely on the `voting` module to create, vote on, and resolve a proposal.
-- `aptos_governance::create_proposal` calls `voting::create_proposal` to create a proposal on-chain, when an off-chain AIP acquires sufficient importance.
-- `aptos_governance::vote` calls `voting::vote` to record the vote on a proposal on-chain.
-- `aptos_governance::resolve` can be called by anyone. It calls `voting::resolve` to resolve the proposal on-chain.
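-To illustrate the flow (a sketch only; hypothetical script and function name, under the assumption that the entry function signature matches the current module source), a voter could cast a vote via a transaction script:
-
-```rust
-script {
-    use aptos_framework::aptos_governance;
-
-    // Votes on `proposal_id` using the voting power of the voter's
-    // backing `stake_pool`; `should_pass` is true to vote in favor.
-    fun cast_vote(voter: &signer, stake_pool: address, proposal_id: u64, should_pass: bool) {
-        aptos_governance::vote(voter, stake_pool, proposal_id, should_pass);
-    }
-}
-```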
diff --git a/developer-docs-site/docs/concepts/index.md b/developer-docs-site/docs/concepts/index.md
deleted file mode 100644
index cfa3c6857979c..0000000000000
--- a/developer-docs-site/docs/concepts/index.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: "Learn about Aptos"
----
-
-# Learn Aptos Concepts
-
-Start here to get into the core concepts of the Aptos blockchain. Then review our [research papers](https://aptoslabs.com/research) and the Aptos source code found in the [Aptos-core](https://github.com/aptos-labs/aptos-core) repository of GitHub while continuing your journey through this site. The source contains READMEs and code comments invaluable for developing on Aptos.
-
-- ### [Aptos White Paper](../aptos-white-paper/index.md)
-- ### [Aptos Blockchain Deep Dive](./blockchain.md)
-- ### [Move - A Web3 Language and Runtime](./move.md)
-- ### [Accounts](./accounts.md)
-- ### [Resources](./resources.md)
-- ### [Events](./events.md)
-- ### [Transactions and States](./txns-states.md)
-- ### [Gas and Transaction Fees](./gas-txn-fee.md)
-- ### [Computing Transaction Gas](./base-gas.md)
-- ### [Blocks](./blocks.md)
-- ### [Staking](./staking.md)
-- ### [Governance](./governance.md)
diff --git a/developer-docs-site/docs/concepts/move.md b/developer-docs-site/docs/concepts/move.md
deleted file mode 100644
index ded1944f05c7f..0000000000000
--- a/developer-docs-site/docs/concepts/move.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-title: "Move - A Web3 Language and Runtime"
-slug: "move-on-aptos"
----
-
-# Move - A Web3 Language and Runtime
-
-The Aptos blockchain consists of validator nodes that run a consensus protocol. The consensus protocol agrees upon the ordering of transactions and their output when executed on the Move Virtual Machine (MoveVM). Each validator node feeds transactions, along with the current blockchain ledger state, into the MoveVM as input. The MoveVM processes this input to produce a changeset, or storage delta, as output. Once consensus agrees upon and commits the output, it becomes publicly visible. In this guide, we will introduce you to core Move concepts and how they apply to developing on Aptos.
-
-## What is Move?
-
-Move is a safe and secure programming language for Web3 that emphasizes **scarcity** and **access control**. Any asset in Move can be represented by or stored within a *resource*. **Scarcity** is enforced by default, as structs cannot be accidentally duplicated or dropped. Only structs explicitly declared at the bytecode layer with the *copy* ability can be duplicated, and only those with *drop* can be dropped.
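-As a small illustration (hypothetical module and struct names), abilities are declared on the struct definition:
-
-```rust
-module example::abilities {
-    /// Can be freely duplicated and silently discarded.
-    struct Receipt has copy, drop { amount: u64 }
-
-    /// Neither copy nor drop: the compiler forces every Gem to be
-    /// explicitly stored or unpacked, enforcing scarcity by default.
-    struct Gem { }
-}
-```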
-
-**Access control** comes from both the notion of accounts as well as module access privileges. A module in Move may either be a library or a program that can create, store, or transfer assets. Move ensures that only public module functions may be accessed by other modules. Unless a struct has a public constructor, it can only be constructed within the module that defines it. Similarly, fields within a struct can only be accessed and mutated within its module or via public accessors and setters. Furthermore, structs defined with *key* can be stored in and read from global storage only within the module that defines them. Structs with *store* can be stored within another *store* or *key* struct, inside or outside the module that defines them.
-
-In Move, a transaction's sender is represented by a *signer*, a verified owner of a specific account. The signer has the highest level of permission in Move and is the only entity capable of adding resources into an account. In addition, a module developer can require that a signer be present to access resources or modify assets stored within an account.
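-For illustration, a minimal sketch (hypothetical module and function names) of how a module can gate resource creation behind a `signer`:
-
-```rust
-module example::ticket {
-    /// A resource: `key` means it can live in global storage under an account.
-    struct Ticket has key {
-        id: u64,
-    }
-
-    /// Only the account owner (the `signer`) can store a Ticket under their address.
-    public entry fun issue(account: &signer, id: u64) {
-        move_to(account, Ticket { id });
-    }
-
-    /// Reading requires only an address; the module decides what is exposed.
-    public fun id_of(owner: address): u64 acquires Ticket {
-        borrow_global<Ticket>(owner).id
-    }
-}
-```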
-
-## Comparison to other VMs
-
-| | Aptos / Move | Solana / SeaLevel | EVM | Sui / Move |
-|---|---|---|---|---|
-| Data storage | Stored at a global address or within the owner's account | Stored within the owner's account associated with a program | Stored within the account associated with a smart contract | Stored at a global address |
-| Parallelization | Capable of inferring parallelization at runtime within Aptos | Requires specifying all data accessed | Currently serial; nothing parallel in production | Requires specifying all data accessed |
-| Transaction safety | Sequence number | Transaction uniqueness | nonces, similar to sequence numbers | Transaction uniqueness |
-| Type safety | Module structs and generics | Program structs | Contract types | Module structs and generics |
-| Function calling | Static dispatch | Static dispatch | Dynamic dispatch | Static dispatch |
-| Authenticated Storage | [Yes](../reference/glossary.md#merkle-trees) | No | Yes | No |
-| Object global accessibility | Yes | Not applicable | Not applicable | No, can be placed in other objects |
-
-## Aptos Move features
-
-Each deployment of the MoveVM has the ability to extend the core MoveVM with additional features via an adapter layer. Furthermore, MoveVM has a framework to support standard operations much like a computer has an operating system.
-
-The Aptos Move adapter features include:
-* [Move Objects](https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-10.md) that offer an extensible programming model for global access to a heterogeneous set of resources stored at a single address on-chain.
-* [Cryptography primitives](../move/move-on-aptos/cryptography) for building scalable, privacy-preserving dapps.
-* [Resource accounts](../move/move-on-aptos/resource-accounts) that offer programmable accounts on-chain, which can be useful for DAOs (decentralized autonomous organizations), shared accounts, or building complex applications on-chain.
-* [Tables](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-stdlib/sources/table.move) for storing key-value data within an account at scale.
-* Parallelism via [Block-STM](https://medium.com/aptoslabs/block-stm-how-we-execute-over-160k-transactions-per-second-on-the-aptos-blockchain-3b003657e4ba) that enables concurrent execution of transactions without any input from the user.
-* Multi-agent framework that enables a single transaction to be submitted with multiple distinct `signer` entities.
-
-The Aptos framework ships with many useful libraries:
-* An [Aptos Token Objects](https://github.com/aptos-labs/aptos-core/tree/main/aptos-move/framework/aptos-token-objects/sources) standard as defined in [AIP-11](https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-11.md) and [AIP-22](https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-22.md) that makes it possible to create interoperable NFTs with either lightweight smart contract development or none at all.
-* A [Coin standard](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/coin.move) that makes it possible to create type-safe Coins by publishing a trivial module.
-* A [Fungible asset standard](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/fungible_asset.move) as defined in [AIP-21](https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-21.md) to modernize the coin concept with better programmability and controls.
-* A [staking](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/staking_contract.move) and [delegation](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/delegation_pool.move) framework.
-* A [`type_of`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-stdlib/sources/type_info.move) service to identify at run-time the address, module, and struct name of a given type.
-* A [timestamp service](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/timestamp.move) that provides a monotonically increasing clock that maps to the actual current unixtime.
-
-These libraries are updated frequently.
-
-## More Resources
-
-Developers can begin their journey in Move by heading over to our [Move developer page](../move/move-on-aptos.md).
diff --git a/developer-docs-site/docs/concepts/node-networks-sync.md b/developer-docs-site/docs/concepts/node-networks-sync.md
deleted file mode 100755
index 1cc7e706c592b..0000000000000
--- a/developer-docs-site/docs/concepts/node-networks-sync.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-title: "Node Networks and Sync"
-slug: "node-networks-sync"
----
-
-# Node Networks and Synchronization
-
-Validator nodes and fullnodes form a hierarchical structure with validator nodes at the root and fullnodes everywhere else. The Aptos blockchain distinguishes two types of fullnodes: validator fullnodes and public fullnodes. Validator fullnodes connect directly to validator nodes and offer scalability alongside DDoS mitigation. Public fullnodes connect to validator fullnodes (or other public fullnodes) to gain low-latency access to the Aptos network.
-
-![v-fn-network.svg](../../static/img/docs/v-fn-network.svg)
-
-## Node types
-
-Aptos operates with these node types:
-
-* [Validator nodes (VNs)](../nodes/validator-node/index.md) - participate in consensus and drive [transaction processing](../concepts/txns-states.md).
-* Validator fullnodes (VFNs) - capture and keep up-to-date on the state of the blockchain. Run by the validator operator, a VFN can connect directly to the validator node and therefore serve requests from public fullnodes; otherwise, it works like a public fullnode.
-* [Public fullnodes (PFNs)](../nodes/full-node/index.md) - run by anyone who is not a validator operator. PFNs cannot connect directly to a validator node and therefore rely upon VFNs for synchronization.
-* [Archival nodes (ANs)](../guides/state-sync.md#running-archival-nodes) - fullnodes that contain all blockchain data since the start of the blockchain's history.
-
-## Separate network stacks
-The Aptos blockchain supports distinct networking stacks for various network topologies. For example, the validator network is independent of the fullnode network. The advantages of having separate network stacks include:
-* Clean separation between the different networks.
-* Better support for security preferences (e.g., bidirectional vs server authentication).
-* Allowance for isolated discovery protocols (i.e., on-chain discovery for validator node's public endpoints vs. manual configuration for private organizations).
-
-## Node synchronization
-Aptos nodes synchronize to the latest state of the Aptos blockchain through two mechanisms: consensus or state synchronization. Validator nodes will use both consensus and state synchronization to stay up-to-date, while fullnodes use only state synchronization.
-
-For example, a validator node will invoke state synchronization when it comes online for the first time or reboots (e.g., after being offline for a while). Once the validator is up-to-date with the latest state of the blockchain it will begin participating in consensus and rely exclusively on consensus to stay up-to-date. Fullnodes, however, continuously rely on state synchronization to get and stay up-to-date as the blockchain grows.
-
-## State synchronizer
-
-Each Aptos node contains a [State Synchronizer](../guides/state-sync.md) component which is used to synchronize the state of the node with its peers. This component has the same functionality for all types of Aptos nodes: it utilizes the dedicated peer-to-peer network to continuously request and disseminate blockchain data. Validator nodes distribute blockchain data within the validator node network, while fullnodes rely on other fullnodes (i.e., validator fullnodes or other public fullnodes).
-
diff --git a/developer-docs-site/docs/concepts/resources.md b/developer-docs-site/docs/concepts/resources.md
deleted file mode 100644
index 25e25de47574d..0000000000000
--- a/developer-docs-site/docs/concepts/resources.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-title: "Resources"
-id: "resources"
----
-
-# Resources
-
-On Aptos, on-chain state is organized into resources and modules. These are then stored within individual accounts. This is different from other blockchains, such as Ethereum, where each smart contract maintains its own storage space. See [Accounts](./accounts.md) for more details on accounts.
-
-## Resources vs Instances
-
-Move modules define struct definitions. Struct definitions may include abilities such as `key` or `store`. Resources are struct instances with the `key` ability that are stored in global storage or directly in an account. The `store` ability allows struct instances to be stored within resources. An example is how the APT coin is stored: `CoinStore` is the resource that contains the APT coin, while the `Coin` itself is an instance:
-
-```rust
-/// A holder of a specific coin type and associated event handles.
-/// These are kept in a single resource to ensure locality of data.
-struct CoinStore has key {
- coin: Coin,
-}
-
-/// Main structure representing a coin/token in an account's custody.
-struct Coin has store {
- /// Amount of coin this address has.
- value: u64,
-}
-```
-
-The `Coin` instance can be taken out of `CoinStore` with the owning account's permission and easily transferred to another `CoinStore` resource. It can also be kept in any other custom resource, if that resource's definition allows it, for example:
-
-```rust
-struct CustomCoinBox has key {
- coin: Coin,
-}
-```
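-As a sketch (hypothetical function name, assuming it is declared inside the same module that defines both structs, alongside `use std::signer;`), moving the `Coin` between resources could look like:
-
-```rust
-/// Remove the sender's CoinStore and re-store its Coin in a CustomCoinBox.
-/// Only the defining module can unpack CoinStore, and the signer proves
-/// the owning account's consent.
-public entry fun box_coin(account: &signer) acquires CoinStore {
-    let CoinStore { coin } = move_from<CoinStore>(signer::address_of(account));
-    move_to(account, CustomCoinBox { coin });
-}
-```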
-
-## Define resources and objects
-
-All instances and resources are defined within a module that is stored at an address. For example, `0x1234::coin::CoinStore<0x1234::coin::SomeCoin>` would be represented as:
-
-```rust
-module 0x1234::coin {
-    struct CoinStore<phantom CoinType> has key {
-        coin: Coin<CoinType>,
-    }
-
-    struct SomeCoin { }
-}
-```
-
-In this example, `0x1234` is the address, `coin` is the module, `CoinStore` is a struct that can be stored as a resource, and `SomeCoin` is a struct that is unlikely to ever be represented as an instance. The use of the phantom `CoinType` parameter allows many distinct types of `CoinStore` resources to exist, each with a different `CoinType`.
-
-## Permissions of Instances including Resources
-
-Permissions of resources and other instances are dictated by the module where the struct is defined. For example, an instance within a resource may be accessed and even removed from the resource, but the internal state cannot be changed without permission from the module where the instance's struct is defined.
-
-Ownership, on the other hand, is signified by either storing a resource under an account or by logic within the module that defines the struct.
-
-## Viewing a resource
-
-Resources are stored within accounts. A resource can be located by searching within the owner's account for its full query path, which includes the address and module that define it along with the resource's type. Resources can be viewed on the [Aptos Explorer](https://explorer.aptoslabs.com/) by searching for the owning account, or fetched directly from a fullnode's API.
-
-## How resources are stored
-
-The module that defines a struct specifies how instances may be stored. For example, events for depositing a token can be stored in the receiver account where the deposit happens or in the account where the token module is deployed. In general, storing data in individual user accounts enables a higher level of execution efficiency as there would be no state read/write conflicts among transactions from different accounts, allowing for seamless parallel execution.
diff --git a/developer-docs-site/docs/concepts/staking.md b/developer-docs-site/docs/concepts/staking.md
deleted file mode 100644
index 60ca55d6e485c..0000000000000
--- a/developer-docs-site/docs/concepts/staking.md
+++ /dev/null
@@ -1,294 +0,0 @@
----
-title: "Staking"
-slug: "staking"
----
-import ThemedImage from '@theme/ThemedImage';
-import useBaseUrl from '@docusaurus/useBaseUrl';
-
-# Staking
-
-:::tip Consensus
-We strongly recommend that you read the consensus section of [Aptos Blockchain Deep Dive](./blockchain.md#consensus) before proceeding further.
-:::
-
-In a distributed system like blockchain, executing a transaction is distinct from updating the state of the ledger and persisting the results in storage. An agreement, i.e., consensus, must be reached by a quorum of validators on the ordering of transactions and their execution results before these results are persisted in storage and the state of the ledger is updated.
-
-Anyone can participate in the Aptos consensus process, if they stake sufficient utility coin, i.e., place their utility coin into escrow. To encourage validators to participate in the consensus process, each validator's vote weight is proportional to the amount of validator's stake. In exchange, the validator is rewarded proportionally to the amount staked. Hence, the performance of the blockchain is aligned with the validator's interest, i.e., rewards.
-
-:::note
-Currently, slashing is not implemented.
-:::
-
-The current on-chain data can be found in [`staking_config::StakingConfig`](https://mainnet.aptoslabs.com/v1/accounts/0x1/resource/0x1::staking_config::StakingConfig). The configuration set is defined in [`staking_config.move`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/configs/staking_config.move).
-
-The rest of this document presents how staking works on the Aptos blockchain. See [Supporting documentation](#supporting-documentation) at the bottom for related resources.
-
-## Staking on the Aptos blockchain
-
-
-
-The Aptos staking module defines a capability that represents ownership.
-
-:::tip Ownership
-See the `OwnerCapability` defined in [stake.move](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/stake.move).
-:::
-
-The `OwnerCapability` resource can be used to control the stake pool. Three personas are supported:
-- Owner
-- Operator
-- Voter
-
-Using this owner-operator-voter model, a custodian can assume the owner persona, stake on the Aptos blockchain, and participate in Aptos governance. This model allows delegation and staking services to be built, as it separates the account in control of the funds from the other accounts (operator, voter), hence allowing secure delegation of responsibilities.
-
-This section describes how this works, using Bob and Alice in the example.
-
-### Owner
-
-The owner is the owner of the funds. For example, Bob creates an account on the Aptos blockchain. Now Bob has the `OwnerCapability` resource. Bob can assign his account’s operator address to the account of Alice, a trusted node operator, to appoint Alice as a validator.
-
-As an owner:
-
-- Bob owns the funds that will be used for staking.
-- Only Bob can add, unlock or withdraw funds.
-- Only Bob can extend the lockup period.
-- Bob can change the node operator Alice to some other node operator anytime Bob wishes to do so.
-- Bob can set the operator commission percentage.
-- The reward will be deposited into Bob's (owner's) account.
-
-### Operator
-
-A node operator is assigned by the fund owner to run the validator node and receives commission as set by the owner. The two personas, the owner and the operator, can be two separate entities or the same. For example, Alice (operator) runs the validator node, operating at the behest of Bob, the fund owner.
-
-As an operator:
-
-- Alice has permissions only to join or leave the validator set.
-- As a validator, Alice will perform the validating function.
-- Alice has the permissions to change the consensus key and network addresses. The consensus key is used by Alice to participate in the validator consensus process, i.e., to vote and propose a block. Alice is allowed to change ("rotate") this key in case this key is compromised.
-- However, Alice cannot move funds (unless Alice is the owner, i.e., Alice has the `OwnerCapability` resource).
-- The operator commission is deducted from the staker (owner) rewards and deposited into the operator account.
-
-### Voter
-
-An owner can designate a voter. This enables the voter to participate in governance. The voter will use the voter key to sign the governance votes in the transactions.
-
-:::tip Governance
-This document describes staking. See [Governance](./governance.md) for how to participate in the Aptos on-chain governance using the owner-voter model.
-:::
-
-## Validation on the Aptos blockchain
-
-Throughout the duration of an epoch, the following flow of events occurs many times (potentially thousands of times):
-
-- A validator leader is selected by a deterministic formula based on the validator reputation determined by validator's performance (including whether the validator has voted in the past or not) and stake. **This leader selection is not done by voting.**
-- The selected leader sends a proposal containing the collected quorum votes of the previous proposal and the leader's proposed order of transactions for the new block.
-- All the validators from the validator set will vote on the leader's proposal for the new block. Once consensus is reached, the block can be finalized. Hence, the actual list of votes to achieve consensus is a subset of all the validators in the validator set. This leader validator is rewarded. **Rewards are given only to the leader validator, not to the voter validators.**
-- The above flow repeats: another validator leader is selected, and the steps above are repeated for the next new block. Rewards are given at the end of the epoch.
-
-## Validator state and stake state
-
-States are defined for a validator and the stake.
-
-- **Validator state:** A validator can be in any one of these four states. Moreover, the validator can go from the inactive state (not tracked in the validator set anywhere) to any one of the other three states:
-  - inactive
-  - pending_active
-  - active
-  - pending_inactive
-- **Stake state:** A validator in the pending_inactive or active state can have their stake in any of these four states:
-  - inactive
-  - pending_active
-  - active
-  - pending_inactive
-
-  These stake states apply to existing validators in the validator set adding or removing their stake.
-
-### Validator states
-
-
-
-There are two edge cases to call out:
-1. If a validator's stake drops below the required [minimum](#minimum-and-maximum-stake), that validator will be moved from active state directly to the inactive state during an epoch change. This happens only during an epoch change.
-2. Aptos governance can also directly remove validators from the active set. **Note that governance proposals will always trigger an epoch change.**
-
-### Stake state
-
-The state of stake has more granularity than that of the validator; additional stake can be added and a portion of stake removed from an active validator.
-
-
-
-### Validator ruleset
-
-The below ruleset is applicable during the changes of state:
-
-- Voting power can change (increase or decrease) only at an epoch boundary.
-- A validator’s consensus key, and the validator and validator fullnode network addresses, can change only at an epoch boundary.
-- Pending inactive stake cannot be moved into the inactive state (and thus become withdrawable) until the lockup expires.
-- No validators in the active validator set can have their stake below the minimum required stake.
-
-## Validator flow
-
-:::tip Staking pool operations
-See [Staking pool operations](../nodes/validator-node/operator/staking-pool-operations.md) for the correct sequence of commands to run for the below flow.
-:::
-
-1. Owner initializes the stake pool with `aptos stake create-staking-contract`.
-2. When the owner is ready to deposit stake (or to have funds assigned by a staking service in exchange for the ownership capability), the owner calls `aptos stake add-stake`.
-3. When the validator node is ready, the operator can call `aptos node join-validator-set` to join the active validator set. Changes will be effective in the next epoch.
-4. Validator validates (proposes blocks as a leader-validator) and gains rewards. The stake will automatically be locked up for a fixed duration (set by governance) and automatically renewed at expiration.
-5. At any point, if the operator wants to update the consensus key or validator network addresses, they can call `aptos node update-consensus-key` or `aptos node update-validator-network-addresses`. Similar to changes to stake, the changes to consensus key or validator network addresses are only effective in the next epoch.
-6. Validator can request to unlock their stake at any time. However, their stake will only become withdrawable when their current lockup expires. This can be at most as long as the fixed lockup duration.
-7. The validator can either explicitly leave the validator set by calling `aptos node leave-validator-set`, or, if its stake drops below the required minimum, it is removed at the end of the epoch.
-8. Validator can always rejoin the validator set by going through steps 2-3 again.
-9. An owner can always switch operators by calling `aptos stake set-operator`.
-10. An owner can always switch designated voter by calling `aptos stake set-delegated-voter`.
-
-## Joining the validator set
-
-Participating as a validator node on the Aptos network works like this:
-
-1. Operator runs a validator node and configures the on-chain validator network addresses and rotates the consensus key.
-2. The owner deposits their Aptos coins as stake, or has funds assigned by a staking service. The stake must be at least the minimum amount required.
-3. **The validator node cannot sync until the stake pool becomes active.**
-4. Operator validates and gains rewards.
-5. The stake pool is automatically locked up for a fixed duration (set by Aptos governance) and automatically renewed at expiration. You cannot withdraw any of your staked amount until your lockup period expires. See [stake.move#L728](https://github.com/aptos-labs/aptos-core/blob/00a234cc233b01f1a7e1680f81b72214a7af91a9/aptos-move/framework/aptos-framework/sources/stake.move#L728).
-6. Operator must wait until the new epoch starts before their validator becomes active.
-
-:::tip Joining the validator set
-For step-by-step instructions on how to join the validator set, see: [Joining Validator Set](../nodes/validator-node/operator/staking-pool-operations.md#joining-validator-set).
-:::
-
-### Minimum and maximum stake
-
-You must stake the required minimum amount to join the validator set. Moreover, you can only stake up to the maximum stake amount. The current required minimum for staking is 1M APT tokens and the maximum is 50M APT tokens.
-
-If at any time after joining the validator set, your current staked amount exceeds the maximum allowed stake (for example as the rewards are added to your staked amount), then your voting power and the rewards will be calculated only using the maximum allowed stake amount, and not your current staked amount.
-
-The owner can withdraw part of the stake and leave their balance below the required minimum. In that case, the stake pool will be removed from the validator set when the next epoch starts.
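The minimum and maximum limits above can be sketched in a few lines. This is an illustrative model only: the authoritative values and logic live on chain in `staking_config.move` and `stake.move`, and the function names below are assumptions for the sketch.

```python
# Illustrative sketch of the stake limits described above.
# The constants come from this document, not from a live chain query.
MIN_STAKE = 1_000_000    # current required minimum, in APT
MAX_STAKE = 50_000_000   # current maximum, in APT

def effective_stake(staked_amount: int) -> int:
    """Stake counted toward voting power and rewards (capped at the maximum)."""
    return min(staked_amount, MAX_STAKE)

def stays_in_validator_set(staked_amount: int) -> bool:
    """A pool below the minimum is removed at the next epoch boundary."""
    return staked_amount >= MIN_STAKE
```

For example, a pool holding 60M APT earns rewards and votes as if it held only 50M APT, while a pool that falls to 999,999 APT is dropped from the set at the next epoch change.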
-
-### Automatic lockup duration
-
-When you join the validator set, your stake will automatically be locked up for a fixed duration that is set by the Aptos governance.
-
-### Automatic lockup renewal
-
-When your lockup period expires, it will be automatically renewed, so that you can continue to validate and receive the rewards.
-
-### Unlocking your stake
-
-You can request to unlock your stake at any time. However, your stake will only become withdrawable when your current lockup expires. This can be at most as long as the fixed lockup duration. You will continue earning rewards on your stake until it becomes withdrawable.
-
-The principal amount is updated when any of the following actions occur:
-1. Operator [requests commission unlock](../nodes/validator-node/operator/staking-pool-operations.md#requesting-commission)
-2. Staker (owner) withdraws funds
-3. Staker (owner) switches operators
-
-When the staker unlocks stake, this also triggers a commission unlock. The full commission amount for any staking rewards earned is unlocked; this is not proportional to the unlocked stake amount. Commission is distributed to the operator after the lockup ends, when `request commission` is called a second time or when the staker withdraws (distributes) the unlocked stake.
-
-### Resetting the lockup
-
-When the lockup period expires, it is automatically renewed by the network. However, the owner can explicitly reset the lockup.
-
-:::tip Set by the governance
-
-The lockup duration is decided by Aptos governance, i.e., by the proposals that Aptos community members vote on, and not by any single entity such as Aptos Labs.
-:::
-
-## Epoch
-
-An epoch in the Aptos blockchain is defined as a duration of time, in seconds, during which a number of blocks are voted on by the validators, the validator set is updated, and the rewards are distributed to the validators.
-
-:::tip Epoch on Mainnet
-The Aptos mainnet epoch is set as 7200 seconds (two hours).
-:::
-
-### Triggers at the epoch start
-
-:::tip
-See the [Triggers at epoch boundary section of `stake.move`](https://github.com/aptos-labs/aptos-core/blob/256618470f2ad7d89757263fbdbae38ac7085317/aptos-move/framework/aptos-framework/sources/stake.move#L1036) for the full code.
-:::
-
-At the start of each epoch, the following key events are triggered:
-
-- Update the validator set by adding the pending active validators to the active validator set and removing the pending inactive validators from the active validator set.
-- Move any pending active stake to active stake, and any pending inactive stake to inactive stake.
-- The staking pool's voting power in this new epoch is updated to the total active stake.
-- Automatically renew a validator's lockup for the validators who will still be in the validator set in the next epoch.
-- The voting power of each validator in the validator set is updated to be the corresponding staking pool's voting power.
-- Rewards are distributed to the validators that participated in the previous epoch.
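The stake-state bookkeeping above can be sketched as a small function. This is illustrative only: the real logic is the epoch-boundary flow in `stake.move`, and the dictionary field names are assumptions for the sketch.

```python
# Illustrative sketch of the epoch-boundary stake transitions described above.
def on_new_epoch(pool: dict) -> dict:
    """Move pending_active -> active and pending_inactive -> inactive,
    then derive the pool's voting power from its total active stake."""
    new = dict(pool)                            # leave the input untouched
    new["active"] += new["pending_active"]      # pending active stake becomes active
    new["inactive"] += new["pending_inactive"]  # pending inactive stake becomes withdrawable
    new["pending_active"] = 0
    new["pending_inactive"] = 0
    new["voting_power"] = new["active"]         # voting power = total active stake
    return new
```

A pool with 100 active and 20 pending-active stake would enter the new epoch with 120 active stake and a voting power of 120.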
-
-## Rewards
-
-Rewards for staking are calculated by using:
-
-1. The `rewards_rate`, an annual percentage yield (APY), i.e., rewards accrue as compound interest on your current staked amount.
-2. Your staked amount.
-3. Your proposer performance, i.e., the success rate of the block proposals you make as a validator.
-
-:::tip Rewards rate
-The `rewards_rate` is set by the Aptos governance. Also see [Validation on the Aptos blockchain](#validation-on-the-aptos-blockchain).
-:::
-
-### Rewards formula
-
-See below the formula used to calculate rewards to the validator:
-
-```
-Reward = staked_amount * rewards_rate per epoch * (Number of successful proposals by the validator / Total number of proposals made by the validator)
-```
-
-### Rewards paid every epoch
-
-Rewards are paid every epoch. Any reward you (i.e., the validator) earn at the end of the current epoch is added to your staked amount. The reward at the end of the next epoch is then calculated based on the increased staked amount (i.e., the original staked amount plus the added reward), and so on.
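The rewards formula and the compounding behavior can be sketched together. This is an illustrative model only: `rate_per_epoch` and the sample numbers are assumptions, not on-chain values.

```python
# Illustrative sketch of the rewards formula described above:
# Reward = stake * per-epoch rate * (successful proposals / total proposals).
def epoch_reward(staked: float, rate_per_epoch: float,
                 successful: int, total: int) -> float:
    if total == 0:
        return 0.0  # no proposals made, no reward
    return staked * rate_per_epoch * (successful / total)

# Rewards compound: each epoch's reward is added to the stake
# before the next epoch's reward is computed.
stake = 1_000_000.0
for _ in range(3):
    stake += epoch_reward(stake, rate_per_epoch=0.0001, successful=9, total=10)
```

With a 90% proposal success rate, each epoch's reward is slightly larger than the last because it accrues on the grown stake.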
-
-### Rewards based on the proposer performance
-
-The validator rewards calculation uses the validator's proposer performance. Once you are in the validator set, you can propose in every epoch. The more successfully you propose, i.e., your proposals pass, the more rewards you will receive.
-
-Note that rewards are given only to the **leader-validators**, i.e., validators who propose the new block, and not to the **voter-validators** who vote on the leader's proposal for the new block. See [Validation on the Aptos blockchain](#validation-on-the-aptos-blockchain).
-
-:::tip Rewards are subject to lockup period
-All validator rewards are also subject to the lockup period, as they are added to the original staked amount.
-:::
-
-## Leaving the validator set
-
-:::tip
-See the Aptos Stake module in the Move language at [stake.move](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/stake.move).
-:::
-
-- At any time you can call the following sequence of functions to leave the validator set:
- - Call `Stake::unlock` to unlock your stake amount, and
- - Either call `Stake::withdraw` to withdraw your staked amount at the next epoch, or call `Stake::leave_validator_set`.
-
-## Rejoining the validator set
-
-When you leave a validator set, you can rejoin by depositing the minimum required stake amount.
-
-## Supporting documentation
-
-* [Current on-chain data](https://mainnet.aptoslabs.com/v1/accounts/0x1/resource/0x1::staking_config::StakingConfig)
-* [Staking Pool Operations](../nodes/validator-node/operator/staking-pool-operations.md)
-* [Delegation Pool Operations](../nodes/validator-node/operator/delegation-pool-operations.md)
-* [Configuration file `staking_config.move`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/configs/staking_config.move)
-* [Contract file `staking_contract.move`](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/staking_contract.move) covering requesting commissions
-* [All staking-related `.move` files](https://github.com/aptos-labs/aptos-core/tree/main/aptos-move/framework/aptos-framework/sources)
diff --git a/developer-docs-site/docs/concepts/txns-states.md b/developer-docs-site/docs/concepts/txns-states.md
deleted file mode 100755
index 71e5a0ff44280..0000000000000
--- a/developer-docs-site/docs/concepts/txns-states.md
+++ /dev/null
@@ -1,122 +0,0 @@
----
-title: "Transactions and States"
-slug: "txns-states"
----
-
-import ThemedImage from '@theme/ThemedImage';
-import useBaseUrl from '@docusaurus/useBaseUrl';
-
-# Transactions and States
-
-The Aptos blockchain stores three types of data:
-
-* **Transactions**: Transactions represent an intended operation being performed by an account on the blockchain (e.g., transferring assets).
-* **States**: The (blockchain ledger) state represents the accumulation of the output of execution of transactions, the values stored within all [resources](./resources).
-* [**Events**](./events.md): Ancillary data published by the execution of a transaction.
-
-:::tip
-Only transactions can change the ledger state.
-:::
-
-## Transactions
-
-Aptos transactions contain information such as the sender’s account address, authentication from the sender, the desired operation to be performed on the Aptos blockchain, and the amount of gas the sender is willing to pay to execute the transaction.
-
-### Transaction states
-
-A transaction may end in one of the following states:
-
-* Committed on the blockchain and executed. This is considered as a successful transaction.
-* Committed on the blockchain and aborted. The abort code indicates why the transaction failed to execute.
-* Discarded during transaction submission due to a validation check such as insufficient gas, invalid transaction format, or incorrect key.
-* Discarded after transaction submission but before attempted execution. This could be caused by timeouts or insufficient gas due to other transactions affecting the account.
-
-The sender’s account will be charged gas for any committed transactions.
-
-During transaction submission, the submitter is notified of successful submission or a reason for failing validations otherwise.
-
-A transaction that is successfully submitted but ultimately discarded may have no visible state in any accessible Aptos node or within the Aptos network. A user can attempt to resubmit the same transaction to re-validate the transaction. If the submitting node believes that this transaction is still valid, it will return an error stating that an identical transaction has been submitted.
-
-The submitter can also increase the gas price slightly to help the transaction make progress and to adjust for whatever caused the transaction to be discarded further downstream.
-
-:::tip Read more
-See [Aptos Blockchain Deep Dive](./blockchain.md) for a comprehensive description of the Aptos transaction lifecycle.
-:::
-
-### Contents of a Transaction
-
-A signed transaction on the blockchain contains the following information:
-
-- **Signature**: The sender uses a digital signature to verify that they signed the transaction (i.e., authentication).
-- **Sender address**: The sender's [account address](./accounts.md#account-address).
-- **Sender public key**: The public authentication key that corresponds to the private authentication key used to sign the transaction.
-- **Payload**: Indicates an action, or set of actions, to perform on the sender's behalf. If this is a Move function, it directly calls into Move bytecode on the chain. Alternatively, it may be a Move bytecode peer-to-peer [transaction script](../reference/glossary.md#transaction-script). It also contains a list of inputs to the function or script. In this example, it is a function call to transfer an amount of Aptos Coins from Alice's account to Bob's account, where Alice's account is implied by the sender of the transaction and Bob's account and the amount are specified as transaction inputs.
-- [**Gas unit price**](../reference/glossary.md#gas-unit-price): The amount the sender is willing to pay per unit of [gas](./gas-txn-fee.md) to execute the transaction, represented in Octas (units of 10^-8 APT). Gas is a way to pay for computation and storage; a gas unit is an abstract measurement of computation with no inherent real-world value.
-- [**Maximum gas amount**](../reference/glossary.md#maximum-gas-amount): The [maximum number of gas units](./gas-txn-fee.md#gas-and-transaction-fee-on-the-aptos-blockchain) the sender is willing to pay for this transaction. Gas charges equal the base gas cost (covering computation and IO) multiplied by the gas unit price; gas costs also include storage, under an APT-fixed-price storage model.
-- **Sequence number**: This is an unsigned integer that must be equal to the sender's account [sequence number](./accounts.md#account-sequence-number) at the time of execution.
-- **Expiration time**: A timestamp after which the transaction ceases to be valid (i.e., expires).
-- **Sequence number**: This is an unsigned integer that must be equal to the sender's account [sequence number](./accounts.md#account-sequence-number) at the time of execution.
-- **Expiration time**: A timestamp after which the transaction ceases to be valid (i.e., expires).
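The fields above can be summarized in a simple sketch. The class and field names below are illustrative assumptions for exposition, not the actual BCS-defined types in aptos-core.

```python
from dataclasses import dataclass

@dataclass
class SignedTransactionSketch:
    """Illustrative summary of the signed-transaction fields described above."""
    sender_address: str    # the sender's account address
    sender_public_key: str # matches the private key used to sign
    payload: dict          # entry function or script, plus its arguments
    gas_unit_price: int    # Octas (10^-8 APT) per gas unit
    max_gas_amount: int    # maximum gas units the transaction may consume
    sequence_number: int   # must equal the sender account's sequence number
    expiration_time: int   # Unix timestamp after which the transaction expires
    signature: str         # the sender's digital signature

txn = SignedTransactionSketch(
    sender_address="0xa11ce",
    sender_public_key="0x...",
    payload={"function": "0x1::coin::transfer", "arguments": ["0xb0b", 1000]},
    gas_unit_price=100,
    max_gas_amount=2000,
    sequence_number=7,
    expiration_time=1_700_000_000,
    signature="0x...",
)
```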
-
-### Types of transaction payloads
-Within a given transaction, the two most common types of payloads include:
-
-- An entry point
-- [A script (payload)](../move/move-on-aptos/move-scripts)
-
-Currently, both the [Python](https://aptos.dev/sdks/python-sdk) and [Typescript](https://aptos.dev/sdks/ts-sdk/index) SDKs support both payload types. This guide points out many of those entry points, such as `coin::transfer` and `aptos_account::create_account`.
-
-All operations on the Aptos blockchain should be available via entry point calls. While one could submit multiple transactions calling entry points in series, many such operations may benefit from being called atomically from a single transaction. A script payload transaction can call any entry point or public function defined within any module.
-
-:::tip Read more
-See the tutorial on [Your First Transaction](../tutorials/first-transaction.md) for generating valid transactions.
-:::
-
-:::note Transaction generation
-The Aptos REST API supports generating BCS-encoded transactions from JSON. This is useful for rapid prototyping, but be cautious using it in Mainnet as this places a lot of trust on the fullnode generating the transaction.
-:::
-
-## States
-
-The Aptos blockchain's ledger state, or global state, represents the state of all accounts in the Aptos blockchain. Each validator node in the blockchain must know the latest version of the global state to execute any transaction.
-
-Anyone can submit a transaction to the Aptos blockchain to modify the ledger state. Upon execution of a transaction, a transaction output is generated. A transaction output contains zero or more operations to manipulate the ledger state (called **write sets**), a vector of resulting events, the amount of gas consumed, and the executed transaction status.
-
-### Proofs
-
-The Aptos blockchain uses proof to verify the authenticity and correctness of the blockchain data.
-
-Data within the Aptos blockchain is replicated across the network. Each validator and fullnode's [storage](./validator-nodes#storage) is responsible for persisting the agreed upon blocks of transactions and their execution results to the database.
-
-The blockchain is represented as an ever-growing [Merkle tree](../reference/glossary.md#merkle-trees), where each leaf appended to the tree represents a single transaction executed by the blockchain.
-
-All operations executed by the blockchain and all account states can be verified cryptographically. These cryptographic proofs ensure that:
-- The validator nodes agree on the state.
-- The client does not need to trust the entity from which it is receiving data. For example, if a client fetches the last **n** transactions from an account, a proof can attest that no transactions were added, omitted or modified in the response. The client may also query for the state of an account, ask whether a specific transaction was processed, and so on.
-
-### Versioned database
-
-The ledger state is versioned using an unsigned 64-bit integer corresponding to the number of transactions the system has executed. This versioned database allows the validator nodes to:
-
-- Execute a transaction against the ledger state at the latest version.
-- Respond to client queries about ledger history at both current and previous versions.
-
-## Transactions change ledger state
-
-
-
-The above figure shows how executing transaction T*i* changes the state of the Aptos blockchain from S*i-1* to S*i*.
-
-In the figure:
-
-- Accounts **A** and **B**: Represent Alice's and Bob's accounts on the Aptos blockchain.
-- **S*i-1*** : Represents the (*i-1*)-th state of the blockchain. In this state, Alice's account **A** has a balance of 110 APT (Aptos coins), and Bob's account **B** has a balance of 52 APT.
-- **T*i*** : This is the *i*-th transaction executed on the blockchain. In this example, it represents Alice sending 10 APT to Bob.
-- **Apply()**: This is a deterministic function that always returns the same final state for a specific initial state and a specific transaction. If the current state of the blockchain is **S*i-1***, and transaction **T*i*** is executed on the state **S*i-1***, then the new state of the blockchain is always **S*i***. The Aptos blockchain uses the [Move language](../move/book/SUMMARY.md) to implement the deterministic execution function **Apply()**.
-- **S*i*** : This is the *i*-th state of the blockchain. When the transaction **T*i*** is applied to the blockchain, it generates the new state **S*i*** (the outcome of **Apply(S*i-1*, T*i*)**). This causes Alice’s account balance to be reduced by 10 to 100 APT and Bob’s account balance to be increased by 10 to 62 APT. The new state **S*i*** shows these updated balances.
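The deterministic **Apply()** function from the figure can be modeled minimally. This is an illustrative sketch only: in the real system, Apply() is the Move VM executing the transaction against the full ledger state.

```python
# Illustrative sketch of the deterministic state transition Apply(S_{i-1}, T_i) -> S_i.
def apply(state: dict, txn: tuple) -> dict:
    """Apply a simple transfer transaction (sender, receiver, amount) to a state."""
    sender, receiver, amount = txn
    new_state = dict(state)       # the previous state is left unchanged
    new_state[sender] -= amount
    new_state[receiver] += amount
    return new_state

s_prev = {"A": 110, "B": 52}             # Alice has 110 APT, Bob has 52 APT
s_next = apply(s_prev, ("A", "B", 10))   # Alice sends 10 APT to Bob
```

Because the function is deterministic, applying the same transaction to the same initial state always yields the same final state, which is what lets every validator compute an identical **S*i***.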
diff --git a/developer-docs-site/docs/concepts/validator-nodes.md b/developer-docs-site/docs/concepts/validator-nodes.md
deleted file mode 100755
index 36f836a258624..0000000000000
--- a/developer-docs-site/docs/concepts/validator-nodes.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-title: "Validator Nodes Overview"
-slug: "validator-nodes"
----
-import BlockQuote from "@site/src/components/BlockQuote";
-
-An Aptos node is an entity of the Aptos ecosystem that tracks the state of the Aptos blockchain. Clients interact with the blockchain via Aptos nodes. There are two types of nodes:
-* Validator nodes
-* [Fullnodes](./fullnodes.md)
-
-Each Aptos node comprises several logical components:
-* [REST service](../reference/glossary.md#rest-service)
-* [Mempool](#mempool)
-* [Consensus (disabled in fullnodes)](#consensus)
-* [Execution](#execution)
-* [Virtual Machine](#virtual-machine-vm)
-* [Storage](#storage)
-* [State synchronizer](#state-synchronizer)
-
-The [Aptos-core](../reference/glossary.md#aptos-core) software can be configured to run as a validator node or as a fullnode.
-
-# Overview
-
-When a transaction is submitted to the Aptos blockchain, validator nodes run a distributed [consensus protocol](../reference/glossary.md#consensus-protocol), execute the transaction, and store the transaction and the execution results on the blockchain. Validator nodes decide which transactions will be added to the blockchain and in which order.
-
-The Aptos blockchain uses a Byzantine Fault Tolerance (BFT) consensus protocol for validator nodes to agree on the ledger of finalized transactions and their execution results. Validator nodes process these transactions and include them in their local copy of the blockchain database. This means that up-to-date validator nodes always maintain a copy of the current [state](../reference/glossary.md#state) of the blockchain, locally.
-
-Validator nodes communicate directly with other validator nodes over a private network. [Fullnodes](./fullnodes.md) are an external validation and/or dissemination resource for the finalized transaction history. They receive transactions from peers and may re-execute them locally (the same way a validator executes transactions). Fullnodes store the results of re-executed transactions in local storage. In doing so, they can challenge any foul play by validators and provide evidence if there is any attempt to rewrite or modify the blockchain history. This helps mitigate validator corruption and/or collusion.
-
-
-The AptosBFT consensus protocol can tolerate up to one-third of the validator nodes being faulty or malicious.
-
-
-## Validator node components
-
-![validator.svg](../../static/img/docs/validator.svg)
-### Mempool
-
-Mempool is a component within each node that holds an in-memory buffer of transactions that have been submitted to the blockchain, but not yet agreed upon or executed. This buffer is replicated between validator nodes and fullnodes.
-
-The REST service of a fullnode sends transactions to a validator node's mempool. Mempool performs various checks on the transactions to ensure transaction validity and protect against DoS attacks. When a new transaction passes initial verification and is added to mempool, it is then distributed to the mempools of other validator nodes in the network.
-
-When a validator node temporarily becomes a leader in the consensus protocol, consensus pulls the transactions from mempool and proposes a new transaction block. This block is broadcasted to other validators and contains a total ordering over all transactions in the block. Each validator then executes the block and submits votes on whether or not to accept the new block proposal.
-
-### Consensus
-
-Consensus is the component that is responsible for ordering blocks of transactions and agreeing on the results of execution by participating in the consensus protocol with other validator nodes in the network.
-
-### Execution
-
-Execution is the component that coordinates the execution of a block of transactions and maintains a transient state. Consensus votes on this transient state. Execution maintains an in-memory representation of the execution results until consensus commits the block to the distributed database. Execution uses the virtual machine to execute transactions. Execution acts as the glue layer between the inputs of the system (represented by transactions), storage (providing a persistency layer), and the virtual machine (for execution).
-
-### Virtual machine (VM)
-
-The virtual machine (VM) is used to run the Move program within each transaction and determine execution results. A node's mempool uses the VM to perform verification checks on transactions, while execution uses the VM to execute transactions.
-
-### Storage
-
-The storage component is used to persist agreed upon blocks of transactions and their execution results to the local database.
-
-### State synchronizer
-
-Nodes use their state synchronizer component to “catch up” to the latest state of the blockchain and stay up-to-date.
diff --git a/developer-docs-site/docs/guides/_category_.json b/developer-docs-site/docs/guides/_category_.json
deleted file mode 100644
index e79050872b49e..0000000000000
--- a/developer-docs-site/docs/guides/_category_.json
+++ /dev/null
@@ -1,4 +0,0 @@
-{
- "label": "Guides",
- "position": 2
-}
diff --git a/developer-docs-site/docs/guides/account-management/key-rotation.md b/developer-docs-site/docs/guides/account-management/key-rotation.md
deleted file mode 100644
index a0f5e2493ac00..0000000000000
--- a/developer-docs-site/docs/guides/account-management/key-rotation.md
+++ /dev/null
@@ -1,141 +0,0 @@
----
-title: "Rotating an authentication key"
-id: "key-rotation"
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-Aptos Move accounts have a public address, an authentication key, a public key, and a private key. The public address is permanent, always matching the account's initial authentication key.
-
-The Aptos account model facilitates the unique ability to rotate an account's private key. Since an account's address is the *initial* authentication key, the ability to sign for an account can be transferred to another private key without changing its public address.
-
-In this guide, we show examples for how to rotate an account's authentication key using a few of the various Aptos SDKs.
-
-Here are the installation links for the SDKs we will cover in this example:
-
-* [Aptos CLI](../../tools/aptos-cli)
-* [Typescript SDK](../../sdks/ts-sdk/index)
-* [Python SDK](../../sdks/python-sdk)
-
-:::warning
-Some of the following examples use private keys. Do not share your private keys with anyone.
-:::
-
-## How to rotate an account's authentication key
-
-
-
-Run the following to initialize two test profiles. Leave the inputs blank both times you're prompted for a private key.
-
-```shell title="Initialize two test profiles on devnet"
-aptos init --profile test_profile_1 --network devnet --assume-yes
-aptos init --profile test_profile_2 --network devnet --assume-yes
-```
-```shell title="Rotate the authentication key for test_profile_1 to test_profile_2's authentication key"
-aptos account rotate-key --profile test_profile_1 --new-private-key
-```
-:::info Where do I view the private key for a profile?
-Public, private, and authentication keys for Aptos CLI profiles are stored in `~/.aptos/config.yaml` if your config is set to `Global`, and in `<workspace>/.aptos/config.yaml` if it's set to `Workspace`.
-
-To see your config settings, run `aptos config show-global-config`.
-:::
-
-```shell title="Confirm yes and create a new profile so that you can continue to sign for the resource account"
-Do you want to submit a transaction for a range of [52000 - 78000] Octas at a gas unit price of 100 Octas? [yes/no] >
-yes
-...
-
-Do you want to create a profile for the new key? [yes/no] >
-yes
-...
-
-Enter the name for the profile
-test_profile_1_rotated
-
-Profile test_profile_1_rotated is saved.
-```
-You can now use the profile like any other account.
-
-In your `config.yaml` file, `test_profile_1_rotated` will retain its original public address but have a new public and private key that matches `test_profile_2`.
-
-The authentication keys aren't shown in the `config.yaml` file, but we can verify the change with the following commands:
-
-```shell title="Verify the authentication keys are now equal with view functions"
-# View the authentication key of `test_profile_1_rotated`
-aptos move view --function-id 0x1::account::get_authentication_key --args address:test_profile_1_rotated
-
-# View the authentication key of `test_profile_2`, it should equal the above.
-aptos move view --function-id 0x1::account::get_authentication_key --args address:test_profile_2
-```
-
-```json title="Example output from the previous two commands"
-{
- "Result": [
- "0x458fba533b84717c91897cab05047c1dd7ac2ea73e75c77281781f5b7fec180c"
- ]
-}
-{
- "Result": [
- "0x458fba533b84717c91897cab05047c1dd7ac2ea73e75c77281781f5b7fec180c"
- ]
-}
-```
-
-
-
-
-This program creates two accounts on devnet, Alice and Bob, funds them, then rotates Alice's authentication key to Bob's.
-
-View the full example for this code [here](https://github.com/aptos-labs/aptos-core/tree/main/ecosystem/typescript/sdk/examples/typescript/rotate_key.ts).
-
-The function to rotate is very simple:
-```typescript title="Typescript SDK rotate authentication key function"
-:!: static/sdks/typescript/examples/typescript-esm/rotate_key.ts rotate_key
-```
-Commands to run the example script:
-```shell title="Navigate to the typescript SDK directory, install dependencies and run rotate_key.ts"
-cd ~/aptos-core/ecosystem/typescript/sdk/examples/typescript-esm
-pnpm install && pnpm rotate_key
-```
-```shell title="rotate_key.ts output"
-Account Address Auth Key Private Key Public Key
-------------------------------------------------------------------------------------------------
-Alice 0x213d...031013 '0x213d...031013' '0x00a4...b2887b' '0x859e...08d2a9'
-Bob 0x1c06...ac3bb3 0x1c06...ac3bb3 0xf2be...9486aa 0xbbc1...abb808
-
-...rotating...
-
-Alice 0x213d...031013 '0x1c06...ac3bb3' '0xf2be...9486aa' '0xbbc1...abb808'
-Bob 0x1c06...ac3bb3 0x1c06...ac3bb3 0xf2be...9486aa 0xbbc1...abb808
-```
-
-
-
-This program creates two accounts on devnet, Alice and Bob, funds them, then rotates Alice's authentication key to Bob's.
-
-View the full example for this code [here](https://github.com/aptos-labs/aptos-core/tree/main/ecosystem/python/sdk/examples/rotate-key.py).
-
-Here's the relevant code that rotates Alice's keys to Bob's:
-```python title="Python SDK rotate authentication key function"
-:!: static/sdks/python/examples/rotate_key.py rotate_key
-```
-Commands to run the example script:
-```shell title="Navigate to the Python SDK directory, install dependencies, and run rotate_key.py"
-cd ~/aptos-core/ecosystem/python/sdk
-poetry install && poetry run python -m examples.rotate-key
-```
-```shell title="rotate_key.py output"
-Account Address Auth Key Private Key Public Key
-------------------------------------------------------------------------------------------------
-Alice 0x213d...031013 '0x213d...031013' '0x00a4...b2887b' '0x859e...08d2a9'
-Bob 0x1c06...ac3bb3 0x1c06...ac3bb3 0xf2be...9486aa 0xbbc1...abb808
-
-...rotating...
-
-Alice 0x213d...031013 '0x1c06...ac3bb3' '0xf2be...9486aa' '0xbbc1...abb808'
-Bob 0x1c06...ac3bb3 0x1c06...ac3bb3 0xf2be...9486aa 0xbbc1...abb808
-```
-
-
-
diff --git a/developer-docs-site/docs/guides/building-from-source.md b/developer-docs-site/docs/guides/building-from-source.md
deleted file mode 100644
index fdfd7a46ee141..0000000000000
--- a/developer-docs-site/docs/guides/building-from-source.md
+++ /dev/null
@@ -1,176 +0,0 @@
----
-title: "Building Aptos From Source"
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-# Building Aptos From Source
-
-[Binary releases are available](../tools/aptos-cli/install-cli/index.md), but if you want to build from source or develop on the Aptos tools, this is how.
-
-## Supported operating systems
-
-Aptos can be built on various operating systems, including Linux, macOS, and Windows. Aptos is tested extensively on Linux and macOS, and less so on Windows. Here are the versions we use:
-
-* Linux - Ubuntu version 20.04 and 22.04
-* macOS - macOS Monterey and later
-* Microsoft Windows - Windows 10, 11 and Windows Server 2022+
-
-## Clone the Aptos-core repo
-
-
-1. Install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). Git is required to clone the aptos-core repo and must be installed before continuing; follow the instructions on the official Git website.
-
-1. Clone the Aptos repository. Open a command line prompt (Terminal on macOS / Linux, PowerShell on Windows) and run the following command to clone the Git repository from GitHub.
-
- ```
- git clone https://github.com/aptos-labs/aptos-core.git
- ```
-
-1. Now let's go into the newly created directory `aptos-core` by *changing directory* or `cd`ing into it:
- ```
- cd aptos-core
- ```
-
-### (Optional) Check out release branch
-
-Optionally, check out a release branch to install an Aptos node. We suggest you check out `devnet` for your first development. See [Choose a network](./system-integrators-guide.md#choose-a-network) for an explanation of their differences.
-
-
-Release Branches:
-
-* Devnet: `git checkout --track origin/devnet`
-* Testnet: `git checkout --track origin/testnet`
-* Mainnet: `git checkout --track origin/mainnet`
-
-## Set up build dependencies
-
-Prepare your developer environment by installing the dependencies needed to build, test and inspect Aptos Core.
-No matter your selected mechanism for installing these dependencies, **it is imperative you keep your entire toolchain up-to-date**. If you encounter issues later, update all packages and try again.
-
-
-macOS
-
-**> Using the automated script**
-
-1. Ensure you have `brew` package manager installed: https://brew.sh/
-1. Run the dev setup script to prepare your environment: `./scripts/dev_setup.sh`
-1. Update your current shell environment: `source ~/.cargo/env`.
-
-:::tip
-You can see the available options for the script by running `./scripts/dev_setup.sh --help`
-:::
-
-**> Manual installation of dependencies**
-
-If the script above doesn't work for you, you can install these manually, but it's **not recommended**.
-
-1. [Rust](https://www.rust-lang.org/tools/install)
-1. [CMake](https://cmake.org/download/)
-1. [LLVM](https://releases.llvm.org/)
-1. [LLD](https://lld.llvm.org/)
-
-
-
-
-Linux
-
-**> Using the automated script**
-
-1. Run the dev setup script to prepare your environment: `./scripts/dev_setup.sh`
-1. Update your current shell environment: `source ~/.cargo/env`
-
-:::tip
-You can see the available options for the script by running `./scripts/dev_setup.sh --help`
-:::
-
-**> Manual installation of dependencies**
-
-If the script above does not work for you, you can install these manually, but it is **not recommended**:
-
-1. [Rust](https://www.rust-lang.org/tools/install).
-1. [CMake](https://cmake.org/download/).
-1. [LLVM](https://releases.llvm.org/).
-1. [libssl-dev](https://packages.ubuntu.com/jammy/libssl-dev) and [libclang-dev](https://packages.ubuntu.com/jammy/libclang-dev)
-
-
-
-
-
-Windows
-
-**> Using the automated script**
-
-1. Open a PowerShell terminal as an administrator.
-1. Run the dev setup script to prepare your environment: `PowerShell -ExecutionPolicy Bypass -File ./scripts/windows_dev_setup.ps1`
-
-**> Manual installation of dependencies**
-
-1. Install [Rust](https://www.rust-lang.org/tools/install).
-1. Install [LLVM](https://releases.llvm.org/). Visit their GitHub repository for the [latest prebuilt release](https://github.com/llvm/llvm-project/releases/tag/llvmorg-15.0.7).
-1. Install [Microsoft Visual Studio Build Tools for Windows](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2022). During setup, select "Desktop development with C++" and three additional options: MSVC C++ build tools, Windows 10/11 SDK, and C++ CMake tools for Windows.
-1. If on Windows ARM, install [Visual Studio](https://visualstudio.microsoft.com/vs).
-1. If not already installed during Visual Studio/Build Tools installation, install [CMake](https://cmake.org/download/).
-
-1. Open a new PowerShell terminal after installing all dependencies.
-
-
-
-### Additional Tools
-
-If you used `scripts/dev_setup.sh` for macOS or Linux setup, additional tools are optionally available.
-
-#### TypeScript
-Typically only needed for _developing_ the TypeScript SDK.
-The released [TypeScript SDK](../sdks/ts-sdk/index) can instead be installed from npm/pnpm/yarn.
-```bash
-scripts/dev_setup.sh -J
-```
-
-#### PostgreSQL
-Used in the Indexer.
-```bash
-scripts/dev_setup.sh -P
-```
-
-#### Move Prover Tools
-```bash
-scripts/dev_setup.sh -y -p
-```
-
-
-
-Now your basic Aptos development environment is ready. Head over to our [Developer Tutorials](../tutorials/index.md) to get started in Aptos.
-
-## Building Aptos
-
-The simplest check that you have a working environment is to build everything and run the tests.
-
-```bash
-cargo build
-cargo test -- --skip prover
-```
-
-If you installed the Move Prover Tools above then you don't need to skip the prover tests.
-
-Other documentation for specific tools recommends patterns for `cargo build` and `cargo run`:
-
-* [Run a Local Development Network](../guides/local-development-network.md)
-* [Indexer](../indexer/legacy/indexer-fullnode.md)
-* [Node Health Checker](../nodes/measure/node-health-checker.md)
-* [Running a Local Multinode Network](running-a-local-multi-node-network.md)
diff --git a/developer-docs-site/docs/guides/data-pruning.md b/developer-docs-site/docs/guides/data-pruning.md
deleted file mode 100644
index 95a5eb4825afc..0000000000000
--- a/developer-docs-site/docs/guides/data-pruning.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-title: "Data Pruning"
-slug: "data-pruning"
----
-
-# Data Pruning
-
-When a validator node is running, it participates in consensus to execute
-transactions and commit new data to the blockchain. Similarly, when fullnodes
-are running, they sync the new blockchain data through [state synchronization](../guides/state-sync.md).
-As the blockchain grows, storage disk space can be managed by pruning old
-blockchain data, specifically the **ledger history**, which
-contains old transactions. By default, ledger pruning is enabled on all
-nodes with a pruning window that can be configured. This document describes
-how you can configure the pruning behavior.
-
-:::note
-By default, the ledger pruner keeps the 150 million most recent transactions, which require roughly 200 GB of disk space. Unless a node is
-bootstrapped from genesis and configured with the pruner disabled or a long
-prune window, it does not carry the entire ledger history.
-The majority of nodes on both testnet and mainnet keep only a partial
-history of 150 million transactions under this default configuration.
-:::
-
-
-To manage these settings, edit the node configuration YAML files,
-for example, `fullnode.yaml` for fullnodes (validator or public) or
-`validator.yaml` for validator nodes, as shown below.
-
-## Disabling the ledger pruner
-
-Add the following to the node configuration YAML file to disable the
-ledger pruner:
-
-:::caution Proceed with caution
-Disabling the ledger pruner can result in the storage disk filling up very quickly.
-:::
-
-```yaml
-storage:
- storage_pruner_config:
- ledger_pruner_config:
- enable: false
-```
-
-## Configuring the ledger pruning window
-
-Add the following to the node configuration YAML file to make the node
-retain, for example, 1 billion transactions and their outputs, including events
-and write sets.
-
-:::caution Proceed with caution
-Setting the pruning window smaller than 100 million can lead to runtime errors and damage the health of the node.
-:::
-
-```yaml
-storage:
- storage_pruner_config:
- ledger_pruner_config:
- prune_window: 1000000000
-```
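Based on the rough ratio in the note above (about 200 GB per 150 million transactions), you can sanity-check the disk impact of a custom prune window. This is a back-of-envelope sketch, not an official sizing formula; actual usage depends on transaction sizes and workload:

```python
# Back-of-envelope disk estimate for a configured ledger prune window,
# using the rough ratio stated above: ~200 GB per 150 million transactions.
GB_PER_TXN = 200 / 150_000_000

def estimated_ledger_disk_gb(prune_window: int) -> float:
    """Estimate ledger disk usage (in GB) for a given prune window."""
    return prune_window * GB_PER_TXN

# A 1 billion transaction window, as in the example configuration above:
print(f"{estimated_ledger_disk_gb(1_000_000_000):.0f} GB")  # ~1333 GB
```

As the caution notes, the lower bound matters more than the upper one: keep the window at or above 100 million transactions.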
-
-See the complete set of storage configuration settings in the [Storage README](https://github.com/aptos-labs/aptos-core/tree/main/storage#configs).
diff --git a/developer-docs-site/docs/guides/explore-aptos.md b/developer-docs-site/docs/guides/explore-aptos.md
deleted file mode 100644
index 5c6438ef828c2..0000000000000
--- a/developer-docs-site/docs/guides/explore-aptos.md
+++ /dev/null
@@ -1,275 +0,0 @@
----
-title: "Explore Aptos"
-slug: "explore-aptos"
----
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-import ThemedImage from '@theme/ThemedImage';
-import useBaseUrl from '@docusaurus/useBaseUrl';
-
-# Use the Aptos Explorer
-
-The [Aptos Explorer](https://explorer.aptoslabs.com/) lets you delve into the activity on the Aptos blockchain in great detail, seeing transactions, validators, and account information. With the Aptos Explorer, you can ensure that the transactions performed on Aptos are accurately reflected. Note, the Aptos ecosystem has [several other explorers](https://github.com/aptos-foundation/ecosystem-projects#explorers) to choose from.
-
-The Aptos Explorer provides a one-step search engine across the blockchain to discover details about wallets, transactions, network analytics, user accounts, smart contracts, and more. The Aptos Explorer also offers dedicated pages for key elements of the blockchain and acts as the source of truth for all things Aptos. See the [Aptos Glossary](../reference/glossary.md) for definitions of many of the terms found here.
-
-## Users
-
-The Aptos Explorer gives you a near real-time view into the status of the network and the activity related to the core on-chain entities. It serves these audiences and purposes by letting:
-
-* App developers understand the behavior of the smart contracts and sender-receiver transaction flows.
-* General users view and analyze Aptos blockchain activity on key entities - transactions, blocks, accounts, and resources.
-* Node operators check the health of the network and maximize the value of operating the node.
-* Token holders find the best node operator to delegate their tokens to and earn staking rewards.
-
-## Common tasks
-
-Follow the instructions here to conduct typical work in the Aptos Explorer.
-
-### Select a network
-
-The Aptos Explorer renders data from all Aptos networks: Mainnet, Testnet, Devnet, and your local host if configured. See [Aptos Blockchain Networks](../nodes/networks.md) for a detailed view of their purposes and differences.
-
-To select a network in the [Aptos Explorer](https://explorer.aptoslabs.com/), load the explorer and use the *Select Network* drop-down menu at the top right to select your desired network.
-
-
-
-
-
-### Find a transaction
-
-One of the most common tasks is to track a transaction in Aptos Explorer. You may search by the account address, transaction version and hash, or block height and version.
-
-To find a transaction:
-
-1. Enter the value in the *Search transactions* field near the top of any page.
-1. Do not press return.
-1. Click the transaction result that appears immediately below the search field, highlighted in green within the following screenshot:
-
-
-
-
-
-The resulting [Transaction details](#transaction-details) page appears.
-
-### Find an account address
-
-The simplest way to find your address is to use the [Aptos Petra Wallet](https://petra.app/docs/use).
-
-Then simply append it to the following URL to load its details in the Aptos Explorer:
-https://explorer.aptoslabs.com/account/
-
-Like so:
-https://explorer.aptoslabs.com/account/0x778bdeebb67d3914b181236c2f1f4acc0e561482fc265b9a5709488a97fb3303
-
-See [Accounts](#accounts) for instructions on use.
-
-## Explorer pages
-
-This section walks you through the available screens in Aptos Explorer to help you find the information you need.
-
-### Explorer home
-
-The Aptos Explorer home page provides an immediate view into the total supply of Aptos coins, those that are now staked, transactions per second (TPS), and active validators on the network, as well as a rolling list of the latest transactions:
-
-
-
-
-
-Click the **Transactions** tab at the top or **View all Transactions** at the bottom to go to the [Transactions](#transactions) page.
-
-### Transactions
-
-The *Transactions* page displays all transactions on the Aptos blockchain in order, with the latest at the top of an ever-growing list.
-
-In the transactions list, single-click the **Hash** column to see and copy the hash for the transaction or double-click the hash to go directly to the transaction details for the hash.
-
-
-
-
-
-Otherwise, click anywhere else in the row of the desired transaction to load its [Transaction details](#transaction-details) page.
-
-Use the controls at the bottom of the list to navigate back through transactions historically.
-
-### Transaction details
-
-The *Transaction details* page reveals all information for a given transaction, starting with its default *Overview* tab. There you can see a transaction's status, sender, version, gas fee, and much more:
-
-
-
-
-
-Scrolling down on the Overview, you can also see the transaction's signature (with `public_key`) and hashes for tracking.
-
-The Transaction details page offers even more information in the following tabs.
-
-#### Events
-
-The Transaction details *Events* tab shows the transaction's [sequence numbers](../reference/glossary.md#sequence-number), including their types and data.
-
-#### Payload
-
-The Transaction details *Payload* tab presents the transaction's actual code used. Click the down arrow at the bottom of the code block to expand it and see all contents.
-
-#### Changes
-
-The Transaction details *Changes* tab shows the addresses, state key hashes, and data for each index in the transaction.
-
-### Accounts
-
-The *Accounts* page aggregates all transactions, tokens, and other resources in a single set of views starting with its default *Transactions* tab:
-
-
-
-
-
-You can load your account page by appending your account address to:
-https://explorer.aptoslabs.com/account/
-
-See [Find an account address](#find-an-account-address) for more help.
-
-On the Accounts > Transactions tab, click any transaction to go to its [Transaction details](#transaction-details) page.
-
-As on the main [Transactions](#transactions) page, you may also single-click the **Hash** column to see and copy the hash for the transaction or double-click the hash to go directly to the transaction details for the hash.
-
-As with Transactions, the Aptos Explorer provides tabs for additional information about the account.
-
-#### Tokens
-
-The *Tokens* tab presents any assets owned by the account, as well as details about the tokens themselves (name, collection, and more). Click any of the assets to go to the [Token details](#token-details) page.
-
-#### Token details
-
-The *Token details* page contains:
-
- * *Overview* tab including token name, owner, collection, creator, royalty, and more.
- * *Activities* tab showing all transfer types, the addresses involved, property version, and amount.
-
-
-
-
-
-On either tab, click an address to go to the *Account* page for the address.
-
-#### Resources
-
-The *Resources* tab presents a view of all types used by the account. Use the *Collapse All* toggle at top right to see all types at once.
-
-#### Modules
-
-The *Modules* tab displays the source code and ABI used by the account. Select different modules on the left sidebar to view Move source code and ABI of a specific module. Use the expand button at the top right of the source code to expand the code for better readability.
-
-
-
-
-
-#### Info
-
-The *Info* tab shows the [sequence number](../reference/glossary.md#sequence-number) and authentication key used by the account.
-
-### Blocks
-
-The *Blocks* page presents a running list of the latest blocks to be committed to the Aptos blockchain.
-
-
-
-
-
-Click the:
- * Hash to see and copy the hash of the block.
- * First version to go to the first transaction in the block.
- * Last version to go to the last transaction in the block.
- * Block ID or anywhere else to go to the [Block details](#block-details) page.
-
-### Block details
-
-The *Block details* page contains:
-
- * *Overview* tab including block height, versions, timestamp, proposer, epoch and round.
- * *Transactions* tab showing the version, status, type, hash, gas, and timestamp.
-
-
-
-
-
- On the *Overview* tab, click the versions to go to the related transactions or double-click the address of the proposer to go to the *Account* page for that address.
-
- On the *Transactions* tab, click the desired row to go to the *Transactions details* page.
-
-### Validators
-
-The *Validators* page lists every validator on the Aptos blockchain, including their validator address, voting power, public key, fullnode address, and network address.
-
-
-
-
-
-Click the validator address to go to the *Account* page for that address. Click the public key or any of the other addresses to see and copy their values.
diff --git a/developer-docs-site/docs/guides/local-development-network.md b/developer-docs-site/docs/guides/local-development-network.md
deleted file mode 100644
index fc34b5187fd7f..0000000000000
--- a/developer-docs-site/docs/guides/local-development-network.md
+++ /dev/null
@@ -1,368 +0,0 @@
----
-title: "Run a Local Development Network"
----
-
-# Run a Local Development Network
-
-You can run the Aptos network locally. This local network will not be connected to any production Aptos network (e.g. mainnet), it will run on your local machine, independent of other Aptos networks. Building against a local network has a few advantages:
-- **No rate limits:** Hosted services (including the Node API, Indexer API, and faucet) are generally subject to rate limits. Local development networks have no rate limits.
-- **Reproducibility:** When using a production network you might have to repeatedly make new accounts or rename Move modules to avoid incompatibility issues. With a local network you can just choose to start from scratch.
-- **High availability:** The Aptos devnet and testnet networks are periodically upgraded, during which time they can be unavailable. The internet can also be unreliable sometimes. Local development networks are always available, even if you have no internet access.
-
-## Prerequisites
-In order to run a local development network you must have the following installed:
-- Aptos CLI: [Installation Guide](../tools/aptos-cli/install-cli/index.md).
-- Docker: [Installation Guide](https://docs.docker.com/get-docker/).
- - Docker Desktop is the strongly recommended installation method.
-
-:::tip
-If you do not want to run an [Indexer API](../indexer/api/index.md) as part of your local network (`--with-indexer-api`) you do not need to install Docker. Note that without the Indexer API your local network will be incomplete compared to a production network. Many features in the downstream tooling will not work as expected / at all without this API available.
-:::
-
-## Run a local network
-
-You can run a local network using the following Aptos CLI command:
-```bash
-aptos node run-local-testnet --with-indexer-api
-```
-
-**Note:** Despite the name (`local-testnet`), this has nothing to do with the Aptos testnet; it runs a network entirely local to your machine.
-
-You should expect to see output similar to this:
-```
-Readiness endpoint: http://0.0.0.0:8070/
-
-Indexer API is starting, please wait...
-Node API is starting, please wait...
-Transaction stream is starting, please wait...
-Postgres is starting, please wait...
-Faucet is starting, please wait...
-
-Completed generating configuration:
- Log file: "/Users/dport/.aptos/testnet/validator.log"
- Test dir: "/Users/dport/.aptos/testnet"
- Aptos root key path: "/Users/dport/.aptos/testnet/mint.key"
- Waypoint: 0:397412c0f96b10fa3daa24bfda962671c3c3ae484e2d67ed60534750e2311f3d
- ChainId: 4
- REST API endpoint: http://0.0.0.0:8080
- Metrics endpoint: http://0.0.0.0:9101/metrics
- Aptosnet fullnode network endpoint: /ip4/0.0.0.0/tcp/6181
- Indexer gRPC node stream endpoint: 0.0.0.0:50051
-
-Aptos is running, press ctrl-c to exit
-
-Node API is ready. Endpoint: http://0.0.0.0:8080/
-Postgres is ready. Endpoint: postgres://postgres@127.0.0.1:5433/local_testnet
-Transaction stream is ready. Endpoint: http://0.0.0.0:50051/
-Indexer API is ready. Endpoint: http://127.0.0.1:8090/
-Faucet is ready. Endpoint: http://127.0.0.1:8081/
-
-Applying post startup steps...
-
-Setup is complete, you can now use the local testnet!
-```
-
-Once you see this final line, you know the local testnet is ready to use:
-```
-Setup is complete, you can now use the local testnet!
-```
-
-As you can see from the output, once the local network is running, you have access to the following services:
-- [Node API](../nodes/aptos-api-spec.md): This is a REST API that runs directly on the node. It enables core write functionality such as transaction submission and a limited set of read functionality, such as reading account resources or Move module information.
-- [Indexer API](../indexer/api/index.md): This is a GraphQL API that provides rich read access to indexed blockchain data. If you click on the URL for the Indexer API above, by default http://127.0.0.1:8090, it will open the Hasura Console. This is a web UI that helps you query the Indexer GraphQL API.
-- [Faucet](../reference/glossary#faucet): You can use this to create accounts and mint APT on your local network.
-- [Transaction Stream Service](../indexer/txn-stream/index.md): This is a gRPC stream of transactions, relevant if you are developing a [custom processor](../indexer/custom-processors/index.md).
-- Postgres: This is the database that the indexer processors write to. The Indexer API reads from this database.
-
-## Using the local network
-
-### Configuring your Aptos CLI
-
-You can add a separate profile, as shown below:
-
-```bash
-aptos init --profile local --network local
-```
-
-and you will get an output like below. At the `Enter your private key...` command prompt press enter to generate a random new key.
-
-```bash
-Configuring for profile local
-Using command line argument for rest URL http://localhost:8080/
-Using command line argument for faucet URL http://localhost:8081/
-Enter your private key as a hex literal (0x...) [Current: None | No input: Generate new key (or keep one if present)]
-```
-
-This will create and fund a new account, as shown below:
-
-```bash
-No key given, generating key...
-Account 7100C5295ED4F9F39DCC28D309654E291845984518307D3E2FE00AEA5F8CACC1 doesn't exist, creating it and funding it with 10000 coins
-Aptos is now set up for account 7100C5295ED4F9F39DCC28D309654E291845984518307D3E2FE00AEA5F8CACC1! Run `aptos help` for more information about commands
-{
- "Result": "Success"
-}
-```
-
-From now on you should add `--profile local` to CLI commands to run them against the local network.
-
-### Configuring the TypeScript SDK
-
-In order to interact with the local network using the TypeScript SDK, use the local network URLs when building the client:
-```typescript
-import { Provider, Network } from "aptos";
-
-const provider = new Provider(Network.LOCAL);
-```
-
-The provider is a single super client for both the node and indexer APIs.
-
-## Resetting the local network
-
-Sometimes while developing it is helpful to reset the local network back to its initial state:
-- You made backwards incompatible changes to a Move module and you'd like to redeploy it without renaming it or using a new account.
-- You are building a [custom indexer processor](../indexer/custom-processors/index.md) and would like to index using a fresh network.
-- You want to clear all on chain state, e.g. accounts, objects, etc.
-
-To start with a brand new local network, use the `--force-restart` flag:
-```bash
-aptos node run-local-testnet --force-restart
-```
-
-It will then ask you to confirm that you really want to restart the chain, to ensure you do not delete your work by accident.
-
-```bash
-Are you sure you want to delete the existing chain? [yes/no] >
-```
-
-If you do not want to be prompted, include `--assume-yes` as well:
-```bash
-aptos node run-local-testnet --force-restart --assume-yes
-```
-
-## FAQ
-
-### Where can I get more information about the run-local-testnet command?
-
-More CLI help can be found by running the command:
-
-```bash
-aptos node run-local-testnet --help
-```
-
-It will provide information about each of the flags you can use.
-
-
-### I'm getting the error `address already in use`, what can I do?
-
-If you're getting an error similar to this error:
-
-```bash
-'panicked at 'error binding to 0.0.0.0:9101: error creating server listener: Address already in use (os error 48)'
-```
-
-This means one of the ports needed by the local network is already in use by another process.
-
-On Unix systems, you can run the following command to get the name and PID of the process using the port:
-
-```bash
-lsof -i :8080
-```
-
-You can then kill it like this:
-```bash
-kill $PID
-```
-
-### How do I change the ports certain services run on?
-
-You can find flags to configure this for each service in the CLI help output:
-```
-aptos node run-local-testnet -h
-```
-
-The help output tells you which ports services use by default.
-
-### How do I opt out of running certain services?
-
-- Opt out of running a faucet with `--no-faucet`.
-- Opt out of running a Transaction Stream Service with `--no-txn-stream`.
-
-
-### How do I publish Move modules to the local testnet?
-
-If you set up a profile called `local` above, you can run any command by adding the `--profile local` flag. In this case, we also use `local` as the named address in the `HelloBlockchain` example. The CLI will replace `local` with the account address for that profile.
-
-```bash
-aptos move publish --profile local --package-dir /opt/git/aptos-core/aptos-move/move-examples/hello_blockchain --named-addresses HelloBlockchain=local
-```
-
-### How do I see logs from the services?
-In the output of the CLI you will see something like this:
-```
-Test dir: "/Users/dport/.aptos/testnet"
-```
-
-The logs for each service can be found in subdirectories there. For processor logs, see the `tokio-runtime` directory.
-
-### What if it says Docker is not available?
-To run an Indexer API using `--with-indexer-api` you need to have Docker on your system.
-
-You might be seeing an error that looks like this:
-```
-Unexpected error: Failed to apply pre run steps for Postgres: Docker is not available, confirm it is installed and running. On Linux you may need to use sudo
-```
-
-Make sure you have Docker 24+:
-```bash
-$ docker --version
-Docker version 24.0.6, build ed223bc
-```
-
-Make sure the Docker daemon is running. If you see this error it means it is not running:
-```bash
-$ docker info
-...
-ERROR: Cannot connect to the Docker daemon at unix:///Users/dport/.docker/run/docker.sock. Is the docker daemon running?
-```
-
-Make sure the socket for connecting to Docker is present on your machine in the default location. For example on Unix systems this file should exist:
-```
-/var/run/docker.sock
-```
-
-If it doesn't, open Docker Desktop and enable `Settings -> Advanced -> Allow the default Docker socket to be used`.
-
-Alternatively, you can find where it is like this:
-```
-$ docker context inspect | grep Host
- "Host": "unix:///Users/dport/.docker/run/docker.sock",
-```
-
-Then make a symlink to it in the expected location:
-```
-sudo ln -s /Users/dport/.docker/run/docker.sock /var/run/docker.sock
-```
-
-Alternatively, run the CLI like this to tell it where the socket is:
-```
-DEFAULT_SOCKET=/Users/dport/.docker/run/docker.sock aptos node run-local-testnet --with-indexer-api
-```
-
-Note: As mentioned above, if you're on Mac or Windows, we recommend you use Docker Desktop rather than installing Docker via a package manager (e.g. Homebrew or Choco).
-
-### The local network seems to hang on startup
-If the CLI seems to sit there and do nothing when you are using `--with-indexer-api`, consider quitting and restarting Docker. Sometimes Docker gets in a bad state. Note that Docker is only required if you are using `--with-indexer-api`.
-
-### How do I use the Postgres on my host machine?
-By default when using `--with-indexer-api` the CLI will run a Postgres instance in Docker. If you have Postgres running on your host machine and would like to use that instead, you can do so with the `--use-host-postgres` flag. There are also flags for specifying how it should connect to the host Postgres. Here is an example invocation:
-```bash
-aptos node run-local-testnet --with-indexer-api --use-host-postgres --postgres-user $USER
-```
-
-### How do I wait for the local network to come up programmatically?
-When running the CLI interactively, you can see if the network is alive by waiting for this message:
-```
-Setup is complete, you can now use the local testnet!
-```
-
-If you are writing a script and would like to wait for the local network to come up, you can make a GET request to `http://127.0.0.1:8070`. At first this will return [503](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/503). When it returns [200](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200) it means all the services are ready.
-
-You can inspect the response to see which services are ready.
-
-
-
-
-
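The wait described above can also be sketched in Python using only the standard library. The endpoint, port, and status-code behavior are the ones documented above; the function name and timeout values are illustrative:

```python
import time
import urllib.error
import urllib.request

def wait_for_local_network(url: str = "http://127.0.0.1:8070",
                           timeout_s: float = 120.0,
                           poll_s: float = 1.0) -> bool:
    """Poll the readiness endpoint until it returns HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url) as resp:
                if resp.status == 200:
                    return True  # all services are ready
        except urllib.error.URLError:
            pass  # HTTP 503 (still starting) or connection refused
        time.sleep(poll_s)
    return False
```

On success you can proceed to use the local network; on timeout, check the service logs in the test directory printed by the CLI.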
-### How do I learn more about the Aptos CLI?
-If you are new to the Aptos CLI see this comprehensive [Aptos CLI user documentation](../tools/aptos-cli/use-cli/use-aptos-cli.md).
diff --git a/developer-docs-site/docs/guides/nfts/aptos-token-overview.md b/developer-docs-site/docs/guides/nfts/aptos-token-overview.md
deleted file mode 100644
index 95d166d2ab149..0000000000000
--- a/developer-docs-site/docs/guides/nfts/aptos-token-overview.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-title: "Aptos Token Overview"
----
-import ThemedImage from '@theme/ThemedImage';
-import useBaseUrl from '@docusaurus/useBaseUrl';
-
-# Aptos Token Standards
-
-The [Aptos Digital Asset Standard](../../standards/digital-asset.md) defines the canonical non-fungible token (NFT) on Aptos. Aptos leverages composability to extend the digital asset standard with features like fungibility via the [Fungible Asset standard](../../standards/fungible-asset.md). The concept of composability comes from the underlying data model for these constructs: the [Move object](../../standards/aptos-object.md) data model.
-
-The rest of this document discusses how the Aptos token standards compare to the standards on Ethereum and Solana.
-
-## Data models
-
-To understand tokens, we begin by comparing the data models across different blockchains.
-
-### Ethereum
-
-Ethereum has two types of accounts:
-* Externally-owned accounts which store a balance of Ether.
-* Contract accounts which manage their underlying smart contracts and have an associated storage for persistent state, which can only be mutated by the associated contract.
-
-In order to create a new NFT collection, a creator must deploy their own contract to the blockchain, which in turn will create a collection and set of NFTs within its storage.
-
-### Solana
-
-Unlike Ethereum or Aptos where data and code co-exist, Solana stores data and programs in separate accounts. There are two types of accounts on the Solana blockchain:
-* Executable accounts only store contract code
-* Non-executable accounts store data associated with and owned by executable accounts.
-
-In order to create a new NFT collection, a creator calls an existing deployed program to populate a new collection and set of NFTs.
-
-### Aptos
-
-The [accounts](../../concepts/accounts.md) in Aptos store both smart contracts and data. Unlike Ethereum, the associated data of a smart contract is distributed across the space of all accounts in [resources](../../concepts/resources.md) within [accounts](../../concepts/accounts.md) or [objects](../../standards/aptos-object.md). For example, a collection and an NFT within that collection are stored in distinct objects at different addresses with the smart contract defining them at another address. A smart contract developer could also store data associated with the NFT and collection at the same address as the smart contract or in other objects.
-
-There are two means to create NFTs on Aptos:
-
-* The [no-code standard](https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-22.md) allows creators to call into the contract to create new collections and tokens without deploying a new contract.
-* Custom NFT contracts allow creators to customize their NFTs by extending the object model to manage all aspects of their collection.
-
-Aptos strikes a balance between the customizability offered by Ethereum and the simplicity of creating new collections offered by Solana.
-
-Like Ethereum, Aptos requires indexing to determine the set of all NFTs owned by an account, while Solana does not.
-
-## Token standard comparison
-
-The Fungible Token (FT) was initially introduced by [EIP-20](https://eips.ethereum.org/EIPS/eip-20), and Non-Fungible Token (NFT) was defined in [EIP-721](https://eips.ethereum.org/EIPS/eip-721). Later, [EIP-1155](https://eips.ethereum.org/EIPS/eip-1155) combined FT and NFT or even Semi-Fungible Token (SFT) into one standard.
-
-The Ethereum token standards require each token to deploy its own individual contract code to distinguish each collection of tokens. The Solana account model enables another pattern where code can be reused so that one generic program operates on various data. To create a new token, you could create an account that can mint tokens and more accounts that can receive them. The mint account itself, rather than the contract account, uniquely determines the token type, and these accounts are all passed as arguments to the one contract deployed to some executable account.
-
-The collection of Aptos token standards shares some similarities with Solana, especially in how it covers FT, NFT and SFT with common on-chain code. Instead of deploying a new smart contract for each new token, a creator calls a function in the contract with the necessary arguments. Depending on which function you call, the token contract will mint, transfer, burn, or otherwise operate on tokens.
-
-### Token identification
-
-Aptos identifies a token by its `Address` or `ObjectId`, a location within global storage. Collections are stored at a location determined by the address of the creator and the name of the collection.
-
-In Ethereum, contracts are deployed on accounts determined by the account that is deploying the contract. NFTs are then stored as indexes into data tables within the contract.
-
-In Solana, NFT data is stored under a mint account, independent of the program account.
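For illustration, the address of such a named object can be derived off-chain. This is a hypothetical sketch assuming the scheme is sha3-256 over the creator's 32-byte address, the seed (for a collection, its name), and a trailing `0xFE` domain separator; verify against the framework's `object.move` before relying on it:

```python
import hashlib

OBJECT_FROM_SEED_SCHEME = 0xFE  # assumed domain separator for named objects

def derive_named_object_address(creator_hex: str, seed: bytes) -> str:
    """sha3-256(creator_address_bytes || seed || 0xFE) as a 0x-prefixed hex string."""
    creator = bytes.fromhex(creator_hex.removeprefix("0x").zfill(64))  # left-pad to 32 bytes
    digest = hashlib.sha3_256(creator + seed + bytes([OBJECT_FROM_SEED_SCHEME]))
    return "0x" + digest.hexdigest()

# A collection named "My Collection" created by 0x1 would live at:
collection_addr = derive_named_object_address("0x1", b"My Collection")
```

The derivation is deterministic, which is what lets wallets and indexers locate a collection from only the creator address and collection name.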
-
-### Token metadata
-
-The Aptos token has metadata in its `Token` resource containing the data most commonly required by dapps to interact with tokens. Some examples include:
-- `name`: The name of the token. It must be unique within a collection.
-- `description`: The description of the token.
-- `uri`: A URL pointing to off-chain data with more information about the token. The asset could be media such as an image or video, or more metadata in a JSON file.
-- `collection`: A pointer to the ObjectId of the collection.
-
-Additional fields can be stored in creator-defined resources or the `PropertyMap` resource that defines a generalizable key-value map.
-
-In Ethereum, only a small portion of such properties are defined as methods, such as `name()`, `symbol()`, `decimals()`, and `totalSupply()` of ERC-20, or `name()`, `symbol()`, and `tokenURI()` of the optional metadata extension for ERC-721; ERC-1155 also has a similar method `uri()` in its own optional metadata extension. Token metadata is not standardized, so dapps must handle each token case by case.
-
-In Solana, the Token Metadata program offers a Metadata Account defining numerous metadata fields associated with a token as well, including `collection` which is defined in `TokenDataId` in Aptos. Solana, however, does not offer mutability for assets, unlike Aptos. Like Aptos, Token Metadata v1.1.0 offers an `attribute` container for customized properties.
diff --git a/developer-docs-site/docs/guides/running-a-local-multi-node-network.md b/developer-docs-site/docs/guides/running-a-local-multi-node-network.md
deleted file mode 100644
index 94dd274666672..0000000000000
--- a/developer-docs-site/docs/guides/running-a-local-multi-node-network.md
+++ /dev/null
@@ -1,102 +0,0 @@
----
-title: "Run a Local Multinode Network"
-slug: "running-a-local-multi-node-network"
----
-
-# Run a Local Multinode Network
-
-This guide describes how to run a local network with multiple validator nodes and validator fullnodes. You will use the [Aptos Forge CLI](https://github.com/aptos-labs/aptos-core/tree/main/testsuite/forge-cli/src) for this.
-
-:::tip Use only for test networks
-The method described in this guide should be used only for local multi-node test networks. Do not use this guide for deploying in production environments. Currently this is the only guide for multi-node networks.
-
-For deploying a local network with a single node, see [Run a Local Development Network with the CLI](../guides/local-development-network.md).
-:::
-
-## Before you proceed
-
-This guide assumes you have completed the steps in [Building Aptos From Source](building-from-source.md).
-
-## Running multiple validators
-
-To deploy multiple local validators, run:
-
-```bash
-cargo run -p aptos-forge-cli \
- -- \
- --suite "run_forever" \
- --num-validators 4 test local-swarm
-```
-
-This will start a local network of 4 validators, each running in its own process. The network will run forever unless you manually terminate it.
-
-The terminal output will display the locations of the validator files (for example, the genesis files, logs, and node configurations) and the commands that were run to start each node. The process ID (PID) and server addresses (e.g., REST API) of each node are also displayed when it starts. For example, if you run the above command you should see:
-
-```bash
-...
-2022-09-01T15:41:27.228289Z [main] INFO crates/aptos-genesis/src/builder.rs:462 Building genesis with 4 validators. Directory of output: "/private/var/folders/dx/c0l2rrkn0656gfx6v5_dy_p80000gn/T/.tmpq9uPMJ"
-...
-2022-09-01T15:41:28.090606Z [main] INFO testsuite/forge/src/backend/local/swarm.rs:207 The root (or mint) key for the swarm is: 0xf9f...
-...
-2022-09-01T15:41:28.094800Z [main] INFO testsuite/forge/src/backend/local/node.rs:129 Started node 0 (PID: 78939) with command: ".../aptos-core/target/debug/aptos-node" "-f" "/private/var/folders/dx/c0l2rrkn0656gfx6v5_dy_p80000gn/T/.tmpq9uPMJ/0/node.yaml"
-2022-09-01T15:41:28.094825Z [main] INFO testsuite/forge/src/backend/local/node.rs:137 Node 0: REST API is listening at: http://127.0.0.1:64566
-2022-09-01T15:41:28.094838Z [main] INFO testsuite/forge/src/backend/local/node.rs:142 Node 0: Inspection service is listening at http://127.0.0.1:64568
-...
-```
-
-Using the information from this output, you can stop a single node and restart
-it. For example, to stop and restart node `0`, execute the below commands:
-
-```bash
-kill -9 <pid>
-cargo run -p aptos-node \
- -- \
- -f <path to node.yaml>
-```
-
-## Faucet and minting
-
-In order to mint coins in this test network you need to run a faucet. You can do that with this command:
-
-```bash
-cargo run -p aptos-faucet-service -- run-simple --key <key> --node-url <node_url>
-```
-
-You can get the values above like this:
-- `key`: When you started the swarm, there was output like this: `The root (or mint) key for the swarm is: 0xf9f...`. This is the `key`.
-- `node_url`: When you started the swarm, there was output like this: `REST API is listening at: http://127.0.0.1:64566`. This is the `node_url`.
-
-The above command will run a faucet locally, listening on port `8081`. Using this faucet, you can then mint tokens to your test accounts, for example:
-
-```bash
-curl -X POST "http://127.0.0.1:8081/mint?amount=<amount>&pub_key=<pub_key>"
-```
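Both values can also be pulled out of the swarm output programmatically. A hypothetical sketch, using the example log lines from earlier in this guide:

```python
import re

SWARM_LOG = """\
2022-09-01T15:41:28.090606Z [main] INFO testsuite/forge/src/backend/local/swarm.rs:207 The root (or mint) key for the swarm is: 0xf9f...
2022-09-01T15:41:28.094825Z [main] INFO testsuite/forge/src/backend/local/node.rs:137 Node 0: REST API is listening at: http://127.0.0.1:64566
"""

def extract_faucet_args(log: str):
    """Scrape the mint key and REST API URL out of the forge swarm output."""
    key = re.search(r"key for the swarm is: (\S+)", log).group(1)
    node_url = re.search(r"REST API is listening at: (\S+)", log).group(1)
    return key, node_url

key, node_url = extract_faucet_args(SWARM_LOG)
print(f"cargo run -p aptos-faucet-service -- run-simple --key {key} --node-url {node_url}")
```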
-
-As an alternative to using the faucet service, you may use the faucet CLI directly:
-```bash
-cargo run -p aptos-faucet-cli -- --amount 10 --accounts <accounts> --key <key>
-```
-
-:::tip Faucet and Aptos CLI
-See more on how the faucet works in the [README](https://github.com/aptos-labs/aptos-core/tree/main/crates/aptos-faucet).
-
-Also see how to use the [Aptos CLI](../tools/aptos-cli/use-cli/use-aptos-cli.md#account-examples) with an existing faucet.
-:::
-
-## Validator fullnodes
-
-To also run validator fullnodes inside the network, use the `--num-validator-fullnodes` flag. For example:
-```bash
-cargo run -p aptos-forge-cli \
- -- \
- --suite "run_forever" \
- --num-validators 3 \
- --num-validator-fullnodes 1 test local-swarm
-```
-
-## Additional usage
-
-To see all tool usage options, run:
-```bash
-cargo run -p aptos-forge-cli -- --help
-```
diff --git a/developer-docs-site/docs/guides/sponsored-transactions.md b/developer-docs-site/docs/guides/sponsored-transactions.md
deleted file mode 100644
index 04258b2293a1e..0000000000000
--- a/developer-docs-site/docs/guides/sponsored-transactions.md
+++ /dev/null
@@ -1,71 +0,0 @@
-# Sponsored Transactions
-
-As outlined in [AIP-39](https://github.com/aptos-foundation/AIPs/blob/main/aips/aip-39.md), sponsored transactions allow one account to pay the fees associated with executing a transaction for another account. Sponsored transactions simplify the process for onboarding users into applications by allowing the application to cover all associated fees for interacting with the Aptos blockchain. Here are two examples:
-* [MerkleTrade](https://merkle.trade/) offers low cost trading to those with Ethereum wallets by creating an Aptos wallet for users and covering all transaction fees so that the user does not need to acquire utility tokens for Aptos.
-* Community engagement applications like [Graffio](https://medium.com/aptoslabs/graffio-web3s-overnight-sensation-81a6cf18b626) offered to cover transaction fees for custodial accounts to support the collaborative drawing application for those without wallets.
-
-## Process Overview
-
-The process for sending a sponsored transaction follows:
-* The sender of the transaction decides upon an operation, as defined by a `RawTransaction`.
-* The sender generates a `RawTransactionWithData::MultiAgentWithFeePayer` structure
- * Prior to the framework 1.8 release, this must contain the fee payer's address.
- * After framework release 1.8, this can optionally be set to `0x0`.
-* (Optionally) the sender aggregates signatures from other signers.
-* The sender can forward the signed transaction to the fee payer to sign and forward it to the blockchain.
-* Upon execution of the transaction, the sequence number of the sender account is incremented, all gas fees are deducted from the gas fee payer, and all refunds are sent to the gas fee payer.
-
-Alternatively, if the fee payer knows the operation and all signers involved, the fee payer could generate and sign the transaction and send it back to the other signers to sign.
-
-## Technical Details
-
-In Aptos, a sponsored transaction reuses the same `SignedTransaction` as any other user transaction:
-```rust
-pub struct SignedTransaction {
- /// The raw transaction
- raw_txn: RawTransaction,
-
- /// Public key and signature to authenticate
- authenticator: TransactionAuthenticator,
-}
-```
-
-The difference is in the `TransactionAuthenticator`, which stores the authorization from the fee payer of the transaction to extract utility fees from their account:
-```rust
-pub enum TransactionAuthenticator {
-...
- /// Optional Multi-agent transaction with a fee payer.
- FeePayer {
- sender: AccountAuthenticator,
- secondary_signer_addresses: Vec<AccountAddress>,
- secondary_signers: Vec<AccountAuthenticator>,
- fee_payer_address: AccountAddress,
- fee_payer_signer: AccountAuthenticator,
- },
-...
-}
-```
-
-To prepare a sponsored transaction for an account, the account must first exist on-chain. This is a requirement that is being removed with the 1.8 framework release.
-
-As of the 1.8 framework release, an account does not need to exist on-chain. However, the first transaction for an account requires enough gas not only to execute the transaction but also to cover the costs associated with account creation, even if an account already exists. Future improvements to the account model intend to eliminate this requirement.
-
-During signing of the transaction, all parties sign the following:
-```rust
-pub enum RawTransactionWithData {
-...
- MultiAgentWithFeePayer {
- raw_txn: RawTransaction,
- secondary_signer_addresses: Vec<AccountAddress>,
- fee_payer_address: AccountAddress,
- },
-}
-```
-
-Prior to framework release 1.8, all signers were required to know the actual fee payer address prior to signing. As of framework release 1.8, signers can optionally set the address to `0x0` and only the fee payer must sign with their address set.
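For intuition, the bytes each party signs are the BCS serialization of this enum, prefixed by a domain-separation hash. A sketch of the prefixing step (the `APTOS::RawTransactionWithData` salt string follows my reading of the `CryptoHasher` derivation in aptos-crypto and is an assumption; BCS serialization itself is out of scope here):

```python
import hashlib

def signing_message(bcs_bytes: bytes) -> bytes:
    """Prefix BCS-serialized RawTransactionWithData with its domain-separation hash."""
    # Assumed salt string; verify against the CryptoHasher derive in aptos-crypto.
    salt = hashlib.sha3_256(b"APTOS::RawTransactionWithData").digest()
    return salt + bcs_bytes

# Each signer produces a signature over signing_message(bcs(raw_txn_with_data)).
example = signing_message(b"\x01\x02")  # placeholder bytes, for illustration only
```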
-
-## SDK Support
-
-Currently, there are two demonstrations of sponsored transactions:
-* The Python SDK has an example in [fee_payer_transfer_coin.py](https://github.com/aptos-labs/aptos-core/blob/main/ecosystem/python/sdk/examples/fee_payer_transfer_coin.py).
-* The Rust SDK has a test case in [the API tests](https://github.com/aptos-labs/aptos-core/blob/0a62e54e13bc5da604ceaf39efed5c012a292078/api/src/tests/transactions_test.rs#L255).
diff --git a/developer-docs-site/docs/guides/state-sync.md b/developer-docs-site/docs/guides/state-sync.md
deleted file mode 100644
index a6b6a4a62020e..0000000000000
--- a/developer-docs-site/docs/guides/state-sync.md
+++ /dev/null
@@ -1,181 +0,0 @@
----
-title: "State Synchronization"
-slug: "state-sync"
----
-
-import ThemedImage from '@theme/ThemedImage';
-import useBaseUrl from '@docusaurus/useBaseUrl';
-
-# State Synchronization
-
-Nodes in an Aptos network (e.g., validator nodes and fullnodes) must always be synchronized to the latest Aptos blockchain state. The [state synchronization](https://medium.com/aptoslabs/the-evolution-of-state-sync-the-path-to-100k-transactions-per-second-with-sub-second-latency-at-52e25a2c6f10) (state sync) component that runs on each node is responsible for this. State sync identifies and fetches new blockchain data from peers, validates the data, and persists it to local storage.
-
-:::tip Need to start a node quickly?
-If you need to start a node quickly, here's what we recommend by use case:
- - **Devnet public fullnode**: To sync the entire blockchain history, use [intelligent syncing](state-sync.md#intelligent-syncing). Otherwise, use [fast sync](state-sync.md#fast-syncing).
- - **Testnet public fullnode**: To sync the entire blockchain history, restore from a [backup](../nodes/full-node/aptos-db-restore.md). Otherwise, download [a snapshot](../nodes/full-node/bootstrap-fullnode.md) or use [fast sync](state-sync.md#fast-syncing).
- - **Mainnet public fullnode**: To sync the entire blockchain history, restore from a [backup](../nodes/full-node/aptos-db-restore.md). Otherwise, use [fast sync](state-sync.md#fast-syncing).
- - **Mainnet validator or validator fullnode**: To sync the entire blockchain history, restore from a [backup](../nodes/full-node/aptos-db-restore.md). Otherwise, use [fast sync](state-sync.md#fast-syncing).
-:::
-
-## State sync modes
-
-State sync runs in two modes. All nodes will first bootstrap (in bootstrapping mode) on startup, and then continuously synchronize (in continuous sync mode).
-
-### Bootstrapping mode
-
-When the node starts, state sync will perform bootstrapping by using the specified bootstrapping mode configuration. This allows the node to catch up to the Aptos blockchain. There are several bootstrapping modes:
-
-- **Execute all the transactions since genesis**. In this state sync mode the node will retrieve from the Aptos network all the transactions since genesis, i.e., since the start of the blockchain's history, and re-execute those transactions. Naturally, this synchronization mode takes the longest amount of time.
-- **Apply transaction outputs since genesis**. In this state sync mode the node will retrieve all the transactions since genesis but it will skip the transaction execution and will only apply the outputs of the transactions that were previously produced by validator execution. This mode reduces the amount of CPU time required.
-- **(Default) Intelligent syncing since genesis**. In this state sync mode the node will retrieve all the transactions since genesis and will either execute the transactions, or apply the transaction outputs, depending on whichever is faster, per data chunk. This allows the node to adapt to CPU and network resource constraints more efficiently. This mode is the default mode.
-- **Fast syncing**. In this state sync mode the node will skip the transaction history in the blockchain and will download only the latest blockchain state directly. As a result, the node will not have the historical transaction data, but it will be able to catch up to the Aptos network much more rapidly.
-
-### Continuous syncing mode
-
-After the node has bootstrapped and caught up to the Aptos network initially, state sync will then move into continuous syncing mode to stay up-to-date with the blockchain. There are several continuous syncing modes:
-
-- **Executing transactions**. This state sync mode will keep the node up-to-date by executing new transactions as they are committed to the blockchain.
-- **Applying transaction outputs**. This state sync mode will keep the node up-to-date by skipping the transaction execution and only applying the outputs of the transactions as previously produced by validator execution.
-- **(Default) Intelligent syncing**. This state sync mode will keep the node up-to-date by either executing the transactions, or applying the transaction outputs, depending on whichever is faster, per data chunk. This allows the node to adapt to CPU and network resource constraints more efficiently. This mode is the default mode.
-
-## Configuring the state sync modes
-
-The below sections provide instructions for how to configure your node for different use cases.
-
-### Executing all transactions
-
-To execute all the transactions since genesis and continue to execute new
-transactions as they are committed, add the following to your node
-configuration file (for example, `fullnode.yaml` or `validator.yaml`):
-
-```yaml
- state_sync:
- state_sync_driver:
- bootstrapping_mode: ExecuteTransactionsFromGenesis
- continuous_syncing_mode: ExecuteTransactions
-```
-
-:::tip Verify node syncing
-While your node is syncing, you'll be able to see the
-[`aptos_state_sync_version{type="synced"}`](../nodes/full-node/fullnode-source-code-or-docker.md#verify-initial-synchronization) metric gradually increase.
-:::
-
-### Applying all transaction outputs
-
-To apply all transaction outputs since genesis and continue to apply new
-transaction outputs as transactions are committed, add the following to your
-node configuration file:
-
-```yaml
- state_sync:
- state_sync_driver:
- bootstrapping_mode: ApplyTransactionOutputsFromGenesis
- continuous_syncing_mode: ApplyTransactionOutputs
-```
-
-:::tip Verify node syncing
-While your node is syncing, you'll be able to see the
-[`aptos_state_sync_version{type="synced"}`](../nodes/full-node/fullnode-source-code-or-docker.md#verify-initial-synchronization) metric gradually increase.
-:::
-
-### Intelligent syncing
-
-To execute or apply all transactions and outputs since genesis (and continue to
-do the same as new transactions are committed), add the following to your node
-configuration file:
-
-```yaml
- state_sync:
- state_sync_driver:
- bootstrapping_mode: ExecuteOrApplyFromGenesis
- continuous_syncing_mode: ExecuteTransactionsOrApplyOutputs
-```
-
-This is the default syncing mode on all nodes, as it allows the node to adapt to CPU and network resource constraints more efficiently.
-
-:::tip Verify node syncing
-While your node is syncing, you'll be able to see the
-[`aptos_state_sync_version{type="synced"}`](../nodes/full-node/fullnode-source-code-or-docker.md#verify-initial-synchronization) metric gradually increase.
-:::
-
-### Fast syncing
-
-:::tip Fastest and cheapest method
-This is the fastest and cheapest method of syncing your node. It
-requires the node to start from an empty state (i.e., not have any existing
-storage data).
-:::
-
-:::caution Proceed with caution
-Fast sync should only be used as a last resort for validators and
-validator fullnodes. This is because fast sync skips all of the blockchain
-history and as a result: (i) reduces the data availability in the network;
-and (ii) may hinder validator consensus performance if too much data has
-been skipped. Thus, validator and validator fullnode operators should be
-careful to consider alternate ways of syncing before resorting to fast sync.
-:::
-
-To download the latest blockchain state and continue to apply new
-transaction outputs as transactions are committed, add the following to your
-node configuration file:
-
-```yaml
- state_sync:
- state_sync_driver:
- bootstrapping_mode: DownloadLatestStates
- continuous_syncing_mode: ExecuteTransactionsOrApplyOutputs
-```
-
-While your node is syncing, you'll be able to see the
-`aptos_state_sync_version{type="synced_states"}` metric gradually increase.
-However, `aptos_state_sync_version{type="synced"}` will only increase once
-the node has bootstrapped. This may take several hours depending on the
-amount of data, network bandwidth and node resources available.
-
-**Note:** If `aptos_state_sync_version{type="synced_states"}` does not
-increase then do the following:
-1. Double-check the node configuration file has correctly been updated.
-2. Make sure that the node is starting up with an empty storage database
-(i.e., that it has not synced any state previously).
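If you are checking this from a script rather than a dashboard, the metric can be scraped from the node's inspection service (the port is printed at startup) and parsed. A minimal sketch, assuming the standard Prometheus text exposition format:

```python
import re

def synced_version(metrics_text: str, metric_type: str = "synced_states"):
    """Extract aptos_state_sync_version{type="..."} from a Prometheus metrics scrape."""
    needle = f'aptos_state_sync_version{{type="{metric_type}"}}'
    match = re.search(re.escape(needle) + r"\s+([0-9]+)", metrics_text)
    return int(match.group(1)) if match else None

sample = 'aptos_state_sync_version{type="synced_states"} 123456\n'
version = synced_version(sample)  # poll repeatedly and confirm this increases
```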
-
-## Running archival nodes
-
-To operate an archival node, which is a fullnode that contains all blockchain data
-since the start of the blockchain's history (that is, genesis), you should:
-1. Run a fullnode and configure it to either: (i) execute all transactions; (ii) apply all transaction outputs; or (iii)
-use intelligent syncing (see above). Do not select fast syncing, as the fullnode will not contain all data since genesis.
-2. Disable the ledger pruner, as described in the [Data Pruning document](data-pruning.md#disabling-the-ledger-pruner).
-This will ensure that no data is pruned and the fullnode contains all blockchain data.
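Putting both steps together, an archival fullnode configuration might look like this sketch (the `storage` keys follow the data pruning document; double-check them against your node version):

```yaml
state_sync:
  state_sync_driver:
    bootstrapping_mode: ExecuteOrApplyFromGenesis
    continuous_syncing_mode: ExecuteTransactionsOrApplyOutputs
storage:
  storage_pruner_config:
    ledger_pruner_config:
      enable: false
```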
-
-:::caution Proceed with caution
-Running and maintaining archival nodes is likely to be expensive and slow
-as the amount of data being stored on the fullnode will continuously grow.
-:::
-
-
-## Security implications and data integrity
-Each of the different syncing modes performs data integrity verifications to
-ensure that the data being synced to the node has been correctly produced
-and signed by the validators. This occurs slightly differently for
-each syncing mode:
-1. Executing transactions from genesis is the most secure syncing mode. It will
-verify that all transactions since the beginning of time were correctly agreed
-upon by consensus and that all transactions were correctly executed by the
-validators. All resulting blockchain state will thus be re-verified by the
-syncing node.
-2. Applying transaction outputs from genesis is faster than executing all
-transactions, but it requires that the syncing node trusts the validators to
-have executed the transactions correctly. However, all other
-blockchain state is still manually re-verified, e.g., consensus messages,
-the transaction history and the state hashes are still verified.
-3. Fast syncing skips the transaction history and downloads the latest
-blockchain state before continuously syncing. To do this, it requires that the
-syncing node trust the validators to have correctly agreed upon all
-transactions in the transaction history as well as trust that all transactions
-were correctly executed by the validators. However, all other blockchain state
-is still manually re-verified, e.g., epoch changes and the resulting blockchain states.
-
-All of the syncing modes get their root of trust from the validator set
-and cryptographic signatures from those validators over the blockchain data.
-For more information about how this works, see the [state synchronization blogpost](https://medium.com/aptoslabs/the-evolution-of-state-sync-the-path-to-100k-transactions-per-second-with-sub-second-latency-at-52e25a2c6f10).
diff --git a/developer-docs-site/docs/guides/system-integrators-guide.md b/developer-docs-site/docs/guides/system-integrators-guide.md
deleted file mode 100644
index d243cf7883dc3..0000000000000
--- a/developer-docs-site/docs/guides/system-integrators-guide.md
+++ /dev/null
@@ -1,542 +0,0 @@
----
-title: "Integrate with Aptos"
-slug: "system-integrators-guide"
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-# Integrate with the Aptos Blockchain
-
-If you provide blockchain services to your customers and wish to add the Aptos blockchain to your platform, then this guide is for you. This system integrator's guide will walk you through everything you need to integrate the Aptos blockchain into your platform.
-
-## Overview
-
-This document will guide you through the following tasks to integrate with Aptos:
-1. Prepare an environment for testing.
-1. Create an account on the blockchain.
-1. Exchange account identifiers with another entity on the blockchain, for example, to perform swaps.
-1. Create a transaction.
-1. Obtain a gas estimate and validate the transaction for correctness.
-1. Submit the transaction to the blockchain.
-1. Wait for the outcome of the transaction.
-1. Query historical transactions and interactions for a given account with a specific account, i.e., withdrawals and deposits.
-
-## Getting Started
-
-In order to get started you'll need to select a network and pick your set of tools. There are also a handful of SDKs to help accelerate development.
-
-### Choose a network
-
-There are four well-supported networks for integrating with the Aptos blockchain:
-
-1. [Local testnet](http://127.0.0.1:8080) -- our standalone tool for local development against a known version of the codebase with no external network.
-1. [Devnet](https://fullnode.devnet.aptoslabs.com/v1/spec#/) -- a shared resource for the community; data resets weekly, with weekly updates from the aptos-core main branch.
-1. [Testnet](https://fullnode.testnet.aptoslabs.com/v1/spec#/) -- a shared resource for the community, data will be preserved, network configuration will mimic Mainnet.
-1. [Mainnet](https://fullnode.mainnet.aptoslabs.com/v1/spec#/) -- a production network with real assets.
-
-See [Aptos Blockchain Networks](../nodes/networks.md) for full details on each environment.
-
-### Run a local testnet
-
-There are two options for running a local testnet:
-* [Install the Aptos CLI](../tools/aptos-cli/install-cli/index.md) and run a [local development network](./local-development-network.md). This path is useful for developing on the Aptos blockchain, debugging Move contracts, and testing node operations. Using the CLI you will have a fully featured local development environment including a single node network, the node API, indexer API, and a faucet.
-* Directly [run a local testnet](../nodes/local-testnet/run-a-local-testnet.md) using either the [Aptos-core source code](../nodes/local-testnet/run-a-local-testnet.md#using-the-aptos-core-source-code) or a [Docker image](../nodes/local-testnet/run-a-local-testnet.md#using-docker). These paths are useful for testing changes to the Aptos-core codebase or framework, or for building services on top of the Aptos blockchain, respectively.
-
-Either of these methods will expose a [REST API service](../apis/fullnode-rest-api.md) at `http://127.0.0.1:8080` and a Faucet API service at `http://127.0.0.1:8081` (if you use the Aptos CLI) or `http://127.0.0.1:8000` (if you run a local testnet directly). The applications will output the location of the services.
-
-### Production network access
-
-
-
-
-
-
-
-### SDKs and tools
-
-Aptos currently provides three SDKs:
-1. [Typescript](../sdks/new-ts-sdk/index.md)
-2. [Python](../sdks/python-sdk.md)
-3. [Rust](../sdks/rust-sdk.md)
-
-Almost all developers will benefit from exploring the CLI. [Using the CLI](../tools/aptos-cli/use-cli/use-aptos-cli.md) demonstrates how to use the CLI for tasks such as creating accounts, transferring coins, and publishing modules.
-
-## Accounts on Aptos
-
-An [account](../concepts/accounts.md) represents an entity on the Aptos blockchain that can send transactions. Each account is identified by a particular 32-byte account address and is a container for [Move modules and resources](../concepts/resources.md). On Aptos, accounts must be created on-chain prior to any blockchain operations involving that account. The Aptos framework supports implicitly creating accounts when transferring Aptos coin via [`aptos_account::transfer`](https://github.com/aptos-labs/aptos-core/blob/88c9aab3982c246f8aa75eb2caf8c8ab1dcab491/aptos-move/framework/aptos-framework/sources/aptos_account.move#L18) or explicitly via [`aptos_account::create_account`](https://github.com/aptos-labs/aptos-core/blob/88c9aab3982c246f8aa75eb2caf8c8ab1dcab491/aptos-move/framework/aptos-framework/sources/aptos_account.move#L13).
-
-At creation, an [Aptos account](https://github.com/aptos-labs/aptos-core/blob/88c9aab3982c246f8aa75eb2caf8c8ab1dcab491/aptos-move/framework/aptos-framework/sources/account.move#L23) contains:
-* A [resource containing Aptos Coin](https://github.com/aptos-labs/aptos-core/blob/60751b5ed44984178c7163933da3d1b18ad80388/aptos-move/framework/aptos-framework/sources/coin.move#L50) and functions for depositing and withdrawing coins from that resource.
-* An authentication key associated with the account's current public and private key(s).
-* A strictly increasing [sequence number](../concepts/accounts.md#account-sequence-number) that represents the account's next transaction's sequence number to prevent replay attacks.
-* A strictly increasing number that represents the next distinct GUID creation number.
-* An [event handle](../concepts/events.md) for all new types of coins added to the account.
-* An event handle for all key rotations for the account.
-
-Read more about [Accounts](../concepts/accounts.md) and [set one up](../tools/aptos-cli/use-cli/use-aptos-cli.md#initialize-local-configuration-and-create-an-account).
-
-## Transactions
-
-Aptos [transactions](../concepts/txns-states.md) are encoded in [Binary Canonical Serialization (BCS)](https://github.com/diem/bcs). Transactions contain information such as the sender’s account address, authentication from the sender, the desired operation to be performed on the Aptos blockchain, and the amount of gas the sender is willing to pay to execute the transaction.
-
-Read more in [Transactions and States](../concepts/txns-states.md).
-
-### Generating transactions
-
-Aptos supports two methods for constructing transactions:
-
-- Using the Aptos client libraries to generate native BCS transactions.
-- Constructing JSON-encoded objects and interacting with the REST API to generate native transactions.
-
-The preferred approach is to directly generate native BCS transactions. Generating them via the REST API enables rapid development at the cost of trusting the fullnode to generate the transaction correctly.
-
-#### BCS-encoded transactions
-
-BCS-encoded transactions can be submitted to the `/transactions` endpoint but must specify `Content-Type: application/x.aptos.signed_transaction+bcs` in the HTTP headers. This will return a transaction submission result that, if successful, contains a transaction hash in the `hash` [field](https://github.com/aptos-labs/aptos-core/blob/9b85d41ed8ef4a61a9cd64f9de511654fcc02024/ecosystem/python/sdk/aptos_sdk/client.py#L138).
-
-#### JSON-encoded transactions
-
-JSON-encoded transactions can be generated via the [REST API](https://fullnode.devnet.aptoslabs.com/v1/spec#/), following these steps:
-
-1. First construct an appropriate JSON payload for the `/transactions/encode_submission` endpoint as demonstrated in the [Python SDK](https://github.com/aptos-labs/aptos-core/blob/b0fe7ea6687e9c180ebdbac8d8eb984d11d7e4d4/ecosystem/python/sdk/aptos_sdk/client.py#L128).
-1. The output of the above is an object containing a `message` that must be signed locally with the sender's private key.
-1. Extend the original JSON payload with the signature information and post it to the `/transactions` [endpoint](https://github.com/aptos-labs/aptos-core/blob/b0fe7ea6687e9c180ebdbac8d8eb984d11d7e4d4/ecosystem/python/sdk/aptos_sdk/client.py#L142). This will return a transaction submission result that, if successful, contains a transaction hash in the `hash` [field](https://github.com/aptos-labs/aptos-core/blob/b0fe7ea6687e9c180ebdbac8d8eb984d11d7e4d4/ecosystem/python/sdk/aptos_sdk/client.py#L145).
-
-JSON-encoded transactions allow for rapid development and support seamless ABI conversions of transaction arguments to native types. However, most system integrators prefer to generate transactions within their own tech stack. Both the [TypeScript SDK](https://github.com/aptos-labs/aptos-core/blob/9b85d41ed8ef4a61a9cd64f9de511654fcc02024/ecosystem/typescript/sdk/src/aptos_client.ts#L259) and [Python SDK](https://github.com/aptos-labs/aptos-core/blob/b0fe7ea6687e9c180ebdbac8d8eb984d11d7e4d4/ecosystem/python/sdk/aptos_sdk/client.py#L100) support generating BCS transactions.
-
-### Types of transactions
-
-Within a given transaction, the target of execution can be one of two types:
-
-- An entry point (formerly known as script function)
-- A script (payload)
-
-Both [Python](https://github.com/aptos-labs/aptos-core/blob/3973311dac6bb9348bfc81cf983c2a1be11f1b48/ecosystem/python/sdk/aptos_sdk/client.py#L256) and [TypeScript](https://github.com/aptos-labs/aptos-core/blob/3973311dac6bb9348bfc81cf983c2a1be11f1b48/ecosystem/typescript/sdk/src/aptos_client.test.ts#L93) support the generation of transactions that target entry points. This guide points out many of those entry points, such as `aptos_account::transfer` and `aptos_account::create_account`.
-
-Most basic operations on the Aptos blockchain should be available via entry point calls. While one could submit multiple transactions calling entry points in series, such operations benefit from being called atomically from a single transaction. A script payload transaction can call any public (entry) function defined within any module. Here's an example [Move script](https://github.com/aptos-labs/aptos-core/tree/main/aptos-move/move-examples/scripts/two_by_two_transfer) that uses a MultiAgent transaction to extract funds from two accounts and deposit them into two other accounts. This is a [Python example](https://github.com/aptos-labs/aptos-core/blob/main/ecosystem/python/sdk/examples/transfer_two_by_two.py) that uses the bytecode generated by compiling that script. Currently there is limited support for script payloads in TypeScript.
-
-### Status of a transaction
-
-Obtain transaction status by querying the API [`/transactions/by_hash/{hash}`](https://fullnode.devnet.aptoslabs.com/v1/spec#/operations/get_transaction_by_hash) with the hash returned during the submission of the transaction.
-
-A reasonable strategy for submitting transactions is to limit their lifetime to 30 to 60 seconds and to poll that API at regular intervals until the transaction succeeds or several seconds past its expiration time have elapsed. If there is no commitment on-chain by then, the transaction was likely discarded.
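-
-This polling strategy can be sketched in Python. The `get_transaction_by_hash` callable and all names here are illustrative stand-ins for a wrapper around the REST call above, not part of any SDK:
-
-```python
-import time
-
-def wait_for_transaction(get_transaction_by_hash, txn_hash,
-                         expiration_secs=60, poll_interval=1.0, grace_secs=5):
-    """Poll until the transaction commits, or give up shortly after expiry.
-
-    `get_transaction_by_hash` wraps GET /transactions/by_hash/{hash} and
-    returns the decoded JSON, or None while the transaction is still pending.
-    """
-    deadline = time.time() + expiration_secs + grace_secs
-    while time.time() < deadline:
-        txn = get_transaction_by_hash(txn_hash)
-        if txn is not None and txn.get("type") == "user_transaction":
-            return txn  # committed; inspect txn["success"] and txn["vm_status"]
-        time.sleep(poll_interval)
-    return None  # no on-chain commitment observed; likely discarded
-```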
-
-### Testing transactions or transaction pre-execution
-
-To facilitate evaluation of transactions as well as gas estimation, Aptos supports a simulation API. Simulated transactions do not require valid signatures and should not contain them.
-
-The simulation API is a synchronous API that executes a transaction and returns the output inclusive of gas usage. The simulation API can be accessed by submitting a transaction to [`/transactions/simulate`](https://fullnode.devnet.aptoslabs.com/v1/spec#/operations/simulate_transaction).
-
-Both the [TypeScript SDK](https://github.com/aptos-labs/aptos-ts-sdk/blob/main/src/api/transactionSubmission/simulate.ts) and [Python SDK](https://github.com/aptos-labs/aptos-core/blob/main/ecosystem/python/sdk/examples/simulate_transfer_coin.py) support the simulation API. Note that the output and gas used may change based upon the state of the account. For gas estimations, we recommend that the maximum gas amount be larger than the amount quoted by this API.
-
-## Viewing current and historical state
-
-Most integrations into the Aptos blockchain benefit from a holistic and comprehensive overview of the current and historical state of the blockchain. Aptos provides historical transactions, state, and events, all the result of transaction execution.
-
-* Historical transactions specify the execution status, output, and tie to related events. Each transaction has a unique version number associated with it that dictates its global sequential ordering in the history of the blockchain ledger.
-* The state is the representation of all transaction outputs up to a specific version. In other words, a state version is the accumulation of all transactions inclusive of that transaction version.
-* As transactions execute, they may emit events. [Events](../concepts/events.md) are hints about changes in on-chain data.
-
-The storage service on a node employs two forms of pruning that erase data from nodes:
-
-* state versions
-* events, transactions, and everything else
-
-While either of these may be disabled, retaining all state versions is not particularly sustainable.
-
-Events and transactions pruning can be disabled by setting [`enable_ledger_pruner`](https://github.com/aptos-labs/aptos-core/blob/cf0bc2e4031a843cdc0c04e70b3f7cd92666afcf/config/src/config/storage_config.rs#L141) to `false`. This is the default behavior on mainnet. In the near future, Aptos will provide indexers that mitigate the need to directly query from a node.
-
-The REST API offers querying transactions and events in these ways:
-
-* [Transactions for an account](https://fullnode.devnet.aptoslabs.com/v1/spec#/operations/get_account_transactions)
-* [Transaction by version](https://fullnode.devnet.aptoslabs.com/v1/spec#/operations/get_transaction_by_version)
-* [Events by event handle](https://fullnode.devnet.aptoslabs.com/v1/spec#/operations/get_events_by_event_handle)
-
-## Exchanging and tracking coins
-
-Aptos has a standard [Coin type](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/coin.move). Different types of coins can be represented in this type through the use of distinct structs that represent the type parameter or generic for `Coin`.
-
-Coins are stored within an account under the resource `CoinStore<T>`. At account creation, each user has the resource `CoinStore<0x1::aptos_coin::AptosCoin>`, or `CoinStore<AptosCoin>` for short. Within this resource is the Aptos coin: `Coin<AptosCoin>`.
-
-### Transferring coins between users
-
-Coins, including APT, can be transferred between users via the [`aptos_account::transfer_coins`](https://github.com/aptos-labs/aptos-core/blob/d1610e1bb5214689a37a9cab59cf9254e8eb2be1/aptos-move/framework/aptos-framework/sources/aptos_account.move#L92) function for all coins and [`aptos_account::transfer`](https://github.com/aptos-labs/aptos-core/blob/88c9aab3982c246f8aa75eb2caf8c8ab1dcab491/aptos-move/framework/aptos-framework/sources/aptos_account.move#L18) for Aptos coins.
-
-:::caution
-It is important to note that if an account has not registered a `CoinStore` for a given `T`, then any transfer of type `T` to that account will fail.
-:::
-
-### Current balance for a coin
-
-The current balance for a `Coin<T>` where `T` is the Aptos coin is available at the account resources URL: `https://{rest_api_server}/accounts/{address}/resource/0x1::coin::CoinStore<0x1::aptos_coin::AptosCoin>`. The balance is stored within `coin.value`. The resource also contains the total number of deposit and withdraw events in the `counter` fields of `deposit_events` and `withdraw_events`, respectively.
-
-```
-{
- "type": "0x1::coin::CoinStore<0x1::aptos_coin::AptosCoin>",
- "data": {
- "coin": {
- "value": "3927"
- },
- "deposit_events": {
- "counter": "1",
- "guid": {
- "id": {
- "addr": "0xcb2f940705c44ba110cd3b4f6540c96f2634938bd5f2aabd6946abf12ed88457",
- "creation_num": "2"
- }
- }
- },
- "withdraw_events": {
- "counter": "1",
- "guid": {
- "id": {
- "addr": "0xcb2f940705c44ba110cd3b4f6540c96f2634938bd5f2aabd6946abf12ed88457",
- "creation_num": "3"
- }
- }
- }
- }
-}
-```
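-
-For illustration, a minimal Python helper (the function name is our own) can pull the balance and event counters out of a decoded resource response like the one above:
-
-```python
-def parse_coin_store(resource):
-    """Summarize a decoded 0x1::coin::CoinStore resource."""
-    data = resource["data"]
-    return {
-        "balance": int(data["coin"]["value"]),
-        "deposit_count": int(data["deposit_events"]["counter"]),
-        "withdraw_count": int(data["withdraw_events"]["counter"]),
-    }
-```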
-
-### Querying transactions
-
-In Aptos, each transaction is committed as a distinct version to the blockchain. This allows for the convenience of sharing committed transactions by their version number; to do so, query: `https://{rest_server_api}/transactions/by_version/{version}`
-
-Transactions submitted by an account can also be queried via the following URL where the `sequence_number` matches the sequence number of the transaction: `https://{rest_server_api}/account/{address}/transactions?start={sequence_number}&limit=1`
-
-A transfer transaction would appear as follows:
-
-```
-{
- "version": "13629679",
- "gas_used": "4",
- "success": true,
- "vm_status": "Executed successfully",
- "changes": [
- {
- "address": "0xb258b91eee04111039320a85b0c24a2dd433909e14a6b5c32ee722e0fdecfddc",
- "data": {
- "type": "0x1::coin::CoinStore<0x1::aptos_coin::AptosCoin>",
- "data": {
- "coin": {
- "value": "1000"
- },
- "deposit_events": {
- "counter": "1",
- "guid": {
- "id": {
- "addr": "0x5098df8e7969b58ab3bd2d440c6203f64c60a1fd5c08b9d4abe6ae4216246c3e",
- "creation_num": "2"
- }
- }
- },
- ...
- }
- },
- "type": "write_resource"
- },
- ...
- ],
- "sender": "0x810026ca8291dd88b5b30a1d3ca2edd683d33d06c4a7f7c451d96f6d47bc5e8b",
- "sequence_number": "0",
- "max_gas_amount": "2000",
- "gas_unit_price": "1",
- "expiration_timestamp_secs": "1660616127",
- "payload": {
- "function": "0x1::aptos_account::transfer",
- "arguments": [
- "0x5098df8e7969b58ab3bd2d440c6203f64c60a1fd5c08b9d4abe6ae4216246c3e",
- "1000"
- ],
- "type": "entry_function_payload"
- },
- "events": [
- {
- "key": "0x0300000000000000810026ca8291dd88b5b30a1d3ca2edd683d33d06c4a7f7c451d96f6d47bc5e8b",
- "guid": {
- "id": {
- "addr": "0x810026ca8291dd88b5b30a1d3ca2edd683d33d06c4a7f7c451d96f6d47bc5e8b",
- "creation_num": "3"
- }
- },
- "sequence_number": "0",
- "type": "0x1::coin::WithdrawEvent",
- "data": {
- "amount": "1000"
- }
- },
- {
- "key": "0x02000000000000005098df8e7969b58ab3bd2d440c6203f64c60a1fd5c08b9d4abe6ae4216246c3e",
- "guid": {
- "id": {
- "addr": "0x5098df8e7969b58ab3bd2d440c6203f64c60a1fd5c08b9d4abe6ae4216246c3e",
- "creation_num": "2"
- }
- },
- "sequence_number": "0",
- "type": "0x1::coin::DepositEvent",
- "data": {
- "amount": "1000"
- }
- }
- ],
- "timestamp": "1660615531147935",
- "type": "user_transaction"
-}
-```
-
-Here is a breakdown of the information in a transaction:
-* `version` indicates the globally unique identifier for this transaction, its ordered position in all the committed transactions on the blockchain
-* `sender` is the account address of the entity that submitted the transaction
-* `gas_used` is the units paid for executing the transaction
-* `success` and `vm_status` indicate whether or not the transaction successfully executed and any reasons why it might not have
-* `changes` include the final values for any state resources that have been modified during the execution of the transaction
-* `events` contain all the events emitted during the transaction execution
-* `timestamp` is the near real-time timestamp of the transaction's execution
-
-If `success` is false, `vm_status` will contain an error code or message explaining why the transaction failed. In that case, `changes` will be limited to gas deducted from the account and the sequence number incrementing, and there will be no `events`.
-
-Each event in `events` is differentiated by a `key`. The `key` is derived from the `guid` in `changes`. Specifically, the `key` is a 40-byte hex string where the first eight bytes (or 16 characters) are the little endian representation of the `creation_num` in the `guid` of the `changes` event, and the remaining characters are the account address.
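-
-This derivation can be sketched in Python (the helper name is our own):
-
-```python
-def event_key(creation_num, addr):
-    """Derive an event `key`: 8 little-endian bytes of creation_num, then the address."""
-    prefix = int(creation_num).to_bytes(8, "little").hex()
-    stripped = addr[2:] if addr.startswith("0x") else addr
-    return "0x" + prefix + stripped
-```
-
-For example, `event_key(2, addr)` with the deposit address above reproduces the `key` of the `DepositEvent` in the example transaction.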
-
-As events do not identify what emitted them, it is imperative to track the path in `changes` to determine the source of an event. In particular, each `CoinStore<T>` has both a `WithdrawEvent` and a `DepositEvent` for its coin type. To determine which coin type is used in a transaction, an indexer can compare the `guid::creation_num` in a `changes` event, combined with the address, to the `key` for events in `events`.
-
-Using the above example, `events[1].guid` is equivalent to `changes[0].data.data.deposit_events.guid`, which is `{"addr": "0x5098df8e7969b58ab3bd2d440c6203f64c60a1fd5c08b9d4abe6ae4216246c3e", "creation_num": "2"}`.
-
-:::tip
-The `key` field will be removed in favor of `guid`.
-:::
-
-### Querying events
-
-Aptos provides clear and canonical events for all withdrawals and deposits of coins. These can be used in coordination with the associated transactions to present to a user the change of their account balance over time, when that happened, and what caused it. With some additional parsing, metadata such as the transaction type and the other parties involved can also be shared.
-
-Query events by handle URL: `https://{rest_api_server}/accounts/{address}/events/0x1::coin::CoinStore<0x1::aptos_coin::AptosCoin>/withdraw_events`
-
-```
-[
- {
- "version": "13629679",
- "key": "0x0300000000000000cb2f940705c44ba110cd3b4f6540c96f2634938bd5f2aabd6946abf12ed88457",
- "guid": {
- "id": {
- "addr": "0x810026ca8291dd88b5b30a1d3ca2edd683d33d06c4a7f7c451d96f6d47bc5e8b",
- "creation_num": "3"
- }
- },
- "sequence_number": "0",
- "type": "0x1::coin::WithdrawEvent",
- "data": {
- "amount": "1000"
- }
- }
-]
-```
-
-Gather more information from the transaction that generated the event by querying `https://{rest_server_api}/transactions/by_version/{version}` where `{version}` is the same value as the `{version}` in the event query.
-
-:::tip
-
-When tracking full movement of coins, events are normally sufficient. `0x1::aptos_coin::AptosCoin`, however, requires also considering `gas_used` for each transaction sent from the given account, since it is the coin in which gas is paid. To reduce unnecessary overhead, gas fee deductions do not emit an event. All transactions for an account can be retrieved from this API: `https://{rest_server_api}/accounts/{address}/transactions`
-
-:::
-
-### Tracking coin balance changes
-
-Consider the transaction from the earlier section, but now with an arbitrary coin `0x1337::my_coin::MyCoin` and some gas parameters changed:
-```
-{
- "version": "13629679",
- "gas_used": "20",
- "success": true,
- "vm_status": "Executed successfully",
- "changes": [
- {
- "address": "0xb258b91eee04111039320a85b0c24a2dd433909e14a6b5c32ee722e0fdecfddc",
- "data": {
- "type": "0x1::coin::CoinStore<0x1337::my_coin::MyCoin>",
- "data": {
- "coin": {
- "value": "1000"
- },
- "deposit_events": {
- "counter": "1",
- "guid": {
- "id": {
- "addr": "0x5098df8e7969b58ab3bd2d440c6203f64c60a1fd5c08b9d4abe6ae4216246c3e",
- "creation_num": "2"
- }
- }
- },
- ...
- }
- },
- "type": "write_resource"
- },
- ...
- ],
- "sender": "0x810026ca8291dd88b5b30a1d3ca2edd683d33d06c4a7f7c451d96f6d47bc5e8b",
- "sequence_number": "0",
- "max_gas_amount": "2000",
- "gas_unit_price": "110",
- "expiration_timestamp_secs": "1660616127",
- "payload": {
- "function": "0x1::aptos_account::transfer_coins",
- "type_arguments": [
- "0x1337::my_coin::MyCoin"
- ],
- "arguments": [
- "0x5098df8e7969b58ab3bd2d440c6203f64c60a1fd5c08b9d4abe6ae4216246c3e",
- "1000"
- ],
- "type": "entry_function_payload"
- },
- "events": [
- {
- "key": "0x0300000000000000810026ca8291dd88b5b30a1d3ca2edd683d33d06c4a7f7c451d96f6d47bc5e8b",
- "guid": {
- "id": {
- "addr": "0x810026ca8291dd88b5b30a1d3ca2edd683d33d06c4a7f7c451d96f6d47bc5e8b",
- "creation_num": "3"
- }
- },
- "sequence_number": "0",
- "type": "0x1::coin::WithdrawEvent",
- "data": {
- "amount": "1000"
- }
- },
- {
- "key": "0x02000000000000005098df8e7969b58ab3bd2d440c6203f64c60a1fd5c08b9d4abe6ae4216246c3e",
- "guid": {
- "id": {
- "addr": "0x5098df8e7969b58ab3bd2d440c6203f64c60a1fd5c08b9d4abe6ae4216246c3e",
- "creation_num": "2"
- }
- },
- "sequence_number": "0",
- "type": "0x1::coin::DepositEvent",
- "data": {
- "amount": "1000"
- }
- }
- ],
- "timestamp": "1660615531147935",
- "type": "user_transaction"
-}
-```
-
-There are three balance changes in this transaction:
-1. A withdrawal of `1000` of `0x1337::my_coin::MyCoin` from the transaction sending account `0x810026ca8291dd88b5b30a1d3ca2edd683d33d06c4a7f7c451d96f6d47bc5e8b`
-2. A deposit of `1000` of `0x1337::my_coin::MyCoin` to receiving account `0x5098df8e7969b58ab3bd2d440c6203f64c60a1fd5c08b9d4abe6ae4216246c3e`
-3. A gas fee of `2200` in `0x1::aptos_coin::AptosCoin` deducted from the sending account `0x810026ca8291dd88b5b30a1d3ca2edd683d33d06c4a7f7c451d96f6d47bc5e8b`
-
-To retrieve the withdrawal information:
-1. Scan the `changes` for `0x1::coin::CoinStore`. Note the `CoinType` is a generic signifying which coin is stored in the store. In this example, the `CoinType` is `0x1337::my_coin::MyCoin`.
-2. Retrieve the `guid` for `withdraw_events`. In this example, the `guid` contains `addr` `0x810026ca8291dd88b5b30a1d3ca2edd683d33d06c4a7f7c451d96f6d47bc5e8b` and `creation_num` `3`.
-3. Scan for events with this `guid` and extract the event associated with it. In this example, it is the `0x1::coin::WithdrawEvent`.
-4. Note the `amount` field will be the number of `CoinType` removed from the account in the `guid`. In this example, it is `1000`.
-
-To retrieve the deposit information, it's the same as withdrawal except:
-1. The `guid` used is under `deposit_events`
-2. The `amount` will be a positive increase on the account's balance.
-3. The event's name will be: `0x1::coin::DepositEvent`
-
-To retrieve the gas fee:
-1. The `gas_used` field must be multiplied by the `gas_unit_price`. In this example, `gas_used=20` and `gas_unit_price=110`, so the total gas coins withdrawn is `2200`.
-2. Gas is always: `0x1::aptos_coin::AptosCoin`
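-
-As a short sketch in Python (the helper name is our own), using the string fields as returned by the REST API:
-
-```python
-def gas_fee_octas(txn):
-    """Gas fee in OCTA; gas is always charged in 0x1::aptos_coin::AptosCoin."""
-    return int(txn["gas_used"]) * int(txn["gas_unit_price"])
-```
-
-For the transaction above, `gas_fee_octas` returns `2200`.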
-
-To retrieve information about the number of decimals of the coin:
-1. You can retrieve the number of decimals for a coin via its `0x1::coin::CoinInfo` resource.
-2. This will be located at the address of the coin type. In this example, you would need to look up `0x1::coin::CoinInfo<0x1337::my_coin::MyCoin>` at address `0x1337`.
-
-:::tip
-If you always use the events in this manner, you won't miss any balance changes for an account.
-By monitoring the events, you will find all balance changes in the `0x1::coin::CoinStore`:
-1. Coin mints
-2. Coin burns
-3. Coin transfers
-4. Staking coins
-5. Withdrawing staked coins
-6. Transfers not derived from `coin::transfer`
-
-:::
-
-To create some sample data to explore, conduct ["Your first transaction"](../tutorials/first-transaction.md).
-
-To learn more about coin creation, make ["Your First Coin"](../tutorials/first-coin.md).
-
-## Integrating with the faucet
-
-This tutorial is for SDK and wallet developers who want to integrate with the [Aptos Faucet](https://github.com/aptos-labs/aptos-core/tree/main/crates/aptos-faucet). If you are a dapp developer, you should access the faucet through an existing [SDK](../tutorials/first-transaction.md) or [CLI](../tools/aptos-cli/use-cli/use-aptos-cli.md#initialize-local-configuration-and-create-an-account) instead.
-
-### Differences between devnet and testnet
-What are the differences between devnet and testnet? Effectively none. In the past, the testnet faucet had a Captcha in front of it, making it unqueryable by normal means. This is no longer true.
-
-The endpoints for each faucet are:
-- Devnet: https://faucet.devnet.aptoslabs.com
-- Testnet: https://faucet.testnet.aptoslabs.com
-
-### Calling the faucet: JavaScript / TypeScript
-If you are building a client in JavaScript or TypeScript, you should make use of the [@aptos-labs/aptos-faucet-client](https://www.npmjs.com/package/@aptos-labs/aptos-faucet-client) package. This client is generated based on the OpenAPI spec published by the faucet service.
-
-Example use:
-```typescript
-import {
- AptosFaucetClient,
- FundRequest,
-} from "@aptos-labs/aptos-faucet-client";
-
-async function callFaucet(amount: number, address: string): Promise<Array<string>> {
- const faucetClient = new AptosFaucetClient({BASE: "https://faucet.devnet.aptoslabs.com"});
- const request: FundRequest = {
- amount,
- address,
- };
- const response = await faucetClient.fund({ requestBody: request });
- return response.txn_hashes;
-}
-```
-
-### Calling the faucet: Other languages
-If you are trying to call the faucet in other languages, you have two options:
-1. Generate a client from the [OpenAPI spec](https://github.com/aptos-labs/aptos-core/blob/main/crates/aptos-faucet/doc/spec.yaml).
-2. Call the faucet on your own.
-
-For the latter, you will want to build a query similar to this:
-```
-curl -X POST 'https://faucet.devnet.aptoslabs.com/mint?amount=10000&address=0xd0f523c9e73e6f3d68c16ae883a9febc616e484c4998a72d8899a1009e5a89d6'
-```
-
-This mints 10000 OCTA (the smallest unit of APT) to address `0xd0f523c9e73e6f3d68c16ae883a9febc616e484c4998a72d8899a1009e5a89d6`.
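-
-For example, a minimal Python equivalent of the above `curl` call, using only the standard library (function names are our own):
-
-```python
-import json
-import urllib.request
-
-FAUCET_BASE = "https://faucet.devnet.aptoslabs.com"
-
-def mint_url(address, amount, base=FAUCET_BASE):
-    """Build the faucet /mint URL; `amount` is in OCTA."""
-    return f"{base}/mint?amount={amount}&address={address}"
-
-def call_faucet(address, amount):
-    """POST to the faucet and return its decoded JSON response."""
-    request = urllib.request.Request(mint_url(address, amount), method="POST")
-    with urllib.request.urlopen(request) as response:
-        return json.load(response)
-```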
diff --git a/developer-docs-site/docs/guides/transaction-management.md b/developer-docs-site/docs/guides/transaction-management.md
deleted file mode 100644
index b3d1facf1b9bd..0000000000000
--- a/developer-docs-site/docs/guides/transaction-management.md
+++ /dev/null
@@ -1,107 +0,0 @@
-# Transaction Management
-
-This guide explains how to build a transaction management harness that can scale on the Aptos blockchain.
-
-## Background
-
-In Aptos, each transaction is tied to the account that signs or authorizes it and carries an account-based sequence number. When the Aptos network receives a new transaction, it enforces several rules:
-
-- The transaction sent from an account must be authorized correctly by that account.
-- The current time as defined by the most recent ledger update must be before the expiration timestamp of the transaction.
-- The transaction's sequence number must be equal to or greater than the sequence number on-chain for that account.
-
-Once the initial node has accepted a transaction, the transaction makes its way through the system subject to an additional rule: if a transaction's sequence number is higher than the account's current on-chain sequence number, it can only progress toward consensus if every node in the path has seen every transaction with a sequence number between the on-chain state and the transaction's sequence number.
-
-Example:
-
-Alice owns an account whose current on-chain sequence number is 5.
-
-Alice submits a transaction to node Bob with sequence number 6.
-
-Bob the node accepts the transaction but does not forward it, because Bob has not seen 5.
-
-In order to make progress, either Alice must send Bob transaction number 5, or Bob must be notified by consensus that 5 was committed. In the latter case, Alice must have submitted transaction 5 through another node.
-
-Beyond this there are two remaining principles:
-
-- A single account can have at most 100 uncommitted transactions submitted to the blockchain. Any more than that and the transactions will be rejected. This can happen silently if Alice submits the first 100 to Bob the node and the next 100 to Carol the node. If both those nodes share a common upstream, then that upstream will accept Alice's 100 sent via Bob but silently reject Alice's 100 sent via Carol.
-- Submitting distinct transactions to multiple nodes will result in slow resolution, as transactions will not make progress from the node they were submitted to until that node knows that all preceding transactions have been committed. For example, this happens if Alice sends the first 50 via Bob and the next 50 via Carol.
-
-## Building a Transaction Manager
-
-Now that we understand the nuances of transactions, let's dig into building a robust transaction manager. This consists of the following core components:
-
-- A sequence number generator that allocates and manages available sequence numbers for a single account.
-- A transaction manager that receives payloads from an application or a user, sequence numbers from the sequence number generator, and has access to the account key to combine the three pieces together into a viable signed transaction. It then also takes the responsibility for pushing the transaction to the blockchain.
-- An on-chain worker-leader harness that lets multiple accounts share the signer of a single shared account.
-
-Currently this framework assumes that the network builds no substantial queue, that is, a submitted transaction executes and commits with little to no delay. In order to address high demand, this work needs to be extended with the following components:
-
-- Optimizing `base_gas_unit` price to ensure priority transactions can be committed to the blockchain.
-- Further handling of transaction processing rates to ensure that the expiration timer is properly set.
-- Handling of transaction failures to either be ignored or resubmitted based upon desired outcome.
-
-Note, an account should be managed by a single instance of the transaction manager. Otherwise, each instance of the transaction manager will likely have stale in-memory state, resulting in overlapping sequence numbers.
-
-### Implementations
-
-- Python
- - [Sequence number manager](https://github.com/aptos-labs/aptos-core/pull/7987)
- - [Transaction manager](https://github.com/aptos-labs/aptos-core/pull/7987)
-- [Worker-leader smart contract](https://github.com/aptos-labs/aptos-core/pull/7986)
-
-### Managing Sequence Numbers
-
-Each transaction requires a distinct sequence number that is sequential to previously submitted transactions. This can be provided by the following process:
-
-1. At startup, query the blockchain for the account’s current sequence number.
-2. Support up to 100 transactions in flight at the same time; that is, up to 100 sequence numbers can be allocated without confirming that any have been committed.
-3. If there are 100 transactions in flight, determine the actual committed state by querying the network. This will update the current sequence number.
-4. If there are fewer than 100 transactions in flight, return to step 2.
-5. Otherwise, sleep for 0.1 seconds and continue to re-evaluate the current on-chain sequence number.
-6. All transactions should have an expiration time. If the expiration time has passed, assume that there has been a failure and reset the sequence number. The trivial case is to only monitor for failures when the maximum number of transactions are in flight and to let other services manage this otherwise.
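-
-Steps 1 through 5 can be sketched in Python. The class and the injected `fetch_onchain_seq` callable (wrapping a query such as `GET /accounts/{address}`) are illustrative, not part of any SDK:
-
-```python
-import time
-
-class SequenceNumberManager:
-    """Allocates sequence numbers for one account, capping in-flight transactions."""
-
-    MAX_IN_FLIGHT = 100
-
-    def __init__(self, fetch_onchain_seq):
-        self._fetch = fetch_onchain_seq
-        self._committed = fetch_onchain_seq()  # step 1: sync at startup
-        self._next = self._committed
-
-    def next_sequence_number(self):
-        # Steps 2-5: allocate freely while fewer than 100 are in flight;
-        # otherwise re-query the chain, sleeping briefly, until a slot frees up.
-        while self._next - self._committed >= self.MAX_IN_FLIGHT:
-            self._committed = self._fetch()
-            if self._next - self._committed >= self.MAX_IN_FLIGHT:
-                time.sleep(0.1)
-        seq = self._next
-        self._next += 1
-        return seq
-```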
-
-In parallel, monitor newly submitted transactions. Once the earliest transaction's expiration time has passed, synchronize up to that transaction. Then repeat the process for the next transaction.
-
-If there is any failure, wait until all outstanding transactions have timed out and leave it to the application to decide how to proceed, e.g., replay failed transactions. The best method for waiting on outstanding transactions is to first query the ledger timestamp and ensure that at least the maximum timeout has elapsed since the last transaction's submit time. From there, validate with mempool that all transactions since the last known committed transaction are either committed or no longer exist within the mempool. This can be done by querying the REST API for transactions of a specific account, specifying the sequence number currently being evaluated and setting a limit of 1. Once these checks are complete, the local transaction number can be resynchronized.
-
-These failure handling steps are critical for the following reasons:
-* Mempool does not immediately evict expired transactions.
-* A new transaction cannot overwrite an existing transaction, even if it is expired.
-* Consensus, i.e., the ledger timestamp, dictates expirations; the local node will only expire a transaction after it sees a committed timestamp past the transaction's expiration time and a garbage collection has happened.
-
-### Managing Transactions
-
-Once a transaction has been submitted it goes through a variety of steps:
-
-1. Submission to a REST endpoint.
-2. Pre-execution validation in the Mempool during submission.
-3. Transmission from Mempool to Mempool with pre-execution validation happening on each upstream node.
-4. Inclusion in a consensus proposal.
-5. One more pre-execution validation.
-6. Execution and committing to storage.
-
-There are many potential failure cases that must be considered:
-
-- Failure during transaction submission (1 and 2):
- - Visibility: The application will receive an error either that the network is unavailable or that the transaction failed pre-execution validation.
- - If the error is related to availability or duplicate sequence numbers, wait until access is available and the sequence number has re-synchronized.
- - Pre-execution validation failures beyond duplicate sequence numbers are currently out of scope; account issues are likely related to an invalid key for the account or the account lacking sufficient funds for gas.
-- Failure between submission and execution (3, 4, and 5):
- - Visibility: Only known by waiting until the transaction has expired.
- - These are the same as other pre-execution validation errors due to changes to the account as earlier transactions execute. It is likely either duplicate sequence numbers or the account lacks sufficient funds for gas.
-- Failure during execution (6):
- - Visibility: These are committed to the blockchain.
- - These errors occur as a result of on-chain state issues; they tend to be application-specific, such as an auction where a new bid might not actually be higher than the current bid.
-
-### Workers and Identity
-
-Using the above framework, a single account can push upwards of 100 transactions from the start of a block to the end of a block. Assuming that all 100 transactions are consumed within 1 block, it will take a bit of time for the next 100 slots to be available. This is due to the network delays as well as the multi-staged validator pipeline.
-
-To fully leverage the blockchain for massive throughput, using a single user account is not enough. Instead, Aptos supports the concept of worker accounts that can share the responsibility of pushing work through a shared account, also known as a resource account.
-
-In this model, each worker has access to the `SignerCap` of the shared account, which enables them to impersonate the shared account by generating its `signer`. With that `signer`, a transaction can execute logic that is gated by the signer of the shared account.
-
-Another model, if viable, is to decouple permissions from the `signer` altogether by defining an application-specific capability. This capability can then be given to each worker, letting them operate on the shared infrastructure.
-
-Note that parallelization on the shared infrastructure can be limited if any transaction would have any read or write conflicts. This won’t prevent multiple transactions from executing within a block, but can impact maximum blockchain performance.
diff --git a/developer-docs-site/docs/index.md b/developer-docs-site/docs/index.md
deleted file mode 100644
index b786679e8d8e5..0000000000000
--- a/developer-docs-site/docs/index.md
+++ /dev/null
@@ -1,148 +0,0 @@
----
-title: "Aptos Developer Documentation"
-slug: "/"
-hidden: false
-sidebar_position: 0
-hide_table_of_contents: true
----
-
-# Aptos Developer Documentation
-
-Welcome! Aptos is a Layer 1 for everyone. In the [Ohlone language](https://en.wikipedia.org/wiki/Ohlone_languages), ["Aptos"](https://en.wikipedia.org/wiki/Aptos,_California) means "The People." This site is here to help you grow a [web3 ecosystem project](https://github.com/aptos-foundation/ecosystem-projects) that benefits the entire world through easier development, more reliable services, faster transactions, and a supportive, decentralized family.
-
-This documentation will help you develop applications for the Aptos blockchain, run nodes, and be a part of the blossoming Aptos community. This documentation covers both basic and advanced topics. Here you will find concepts, how-to guides, quickstarts, tutorials, API references, code examples, release notes, and more.
-
-> Please note, this site is built from the `main` upstream branch of GitHub and therefore reflects the latest changes to Aptos. If you have checked out [another branch](https://github.com/aptos-labs/aptos-core/branches) to use a [specific network](guides/system-integrators-guide.md#choose-a-network), the code may not yet have all of the features described here.
-
-## Find the latest releases
-
-See the newest Aptos releases in the [Latest Releases](./releases/index.md) list and its subpages.
-
-## Set up your environment and start with the tutorials
-
-
-
-
-
-
-
-
-## Connect to an Aptos network
-
-Aptos offers the ability to run a local testnet, as well as provides a shared devnet and testnet. See the [System Integrators Guide](guides/system-integrators-guide.md#networks) for a summary of the available networks and the means to connect to them.
-
-:::tip Aptos Devnet Resets
-The Aptos devnet is reset every Thursday. See the latest updates in the [Aptos Discord](https://discord.gg/aptosnetwork).
-:::
-
-## Find the ecosystem
-
-We are excited that you are here, and we look forward to getting to know you. Welcome to the Aptos community! Find out more about us and exchange ideas at:
-
-* [Discord](https://discord.gg/aptosnetwork)
-* [Stack Overflow](https://stackoverflow.com/questions/tagged/aptos)
-* [Forum](https://forum.aptoslabs.com/)
-* [Medium](https://medium.com/aptoslabs)
-* [Telegram](https://t.me/aptos_official)
-* [Twitter](https://twitter.com/Aptos_Network)
-
-## Community projects on Aptos
-
-Here's a [list of community-maintained projects](https://github.com/aptos-foundation/ecosystem-projects) collected by the [Aptos Foundation](https://aptosfoundation.org/). If you have a project that you want added to the list, just edit the page and open a GitHub pull request.
-
-Want to pitch in on smaller tasks, such as doc updates and code fixes? See our [Community](./community/index.md) list for opportunities to help the Aptos ecosystem.
diff --git a/developer-docs-site/docs/indexer/api/example-queries.md b/developer-docs-site/docs/indexer/api/example-queries.md
deleted file mode 100644
index e81339ed8102a..0000000000000
--- a/developer-docs-site/docs/indexer/api/example-queries.md
+++ /dev/null
@@ -1,202 +0,0 @@
----
-title: "Example Queries"
----
-
-# Example Indexer API Queries
-
-import BetaNotice from '../../../src/components/_indexer_beta_notice.mdx';
-
-<BetaNotice />
-
-## Running example queries
-
-1. Open the Hasura Explorer for the network you want to query. You can find the URLs [here](/indexer/api/labs-hosted#hasura-explorer).
-1. Paste the **Query** code from an example into the main query section, and the **Query Variables** code from the same example into the Query Variables section (below the main query section).
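Outside the Hasura Explorer, the same query and variables can be sent as a plain GraphQL POST to the endpoints above. This is a generic sketch using only the Python standard library; the endpoint, query, and variables shown in the usage comment are placeholders:

```python
import json
import urllib.request

def build_graphql_request(endpoint: str, query: str, variables: dict) -> urllib.request.Request:
    """Package a GraphQL query and its variables as an HTTP POST request."""
    body = json.dumps({"query": query, "variables": variables}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example usage (sending requires network access):
# req = build_graphql_request(
#     "https://indexer.mainnet.aptoslabs.com/v1/graphql",
#     "query CurrentTokens($owner_address: String, $offset: Int) { ... }",
#     {"owner_address": "0x...", "offset": 0},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

Any HTTP client works the same way; the key point is that the **Query** and **Query Variables** from each example below map to the `query` and `variables` keys of the POST body.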
-
-## More Examples
-You can find many more example queries in the [TypeScript SDK](https://github.com/aptos-labs/aptos-ts-sdk/tree/main/src/internal/queries). Indeed, if you're using the TypeScript SDK, you should look at its [API](https://github.com/aptos-labs/aptos-ts-sdk/tree/main/src/api) directly.
-
-## Example Token Queries
-
-Getting all tokens currently in an account.
-
-**Query**
-
-```graphql
-query CurrentTokens($owner_address: String, $offset: Int) {
- current_token_ownerships(
- where: {owner_address: {_eq: $owner_address}, amount: {_gt: "0"}, table_type: {_eq: "0x3::token::TokenStore"}}
- order_by: [{last_transaction_version: desc}, {token_data_id: desc}]
- offset: $offset
- ) {
- token_data_id_hash
- name
- collection_name
- property_version
- amount
- }
-}
-```
-
-**Query Variables**
-```json
-{
- "owner_address": "0xaa921481e07b82a26dbd5d3bc472b9ad82d3e5bfd248bacac160eac51687c2ff",
- "offset": 0
-}
-```
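The `offset` variable exists for pagination: you can page through results by re-running the query with an increasing offset. A hedged sketch of the loop, assuming a hypothetical `run_query(query, variables)` helper that returns the parsed `data` object, and a query whose `limit` matches `page_size`:

```python
def paginate(run_query, query, variables, page_size=100, result_key="current_token_ownerships"):
    """Collect all rows by re-running a query with an increasing offset."""
    rows, offset = [], 0
    while True:
        page = run_query(query, {**variables, "offset": offset})[result_key]
        rows.extend(page)
        if len(page) < page_size:
            break  # a short page means we have reached the end
        offset += page_size
    return rows
```

The stable `order_by` clauses in these examples matter here: without a deterministic ordering, offset pagination can skip or duplicate rows between pages.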
-
----
-
-Getting all token activities for a particular token. **Note** that to get the `token_id_hash`, you first have to retrieve the token using the query above.
-
-**Query**
-
-```graphql
-query TokenActivities($token_id_hash: String, $offset: Int) {
- token_activities(
- where: {token_data_id_hash: {_eq: $token_id_hash}}
- # Needed for pagination
- order_by: [{last_transaction_version: desc}, {event_index: asc}]
- # Optional for pagination
- offset: $offset
- ) {
- transaction_version
- from_address
- property_version
- to_address
- token_amount
- transfer_type
- }
-}
-```
-
-**Query Variables**
-
-```json
-{
- "token_id_hash": "f344b838264bf9aa57d5d4c1e0c8e6bbdc93f000abe3e7f050c2a0f4dc23d030",
- "offset": 0
-}
-```
-
----
-
-Getting the tokens currently offered to an account.
-
-**Query**
-
-```graphql
-query CurrentOffers($to_address: String, $offset: Int) {
- current_token_pending_claims(
- where: {to_address: {_eq: $to_address}, amount: {_gt: "0"}}
- # Needed for pagination
- order_by: [{last_transaction_version: desc}, {token_data_id: desc}]
- # Optional for pagination
- offset: $offset
- ) {
- token_data_id_hash
- name
- collection_name
- property_version
- from_address
- amount
- }
-}
-```
-
-**Query Variables**
-
-```json
-{
- "to_address": "0xe7be097a90c18f6bdd53efe0e74bf34393cac2f0ae941523ea196a47b6859edb",
- "offset": 0
-}
-```
-
-## Example Coin Queries
-
-Getting coin activities (including gas fees).
-
-**Query**
-
-```graphql
-query CoinActivity($owner_address: String, $offset: Int) {
- coin_activities(
- where: {owner_address: {_eq: $owner_address}}
- # Needed for pagination
- order_by: [{last_transaction_version: desc}, {event_index: asc}]
- # Optional for pagination
- offset: $offset
- ) {
- activity_type
- amount
- coin_type
- entry_function_id_str
- transaction_version
- }
-}
-```
-
-**Query Variables**
-
-```json
-{
- "owner_address": "0xe7be097a90c18f6bdd53efe0e74bf34393cac2f0ae941523ea196a47b6859edb",
- "offset": 0
-}
-```
-
----
-
-Currently owned coins (`0x1::coin::CoinStore`).
-
-**Query**
-
-```graphql
-query CurrentBalances($owner_address: String, $offset: Int) {
- current_coin_balances(
- where: {owner_address: {_eq: $owner_address}}
- # Needed for pagination
- order_by: [{last_transaction_version: desc}, {token_data_id: desc}]
- # Optional for pagination
- offset: $offset
- ) {
- owner_address
- coin_type
- amount
- last_transaction_timestamp
- }
-}
-```
-
-**Query Variables**
-
-```json
-{
- "owner_address": "0xe7be097a90c18f6bdd53efe0e74bf34393cac2f0ae941523ea196a47b6859edb",
- "offset": 0
-}
-```
-
-## Example Explorer Queries
-
-Getting all user transaction versions (to filter on user transactions for a block explorer).
-
-**Query**
-
-```graphql
-query UserTransactions($limit: Int) {
- user_transactions(limit: $limit, order_by: {version: desc}) {
- version
- }
-}
-```
-
-**Query Variables**
-
-```json
-{
- "limit": 10
-}
-```
diff --git a/developer-docs-site/docs/indexer/api/index.md b/developer-docs-site/docs/indexer/api/index.md
deleted file mode 100644
index 82e20f7388ca2..0000000000000
--- a/developer-docs-site/docs/indexer/api/index.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: "Indexer API"
----
-
-import BetaNotice from '../../../src/components/_indexer_beta_notice.mdx';
-
-<BetaNotice />
-
-This section contains documentation for the Aptos Indexer API, the API built upon the standard set of processors provided in the [aptos-labs/aptos-indexer-processors](https://github.com/aptos-labs/aptos-indexer-processors) repo.
-
-## Usage Guide
-
-### Address Format
-
-When making a query where one of the query params is an account address (e.g. owner), make sure the address starts with a prefix of `0x` followed by 64 hex characters. For example: `0xaa921481e07b82a26dbd5d3bc472b9ad82d3e5bfd248bacac160eac51687c2ff`.
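A small helper can normalize shorter addresses to this canonical form before querying. This is an illustrative sketch, not part of any Aptos SDK:

```python
def normalize_address(addr: str) -> str:
    """Left-pad an account address to the canonical 0x + 64 hex character form."""
    hex_part = addr[2:] if addr.startswith("0x") else addr
    int(hex_part, 16)  # raises ValueError if the address is not valid hex
    if len(hex_part) > 64:
        raise ValueError("address longer than 32 bytes")
    return "0x" + hex_part.zfill(64)
```

For example, `normalize_address("0x1")` expands the framework address to `0x` followed by 63 zeros and a `1`, the form the Indexer API expects.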
-
-### TypeScript Client
-
-The Aptos TypeScript SDK provides API functions for making queries to the Aptos Indexer API. Learn more [here](../../sdks/new-ts-sdk/fetch-data-from-chain.md).
diff --git a/developer-docs-site/docs/indexer/api/labs-hosted.md b/developer-docs-site/docs/indexer/api/labs-hosted.md
deleted file mode 100644
index 06d518a8be5ce..0000000000000
--- a/developer-docs-site/docs/indexer/api/labs-hosted.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: "Labs-Hosted Indexer API"
----
-
-import BetaNotice from '../../../src/components/_indexer_beta_notice.mdx';
-
-<BetaNotice />
-
-## GraphQL API Endpoints
-
-When making GraphQL queries to the Labs-Hosted Indexer API, use the following endpoints:
-
-- **Mainnet:** https://indexer.mainnet.aptoslabs.com/v1/graphql
-- **Testnet:** https://indexer-testnet.staging.gcp.aptosdev.com/v1/graphql
-- **Devnet:** https://indexer-devnet.staging.gcp.aptosdev.com/v1/graphql
-
-## Hasura Explorer
-
-The following URLs are for the Hasura Explorer for the Labs-Hosted Indexer API:
-
-- **Mainnet:** https://cloud.hasura.io/public/graphiql?endpoint=https://indexer.mainnet.aptoslabs.com/v1/graphql
-- **Testnet:** https://cloud.hasura.io/public/graphiql?endpoint=https://indexer-testnet.staging.gcp.aptosdev.com/v1/graphql
-- **Devnet:** https://cloud.hasura.io/public/graphiql?endpoint=https://indexer-devnet.staging.gcp.aptosdev.com/v1/graphql
-
-## Rate limits
-
-The following rate limit applies for the Aptos Labs hosted indexer API:
-
-- For a web application that calls this Aptos-provided indexer API directly from the client (for example, wallet or explorer), the rate limit is currently 5000 requests per five minutes by IP address. **Note that this limit can change with or without prior notice.**
-
-If you need a higher rate limit, consider running the Aptos Indexer API yourself. See the guide to self hosting [here](/indexer/api/self-hosted).
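To stay under a per-IP budget like this, a client can meter its own requests. Below is a minimal sliding-window sketch, assuming the documented budget of 5000 requests per five minutes; it is not an official client, and the limit itself may change as noted above:

```python
import collections
import time
from typing import Optional

class SlidingWindowLimiter:
    """Allow at most `limit` calls in any rolling `window_secs` period."""

    def __init__(self, limit: int = 5000, window_secs: float = 300.0):
        self.limit = limit
        self.window_secs = window_secs
        self._stamps = collections.deque()

    def try_acquire(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the rolling window.
        while self._stamps and now - self._stamps[0] >= self.window_secs:
            self._stamps.popleft()
        if len(self._stamps) >= self.limit:
            return False
        self._stamps.append(now)
        return True
```

A caller would check `try_acquire()` before each request and back off when it returns `False`.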
diff --git a/developer-docs-site/docs/indexer/api/self-hosted.md b/developer-docs-site/docs/indexer/api/self-hosted.md
deleted file mode 100644
index bc903af8fbe21..0000000000000
--- a/developer-docs-site/docs/indexer/api/self-hosted.md
+++ /dev/null
@@ -1,95 +0,0 @@
----
-title: "Self-Hosted Indexer API"
----
-
-import BetaNotice from '../../../src/components/_indexer_beta_notice.mdx';
-
-<BetaNotice />
-
-This guide will walk you through setting up a self-hosted Indexer API.
-
-:::caution
-Currently this guide only explains how to run the processor part of the Indexer API. By the end of this guide you will have a running processor that consumes transactions from the Transaction Stream Service, parses them, and stores them in the database. It does not yet explain how to attach an API to this system.
-:::
-
-## Prerequisites
-
-- A running PostgreSQL instance is required, with a valid user and database. In this example we call the user `postgres` and the database `indexer`.
-- If you wish to use Docker, you must have Docker installed. [Installation Guide](https://docs.docker.com/get-docker/).
-
-
-## Configuration
-To run the service we need to define a config file. We will start with this template:
-
-```yaml
-health_check_port: 8084
-server_config:
- processor_config:
- type: default_processor
- postgres_connection_string: postgresql://postgres:@localhost:5432/indexer
- indexer_grpc_data_service_address: 127.0.0.1:50051
- indexer_grpc_http2_ping_interval_in_secs: 60
- indexer_grpc_http2_ping_timeout_in_secs: 10
- auth_token: AUTH_TOKEN
-```
-
-From here you will likely want to change the values of some of these fields. Let's go through some of them.
-
-### `processor_name`
-:::info
-A single instance of the service only runs a single processor. If you want to run multiple processors, you must run multiple instances of the service. In this case, it is up to you whether to use the same database or not.
-:::
-
-This is the processor you want to run. You can see what processors are available [here](https://github.com/aptos-labs/aptos-indexer-processors/blob/main/rust/processor/src/processors/mod.rs#L23). Some examples:
-- `coin_processor`
-- `ans_processor`
-- `token_v2_processor`
-
-### `postgres_connection_string`
-This is the connection string to your PostgreSQL database. It should be in the format `postgresql://username:password@host:port/database_name`.
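Credentials containing special characters such as `@` or `:` must be percent-encoded in this URI. A small helper, offered as an illustrative sketch using only the Python standard library:

```python
from urllib.parse import quote

def postgres_uri(user: str, password: str, host: str, port: int, database: str) -> str:
    """Assemble a PostgreSQL connection string, escaping the credentials."""
    return (
        f"postgresql://{quote(user, safe='')}:{quote(password, safe='')}"
        f"@{host}:{port}/{database}"
    )
```

For instance, `postgres_uri("postgres", "", "localhost", 5432, "indexer")` reproduces the connection string used in the template above.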
-
-### `indexer_grpc_data_service_address`
-This is the URL for the Transaction Stream Service. If you are using the Labs-Hosted instance you can find the URLs for each network at [this page](../txn-stream/labs-hosted). Make sure to select the correct URL for the network you want to index. If you are running this service locally the value should be `127.0.0.1:50051`.
-
-### `auth_token`
-This is the auth token used to connect to the Transaction Stream Service. If you are using the Labs-Hosted instance you can use the API Gateway to get an auth token. Learn more at [this page](/indexer/txn-stream/labs-hosted).
-
-## Run with source code
-Clone the repo:
-```
-# SSH
-git clone git@github.com:aptos-labs/aptos-indexer-processors.git
-
-# HTTPS
-git clone https://github.com/aptos-labs/aptos-indexer-processors.git
-```
-
-Navigate to the directory for the service:
-```
-cd aptos-indexer-processors
-cd rust/processor
-```
-
-Run the service:
-```
-cargo run --release -- -c config.yaml
-```
-
-## Run with Docker
-
-
-To run the service with Docker, use the following command:
-```
-docker run -it --network host --mount type=bind,source=/tmp/config.yaml,target=/config.yaml aptoslabs/indexer-processor-rust -c /config.yaml
-```
-
-This command binds the container to the host network and mounts the config file from the host into the container. This specific invocation assumes that your config file in the host is at `/tmp/config.yaml`.
-
-See the image on DockerHub here: https://hub.docker.com/r/aptoslabs/indexer-processor-rust/tags.
diff --git a/developer-docs-site/docs/indexer/api/usage-guide.md b/developer-docs-site/docs/indexer/api/usage-guide.md
deleted file mode 100644
index 80e976c9bbfda..0000000000000
--- a/developer-docs-site/docs/indexer/api/usage-guide.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: "Indexer API Usage Guide"
----
-
-import BetaNotice from '../../../src/components/_indexer_beta_notice.mdx';
-
-<BetaNotice />
-
-Coming soon!
-
-
diff --git a/developer-docs-site/docs/indexer/custom-processors/e2e-tutorial.md b/developer-docs-site/docs/indexer/custom-processors/e2e-tutorial.md
deleted file mode 100644
index c91013b8d5277..0000000000000
--- a/developer-docs-site/docs/indexer/custom-processors/e2e-tutorial.md
+++ /dev/null
@@ -1,384 +0,0 @@
----
-title: "End-to-End Tutorial"
----
-
-# Creating a Custom Indexer Processor
-
-import BetaNotice from '../../../src/components/_indexer_beta_notice.mdx';
-
-<BetaNotice />
-
-In this tutorial, we're going to walk you through all the steps involved with creating a very basic custom indexer processor to track events and data on the Aptos blockchain.
-
-We use a very simple smart contract called **Coin Flip** that has already emitted events for us.
-
-The smart contract is already deployed, and you mostly don't need to understand it unless you're curious to mess with it or change things.
-
-## Getting Started
-
-To get started, clone the [aptos-indexer-processors](https://github.com/aptos-labs/aptos-indexer-processors) repo:
-```
-# HTTPS
-git clone https://github.com/aptos-labs/aptos-indexer-processors.git
-
-# SSH
-git clone git@github.com:aptos-labs/aptos-indexer-processors.git
-```
-
-Navigate to the coin flip directory:
-```
-cd aptos-indexer-processors
-cd python/processors/coin_flip
-```
-
-Processors consume a stream of transactions from the Transaction Stream Service. In order to use the Labs-Hosted Transaction Stream Service you need an auth token. Follow [this guide](/indexer/txn-stream/labs-hosted#auth-tokens) to get one. Once you're done, you should have a token that looks like this:
-```
-aptoslabs_yj4bocpaKy_Q6RBP4cdBmjA8T51hto1GcVX5ZS9S65dx
-```
-
-You also need the following tools:
-- The [Aptos CLI](/tools/aptos-cli/install-cli)
-- Python 3.9+: [Installation Guide](https://docs.python-guide.org/starting/installation/#python-3-installation-guides).
-- Poetry: [Installation Guide](https://python-poetry.org/docs/#installation).
-
-We use PostgreSQL as our database in this tutorial. You're free to use whatever you want, but this tutorial is geared toward PostgreSQL for the sake of simplicity. We use the following database configuration and tools:
-- [Postgresql](https://www.postgresql.org/download/)
- - We will use a database hosted on `localhost` on the port `5432`, which should be the default.
- - When you create your username, keep track of it and the password you use for it.
-  - You can view a tutorial for installing PostgreSQL and the `psql` tool [here](https://www.digitalocean.com/community/tutorials/how-to-install-postgresql-on-ubuntu-22-04-quickstart) to set up your database more quickly.
- - If you want to easily view your database data, consider using a GUI like [DBeaver](https://dbeaver.io/), [pgAdmin](https://www.pgadmin.org/), or [Postico](https://eggerapps.at/postico2/).
-
-Explaining how to create a database is beyond the scope of this tutorial. If you are not sure how to do it, consider checking out tutorials on how to create a database with the `psql` tool.
-
-## Setup your environment
-
-### Setup the postgresql database
-
-Make sure to start the `postgresql` service:
-
-The command for Linux/WSL might be something like:
-
-```shell
-sudo service postgresql start
-```
-
-For mac, if you're using brew, start it up with:
-
-```shell
-brew services start postgresql
-```
-
-Create your database with the name `coin_flip`, where our username is `user` and our password is `password`.
-
-If your database is set up correctly, and you have the `psql` tool, you should be able to run the command `psql -d coin_flip`.
-
-### Setup your local environment with poetry and grpc
-
-If you haven't yet, make sure to read the introductory [custom processor guide](https://github.com/aptos-labs/aptos-indexer-processors).
-
-You can also check out the python-specific broad overview of how to create an indexer processor [here](https://github.com/aptos-labs/aptos-indexer-processors/tree/main/python).
-
-## Configure your indexer processor
-
-Now let's set up the configuration details for the actual indexer processor we're going to use.
-
-### Setup your config.yaml file
-
-Copy the contents below and save it to a file called `config.yaml`. Save it in the `coin_flip` folder. Your file directory structure should look something like this:
-
-```
-- indexer
- - python
- - aptos_ambassador_token
- - aptos-tontine
- - coin_flip
- - move
- - sources
- - coin_flip.move
- - package_manager.move
- - Move.toml
- - config.yaml <-------- Edit this config.yaml file
- - models.py
- - processor.py
- - README.md
- - example_event_processor
- - nft_marketplace_v2
- - nft_orderbooks
- __init__.py
- main.py
- README.md
- - rust
- - scripts
- - typescript
-```
-
-Once you have your config.yaml file open, you only need to change one field if you just want to run the processor as is:
-```yaml
-grpc_data_stream_api_key: ""
-```
-
-### More customization with config.yaml
-
-However, if you'd like to customize things further, you can change some of the other fields.
-
-If you'd like to start at a specific version, you can specify that in the config.yaml file with:
-```yaml
-starting_version_default: 123456789
-```
-
-This is the transaction version the indexer starts looking for events at. If the indexer has already processed transactions past this version, **it will skip all of them and go to the latest version stored.**
-
-Each row in the `next_versions_to_process` table has the `indexer_name` as its primary key, along with the `next_version` to process and an `updated_at` timestamp.
-
-If you want to **force** the indexer to backfill data (overwrite/rewrite data) from previous versions even though it's already indexed past it, you can specify this in the config.yaml file with:
-
-```yaml
-starting_version_backfill: 123456789
-```
-
-If you want to use a different network, change the `grpc_data_stream_endpoint` field to the corresponding desired value:
-
-```yaml
-devnet: 35.225.218.95:50051
-testnet: 35.223.137.149:50051 # north america
-testnet: 34.64.252.224:50051 # asia
-mainnet: 34.30.218.153:50051
-```
-
-If these IP addresses don't work for you, they might be outdated. Check out the `README.md` at the root folder of the repository for the latest endpoints.
-
-If you're using a different database name or processor name, change the `processor_name` field and the `db_connection_uri` to your specific needs. Here's the general structure of the field:
-
-```yaml
-db_connection_uri: "postgresql://username:password@database_url:port_number/database_name"
-```
-
-### Add your processor & schema names to the configuration files
-
-First, let's create the name for the database schema we're going to use. We use `coin_flip` in our example, so we need to add it in two places:
-
-1. We need to add it to our `python/utils/processor_name.py` file:
-```python
- class ProcessorName(Enum):
- EXAMPLE_EVENT_PROCESSOR = "python_example_event_processor"
- NFT_MARKETPLACE_V1_PROCESSOR = "nft_marketplace_v1_processor"
- NFT_MARKETPLACE_V2_PROCESSOR = "nft_marketplace_v2_processor"
- COIN_FLIP = "coin_flip"
-```
-2. Add it to the constructor in the `IndexerProcessorServer` match cases in `utils/worker.py`:
-
-```python
-match self.config.processor_name:
- case ProcessorName.EXAMPLE_EVENT_PROCESSOR.value:
- self.processor = ExampleEventProcessor()
- case ProcessorName.NFT_MARKETPLACE_V1_PROCESSOR.value:
- self.processor = NFTMarketplaceProcesser()
- case ProcessorName.NFT_MARKETPLACE_V2_PROCESSOR.value:
- self.processor = NFTMarketplaceV2Processor()
- case ProcessorName.COIN_FLIP.value:
- self.processor = CoinFlipProcessor()
-```
-
-3. Add it to the `python/utils/models/schema_names.py` file:
-
-```python
-EXAMPLE = "example"
-NFT_MARKETPLACE_SCHEMA_NAME = "nft_marketplace"
-NFT_MARKETPLACE_V2_SCHEMA_NAME = "nft_marketplace_v2"
-COIN_FLIP_SCHEMA_NAME = "coin_flip"
-```
-
-### Explanation of the event emission in the Move contract
-
-In our Move contract (in `coin_flip/move/sources/coin_flip.move`), each user has an object associated with their account. The object has a `CoinFlipStats` resource on it that tracks the total number of wins and losses a user has and is in charge of emitting events.
-
-```rust
-// CoinFlipStats object/resource definition
-#[resource_group_member(group = aptos_framework::object::ObjectGroup)]
-struct CoinFlipStats has key {
- wins: u64,
- losses: u64,
-    event_handle: EventHandle<CoinFlipEvent>,
- delete_ref: DeleteRef,
-}
-
-// event emission in `flip_coin`
-fun flip_coin(
- user: &signer,
- prediction: bool,
- nonce: u64,
-) acquires CoinFlipStats {
- // ...
- let (heads, correct_prediction) = flip(prediction, nonce);
-
- if (correct_prediction) {
- coin_flip_stats.wins = coin_flip_stats.wins + 1;
- } else {
- coin_flip_stats.losses = coin_flip_stats.losses + 1;
- };
-
- event::emit_event(
- &mut coin_flip_stats.event_handle,
- CoinFlipEvent {
- prediction: prediction,
- result: heads,
- wins: coin_flip_stats.wins,
- losses: coin_flip_stats.losses,
- }
- );
-}
-```
-The events emitted are of type `CoinFlipEvent`, shown below:
-```rust
-struct CoinFlipEvent has copy, drop, store {
- prediction: bool, // true = heads, false = tails
- result: bool,
- wins: u64,
- losses: u64,
-}
-```
-
-### Viewing and understanding how the event data is emitted and processed
-
-When we submit a transaction that calls the `coin_flip` entry function, the indexer parses the events and records the data of each event that occurred in the transaction.
-
-Within the `data` field of each `Event` type, we see the arbitrary event data emitted. We use this data to store the event data in our database.
-
-The processor loops over each event in each transaction to process all event data. There are a *lot* of different event types that can occur in a transaction, so we need to write a filtering function to skip the events we don't want to store in our database.
-
-This is the simple iterative structure for our event list:
-
-```python
-for event_index, event in enumerate(user_transaction.events):
- # Skip events that don't match our filter criteria
- if not CoinFlipProcessor.included_event_type(event.type_str):
- continue
-```
-
-where the `included_event_type` function is a static method in our `CoinFlipProcessor` class:
-
-```python
-@staticmethod
-def included_event_type(event_type: str) -> bool:
- parsed_tag = event_type.split("::")
- module_address = parsed_tag[0]
- module_name = parsed_tag[1]
- event_type = parsed_tag[2]
- # Now we can filter out events that are not of type CoinFlipEvent
- # We can filter by the module address, module name, and event type
- # If someone deploys a different version of our contract with the same event type, we may want to index it one day.
- # So we could only check the event type instead of the full string
- # For our sake, check the full string
- return (
- module_address
- == "0xe57752173bc7c57e9b61c84895a75e53cd7c0ef0855acd81d31cb39b0e87e1d0"
- and module_name == "coin_flip"
- and event_type == "CoinFlipEvent"
- )
-```
-
-If you wanted to see the event data for yourself inside the processor loop, you could add something like this to your `processor.py` file:
-
-```python
-for event_index, event in enumerate(user_transaction.events):
- # Skip events that don't match our filter criteria
- if not CoinFlipProcessor.included_event_type(event.type_str):
- continue
-
- # ...
-
- # Load the data into a json object and then use/view it as a regular dictionary
- data = json.loads(event.data)
- print(json.dumps(data, indent=3))
-```
-In our case, a single event prints this out:
-
-
-```json
-{
-   "losses": "49",
-   "prediction": false,
-   "result": true,
-   "wins": "51"
-}
-```
-
-So we'll get our data like this:
-
-```python
-prediction = bool(data["prediction"])
-result = bool(data["result"])
-wins = int(data["wins"])
-losses = int(data["losses"])
-
-# We have extra data to insert into the database, because we want to process our data.
-# Calculate the total
-win_percentage = wins / (wins + losses)
-```
-
-And then we add it to our event list with this:
-
-```python
-# Create an instance of CoinFlipEvent
-event_db_obj = CoinFlipEvent(
- sequence_number=sequence_number,
- creation_number=creation_number,
- account_address=account_address,
- transaction_version=transaction_version,
- transaction_timestamp=transaction_timestamp,
- prediction=prediction,
- result=result,
- wins=wins,
- losses=losses,
- win_percentage=win_percentage,
- inserted_at=datetime.now(),
- event_index=event_index, # when multiple events of the same type are emitted in a single transaction, this is the index of the event in the transaction
-)
-event_db_objs.append(event_db_obj)
-```
-### Creating your database model
-
-Now that we know how we store our CoinFlipEvents in our database, let's go backwards a bit and clarify how we *create* this model for the database to use.
-
-We need to structure the `CoinFlipEvent` class in `models.py` to reflect the structure in our Move contract:
-
-```python
-class CoinFlipEvent(Base):
- __tablename__ = "coin_flip_events"
- __table_args__ = ({"schema": COIN_FLIP_SCHEMA_NAME},)
-
- sequence_number: BigIntegerPrimaryKeyType
- creation_number: BigIntegerPrimaryKeyType
- account_address: StringPrimaryKeyType
-    prediction: BooleanType  # from event.data["prediction"]
-    result: BooleanType  # from event.data["result"]
-    wins: BigIntegerType  # from event.data["wins"]
-    losses: BigIntegerType  # from event.data["losses"]
- win_percentage: NumericType # calculated from the above
- transaction_version: BigIntegerType
- transaction_timestamp: TimestampType
- inserted_at: InsertedAtType
- event_index: BigIntegerType
-```
-
-The uncommented fields come from the default event data emitted with every event on Aptos. The commented fields are those we extracted or calculated from the event data above.
-
-The other fields, `__tablename__` and `__table_args__`, tell the Python SQLAlchemy library which table and schema name we are using.
-
-## Running the indexer processor
-
-Now that we have our configuration files and our database and the python database model set up, we can run our processor.
-
-Navigate to the `python` directory of your indexer repository:
-
-```shell
-cd ~/indexer/python
-```
-
-And then run the following command:
-
-```shell
-poetry run python -m processors.main -c processors/coin_flip/config.yaml
-```
-
-If you're processing events correctly, the events should now show up in your database.
diff --git a/developer-docs-site/docs/indexer/custom-processors/index.md b/developer-docs-site/docs/indexer/custom-processors/index.md
deleted file mode 100644
index 9988a882eda77..0000000000000
--- a/developer-docs-site/docs/indexer/custom-processors/index.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: "Custom Processors"
----
-
-import BetaNotice from '../../../src/components/_indexer_beta_notice.mdx';
-
-<BetaNotice />
-
-This section explains what a custom processor is, how to write and deploy one, and how to parse transactions from the [Transaction Stream Service](/indexer/txn-stream).
-
diff --git a/developer-docs-site/docs/indexer/custom-processors/parsing-txns.md b/developer-docs-site/docs/indexer/custom-processors/parsing-txns.md
deleted file mode 100644
index a34cd5b3c1ed9..0000000000000
--- a/developer-docs-site/docs/indexer/custom-processors/parsing-txns.md
+++ /dev/null
@@ -1,81 +0,0 @@
----
-title: "Parsing Transactions"
----
-
-import BetaNotice from '../../../src/components/_indexer_beta_notice.mdx';
-
-<BetaNotice />
-
-
-
-Fundamentally an indexer processor is just something that consumes a stream of transactions and writes processed data to storage. Let's dive into what a transaction is and what kind of information you can extract from one.
-
-## What is a transaction?
-
-A transaction is a unit of execution on the Aptos blockchain. If the execution of the program in a transaction (e.g. starting with an entry function in a Move module) is successful, the resulting change in state will be applied to the ledger. Learn more about the transaction lifecycle at [this page](/concepts/blockchain/#life-of-a-transaction).
-
-There are four types of transactions on Aptos:
-- Genesis
-- Block metadata transactions
-- State checkpoint transactions
-- User transactions
-
-The first 3 of these are internal to the system and are not relevant to most processors; we do not cover them in this guide.
-
-Generally speaking, most user transactions originate from a user calling an entry function in a Move module deployed on chain, for example `0x1::coin::transfer`. In all other cases they originate from [Move scripts](/move/move-on-aptos/move-scripts). You can learn more about the different types of transactions [here](../../concepts/txns-states#types-of-transactions).
-
-A user transaction that a processor handles contains a variety of information. At a high level it contains:
-- The payload that was submitted.
-- The changes to the ledger resulting from the execution of the function / script.
-
-We'll dive into this in the following sections.
-
-## What is important in a transaction?
-
-### Payload
-
-The payload is what the user submits to the blockchain when they wish to execute a Move function. Some of the key information in the payload is:
-- The sender address
-- The address + module name + function name of the function being executed.
-- The arguments to the function.
-
-There is other potentially interesting information in the payload that you can learn about at [this page](/concepts/txns-states#contents-of-a-transaction).
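-
-For example, the payload of a `0x1::coin::transfer` call looks roughly like this (the recipient address and amount are illustrative):
-
-```json
-{
-  "type": "entry_function_payload",
-  "function": "0x1::coin::transfer",
-  "type_arguments": ["0x1::aptos_coin::AptosCoin"],
-  "arguments": ["0xab12...", "1000"]
-}
-```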
-
-### Events
-
-Events are emitted during the execution of a transaction. Each Move module can define its own events and choose when to emit the events during execution of a function.
-
-For example, in Move you might have the following:
-```rust
-struct MemberInvitedEvent has store, drop {
-    member: address,
-}
-
-public entry fun invite_member(member: address) {
-    // `member_invited_events` is an event handle stored on chain (not shown here).
-    event::emit_event(
-        &mut member_invited_events,
-        MemberInvitedEvent { member },
-    );
-}
-```
-
-If `invite_member` is called, you will find the `MemberInvitedEvent` in the transaction.
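-
-In the transaction itself, the event would appear in the `events` list, roughly like this (the module address, GUID, and sequence number are illustrative):
-
-```json
-{
-  "guid": {
-    "creation_number": "3",
-    "account_address": "0xcafe"
-  },
-  "sequence_number": "0",
-  "type": "0xcafe::club::MemberInvitedEvent",
-  "data": {
-    "member": "0xab12"
-  }
-}
-```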
-
-:::tip Why emit events?
-This is a good question! In some cases, you might find it unnecessary to emit events since you can just parse the writesets. However, sometimes it is quite difficult to get all the data you need from the different "locations" in the transaction, or in some cases it might not even be possible, e.g. if you want to index data that isn't included in the writeset. In these cases, events are a convenient way to bundle together everything you want to index.
-:::
-
-### Writesets
-
-When a transaction executes, it does not modify on-chain state immediately. Instead, it outputs a set of changes to be made to the ledger, called a writeset. The writeset is applied to the ledger only after all validators have agreed on the result of the execution.
-
-Writesets show the end state of the on-chain data after the transaction has occurred. They are the source of truth of what data is stored on-chain. There are several types of write set changes:
-
-- Write module / delete module
-- Write resource / delete resource
-- Write table item / delete table item
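-
-For example, a `write_resource` change recording an updated coin balance looks roughly like this (abbreviated and illustrative):
-
-```json
-{
-  "type": "write_resource",
-  "address": "0xcafe",
-  "data": {
-    "type": "0x1::coin::CoinStore<0x1::aptos_coin::AptosCoin>",
-    "data": {
-      "coin": { "value": "49742" }
-    }
-  }
-}
-```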
-
-
diff --git a/developer-docs-site/docs/indexer/indexer-landing.md b/developer-docs-site/docs/indexer/indexer-landing.md
deleted file mode 100644
index 1abd424bf82ed..0000000000000
--- a/developer-docs-site/docs/indexer/indexer-landing.md
+++ /dev/null
@@ -1,80 +0,0 @@
----
-title: "Learn about Indexing"
----
-
-import BetaNotice from '../../src/components/_indexer_beta_notice.mdx';
-import ThemedImage from '@theme/ThemedImage';
-import useBaseUrl from '@docusaurus/useBaseUrl';
-
-
-
-## Quick Start
-
-Refer to this role-oriented guide to help you quickly find the relevant docs:
-
-- Core Infra Provider: You want to run your own Transaction Stream Service in addition to the rest of the stack.
- - See docs for [Self-Hosted Transaction Stream Service](/indexer/txn-stream/self-hosted).
-- API Operator: You want to run the Indexer API on top of a hosted Transaction Stream Service.
- - See docs for [Self-Hosted Indexer API](/indexer/api/self-hosted).
-- Custom Processor Builder: You want to build a custom processor on top of a hosted Transaction Stream Service.
- - See docs for [Custom Processors](/indexer/custom-processors).
-- Indexer API Consumer: You want to use a hosted Indexer API.
- - See docs for the [Labs-Hosted Indexer API](/indexer/api/labs-hosted).
- - See the [Indexer API Usage Guide](/indexer/api/usage-guide).
-
-# Architecture Overview
-
-Typical applications built on the Aptos blockchain, or on any blockchain for that matter, require the raw blockchain data to be shaped and stored in an application-specific manner. This is essential for supporting low-latency, rich experiences when consuming blockchain data in end-user apps serving millions of users. The [Aptos Node API](https://aptos.dev/nodes/aptos-api-spec#/) provides a lower-level, stable, and generic API; it is not designed for data shaping and therefore cannot support rich end-user experiences directly.
-
-The Aptos Indexer is the answer to this need, allowing the data shaping critical to real-time app use. See this high-level diagram for how Aptos indexing works:
-
-
-
-
-
-At a high level, indexing on the Aptos blockchain works like this:
-
-1. Users of a dApp, for example, on an NFT marketplace, interact with the Aptos blockchain via a rich UI presented by the dApp. Behind the scenes, these interactions generate, via smart contracts, the transaction and event data. This raw data is stored in the distributed ledger database, for example, on an Aptos fullnode.
-1. This raw ledger data is read and indexed using an application-specific data model, in this case an NFT marketplace-specific data model ("Business logic" in the above diagram). This NFT marketplace-specific index is then stored in a separate database ("Indexed database" in the above diagram) and exposed via an API.
-1. The dApp sends NFT-specific GraphQL queries to this indexed database and receives rich data back, which is then served to the users.
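-
-As a sketch of step 3, an NFT marketplace dApp might issue a GraphQL query like the following against the indexed database (the table and field names here are hypothetical and depend on the marketplace's own data model):
-
-```graphql
-query CurrentListings($collection: String!) {
-  nft_listings(
-    where: { collection_name: { _eq: $collection }, is_active: { _eq: true } }
-    order_by: { price: asc }
-    limit: 10
-  ) {
-    token_name
-    price
-    seller_address
-  }
-}
-```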
-
-Step 2 is facilitated by the Aptos Indexer. The diagram above is a simplified view of how the system works at a high level. In reality, the system is composed of many components. If you are interested in these details, see the [Detailed Overview](#detailed-overview) below.
-
-## Indexer API
-
-Aptos supports the following ways to access indexed data.
-
-1. [Labs-hosted Indexer API](/indexer/api/labs-hosted): This API is rate-limited and is intended only for lightweight applications such as wallets. This option is not recommended for high-bandwidth applications.
-2. [Self-hosted Indexer API](/indexer/api/self-hosted): Run your own deployment of the Labs-hosted indexer stack.
-3. [Custom processor](/indexer/custom-processors): Write and deploy a custom processor to index and expose data in a way specific to your needs.
-
-## Transaction Stream Service
-
-The Indexer API and Custom Processors depend on the Transaction Stream Service. In short, this service provides a GRPC stream of transactions that processors consume. Learn more about this service [here](/indexer/txn-stream/). Aptos Labs offers a [hosted instance of this service](/indexer/txn-stream/labs-hosted) but you may also [run your own](/indexer/txn-stream/self-hosted).
-
-## Detailed Overview
-
-This diagram explains how the Aptos Indexer tech stack works in greater detail.
-
-
-
-
-
-
-
-
-
-## Legacy Indexer
-Find information about the legacy indexer [here](/indexer/legacy/).
diff --git a/developer-docs-site/docs/indexer/legacy/custom-data-model.md b/developer-docs-site/docs/indexer/legacy/custom-data-model.md
deleted file mode 100644
index dc50288aeab4e..0000000000000
--- a/developer-docs-site/docs/indexer/legacy/custom-data-model.md
+++ /dev/null
@@ -1,196 +0,0 @@
----
-title: "Custom Data Model"
----
-
-:::warning Legacy Indexer
-This is documentation for the legacy indexer. To learn how to write a custom processor with the latest indexer stack, see [Custom Processors](/indexer/custom-processors).
-:::
-
-## Define your own data model
-
-Use this method if you want to develop your custom indexer for the Aptos ledger data.
-
-:::tip When to use custom indexer
-Currently, the Aptos-provided indexing service supports the following core Move modules:
-- `0x1::coin`.
-- `0x3::token`.
-- `0x3::token_transfers`.
-
-If you need an indexed database for any other Move modules and contracts, you should develop a custom indexer.
-:::
-
-Creating a custom indexer involves the following steps. Refer to the indexing block diagram at the start of this document.
-
-1. Define new table schemas, using an ORM like [Diesel](https://diesel.rs/). In this document Diesel is used to describe the custom indexing steps ("Business logic" and the data queries in the diagram).
-2. Create new data models based on the new tables ("Business logic" in the diagram).
-3. Create a new transaction processor, or optionally add to an existing processor. In the diagram this step corresponds to processing the ledger database according to the new business logic and writing to the indexed database.
-4. Integrate the new processor. Optional if you are reusing an existing processor.
-
-In the below detailed description, an example of indexing and querying for the coin balances is used. You can see this in the [`coin_processor`](https://github.com/aptos-labs/aptos-core/blob/main/crates/indexer/src/processors/coin_processor.rs).
-
-### 1. Define new table schemas
-
-In this example we use [PostgreSQL](https://www.postgresql.org/) and [Diesel](https://diesel.rs/) as the ORM. To make sure that we make backward-compatible changes without having to reset the database at every upgrade, we use [Diesel migrations](https://docs.rs/diesel_migrations/latest/diesel_migrations/) to manage the schema. This is why it is very important to start with generating a new Diesel migration before doing anything else.
-
-Make sure you clone the aptos-core repo by running `git clone https://github.com/aptos-labs/aptos-core.git` and then `cd` into the `aptos-core/crates/indexer` directory. Then proceed as below.
-
-a. The first step is to create a new Diesel migration. This will generate a new folder under [migrations](https://github.com/aptos-labs/aptos-core/tree/main/crates/indexer/migrations) with `up.sql` and `down.sql`.
-
-```bash
-DATABASE_URL=postgres://postgres@localhost:5432/postgres diesel migration generate add_coin_tables
-```
-
-b. Create the necessary table schemas. This is just PostgreSQL code. In the code shown below, the `up.sql` will have the new changes and `down.sql` will revert those changes.
-
-```sql
--- up.sql
--- coin balances for each version
-CREATE TABLE coin_balances (
- transaction_version BIGINT NOT NULL,
- owner_address VARCHAR(66) NOT NULL,
- -- Hash of the non-truncated coin type
- coin_type_hash VARCHAR(64) NOT NULL,
- -- creator_address::name::symbol
- coin_type VARCHAR(5000) NOT NULL,
- amount NUMERIC NOT NULL,
- transaction_timestamp TIMESTAMP NOT NULL,
- inserted_at TIMESTAMP NOT NULL DEFAULT NOW(),
- -- Constraints
- PRIMARY KEY (
- transaction_version,
- owner_address,
- coin_type_hash
- )
-);
--- latest coin balances
-CREATE TABLE current_coin_balances {...}
--- down.sql
-DROP TABLE IF EXISTS coin_balances;
-DROP TABLE IF EXISTS current_coin_balances;
-```
-
-See the [full source for `up.sql` and `down.sql`](https://github.com/aptos-labs/aptos-core/tree/main/crates/indexer/migrations/2022-10-04-073529_add_coin_tables).
-
-c. Run the migration. We suggest running it multiple times with `redo` to ensure that both `up.sql` and `down.sql` are implemented correctly. This will also modify the [`schema.rs`](https://github.com/aptos-labs/aptos-core/blob/main/crates/indexer/src/schema.rs) file.
-
-```bash
-DATABASE_URL=postgres://postgres@localhost:5432/postgres diesel migration run
-DATABASE_URL=postgres://postgres@localhost:5432/postgres diesel migration redo
-```
-
-### 2. Create new data schemas
-
-We now have to prepare the Rust data models that correspond to the Diesel schemas. In the case of coin balances, we will define `CoinBalance` and `CurrentCoinBalance` as below:
-
-```rust
-#[derive(Debug, Deserialize, FieldCount, Identifiable, Insertable, Serialize)]
-#[diesel(primary_key(transaction_version, owner_address, coin_type))]
-#[diesel(table_name = coin_balances)]
-pub struct CoinBalance {
- pub transaction_version: i64,
- pub owner_address: String,
- pub coin_type_hash: String,
- pub coin_type: String,
- pub amount: BigDecimal,
- pub transaction_timestamp: chrono::NaiveDateTime,
-}
-
-#[derive(Debug, Deserialize, FieldCount, Identifiable, Insertable, Serialize)]
-#[diesel(primary_key(owner_address, coin_type))]
-#[diesel(table_name = current_coin_balances)]
-pub struct CurrentCoinBalance {
- pub owner_address: String,
- pub coin_type_hash: String,
- pub coin_type: String,
- pub amount: BigDecimal,
- pub last_transaction_version: i64,
- pub last_transaction_timestamp: chrono::NaiveDateTime,
-}
-```
-
-We will also need to specify the parsing logic, where the input is a portion of the transaction. In the case of coin balances, we can find all the details in `WriteSetChanges`, specifically where the write set change type is `write_resources`.
-
-**Where to find the relevant data for parsing**: This requires a combination of understanding the Move module and the structure of the transaction. In the example of coin balance, the contract lives in [coin.move](https://github.com/aptos-labs/aptos-core/blob/main/aptos-move/framework/aptos-framework/sources/coin.move), specifically the coin struct (search for `struct Coin`) that has a `value` field. We then look at an [example transaction](https://fullnode.testnet.aptoslabs.com/v1/transactions/by_version/259518) where we find this exact structure in `write_resources`:
-
-```json
-"changes": [
- {
- ...
- "data": {
- "type": "0x1::coin::CoinStore<0x1::aptos_coin::AptosCoin>",
- "data": {
- "coin": {
- "value": "49742"
- },
- ...
-```
-
-See the full code in [coin_balances.rs](https://github.com/aptos-labs/aptos-core/blob/main/crates/indexer/src/models/coin_models/coin_balances.rs).
-
-### 3. Create a new processor
-
-Now that we have the data model and the parsing function, we need to call that parsing function and save the resulting model in our Postgres database. We do this by creating (or modifying) a `processor`. Most of the boilerplate has already been abstracted away, so the only function you must implement is `process_transactions` (a few other functions need to be copied over; these should be obvious from the example).
-
-The `process_transactions` function takes in a vector of transactions with a start and end version that are used for tracking purposes. The general flow should be:
- - Loop through transactions in the vector.
- - Aggregate relevant models. Sometimes deduping is required, e.g. in the case of `CurrentCoinBalance`.
- - Insert the models into the database in a single Diesel transaction. This is important, to ensure that we do not have partial writes.
- - Return status (error or success).
-
-:::tip Coin transaction processor
-See [coin_process.rs](https://github.com/aptos-labs/aptos-core/blob/main/crates/indexer/src/processors/coin_processor.rs) for a relatively straightforward example. You can search for `coin_balances` in the page for the specific code snippet related to coin balances.
-:::
-
-**How to decide whether to create a new processor:** This is completely up to you. The benefit of creating a new processor is that you are starting from scratch so you will have full control over exactly what gets written to the indexed database. The downside is that you will have to maintain a new fullnode, since there is a 1-to-1 mapping between a fullnode and the processor.
-
-### 4. Integrate the new processor
-
-This is the easiest step and involves just a few additions.
-
-1. To start with, make sure to add the new processor in the Rust code files: [`mod.rs`](https://github.com/aptos-labs/aptos-core/blob/main/crates/indexer/src/processors/mod.rs) and [`runtime.rs`](https://github.com/aptos-labs/aptos-core/blob/main/crates/indexer/src/runtime.rs). See below:
-
-[**mod.rs**](https://github.com/aptos-labs/aptos-core/blob/main/crates/indexer/src/processors/mod.rs)
-
-```rust
-pub enum Processor {
- CoinProcessor,
- ...
-}
-...
- COIN_PROCESSOR_NAME => Self::CoinProcessor,
-```
-
-[**runtime.rs**](https://github.com/aptos-labs/aptos-core/blob/main/crates/indexer/src/runtime.rs)
-
-```rust
-Processor::CoinProcessor => Arc::new(CoinTransactionProcessor::new(conn_pool.clone())),
-```
-
-2. Create a `fullnode.yaml` with the correct configuration and test the custom indexer by starting a fullnode with this `fullnode.yaml`.
-
-**fullnode.yaml**
-
-```yaml
-storage:
- enable_indexer: true
- storage_pruner_config:
- ledger_pruner_config:
- enable: false
-
-indexer:
- enabled: true
- check_chain_id: true
- emit_every: 1000
- postgres_uri: "postgres://postgres@localhost:5432/postgres"
- processor: "coin_processor"
- fetch_tasks: 10
- processor_tasks: 10
-```
-
-Test by starting an Aptos fullnode with the command below. You will see many logs in the terminal output, so use the `grep` filter to see only the indexer log output:
-
-```bash
-cargo run -p aptos-node --features "indexer" --release -- -f ./fullnode.yaml | grep -E "_processor"
-```
-
-See the full instructions on how to start an indexer-enabled fullnode in [Indexer Fullnode](./indexer-fullnode).
diff --git a/developer-docs-site/docs/indexer/legacy/index.md b/developer-docs-site/docs/indexer/legacy/index.md
deleted file mode 100644
index 9ff2c96e6ba08..0000000000000
--- a/developer-docs-site/docs/indexer/legacy/index.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: "Legacy Indexer"
----
-
-# Legacy Indexer
-
-:::caution Deprecation Alert
-
-From now until the end of Q2 2024: We will not be adding any new features to the legacy indexer. However, we will continue to generally support the community, and will make sure that any changes made at the blockchain level do not break the existing legacy processors.
-
-After Q2 2024: We will remove the indexer crates from the [aptos-core](https://github.com/aptos-labs/aptos-core) repo and the legacy indexer will no longer be supported. Please look at our new [Transaction Stream Service](/indexer/txn-stream/) and updated [Indexer API](/indexer/api/).
-
-:::
diff --git a/developer-docs-site/docs/indexer/legacy/indexer-fullnode.md b/developer-docs-site/docs/indexer/legacy/indexer-fullnode.md
deleted file mode 100644
index a815b2304a5d3..0000000000000
--- a/developer-docs-site/docs/indexer/legacy/indexer-fullnode.md
+++ /dev/null
@@ -1,122 +0,0 @@
----
-title: "Run an Indexer Fullnode"
-slug: "indexer-fullnode"
----
-
-:::warning Legacy Indexer
-This is documentation for the legacy indexer. To learn how to run the underlying infrastructure for the latest indexer stack, see [Transaction Stream Service](/indexer/txn-stream).
-:::
-
-# Run an Aptos Indexer
-
-:::danger On macOS with Apple silicon only
-The installation steps below are verified only on macOS with Apple silicon. They might require minor tweaking on other platforms.
-:::
-
-## Summary
-
-To run an indexer fullnode, these are the steps in summary:
-
-1. Make sure that you have all the required tools and packages described below in this document.
-1. Follow the instructions to [set up a public fullnode](/nodes/full-node/fullnode-source-code-or-docker.md) but do not start the fullnode yet.
-1. Edit the `fullnode.yaml` as described below in this document.
-1. Run the indexer fullnode per the instructions below.
-
-## Prerequisites
-
-Install the packages below. Note, you may have already installed many of these while [preparing your development environment](/guides/building-from-source). You can confirm by running `which command-name` and ensuring the package appears in the output (although `libpq` will not be returned even when installed).
-
-> Important: If you are on macOS, you will need to [install Docker following the official guidance](https://docs.docker.com/desktop/install/mac-install/) rather than `brew`.
-
-For an Aptos indexer fullnode, install these packages:
-
- - [`brew`](https://brew.sh/) - `/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"` Run the commands emitted in the output to add the command to your path and install any dependencies.
- - [`cargo` Rust package manager](https://www.rust-lang.org/tools/install) - `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`
- - [`docker`](https://docs.docker.com/get-docker/) - `brew install docker`
- - [libpq Postgres C API library containing the `pg_ctl` command](https://formulae.brew.sh/formula/libpq) - `brew install libpq`
- Make sure to perform all export commands after the installation.
- - [`postgres` PostgreSQL server](https://www.postgresql.org/) - `brew install postgresql`
- - [`diesel`](https://diesel.rs/) - `brew install diesel`
-
-## Set up the database
-
-1. Start the PostgreSQL server:
- `brew services start postgresql`
-1. Ensure you can run `psql postgres` and then exit the prompt by entering: `\q`
-1. Create a PostgreSQL user `postgres` with the `createuser` command (find it with `which`):
- ```bash
- /path/to/createuser -s postgres
- ```
-1. Clone `aptos-core` repository if you have not already:
- ```bash
- git clone https://github.com/aptos-labs/aptos-core.git
- ```
-1. Navigate (or `cd`) into `aptos-core/crates/indexer` directory.
-1. Create the database schema:
- ```bash
- diesel migration run --database-url postgresql://localhost/postgres
- ```
- This will create a database schema with the subdirectory `migrations` located in this `aptos-core/crates/indexer` directory. If for some reason this database is already in use, try a different database. For example: `DATABASE_URL=postgres://postgres@localhost:5432/indexer_v2 diesel database reset`
-
-## Start the fullnode indexer
-
-1. Follow the instructions to set up a [public fullnode](/nodes/full-node/fullnode-source-code-or-docker.md) and prepare the setup, but **do not** yet start the indexer (with `cargo run` or `docker run`).
-1. Pull the latest indexer Docker image with:
- ```bash
- docker pull aptoslabs/validator:nightly_indexer
- ```
-1. Edit the `./fullnode.yaml` and add the following configuration:
- ```yaml
- storage:
- enable_indexer: true
- # This is to avoid the node being pruned
- storage_pruner_config:
- ledger_pruner_config:
- enable: false
-
- indexer:
- enabled: true
- postgres_uri: "postgres://postgres@localhost:5432/postgres"
- processor: "default_processor"
- check_chain_id: true
- emit_every: 500
- ```
-
-:::tip Bootstrap the fullnode
-Instead of syncing your indexer fullnode from genesis, which may take a long period of time, you can choose to bootstrap your fullnode using backup data before starting it. To do so, follow the instructions to [restore from a backup](/nodes/full-node/aptos-db-restore.md).
-
-Note: indexers cannot be bootstrapped using [a snapshot](/nodes/full-node/bootstrap-fullnode.md) or [fast sync](../../guides/state-sync.md#fast-syncing).
-:::
-
-1. Run the indexer fullnode with either `cargo run` or `docker run` depending upon your setup. Remember to supply the arguments you need for your specific node:
- ```bash
- docker run -p 8080:8080 \
- -p 9101:9101 -p 6180:6180 \
- -v $(pwd):/opt/aptos/etc -v $(pwd)/data:/opt/aptos/data \
- --workdir /opt/aptos/etc \
- --name=aptos-fullnode aptoslabs/validator:nightly_indexer aptos-node \
- -f /opt/aptos/etc/fullnode.yaml
- ```
- or:
- ```bash
- cargo run -p aptos-node --features "indexer" --release -- -f ./fullnode.yaml
- ```
-
-## Restart the indexer
-
-To restart the PostgreSQL server:
-
-1. [Shut down the server](https://www.postgresql.org/docs/8.1/postmaster-shutdown.html). First find the `postmaster` process:
- ```bash
- ps -ef | grep -i postmaster
- ```
-
-1. Copy the process ID (PID) for the process and pass it to the following command to shut it down:
- ```bash
- kill -INT PID
- ```
-
-1. Restart the PostgreSQL server with:
- ```bash
- brew services restart postgresql@14
- ```
diff --git a/developer-docs-site/docs/indexer/legacy/migration.md b/developer-docs-site/docs/indexer/legacy/migration.md
deleted file mode 100644
index c75690a55423c..0000000000000
--- a/developer-docs-site/docs/indexer/legacy/migration.md
+++ /dev/null
@@ -1,124 +0,0 @@
----
-title: "Migrate to Transaction Stream Service"
----
-
-This guide explains how to migrate to the Transaction Stream Service if you are currently running a legacy indexer.
-
-The old indexer stack requires running an archival fullnode with additional threads to process the transactions, which is difficult and expensive to maintain. Adding more custom logic requires either a bulkier machine or several fullnodes, whose costs scale linearly.
-
-This new way of indexing uses the [Transaction Stream Service](https://aptos.dev/indexer/txn-stream/). You can either use the [Labs-Hosted Transaction Stream Service](https://aptos.dev/indexer/txn-stream/labs-hosted/) or [run your own instance of Transaction Stream Service](https://aptos.dev/indexer/txn-stream/self-hosted).
-
-## 1. Clone the repo
-
-```bash
-# SSH
-git clone git@github.com:aptos-labs/aptos-indexer-processors.git
-
-# HTTPS
-git clone https://github.com/aptos-labs/aptos-indexer-processors.git
-```
-
-Navigate to the directory for the service:
-
-```bash
-cd aptos-indexer-processors
-cd rust/processor
-```
-
-## 2. Migrate processors to Transaction Stream Service
-
-For each processor you're migrating, you'll need to create a config file using the template below. You can find more information about each field of the config file [here](https://aptos.dev/indexer/api/self-hosted/#configuration).
-
-```yaml
-health_check_port: 8084
-server_config:
- processor_config:
- type: default_processor
- postgres_connection_string:
- indexer_grpc_data_service_address:
- indexer_grpc_http2_ping_interval_in_secs: 60
- indexer_grpc_http2_ping_timeout_in_secs: 10
- auth_token:
- starting_version: 0 # optional
- ending_version: 0 # optional
-```
-
-To connect the processor to the Transaction Stream Service, you need to set the URL for `indexer_grpc_data_service_address`. Choose one of the following options.
-
-### Option A: Connect to Labs-Hosted Transaction Stream Service
-
-The main benefit of using the Labs-Hosted Transaction Stream Service is that you no longer need to run an archival fullnode to get a stream of transactions. This service is rate-limited. Instructions to connect to Labs-Hosted Transaction Stream can be found [here](https://aptos.dev/indexer/txn-stream/labs-hosted).
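-
-For example, pointing the config template above at the Labs-Hosted mainnet endpoint would look roughly like this (the auth token is a placeholder; see the [Labs-Hosted Transaction Stream Service](https://aptos.dev/indexer/txn-stream/labs-hosted) docs for how to obtain one):
-
-```yaml
-server_config:
-  indexer_grpc_data_service_address: grpc.mainnet.aptoslabs.com:443
-  auth_token: aptoslabs_xxxxxxxx # placeholder
-```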
-
-### Option B: Run a Self-Hosted Transaction Stream Service
-
-If you choose to, you can run a self-hosted instance of the Transaction Stream Service and connect your processors to it. Instructions to run a Self-Hosted Transaction Stream can be found [here](https://aptos.dev/indexer/txn-stream/self-hosted).
-
-## 3. (Optional) Migrate custom processors to Transaction Stream Service
-
-If you have custom processors written with the old indexer, we highly recommend starting from scratch with a new database. Using a new database ensures that all your custom database migrations will be applied during this migration.
-
-### a. Migrate custom table schemas
-
-Migrate your custom schemas by copying over each of your custom migrations to the [`migrations`](https://github.com/aptos-labs/aptos-indexer-processors/tree/main/rust/processor/migrations) folder.
-
-### b. Migrate custom processors code
-
-Migrate the code by copying over your custom processors to the [`processors`](https://github.com/aptos-labs/aptos-indexer-processors/tree/main/rust/processor) folder and any relevant custom models to the [`models`](https://github.com/aptos-labs/aptos-indexer-processors/tree/main/rust/processor/src/models) folder. Integrate the custom processors with the rest of the code by adding them to the following Rust code files.
-
-[`mod.rs`](https://github.com/aptos-labs/aptos-indexer-processors/blob/main/rust/processor/src/processors/mod.rs)
-
-```rust
-pub enum Processor {
- ...
- CoinProcessor,
- ...
-}
-
-impl Processor {
- ...
- COIN_PROCESSOR_NAME => Self::CoinProcessor,
- ...
-}
-```
-
-[`worker.rs`](https://github.com/aptos-labs/aptos-indexer-processors/blob/main/rust/processor/src/worker.rs)
-
-```rust
-Processor::CoinProcessor => {
- Arc::new(CoinTransactionProcessor::new(self.db_pool.clone()))
-},
-```
-
-## 4. Backfill Postgres database with Diesel
-
-Even though the new processors have the same Postgres schemas as the old ones, we recommend you do a complete backfill (ideally writing to a new DB altogether) because some fields are a bit different as a result of the protobuf conversion.
-
-These instructions assume you are familiar with using [Diesel migrations](https://docs.rs/diesel_migrations/latest/diesel_migrations/). Run the full database migration with the following command:
-
-```bash
-DATABASE_URL=postgres://postgres@localhost:5432/postgres diesel migration run
-```
-
-## 5. Run the migrated processors
-
-To run a single processor, use the following command:
-
-```bash
-cargo run --release -- -c config.yaml
-```
-
-If you have multiple processors, you'll need to run a separate instance of the service for each of the processors.
-
-If you'd like to run the processor as a Docker image, the instructions are listed [here](https://aptos.dev/indexer/api/self-hosted#run-with-docker).
-
-## FAQs
-
-### 1. Will the protobuf ever be updated, and what do I need to do at that time?
-
-The protobuf schema may be updated in the future. Backwards incompatible changes will be communicated in release notes.
-
-### 2. What if I already have custom logic written in the old indexer? Is it easy to migrate those?
-
-Since the new indexer stack has the same Postgres schema as the old indexer stack, it should be easy to migrate your processors. We still highly recommend creating a new DB for this migration so that any custom DB migrations are applied.
-
-Follow Step 3 in this guide to migrate your custom logic over to the new processors stack.
diff --git a/developer-docs-site/docs/indexer/txn-stream/index.md b/developer-docs-site/docs/indexer/txn-stream/index.md
deleted file mode 100644
index ba0472f0b465f..0000000000000
--- a/developer-docs-site/docs/indexer/txn-stream/index.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: "Transaction Stream Service"
----
-
-import BetaNotice from '../../../src/components/_indexer_beta_notice.mdx';
-
-
-
-The Transaction Stream Service is a service that listens to the Aptos blockchain and emits transactions as they are processed. These docs explain how this system works, how to use the Labs-Hosted instance of the service, and how to deploy it yourself.
diff --git a/developer-docs-site/docs/indexer/txn-stream/labs-hosted.md b/developer-docs-site/docs/indexer/txn-stream/labs-hosted.md
deleted file mode 100644
index 642e8611e19b3..0000000000000
--- a/developer-docs-site/docs/indexer/txn-stream/labs-hosted.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-title: "Labs-Hosted Transaction Stream Service"
----
-
-import BetaNotice from '../../../src/components/_indexer_beta_notice.mdx';
-
-
-
-If you are running your own instance of the [Indexer API](/indexer/api), or a [custom processor](/indexer/custom-processors), you must have access to an instance of the Transaction Stream Service. This page contains information about how to use the Labs-Hosted Transaction Stream Service.
-
-## Endpoints
-All endpoints are in GCP us-central1 unless otherwise specified.
-
-- **Mainnet:** grpc.mainnet.aptoslabs.com:443
-- **Testnet:** grpc.testnet.aptoslabs.com:443
-- **Devnet:** grpc.devnet.aptoslabs.com:443
-
-
-
-## Auth tokens
-
-In order to use the Labs-Hosted Transaction Stream Service you must have an auth token. To get an auth token, do the following:
-1. Go to https://aptos-api-gateway-prod.firebaseapp.com.
-1. Sign in and select "API Tokens" in the left sidebar.
-1. Create a new token. You will see the token value in the first table.
-
-You can provide the auth token by setting the `Authorization` HTTP header ([MDN](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Authorization)). For example, with curl:
-```
-curl -H 'Authorization: Bearer aptoslabs_yj4donpaKy_Q6RBP4cdBmjA8T51hto1GcVX5ZS9S65dx'
-```
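With an auth token in hand, you can sanity-check your access by streaming a few transactions with grpcurl. This is a minimal sketch, assuming the service exposes the `aptos.indexer.v1.RawData/GetTransactions` gRPC method with `starting_version`/`transactions_count` request fields; the token below is a placeholder:

```shell
# Stream 5 transactions from mainnet, starting at version 0.
# Replace <your-auth-token> with the token from the API Gateway.
grpcurl \
  -H 'Authorization: Bearer <your-auth-token>' \
  -d '{ "starting_version": 0, "transactions_count": 5 }' \
  grpc.mainnet.aptoslabs.com:443 \
  aptos.indexer.v1.RawData/GetTransactions
```

If the token is valid, the service streams JSON-encoded transactions to stdout.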
-
-For more comprehensive information about how to use the Transaction Stream Service, see the docs for the downstream systems:
-- [Indexer API](/indexer/api/self-hosted)
-- [Custom Processors](/indexer/custom-processors)
diff --git a/developer-docs-site/docs/indexer/txn-stream/local-development.md b/developer-docs-site/docs/indexer/txn-stream/local-development.md
deleted file mode 100644
index f79a693f471f0..0000000000000
--- a/developer-docs-site/docs/indexer/txn-stream/local-development.md
+++ /dev/null
@@ -1,123 +0,0 @@
----
-title: "Running Locally"
----
-
-# Running the Transaction Stream Service Locally
-
-import BetaNotice from '../../../src/components/_indexer_beta_notice.mdx';
-
-<BetaNotice />
-
-:::info
-This has been tested on MacOS 13 on ARM and Debian 11 on x86_64.
-:::
-
-When building a custom processor, you might find it helpful to develop against a local development stack. The Transaction Stream Service is a complicated, multi-component system. To assist with local development, we offer a Python script that wraps a Docker compose file to set up the entire system.
-
-This script sets up the following:
-- A single node testnet with the indexer GRPC stream enabled.
-- A Redis instance.
-- Transaction Stream Service, including the following components:
- - [cache-worker](https://github.com/aptos-labs/aptos-core/tree/main/ecosystem/indexer-grpc/indexer-grpc-cache-worker): Pulls transactions from the node and stores them in Redis.
- - [file-store](https://github.com/aptos-labs/aptos-core/tree/main/ecosystem/indexer-grpc/indexer-grpc-file-store): Fetches transactions from Redis and stores them in a filesystem.
- - [data-service](https://github.com/aptos-labs/aptos-core/tree/main/ecosystem/indexer-grpc/indexer-grpc-data-service): Serves transactions via a GRPC stream to downstream clients. It pulls from either the cache or the file store depending on the age of the transaction.
-- Shared volumes and networking to hook it all up.
-
-You can learn more about the Transaction Stream Service architecture [here](/indexer/txn-stream) and the Docker compose file [here](https://github.com/aptos-labs/aptos-core/blob/main/docker/compose/indexer-grpc/docker-compose.yaml).
-
-## Prerequisites
-In order to use the local development script you must have the following installed:
-- Python 3.8+: [Installation Guide](https://docs.python-guide.org/starting/installation/#python-3-installation-guides).
-- Poetry: [Installation Guide](https://python-poetry.org/docs/#installation).
-- Docker: [Installation Guide](https://docs.docker.com/get-docker/).
-- Docker Compose v2: This should be installed by default with modern Docker installations, verify with this command:
-```bash
-docker-compose version --short
-```
-- grpcurl: [Installation Guide](https://github.com/fullstorydev/grpcurl#installation)
-- OpenSSL
-
-## Preparation
-Clone the aptos-core repo:
-```
-# HTTPS
-git clone https://github.com/aptos-labs/aptos-core.git
-
-# SSH
-git clone git@github.com:aptos-labs/aptos-core.git
-```
-
-Navigate to the `testsuite` directory:
-```
-cd aptos-core
-cd testsuite
-```
-
-Install the Python dependencies:
-```
-poetry install
-```
-
-## Running the script
-### Starting the service
-```
-poetry run python indexer_grpc_local.py start
-```
-
-You will know this succeeded if the command exits and you see the following:
-```
-Attempting to stream from indexer grpc for 10s
-Stream finished successfully
-```
-
-### Stopping the service
-```
-poetry run python indexer_grpc_local.py stop
-```
-
-### Wiping the data
-When you start, stop, and start the service again, it will re-use the same local testnet data. If you wish to wipe the local testnet and start from scratch you can run the following command:
-```
-poetry run python indexer_grpc_local.py wipe
-```
-
-## Using the local service
-You can connect to the local Transaction Stream Service, e.g. from a custom processor, using the following configuration values:
-```
-indexer_grpc_data_service_address: 127.0.0.1:50052
-auth_token: dummy_token
-```
-
-You can connect to the node at the following address:
-```
-http://127.0.0.1:8080/v1
-```
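Once the stack reports success, you can sanity-check both endpoints from another terminal. This sketch assumes the data service exposes the `aptos.indexer.v1.RawData/GetTransactions` gRPC method:

```shell
# Check the node REST API; this should return ledger info as JSON.
curl http://127.0.0.1:8080/v1

# Check the Transaction Stream Service (no TLS locally, dummy auth token).
grpcurl \
  -plaintext \
  -H 'Authorization: Bearer dummy_token' \
  -d '{ "starting_version": 0, "transactions_count": 1 }' \
  127.0.0.1:50052 \
  aptos.indexer.v1.RawData/GetTransactions
```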
-
-## Debugging
-
-### Usage on ARM systems
-If you have a machine with an ARM processor, e.g. an M1/M2 Mac, the script should detect that and set the appropriate environment variables to ensure that the correct images will be used. If you have issues with this, try setting the following environment variable:
-```bash
-export DOCKER_DEFAULT_PLATFORM=linux/amd64
-```
-
-Additionally, make sure the following settings are correct in Docker Desktop:
-- Enabled: Preferences > General > Use Virtualization framework
-- Enabled: Preferences > General > Use Docker Compose V2
-- Disabled: Features in development -> Use Rosetta for x86/amd64 emulation on Apple Silicon
-
-This script has not been tested on Linux ARM systems.
-
-### Redis fails to start
-Try setting the following environment variable before running the script:
-```bash
-export REDIS_IMAGE_REPO=arm64v8/redis
-```
-
-### Cache worker is crash-looping, or the log shows `Redis latest version update failed.`
-Wipe the data:
-```bash
-poetry run python indexer_grpc_local.py wipe
-```
-
-This means historical data will be lost.
diff --git a/developer-docs-site/docs/indexer/txn-stream/self-hosted.md b/developer-docs-site/docs/indexer/txn-stream/self-hosted.md
deleted file mode 100644
index 4cd2a5675cd77..0000000000000
--- a/developer-docs-site/docs/indexer/txn-stream/self-hosted.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: "Self-Hosted Transaction Stream Service"
----
-
-import BetaNotice from '../../../src/components/_indexer_beta_notice.mdx';
-
-<BetaNotice />
-
-Coming soon!
diff --git a/developer-docs-site/docs/integration/aptos-name-service-connector.md b/developer-docs-site/docs/integration/aptos-name-service-connector.md
deleted file mode 100644
index ef27af9a6b61e..0000000000000
--- a/developer-docs-site/docs/integration/aptos-name-service-connector.md
+++ /dev/null
@@ -1,86 +0,0 @@
----
-title: "Integrate with Aptos Names Service"
-id: "aptos-names-service-package"
----
-# Integrate with Aptos Names Service
-The Aptos Name Service provides a React UI package that offers developers a customizable button and modal to enable users to search for and mint Aptos names directly from their website.
-
-## Prerequisites
-- [React project](https://create-react-app.dev/docs/getting-started/)
-- Supporting dependencies installed in the root directory of your React project with npm or yarn:
- - `npm install @emotion/styled @emotion/react react-copy-to-clipboard`
- - `yarn add @emotion/styled @emotion/react react-copy-to-clipboard`
-
-## Use Aptos Names Service Connector
-1. Open a terminal session and navigate to the root directory of your React project.
-1. Install the `aptos-names-connector` package using npm or yarn:
- - `npm install "@aptos-labs/aptos-names-connector"`
- - `yarn add "@aptos-labs/aptos-names-connector"`
-1. Once you have installed the package, you can import the `AptosNamesConnector` component and use it in your React application (by default in `./src/App.js`):
- ```
- import { AptosNamesConnector } from "@aptos-labs/aptos-names-connector";
-
- function MyComponent() {
- const handleSignTransaction = async () => {
- // Handle signing of transaction
- };
-
- return (
-   <AptosNamesConnector
-     onSignTransaction={handleSignTransaction}
-     isWalletConnected={true}
-     network="testnet"
-     buttonLabel="Claim your name"
-   />
- );
- }
- ```
-1. To see your changes, start a development server using npm or yarn. The following commands will open the React application in your default web browser (typically at `localhost:3000`):
- - `npm start`
- - `yarn start`
-
-## Configure `AptosNamesConnector` properties
-The `AptosNamesConnector` component accepts the following props:
-
-- `onSignTransaction`: A required callback function that is called when the user clicks the "Mint" button in the modal. This function should handle the signing of the transaction.
-- `isWalletConnected`: A boolean value that indicates whether the user's wallet is connected.
-- `network`: A string value that specifies whether the component should connect to the mainnet or testnet.
-- `buttonLabel`: A string value that specifies the text to display on the button.
-
-## Customize button label and appearance
-The button label can be customized by passing a string value to the buttonLabel prop.
-The appearance of the button in the `AptosNamesConnector` component can be customized to fit in your website. The button has the CSS class name of `ans_connector_button`:
-
-```
-.ans_connector_button {
- background-color: #000000;
- border: none;
- border-radius: 4px;
- color: #ffffff;
- cursor: pointer;
- font-size: 16px;
- font-weight: bold;
- padding: 12px 16px;
-}
-```
-To use `ans_connector_button` in your React application, add `import "@aptos-labs/aptos-names-connector/dist/index.css";` to the top of your App.js file and reference it with ``
-
-## Supported networks
-The `AptosNamesConnector` component supports both mainnet and testnet. To connect to the mainnet, set the network prop to "mainnet". To connect to the testnet, set the network prop to "testnet".
-
-## Example
-The following example shows how to use the `AptosNamesConnector` component in a React application:
-
-
-
-- Add a ‘claim name’ button to any page in your application. This allows your users to directly create an Aptos name, giving them a human-readable .apt name for their Aptos wallet address. You can customize the look of the button to suit your application. Here is an example on the profile page of an NFT marketplace.
-
-![Claim name](../../static/img/docs/ans_entrypoint_example.png)
-
-- When the button is clicked, the Aptos Names modal will show up, and the user can search for a name and mint it directly in your application.
-
-![Show Aptos Name Service modal](../../static/img/docs/ans_entrypoint_modal_example.png)
-
-- Once the user has minted their name, you can replace their displayed Aptos wallet address with the name by querying Aptos fullnodes. Now your users have a human-readable .apt name.
-
-![Claim another name](../../static/img/docs/ans_entrypoint_with_other_name.png)
diff --git a/developer-docs-site/docs/integration/index.md b/developer-docs-site/docs/integration/index.md
deleted file mode 100644
index 546576cce01cf..0000000000000
--- a/developer-docs-site/docs/integration/index.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: "Interact with Blockchain"
----
-
-# Integrate with the Aptos Blockchain
-
-Follow these guides to work with the Aptos blockchain.
-
-- ### [Aptos Token Overview](../guides/nfts/aptos-token-overview.md)
-- ### [Integrate with Aptos Names Service](aptos-name-service-connector.md)
-- ### [Integrate with the Aptos Faucet](../guides/system-integrators-guide.md#integrating-with-the-faucet)
-- ### [Error Codes in Aptos](../reference/error-codes.md)
diff --git a/developer-docs-site/docs/integration/wallet-adapter-concept.md b/developer-docs-site/docs/integration/wallet-adapter-concept.md
deleted file mode 100644
index fd0d20abb6f81..0000000000000
--- a/developer-docs-site/docs/integration/wallet-adapter-concept.md
+++ /dev/null
@@ -1,93 +0,0 @@
----
-title: "Integrate with Aptos Wallets"
-id: "wallet-adapter-concept"
----
-
-import ThemedImage from '@theme/ThemedImage';
-import useBaseUrl from '@docusaurus/useBaseUrl';
-
-# Integrate with Aptos wallets
-
-Decentralized applications often run through a browser extension or mobile application to read onchain data and submit
-transactions. The Aptos Wallet Adapter allows for a single interface for apps and wallets to integrate together.
-
-## Implementing the Aptos Wallet Adapter
-
-For the best user experience, we suggest that dapps offer multiple wallets, to allow users to choose their preferred
-wallet.
-
-Implementing wallet integration can be difficult for dapps; they need to:
-
-1. Support and test all edge cases
-2. Implement and maintain different wallet APIs
-3. Provide users with functionality the wallet itself doesn't support
-4. Keep track of all the different wallets in the ecosystem
-
-In addition, creating and implementing a wallet is not an easy task either; wallet builders need to:
-
-1. Provide a wallet that follows a known standard so it is easy to integrate with
-2. Gain visibility and exposure in the ecosystem among all the other wallets
-3. Persuade dapp projects to dedicate the time and resources to integrate the wallet within their apps
-
-When we started building a wallet adapter, we wanted to provide an adapter that can be easy enough for wallets to integrate with and for dapps to use and implement.
-
-For that, we provide an [Aptos Wallet Adapter](https://github.com/aptos-labs/aptos-wallet-adapter) monorepo for wallet and dapps creators to ease development and ensure a smooth process in building projects on the Aptos network.
-The Aptos Wallet Adapter acts as a service between dapps and wallets and exposes APIs for dapps to interact with the wallets by following our [Wallet Standard](../standards/wallets.md). This in turn allows dapps to support many wallets with minimal integration effort, and wallets to follow a known standard and gain visibility.
-
-## Adapter structure
-
-The adapter consists of three components:
-
-1. Adapter Core package
-2. Adapter React provider (for dapps)
-3. Adapter Template plugin (for wallets)
-
-This structure offers the following benefits:
-
-- Modularity (separation of concerns) - separating the adapter into three components allows more freedom in design, implementation, deployment, and usage.
-- Wallets create and own their plugin implementation (instead of having all in the same monorepo):
- - Reduces the packages bundle size used by dapps.
- - Lets them be self-service and support themselves without too much friction.
- - Prevents build failures in case of any bugs/bad implementation/wrong config files/etc.
-- Simplicity - keeps the Provider package very light and small as the major logic is implemented in the core package.
-- Flexibility - for wallets in creating and implementing custom functions.
-
-### Adapter Core package
-
-The [Adapter Core package](https://github.com/aptos-labs/aptos-wallet-adapter/tree/main/packages/wallet-adapter-core) handles the interaction between the dapp and the wallet. It:
-
-- Exposes the standard API (and some different functions supported by different wallets)
-- Holds the current wallet state and the installed wallets
-- Emits events on different actions and much more
-
-Dapps should not need to _know_ this package directly, as dapps interact with the provider, which in turn interacts with the core package; however, some types are exposed from the core package for the dapp to use.
-
-Wallets should implement their own plugin class that extends the basic plugin class (properties + events) interface that lives in the core package.
-
-:::tip
-If a wallet supports functions that are not part of the basic plugin interface, a pull request should be made against the core package to include that function so the adapter can support it. You can take a look at `signTransaction` in the wallet core package for guidance.
-:::
-
-### Adapter React provider
-
-The light [Adapter React package](https://github.com/aptos-labs/aptos-wallet-adapter/tree/main/packages/wallet-adapter-react) is for dapps to import and use. The package contains a `Provider` and a `Context` to implement and use within your app.
-
-Follow the [Wallet Adapter For Dapp Builders](./wallet-adapter-for-dapp.md) guide on how to use the provider package on your dapp.
-
-### Adapter Template plugin
-
-Wallets looking to integrate with the adapter should implement their own wallet plugin. To ease the process, we provide a pre-made class that implements the basic functionality needed (according to the wallet standard).
-
-The [Wallet Adapter Plugin Template repo](https://github.com/aptos-labs/wallet-adapter-plugin-template) holds a pre-made class, a test file, and some config files to help you build and publish the plugin as an NPM package.
-
-Follow the [Wallet Adapter For Wallet Builders](./wallet-adapter-for-wallets.md) guide on how to use the template to implement and publish your wallet plugin.
-
-
-
-
diff --git a/developer-docs-site/docs/integration/wallet-adapter-for-dapp.md b/developer-docs-site/docs/integration/wallet-adapter-for-dapp.md
deleted file mode 100644
index 9d5b206645ff6..0000000000000
--- a/developer-docs-site/docs/integration/wallet-adapter-for-dapp.md
+++ /dev/null
@@ -1,87 +0,0 @@
----
-title: "For Dapps"
-id: "wallet-adapter-for-dapp"
----
-
-# Wallet Adapter For Dapp Builders
-
-Imagine you have a great idea for a dapp and you want to start building it. Eventually, you will need to integrate a wallet or multiple wallets so your users can interact with the Aptos blockchain.
-Implementing wallet integration can be difficult: supporting all edge cases, new features, and unsupported functionality is hard, and supporting multiple wallets is even harder.
-
-In addition, different wallets have different APIs, and not all wallets share the same naming convention. For example, maybe all wallets have a `connect` method, but not all wallets call that method `connect`; that can be tricky to support.
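To see why a single interface helps, consider the kind of normalization an adapter performs. The sketch below is illustrative plain JavaScript, not the actual adapter code; the wallet objects and method names are hypothetical:

```javascript
// Hypothetical sketch: hide differing wallet APIs behind one interface.
class NormalizedWallet {
  constructor(wallet) {
    this.wallet = wallet;
  }

  // Different wallets may name their connect method differently.
  async connect() {
    const fn = this.wallet.connect ?? this.wallet.enable;
    if (!fn) throw new Error("wallet exposes no connect-like method");
    return fn.call(this.wallet);
  }
}

// Two imaginary wallets with different APIs:
const modernWallet = { connect: async () => "connected" };
const legacyWallet = { enable: async () => "connected" };

// Both now connect the same way:
new NormalizedWallet(modernWallet).connect().then(console.log);
new NormalizedWallet(legacyWallet).connect().then(console.log);
```

The real adapter does much more (events, state, feature detection), but the principle is the same: the dapp codes against one interface.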
-
-Luckily, Aptos built a wallet adapter, created and maintained by the Aptos team, to help you ramp up development and standardize where possible.
-
-The Aptos Wallet Adapter provides:
-
-- Easy wallet implementation - no need to implement and support code for multiple wallets.
-- Support for different wallet APIs.
-- Support for features not implemented on the wallet level.
-- Detection of uninstalled wallets (so you can show users that a wallet is not installed).
-- Auto-connect functionality that remembers the current wallet state.
-- Listeners for wallet events, such as account and network changes.
-- A well-developed and maintained reference implementation by the Aptos ecosystem team.
-
-## Install
-
-Currently, the adapter supports a _React provider_ for you to include in your app.
-
-Install wallet dependencies you want to include in your app. You can find a list of the wallets in the Aptos Wallet Adapter [README](https://github.com/aptos-labs/aptos-wallet-adapter#supported-wallet-packages).
-
-Install the React provider:
-
-```bash
-npm install @aptos-labs/wallet-adapter-react
-```
-
-## Import dependencies
-
-In the `App.jsx` file:
-
-Import the installed wallets:
-
-```js
-import { PetraWallet } from "petra-plugin-wallet-adapter";
-```
-
-Import the `AptosWalletAdapterProvider`:
-
-```js
-import { AptosWalletAdapterProvider } from "@aptos-labs/wallet-adapter-react";
-```
-
-Wrap your app with the Provider, pass it the plugins (wallets) you want to have on your app as an array, and include an autoConnect option (set to false by default):
-
-```js
-const wallets = [new PetraWallet()];
-
-<AptosWalletAdapterProvider plugins={wallets} autoConnect={true}>
-  <App />
-</AptosWalletAdapterProvider>;
-```
-
-### Use
-
-On any page you want to use the wallet properties, import `useWallet` from `@aptos-labs/wallet-adapter-react`:
-
-```js
-import { useWallet } from "@aptos-labs/wallet-adapter-react";
-```
-
-You can then use the exported properties:
-
-```js
-const {
- connect,
- account,
- network,
- connected,
- disconnect,
- wallet,
- wallets,
- signAndSubmitTransaction,
- signTransaction,
- signMessage,
-} = useWallet();
-```
-
-Finally, use the [examples](https://github.com/aptos-labs/aptos-wallet-adapter/tree/main/packages/wallet-adapter-react#examples) on the package README file to build more functionality into your dapps.
diff --git a/developer-docs-site/docs/integration/wallet-adapter-for-wallets.md b/developer-docs-site/docs/integration/wallet-adapter-for-wallets.md
deleted file mode 100644
index 86170317fb3d5..0000000000000
--- a/developer-docs-site/docs/integration/wallet-adapter-for-wallets.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-title: "For Wallets"
-id: "wallet-adapter-for-wallets"
----
-
-# Wallet Adapter For Wallet Builders
-
-To benefit from dapps in the Aptos ecosystem and provide your users the functionality they are looking for in a wallet, your wallet plugin should follow the [Aptos Wallet Standard](../standards/wallets.md) and be built on the Aptos Wallet Adapter.
-
-The [wallet-adapter-plugin-template](https://github.com/aptos-labs/wallet-adapter-plugin-template) repository gives wallet builders a pre-made class with all required wallet functionality following the Aptos Wallet Standard for easy and fast development.
-
-## Configuration
-
-1. `git clone git@github.com:aptos-labs/wallet-adapter-plugin-template.git`
-1. Open `src/index.ts` for editing.
-1. Replace all `AptosWindow` references with: `<Your Wallet Name>Window`
-1. Replace `AptosWalletName` with: `<Your Wallet Name>WalletName`
-1. Replace `url` with your website URL.
-1. Change `icon` to your wallet icon (pay attention to the required format).
-1. Replace `window.aptos` with: `window.<yourWalletName>`
-   - Make sure the `Window Interface` has `<yourWalletName>` as a key (instead of `aptos`).
-1. Open `__tests/index.test.tsx` and change `AptosWallet` to: `<Your Wallet Name>Wallet`
-1. Run tests with `pnpm test` - all tests should pass.
-
-At this point, you have a ready wallet class with all required properties and functions to integrate with the Aptos Wallet Adapter.
-
-### Publish as a package
-
-The next step is to publish your wallet as an NPM package so dapps can install it as a dependency. Use one of the options below:
-
-[Creating and publishing scoped public packages](https://docs.npmjs.com/creating-and-publishing-scoped-public-packages)
-
-[Creating and publishing unscoped public packages](https://docs.npmjs.com/creating-and-publishing-unscoped-public-packages)
-
-:::tip
-If your wallet provides functionality that is not included in the basic plugin interface, you should open a pull request against [aptos-wallet-adapter](https://github.com/aptos-labs/aptos-wallet-adapter) to have the core package support this functionality. See `signTransaction` on the [wallet core package](https://github.com/aptos-labs/aptos-wallet-adapter/blob/main/packages/wallet-adapter-core/src/WalletCore.ts) for guidance.
-:::
-
-### Add your name to the wallets list
-
-Once the package is published, create a pull request against the [aptos-wallet-adapter](https://github.com/aptos-labs/aptos-wallet-adapter) package and add your wallet name to the [supported wallet list](https://github.com/aptos-labs/aptos-wallet-adapter#supported-wallet-packages) on the README file as a URL to your NPM package.
diff --git a/developer-docs-site/docs/move/book/SUMMARY.md b/developer-docs-site/docs/move/book/SUMMARY.md
deleted file mode 100644
index 3d5e9bda02670..0000000000000
--- a/developer-docs-site/docs/move/book/SUMMARY.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# The Move Programming Language
-
-[Introduction](introduction.md)
-
-## Getting Started
-
-- [Modules and Scripts](modules-and-scripts.md)
-- [Move Tutorial](creating-coins.md)
-
-## Primitive Types
-
-- [Integers](integers.md)
-- [Bool](bool.md)
-- [Address](address.md)
-- [Vector](vector.md)
-- [Signer](signer.md)
-- [References](references.md)
-- [Tuples and Unit](tuples.md)
-
-## Basic Concepts
-
-- [Local Variables and Scopes](variables.md)
-- [Equality](equality.md)
-- [Abort and Assert](abort-and-assert.md)
-- [Conditionals](conditionals.md)
-- [While and Loop](loops.md)
-- [Functions](functions.md)
-- [Structs and Resources](structs-and-resources.md)
-- [Constants](constants.md)
-- [Generics](generics.md)
-- [Type Abilities](abilities.md)
-- [Uses and Aliases](uses.md)
-- [Friends](friends.md)
-- [Packages](packages.md)
-- [Package Upgrades](package-upgrades.md)
-- [Unit Tests](unit-testing.md)
-
-## Global Storage
-
-- [Global Storage Structure](global-storage-structure.md)
-- [Global Storage Operators](global-storage-operators.md)
-
-## Reference
-
-- [Standard Library](standard-library.md)
-- [Coding Conventions](coding-conventions.md)
diff --git a/developer-docs-site/docs/move/book/abilities.md b/developer-docs-site/docs/move/book/abilities.md
deleted file mode 100644
index 81ecc0dd2b025..0000000000000
--- a/developer-docs-site/docs/move/book/abilities.md
+++ /dev/null
@@ -1,244 +0,0 @@
-# Abilities
-
-Abilities are a typing feature in Move that control what actions are permissible for values of a given type. This system grants fine-grained control over the "linear" typing behavior of values, as well as whether and how values are used in global storage. This is implemented by gating access to certain bytecode instructions: for a value to be used with a given bytecode instruction, it must have the required ability (if one is required at all; not every instruction is gated by an ability).
-
-
-
-## The Four Abilities
-
-The four abilities are:
-
-* [`copy`](#copy)
- * Allows values of types with this ability to be copied.
-* [`drop`](#drop)
- * Allows values of types with this ability to be popped/dropped.
-* [`store`](#store)
- * Allows values of types with this ability to exist inside a struct in global storage.
-* [`key`](#key)
- * Allows the type to serve as a key for global storage operations.
-
-### `copy`
-
-The `copy` ability allows values of types with that ability to be copied. It gates the ability to copy values out of local variables with the [`copy`](./variables.md#move-and-copy) operator and to copy values via references with [dereference `*e`](./references.md#reading-and-writing-through-references).
-
-If a value has `copy`, all values contained inside of that value have `copy`.
-
-### `drop`
-
-The `drop` ability allows values of types with that ability to be dropped. By dropped, we mean that the value is not transferred and is effectively destroyed as the Move program executes. As such, this ability gates the ability to ignore values in a multitude of locations, including:
-* not using the value in a local variable or parameter
-* not using the value in a [sequence via `;`](./variables.md#expression-blocks)
-* overwriting values in variables in [assignments](./variables.md#assignments)
-* overwriting values via references when [writing `*e1 = e2`](./references.md#reading-and-writing-through-references).
-
-If a value has `drop`, all values contained inside of that value have `drop`.
-
-### `store`
-
-The `store` ability allows values of types with this ability to exist inside of a struct (resource) in global storage, *but* not necessarily as a top-level resource in global storage. This is the only ability that does not directly gate an operation. Instead it gates the existence in global storage when used in tandem with `key`.
-
-If a value has `store`, all values contained inside of that value have `store`.
-
-### `key`
-
-The `key` ability allows the type to serve as a key for [global storage operations](./global-storage-operators.md). It gates all global storage operations, so in order for a type to be used with `move_to`, `borrow_global`, `move_from`, etc., the type must have the `key` ability. Note that the operations still must be used in the module where the `key` type is defined (in a sense, the operations are private to the defining module).
-
-If a value has `key`, all values contained inside of that value have `store`. This is the only ability with this sort of asymmetry.
-
-## Builtin Types
-
-Most primitive, builtin types have `copy`, `drop`, and `store`, with the exception of `signer`, which just has `drop`.
-
-* `bool`, `u8`, `u16`, `u32`, `u64`, `u128`, `u256`, and `address` all have `copy`, `drop`, and `store`.
-* `signer` has `drop`
- * Cannot be copied and cannot be put into global storage
-* `vector<T>` may have `copy`, `drop`, and `store` depending on the abilities of `T`.
- * See [Conditional Abilities and Generic Types](#conditional-abilities-and-generic-types) for more details.
-* Immutable references `&` and mutable references `&mut` both have `copy` and `drop`.
- * This refers to copying and dropping the reference itself, not what they refer to.
- * References cannot appear in global storage, hence they do not have `store`.
-
-None of the primitive types have `key`, meaning none of them can be used directly with the [global storage operations](./global-storage-operators.md).
-
-## Annotating Structs
-
-To declare that a `struct` has an ability, it is declared with `has <ability>` after the struct name but before the fields. For example:
-
-```move
-struct Ignorable has drop { f: u64 }
-struct Pair has copy, drop, store { x: u64, y: u64 }
-```
-
-In this case: `Ignorable` has the `drop` ability. `Pair` has `copy`, `drop`, and `store`.
-
-
-All of these abilities have strong guarantees over these gated operations. The operation can be performed on the value only if it has that ability, even if the value is deeply nested inside of some other collection!
-
-As such: when declaring a struct’s abilities, certain requirements are placed on the fields. All fields must satisfy these constraints. These rules are necessary so that structs satisfy the reachability rules for the abilities given above. If a struct is declared with the ability...
-
-* `copy`, all fields must have `copy`.
-* `drop`, all fields must have `drop`.
-* `store`, all fields must have `store`.
-* `key`, all fields must have `store`.
- * `key` is the only ability currently that doesn’t require itself.
-
-For example:
-
-```move
-// A struct without any abilities
-struct NoAbilities {}
-
-struct WantsCopy has copy {
- f: NoAbilities, // ERROR 'NoAbilities' does not have 'copy'
-}
-```
-
-and similarly:
-
-```move
-// A struct without any abilities
-struct NoAbilities {}
-
-struct MyResource has key {
- f: NoAbilities, // Error 'NoAbilities' does not have 'store'
-}
-```
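These declaration-site checks can be modeled in a few lines of ordinary JavaScript (illustrative only, not Move; the names are made up):

```javascript
// For each declared ability, the ability every field must have.
// Note the asymmetry: 'key' requires the fields to have 'store'.
const FIELD_REQUIREMENT = { copy: "copy", drop: "drop", store: "store", key: "store" };

// declared: array of abilities on the struct; fields: array of Sets of abilities.
function structDeclarationIsValid(declared, fields) {
  return declared.every((ability) =>
    fields.every((fieldAbilities) => fieldAbilities.has(FIELD_REQUIREMENT[ability]))
  );
}

const u64 = new Set(["copy", "drop", "store"]);
const noAbilities = new Set();

console.log(structDeclarationIsValid(["copy", "drop"], [u64])); // a Pair-like struct: true
console.log(structDeclarationIsValid(["copy"], [noAbilities])); // the WantsCopy error above: false
```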
-
-## Conditional Abilities and Generic Types
-
-When abilities are annotated on a generic type, not all instances of that type are guaranteed to have that ability. Consider this struct declaration:
-
-```
-struct Cup<T> has copy, drop, store, key { item: T }
-```
-
-It might be very helpful if `Cup` could hold any type, regardless of its abilities. The type system can *see* the type parameter, so it should be able to remove abilities from `Cup` if it *sees* a type parameter that would violate the guarantees for that ability.
-
-This behavior might sound a bit confusing at first, but it might be more understandable if we think about collection types. We could consider the builtin type `vector` to have the following type declaration:
-
-```
-vector<T> has copy, drop, store;
-```
-
-We want `vector<T>` to work with any type. We don't want separate `vector` types for different abilities. So what are the rules we would want? Precisely the same as the field rules above. So, it would be safe to copy a `vector<T>` value only if the inner elements can be copied. It would be safe to ignore a `vector<T>` value only if the inner elements can be ignored/dropped. And, it would be safe to put a `vector<T>` in global storage only if the inner elements can be in global storage.
-
-To have this extra expressiveness, a type might not have all the abilities it was declared with, depending on the instantiation of that type; instead, the abilities a type has depend on both its declaration **and** its type arguments. For any type, type parameters are pessimistically assumed to be used inside of the struct, so the abilities are only granted if the type parameters meet the requirements described above for fields. Taking `Cup` from above as an example:
-
-* `Cup<T>` has the ability `copy` only if `T` has `copy`.
-* It has `drop` only if `T` has `drop`.
-* It has `store` only if `T` has `store`.
-* It has `key` only if `T` has `store`.
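These four rules can be modeled in plain JavaScript (illustrative only, not Move): given the abilities of the type argument `T`, compute the abilities a `Cup<T>` instance actually has:

```javascript
// Abilities Cup is declared with.
const CUP_DECLARED = new Set(["copy", "drop", "store", "key"]);

function cupInstanceAbilities(tAbilities) {
  const granted = new Set();
  for (const ability of ["copy", "drop", "store"]) {
    if (CUP_DECLARED.has(ability) && tAbilities.has(ability)) granted.add(ability);
  }
  // 'key' is the asymmetric case: it is granted only if T has 'store'.
  if (CUP_DECLARED.has("key") && tAbilities.has("store")) granted.add("key");
  return granted;
}

// u64 has copy, drop, store, so Cup<u64> keeps all four declared abilities:
console.log([...cupInstanceAbilities(new Set(["copy", "drop", "store"]))]);
// signer has only drop, so Cup<signer> has only drop:
console.log([...cupInstanceAbilities(new Set(["drop"]))]);
```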
-
-Here are examples for this conditional system for each ability:
-
-### Example: conditional `copy`
-
-```
-struct NoAbilities {}
-struct S has copy, drop { f: bool }
-struct Cup<T> has copy, drop, store { item: T }
-
-fun example(c_x: Cup<u64>, c_s: Cup<S>) {
-    // Valid, 'Cup<u64>' has 'copy' because 'u64' has 'copy'
-    let c_x2 = copy c_x;
-    // Valid, 'Cup<S>' has 'copy' because 'S' has 'copy'
-    let c_s2 = copy c_s;
-}
-
-fun invalid(c_account: Cup<signer>, c_n: Cup<NoAbilities>) {
-    // Invalid, 'Cup<signer>' does not have 'copy'.
-    // Even though 'Cup' was declared with copy, the instance does not have 'copy'
-    // because 'signer' does not have 'copy'
-    let c_account2 = copy c_account;
-    // Invalid, 'Cup<NoAbilities>' does not have 'copy'
-    // because 'NoAbilities' does not have 'copy'
-    let c_n2 = copy c_n;
-}
-```
-
-### Example: conditional `drop`
-
-```
-struct NoAbilities {}
-struct S has copy, drop { f: bool }
-struct Cup<T> has copy, drop, store { item: T }
-
-fun unused() {
-    Cup<bool> { item: true }; // Valid, 'Cup<bool>' has 'drop'
-    Cup<S> { item: S { f: false }}; // Valid, 'Cup<S>' has 'drop'
-}
-
-fun left_in_local(c_account: Cup<signer>): u64 {
-    let c_b = Cup<bool> { item: true };
-    let c_s = Cup<S> { item: S { f: false }};
-    // Valid return: 'c_account', 'c_b', and 'c_s' have values
-    // but 'Cup<signer>', 'Cup<bool>', and 'Cup<S>' have 'drop'
-    0
-}
-
-fun invalid_unused() {
-    // Invalid, cannot ignore 'Cup<NoAbilities>' because it does not have 'drop'.
-    // Even though 'Cup' was declared with 'drop', the instance does not have 'drop'
-    // because 'NoAbilities' does not have 'drop'
-    Cup<NoAbilities> { item: NoAbilities {}};
-}
-
-fun invalid_left_in_local(): u64 {
-    let c_n = Cup<NoAbilities> { item: NoAbilities {}};
-    // Invalid return: 'c_n' has a value
-    // and 'Cup<NoAbilities>' does not have 'drop'
-    0
-}
-```
-
-### Example: conditional `store`
-
-```
-struct Cup<T> has copy, drop, store { item: T }
-
-// 'MyInnerResource' is declared with 'store' so all fields need 'store'
-struct MyInnerResource has store {
-    yes: Cup<u64>, // Valid, 'Cup<u64>' has 'store'
-    // no: Cup<signer>, Invalid, 'Cup<signer>' does not have 'store'
-}
-
-// 'MyResource' is declared with 'key' so all fields need 'store'
-struct MyResource has key {
-    yes: Cup<u64>, // Valid, 'Cup<u64>' has 'store'
-    inner: Cup<Cup<u64>>, // Valid, 'Cup<Cup<u64>>' has 'store'
-    // no: Cup<signer>, Invalid, 'Cup<signer>' does not have 'store'
-}
-```
-
-### Example: conditional `key`
-
-```
-struct NoAbilities {}
-struct MyResource<T> has key { f: T }
-
-fun valid(account: &signer) acquires MyResource {
-    let addr = signer::address_of(account);
-    // Valid, 'MyResource<u64>' has 'key'
-    let has_resource = exists<MyResource<u64>>(addr);
-    if (!has_resource) {
-        // Valid, 'MyResource<u64>' has 'key'
-        move_to(account, MyResource { f: 0 })
-    };
-    // Valid, 'MyResource<u64>' has 'key'
-    let r = borrow_global_mut<MyResource<u64>>(addr);
-    r.f = r.f + 1;
-}
-
-fun invalid(account: &signer) {
-    // Invalid, 'MyResource<NoAbilities>' does not have 'key'
-    let has_it = exists<MyResource<NoAbilities>>(addr);
-    // Invalid, 'NoAbilities' does not have 'key'
-    let NoAbilities {} = move_from<NoAbilities>(addr);
-    // Invalid, 'NoAbilities' does not have 'key'
-    move_to(account, NoAbilities {});
-    // Invalid, 'MyResource<NoAbilities>' does not have 'key'
-    borrow_global<MyResource<NoAbilities>>(addr);
-}
-```
diff --git a/developer-docs-site/docs/move/book/abort-and-assert.md b/developer-docs-site/docs/move/book/abort-and-assert.md
deleted file mode 100644
index eb1fab905f537..0000000000000
--- a/developer-docs-site/docs/move/book/abort-and-assert.md
+++ /dev/null
@@ -1,207 +0,0 @@
-# Abort and Assert
-
-[`return`](./functions.md) and `abort` are two control flow constructs that end execution, one for
-the current function and one for the entire transaction.
-
-More information on [`return` can be found in the linked section](./functions.md)
-
-## `abort`
-
-`abort` is an expression that takes one argument: an **abort code** of type `u64`. For example:
-
-```move
-abort 42
-```
-
-The `abort` expression halts execution of the current function and reverts all changes made to global
-state by the current transaction. There is no mechanism for "catching" or otherwise handling an
-`abort`.
-
-Luckily, in Move transactions are all or nothing, meaning any changes to global storage are made all
-at once only if the transaction succeeds. Because of this transactional commitment of changes, after
-an abort there is no need to worry about backing out changes. While this approach is lacking in
-flexibility, it is incredibly simple and predictable.
-
-Similar to [`return`](./functions.md), `abort` is useful for exiting control flow when some
-condition cannot be met.
-
-In this example, the function will pop two items off of the vector, but will abort early if the
-vector does not have two items
-
-```move=
-use std::vector;
-fun pop_twice<T>(v: &mut vector<T>): (T, T) {
- if (vector::length(v) < 2) abort 42;
-
- (vector::pop_back(v), vector::pop_back(v))
-}
-```
-
-This is even more useful deep inside a control-flow construct. For example, this function checks
-that all numbers in the vector are less than the specified `bound`. And aborts otherwise
-
-```move=
-use std::vector;
-fun check_vec(v: &vector<u64>, bound: u64) {
- let i = 0;
- let n = vector::length(v);
- while (i < n) {
- let cur = *vector::borrow(v, i);
- if (cur > bound) abort 42;
- i = i + 1;
- }
-}
-```
-
-### `assert`
-
-`assert` is a builtin, macro-like operation provided by the Move compiler. It takes two arguments, a
-condition of type `bool` and a code of type `u64`
-
-```move
-assert!(condition: bool, code: u64)
-```
-
-Since the operation is a macro, it must be invoked with the `!`. This is to convey that the
-arguments to `assert` are call-by-expression. In other words, `assert` is not a normal function and
-does not exist at the bytecode level. It is replaced inside the compiler with
-
-```move
-if (condition) () else abort code
-```
-
-`assert` is more commonly used than just `abort` by itself. The `abort` examples above can be
-rewritten using `assert`
-
-```move=
-use std::vector;
-fun pop_twice<T>(v: &mut vector<T>): (T, T) {
- assert!(vector::length(v) >= 2, 42); // Now uses 'assert'
-
- (vector::pop_back(v), vector::pop_back(v))
-}
-```
-
-and
-
-```move=
-use std::vector;
-fun check_vec(v: &vector<u64>, bound: u64) {
- let i = 0;
- let n = vector::length(v);
- while (i < n) {
- let cur = *vector::borrow(v, i);
- assert!(cur <= bound, 42); // Now uses 'assert'
- i = i + 1;
- }
-}
-```
-
-Note that because the operation is replaced with this `if-else`, the argument for the `code` is not
-always evaluated. For example:
-
-```move
-assert!(true, 1 / 0)
-```
-
-Will not result in an arithmetic error, it is equivalent to
-
-```move
-if (true) () else (1 / 0)
-```
-
-So the arithmetic expression is never evaluated!
-
-### Abort codes in the Move VM
-
-When using `abort`, it is important to understand how the `u64` code will be used by the VM.
-
-Normally, after successful execution, the Move VM produces a change-set for the changes made to
-global storage (added/removed resources, updates to existing resources, etc).
-
-If an `abort` is reached, the VM will instead indicate an error. Included in that error will be two
-pieces of information:
-
-- The module that produced the abort (address and name)
-- The abort code.
-
-For example
-
-```move=
-address 0x2 {
-module example {
- public fun aborts() {
- abort 42
- }
-}
-}
-
-script {
- fun always_aborts() {
- 0x2::example::aborts()
- }
-}
-```
-
-If a transaction, such as the script `always_aborts` above, calls `0x2::example::aborts`, the VM
-would produce an error that indicated the module `0x2::example` and the code `42`.
-
-This can be useful for having multiple aborts being grouped together inside a module.
-
-In this example, the module has two separate error codes used in multiple functions
-
-```move=
-address 0x42 {
-module example {
-
- use std::vector;
-
- const EMPTY_VECTOR: u64 = 0;
- const INDEX_OUT_OF_BOUNDS: u64 = 1;
-
- // move i to j, move j to k, move k to i
-    public fun rotate_three<T>(v: &mut vector<T>, i: u64, j: u64, k: u64) {
- let n = vector::length(v);
- assert!(n > 0, EMPTY_VECTOR);
- assert!(i < n, INDEX_OUT_OF_BOUNDS);
- assert!(j < n, INDEX_OUT_OF_BOUNDS);
- assert!(k < n, INDEX_OUT_OF_BOUNDS);
-
- vector::swap(v, i, k);
- vector::swap(v, j, k);
- }
-
-    public fun remove_twice<T>(v: &mut vector<T>, i: u64, j: u64): (T, T) {
- let n = vector::length(v);
- assert!(n > 0, EMPTY_VECTOR);
- assert!(i < n, INDEX_OUT_OF_BOUNDS);
- assert!(j < n, INDEX_OUT_OF_BOUNDS);
- assert!(i > j, INDEX_OUT_OF_BOUNDS);
-
- (vector::remove(v, i), vector::remove(v, j))
- }
-}
-}
-```
-
-## The type of `abort`
-
-The `abort i` expression can have any type! This is because both constructs break from the normal
-control flow, so they never need to evaluate to the value of that type.
-
-The following are not useful, but they will type check
-
-```move
-let y: address = abort 0;
-```
-
-This behavior can be helpful in situations where you have a branching instruction that produces a
-value on some branches, but not all. For example:
-
-```move
-let b =
- if (x == 0) false
- else if (x == 1) true
- else abort 42;
-// ^^^^^^^^ `abort 42` has type `bool`
-```
diff --git a/developer-docs-site/docs/move/book/address.md b/developer-docs-site/docs/move/book/address.md
deleted file mode 100644
index 34a2883049bb2..0000000000000
--- a/developer-docs-site/docs/move/book/address.md
+++ /dev/null
@@ -1,73 +0,0 @@
-# Address
-
-`address` is a built-in type in Move that is used to represent locations (sometimes called accounts) in global storage. An `address` value is a 256-bit (32-byte) identifier. At a given address, two things can be stored: [Modules](./modules-and-scripts.md) and [Resources](./structs-and-resources.md).
-
-Although an `address` is a 256-bit integer under the hood, Move addresses are intentionally opaque---they cannot be created from integers, they do not support arithmetic operations, and they cannot be modified. Even though there might be interesting programs that would use such a feature (e.g., pointer arithmetic in C fills a similar niche), Move does not allow this dynamic behavior because it has been designed from the ground up to support static verification.
-
-You can use runtime address values (values of type `address`) to access resources at that address. You *cannot* access modules at runtime via address values.
-
-## Addresses and Their Syntax
-
-Addresses come in two flavors, named or numerical. The syntax for a named address follows the
-same rules for any named identifier in Move. The syntax of a numerical address is not restricted
-to hex-encoded values, and any valid [`u256` numerical value](./integers.md) can be used as an
-address value, e.g., `42`, `0xCAFE`, and `2021` are all valid numerical address
-literals.
-
-To distinguish when an address is being used in an expression context or not, the
-syntax when using an address differs depending on the context where it's used:
-* When an address is used as an expression the address must be prefixed by the `@` character, i.e., [`@<numerical_value>`](./integers.md) or `@<named_address_identifier>`.
-* Outside of expression contexts, the address may be written without the leading `@` character, i.e., [`<numerical_value>`](./integers.md) or `<named_address_identifier>`.
-
-In general, you can think of `@` as an operator that takes an address from being a namespace item to being an expression item.
-
-## Named Addresses
-
-Named addresses are a feature that allow identifiers to be used in place of
-numerical values in any spot where addresses are used, and not just at the
-value level. Named addresses are declared and bound as top level elements
-(outside of modules and scripts) in Move Packages, or passed as arguments
-to the Move compiler.
-
-Named addresses only exist at the source language level and will be fully
-substituted for their value at the bytecode level. Because of this, modules
-and module members _must_ be accessed through the module's named address
-and not through the numerical value assigned to the named address during
-compilation, e.g., `use my_addr::foo` is _not_ equivalent to `use 0x2::foo`
-even if the Move program is compiled with `my_addr` set to `0x2`. This
-distinction is discussed in more detail in the section on [Modules and
-Scripts](./modules-and-scripts.md).
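-
-For instance, if a package assigns `my_addr` the value `0x2`, a module declared under `my_addr` must still be accessed through that named address; the module `foo` below is purely illustrative:
-
-```move
-module my_addr::foo {
-    public fun bar() {}
-}
-
-script {
-    fun call_bar() {
-        my_addr::foo::bar(); // valid
-        // `0x2::foo::bar()` would not resolve, even with `my_addr` set to `0x2`
-    }
-}
-```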
-
-### Examples
-
-```move
-let a1: address = @0x1; // shorthand for 0x0000000000000000000000000000000000000000000000000000000000000001
-let a2: address = @0x42; // shorthand for 0x0000000000000000000000000000000000000000000000000000000000000042
-let a3: address = @0xDEADBEEF; // shorthand for 0x00000000000000000000000000000000000000000000000000000000DEADBEEF
-let a4: address = @0x000000000000000000000000000000000000000000000000000000000000000A;
-let a5: address = @std; // Assigns `a5` the value of the named address `std`
-let a6: address = @66;
-let a7: address = @0x42;
-
-module 66::some_module { // Not in expression context, so no @ needed
- use 0x1::other_module; // Not in expression context so no @ needed
- use std::vector; // Can use a named address as a namespace item when using other modules
- ...
-}
-
-module std::other_module { // Can use a named address as a namespace item to declare a module
- ...
-}
-```
-
-## Global Storage Operations
-
-The primary purpose of `address` values is to interact with the global storage operations.
-
-`address` values are used with the `exists`, `borrow_global`, `borrow_global_mut`, and `move_from` [operations](./global-storage-operators.md).
-
-The only global storage operation that *does not* use `address` is `move_to`, which uses [`signer`](./signer.md).
-
-## Ownership
-
-As with the other scalar values built-in to the language, `address` values are implicitly copyable, meaning they can be copied without an explicit instruction such as [`copy`](./variables.md#move-and-copy).
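-
-For example, binding an `address` to a new local leaves the original binding usable:
-
-```move
-fun implicit_copies(): (address, address) {
-    let a = @0x42;
-    let b = a; // implicit copy, no `copy` keyword needed
-    (a, b) // both `a` and `b` are still usable
-}
-```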
diff --git a/developer-docs-site/docs/move/book/bool.md b/developer-docs-site/docs/move/book/bool.md
deleted file mode 100644
index 6423a8c6394c0..0000000000000
--- a/developer-docs-site/docs/move/book/bool.md
+++ /dev/null
@@ -1,33 +0,0 @@
-# Bool
-
-`bool` is Move's primitive type for boolean `true` and `false` values.
-
-## Literals
-
-Literals for `bool` are either `true` or `false`.
-
-## Operations
-
-### Logical
-
-`bool` supports three logical operations:
-
-| Syntax | Description | Equivalent Expression |
-| ------------------------- | ---------------------------- | ------------------------------------------------------------------- |
-| `&&` | short-circuiting logical and | `p && q` is equivalent to `if (p) q else false` |
-| `\|\|`                    | short-circuiting logical or  | `p \|\| q` is equivalent to `if (p) true else q`                    |
-| `!` | logical negation | `!p` is equivalent to `if (p) false else true` |
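-
-Because `&&` short-circuits, its right operand can be an expression that would abort if evaluated unconditionally. A small sketch, guarding a vector borrow behind a bounds check:
-
-```move
-fun is_positive_at(v: &vector<u64>, i: u64): bool {
-    // The borrow on the right is evaluated only if the bounds check
-    // on the left succeeds, so this expression never aborts.
-    i < std::vector::length(v) && *std::vector::borrow(v, i) > 0
-}
-```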
-
-### Control Flow
-
-`bool` values are used in several of Move's control-flow constructs:
-
-- [`if (bool) { ... }`](./conditionals.md)
-- [`while (bool) { .. }`](./loops.md)
-- [`assert!(bool, u64)`](./abort-and-assert.md)
-
-## Ownership
-
-As with the other scalar values built-in to the language, boolean values are implicitly copyable,
-meaning they can be copied without an explicit instruction such as
-[`copy`](./variables.md#move-and-copy).
diff --git a/developer-docs-site/docs/move/book/coding-conventions.md b/developer-docs-site/docs/move/book/coding-conventions.md
deleted file mode 100644
index 654557f22c896..0000000000000
--- a/developer-docs-site/docs/move/book/coding-conventions.md
+++ /dev/null
@@ -1,77 +0,0 @@
-# Move Coding Conventions
-
-This section lays out some basic coding conventions for Move that the Move team has found helpful. These are only recommendations, and you should feel free to use other formatting guidelines and conventions if you have a preference for them.
-
-## Naming
-
-- **Module names**: should be lower snake case, e.g., `fixed_point32`, `vector`.
-- **Type names**: should be camel case if they are not a native type, e.g., `Coin`, `RoleId`.
-- **Function names**: should be lower snake case, e.g., `destroy_empty`.
-- **Constant names**: should be upper camel case and begin with an `E` if they represent error codes (e.g., `EIndexOutOfBounds`) and upper snake case if they represent a non-error value (e.g., `MIN_STAKE`).
-- **Generic type names**: should be descriptive, or anti-descriptive where appropriate, e.g., `T` or `Element` for the Vector generic type parameter. Most of the time the "main" type in a module should be the same name as the module e.g., `option::Option`, `fixed_point32::FixedPoint32`.
-- **Module file names**: should be the same as the module name e.g., `option.move`.
-- **Script file names**: should be lower snake case and should match the name of the “main” function in the script.
-- **Mixed file names**: If the file contains multiple modules and/or scripts, the file name should be lower snake case, where the name does not match any particular module/script inside.
-
-## Imports
-
-- All module `use` statements should be at the top of the module.
-- Functions should be imported and used fully qualified from the module in which they are declared, and not imported at the top level.
-- Types should be imported at the top-level. Where there are name clashes, `as` should be used to rename the type locally as appropriate.
-
-For example, if there is a module:
-
-```move
-module 0x1::foo {
- struct Foo { }
- const CONST_FOO: u64 = 0;
- public fun do_foo(): Foo { Foo{} }
- ...
-}
-```
-
-this would be imported and used as:
-
-```move
-module 0x1::bar {
- use 0x1::foo::{Self, Foo};
-
- public fun do_bar(x: u64): Foo {
- if (x == 10) {
- foo::do_foo()
- } else {
- abort 0
- }
- }
- ...
-}
-```
-
-And, if there is a local name-clash when importing two modules:
-
-```move
-module 0x1::other_foo {
- struct Foo {}
- ...
-}
-
-module 0x1::importer {
- use 0x1::other_foo::Foo as OtherFoo;
- use 0x1::foo::Foo;
- ...
-}
-```
-
-## Comments
-
-- Each module, struct, and public function declaration should be commented.
-- Move has doc comments `///`, regular single-line comments `//`, block comments `/* */`, and block doc comments `/** */`.
-
-## Formatting
-
-The Move team plans to write an autoformatter to enforce formatting conventions. However, in the meantime:
-
-- Four space indentation should be used except for `script` and `address` blocks whose contents should not be indented.
-- Lines should be broken if they are longer than 100 characters.
-- Structs and constants should be declared before all functions in a module.
diff --git a/developer-docs-site/docs/move/book/conditionals.md b/developer-docs-site/docs/move/book/conditionals.md
deleted file mode 100644
index d53b268ac5922..0000000000000
--- a/developer-docs-site/docs/move/book/conditionals.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# Conditionals
-
-An `if` expression specifies that some code should only be evaluated if a certain condition is true. For example:
-
-```move
-if (x > 5) x = x - 5
-```
-
-The condition must be an expression of type `bool`.
-
-An `if` expression can optionally include an `else` clause to specify another expression to evaluate when the condition is false.
-
-```move
-if (y <= 10) y = y + 1 else y = 10
-```
-
-Either the "true" branch or the "false" branch will be evaluated, but not both. Either branch can be a single expression or an expression block.
-
-The conditional expressions may produce values so that the `if` expression has a result.
-
-```move
-let z = if (x < 100) x else 100;
-```
-
-The expressions in the true and false branches must have compatible types. For example:
-
-```move=
-// x and y must be u64 integers
-let maximum: u64 = if (x > y) x else y;
-
-// ERROR! branches different types
-let z = if (maximum < 10) 10u8 else 100u64;
-
-// ERROR! branches different types, as default false-branch is () not u64
-if (maximum >= 10) maximum;
-```
-
-If the `else` clause is not specified, the false branch defaults to the unit value. The following are equivalent:
-
-```move
-if (condition) true_branch // implied default: else ()
-if (condition) true_branch else ()
-```
-
-Commonly, [`if` expressions](./conditionals.md) are used in conjunction with expression blocks.
-
-```move
-let maximum = if (x > y) x else y;
-if (maximum < 10) {
- x = x + 10;
- y = y + 10;
-} else if (x >= 10 && y >= 10) {
- x = x - 10;
- y = y - 10;
-}
-```
-
-## Grammar for Conditionals
-
-> *if-expression* → **if (** *expression* **)** *expression* *else-clause*<sub>*opt*</sub>
-
-> *else-clause* → **else** *expression*
diff --git a/developer-docs-site/docs/move/book/constants.md b/developer-docs-site/docs/move/book/constants.md
deleted file mode 100644
index c59f0107c57b1..0000000000000
--- a/developer-docs-site/docs/move/book/constants.md
+++ /dev/null
@@ -1,112 +0,0 @@
-# Constants
-
-Constants are a way of giving a name to shared, static values inside of a `module` or `script`.
-
-The constant's value must be known at compile time. The constant's value is stored in the compiled module
-or script. And each time the constant is used, a new copy of that value is made.
-
-## Declaration
-
-Constant declarations begin with the `const` keyword, followed by a name, a type, and a value. They
-can exist in either a script or module
-
-```text
-const <name>: <type> = <expression>;
-```
-
-For example
-
-```move=
-script {
-
- const MY_ERROR_CODE: u64 = 0;
-
- fun main(input: u64) {
- assert!(input > 0, MY_ERROR_CODE);
- }
-
-}
-
-address 0x42 {
-module example {
-
- const MY_ADDRESS: address = @0x42;
-
- public fun permissioned(s: &signer) {
- assert!(std::signer::address_of(s) == MY_ADDRESS, 0);
- }
-
-}
-}
-```
-
-## Naming
-
-Constants must start with a capital letter `A` to `Z`. After the first letter, constant names can
-contain underscores `_`, letters `a` to `z`, letters `A` to `Z`, or digits `0` to `9`.
-
-```move
-const FLAG: bool = false;
-const MY_ERROR_CODE: u64 = 0;
-const ADDRESS_42: address = @0x42;
-```
-
-Even though you can use letters `a` to `z` in a constant, the
-[general style guidelines](./coding-conventions.md) are to use just uppercase letters `A` to `Z`,
-with underscores `_` between each word.
-
-This naming restriction of starting with `A` to `Z` is in place to give room for future language
-features. It may or may not be removed later.
-
-## Visibility
-
-`public` constants are not currently supported. `const` values can be used only in the declaring
-module.
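-
-One common pattern, sketched here with an illustrative constant, is to expose a constant's value to other modules through a `public` function:
-
-```move
-address 0x42 {
-module example {
-    const MAX_SUPPLY: u64 = 1000000;
-
-    // Other modules cannot name MAX_SUPPLY directly,
-    // but they can call this public accessor.
-    public fun max_supply(): u64 {
-        MAX_SUPPLY
-    }
-}
-}
-```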
-
-## Valid Expressions
-
-Currently, constants are limited to the primitive types `bool`, `u8`, `u16`, `u32`, `u64`, `u128`, `u256`, `address`, and
-`vector<u8>`. Future support for other `vector` values (besides the "string"-style literals) will
-come later.
-
-### Values
-
-Commonly, `const`s are assigned a simple value, or literal, of their type. For example
-
-```move
-const MY_BOOL: bool = false;
-const MY_ADDRESS: address = @0x70DD;
-const BYTES: vector<u8> = b"hello world";
-const HEX_BYTES: vector<u8> = x"DEADBEEF";
-```
-
-### Complex Expressions
-
-In addition to literals, constants can include more complex expressions, as long as the compiler is
-able to reduce the expression to a value at compile time.
-
-Currently, equality operations, all boolean operations, all bitwise operations, and all arithmetic
-operations can be used.
-
-```move
-const RULE: bool = true && false;
-const CAP: u64 = 10 * 100 + 1;
-const SHIFTY: u8 = {
- (1 << 1) * (1 << 2) * (1 << 3) * (1 << 4)
-};
-const HALF_MAX: u128 = 340282366920938463463374607431768211455 / 2;
-const REM: u256 = 57896044618658097711785492504343953926634992332820282019728792003956564819968 % 654321;
-const EQUAL: bool = 1 == 1;
-```
-
-If the operation would result in a runtime exception, the compiler will give an error that it is
-unable to generate the constant's value
-
-```move
-const DIV_BY_ZERO: u64 = 1 / 0; // error!
-const SHIFT_BY_A_LOT: u64 = 1 << 100; // error!
-const NEGATIVE_U64: u64 = 0 - 1; // error!
-```
-
-Note that constants cannot currently refer to other constants. This feature, along with support for
-other expressions, will be added in the future.
diff --git a/developer-docs-site/docs/move/book/creating-coins.md b/developer-docs-site/docs/move/book/creating-coins.md
deleted file mode 100644
index 6922938975d43..0000000000000
--- a/developer-docs-site/docs/move/book/creating-coins.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Move Tutorial
-
-Please refer to the [Move Core Language Tutorial](https://github.com/aptos-labs/aptos-core/tree/main/aptos-move/move-examples/move-tutorial).
diff --git a/developer-docs-site/docs/move/book/equality.md b/developer-docs-site/docs/move/book/equality.md
deleted file mode 100644
index dbabfa66c6677..0000000000000
--- a/developer-docs-site/docs/move/book/equality.md
+++ /dev/null
@@ -1,164 +0,0 @@
-# Equality
-
-Move supports two equality operations `==` and `!=`
-
-## Operations
-
-| Syntax | Operation | Description |
-| ------ | --------- | --------------------------------------------------------------------------- |
-| `==` | equal | Returns `true` if the two operands have the same value, `false` otherwise |
-| `!=` | not equal | Returns `true` if the two operands have different values, `false` otherwise |
-
-### Typing
-
-Both the equal (`==`) and not-equal (`!=`) operations only work if both operands are the same type
-
-```move
-0 == 0; // `true`
-1u128 == 2u128; // `false`
-b"hello" != x"00"; // `true`
-```
-
-Equality and non-equality also work over user defined types!
-
-```move=
-address 0x42 {
-module example {
-    struct S has copy, drop { f: u64, s: vector<u8> }
-
- fun always_true(): bool {
- let s = S { f: 0, s: b"" };
- // parens are not needed but added for clarity in this example
- (copy s) == s
- }
-
- fun always_false(): bool {
- let s = S { f: 0, s: b"" };
- // parens are not needed but added for clarity in this example
- (copy s) != s
- }
-}
-}
-```
-
-If the operands have different types, there is a type checking error
-
-```move
-1u8 == 1u128; // ERROR!
-// ^^^^^ expected an argument of type 'u8'
-b"" != 0; // ERROR!
-//   ^ expected an argument of type 'vector<u8>'
-```
-
-### Typing with references
-
-When comparing [references](./references.md), the type of the reference (immutable or mutable) does
-not matter. This means that you can compare an immutable `&` reference with a mutable one `&mut` of
-the same underlying type.
-
-```move
-let i = &0;
-let m = &mut 1;
-
-i == m; // `false`
-m == i; // `false`
-m == m; // `true`
-i == i; // `true`
-```
-
-The above is equivalent to applying an explicit freeze to each mutable reference where needed
-
-```move
-let i = &0;
-let m = &mut 1;
-
-i == freeze(m); // `false`
-freeze(m) == i; // `false`
-m == m; // `true`
-i == i; // `true`
-```
-
-But again, the underlying type must be the same type
-
-```move
-let i = &0;
-let s = &b"";
-
-i == s; // ERROR!
-// ^ expected an argument of type '&u64'
-```
-
-## Restrictions
-
-Both `==` and `!=` consume the value when comparing them. As a result, the type system enforces that
-the type must have [`drop`](./abilities.md). Recall that without the
-[`drop` ability](./abilities.md), ownership must be transferred by the end of the function, and such
-values can only be explicitly destroyed within their declaring module. If these were used directly
-with either equality `==` or non-equality `!=`, the value would be destroyed which would break
-[`drop` ability](./abilities.md) safety guarantees!
-
-```move=
-address 0x42 {
-module example {
- struct Coin has store { value: u64 }
- fun invalid(c1: Coin, c2: Coin) {
- c1 == c2 // ERROR!
-// ^^ ^^ These resources would be destroyed!
- }
-}
-}
-```
-
-But, a programmer can _always_ borrow the value first instead of directly comparing the value, and
-reference types have the [`drop` ability](./abilities.md). For example
-
-```move=
-address 0x42 {
-module example {
-    struct Coin has store { value: u64 }
- fun swap_if_equal(c1: Coin, c2: Coin): (Coin, Coin) {
- let are_equal = &c1 == &c2; // valid
- if (are_equal) (c2, c1) else (c1, c2)
- }
-}
-}
-```
-
-## Avoid Extra Copies
-
-While a programmer _can_ compare any value whose type has [`drop`](./abilities.md), a programmer
-should often compare by reference to avoid expensive copies.
-
-```move=
-let v1: vector<u8> = function_that_returns_vector();
-let v2: vector<u8> = function_that_returns_vector();
-assert!(copy v1 == copy v2, 42);
-// ^^^^ ^^^^
-use_two_vectors(v1, v2);
-
-let s1: Foo = function_that_returns_large_struct();
-let s2: Foo = function_that_returns_large_struct();
-assert!(copy s1 == copy s2, 42);
-// ^^^^ ^^^^
-use_two_foos(s1, s2);
-```
-
-This code is perfectly acceptable (assuming `Foo` has [`drop`](./abilities.md)), just not efficient.
-The highlighted copies can be removed and replaced with borrows
-
-```move=
-let v1: vector<u8> = function_that_returns_vector();
-let v2: vector<u8> = function_that_returns_vector();
-assert!(&v1 == &v2, 42);
-// ^ ^
-use_two_vectors(v1, v2);
-
-let s1: Foo = function_that_returns_large_struct();
-let s2: Foo = function_that_returns_large_struct();
-assert!(&s1 == &s2, 42);
-// ^ ^
-use_two_foos(s1, s2);
-```
-
-The efficiency of the `==` itself remains the same, but the `copy`s are removed and thus the program
-is more efficient.
diff --git a/developer-docs-site/docs/move/book/friends.md b/developer-docs-site/docs/move/book/friends.md
deleted file mode 100644
index 1dab74fe4b873..0000000000000
--- a/developer-docs-site/docs/move/book/friends.md
+++ /dev/null
@@ -1,131 +0,0 @@
-# Friends
-
-The `friend` syntax is used to declare modules that are trusted by the current module.
-A trusted module is allowed to call any function defined in the current module that has the `public(friend)` visibility.
-For details on function visibilities, please refer to the *Visibility* section in [Functions](./functions.md).
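-
-As a quick sketch (module names are illustrative), a `public(friend)` function is callable from declared friends but not from arbitrary modules:
-
-```move
-address 0x42 {
-module vault {
-    friend 0x42::manager;
-
-    // Callable from 0x42::vault itself and from its friend 0x42::manager.
-    public(friend) fun unlock(): u64 { 0 }
-}
-
-module manager {
-    public fun open(): u64 {
-        0x42::vault::unlock() // valid: 'manager' is a friend of 'vault'
-    }
-}
-}
-```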
-
-## Friend declaration
-
-A module can declare other modules as friends via friend declaration statements, in the format of
-
-- `friend <address::name>` — friend declaration using fully qualified module name like the example below, or
-
- ```move
- address 0x42 {
- module a {
- friend 0x42::b;
- }
- }
- ```
-
-- `friend <module-name-alias>` — friend declaration using a module name alias, where the module alias is introduced via the `use` statement.
-
- ```move
- address 0x42 {
- module a {
- use 0x42::b;
- friend b;
- }
- }
- ```
-
-A module may have multiple friend declarations, and the union of all the friend modules forms the friend list.
-In the example below, both `0x42::b` and `0x42::c` are considered friends of `0x42::a`.
-
-```move
-address 0x42 {
-module a {
- friend 0x42::b;
- friend 0x42::c;
-}
-}
-```
-
-Unlike `use` statements, `friend` can only be declared in the module scope and not in the expression block scope.
-`friend` declarations may be located anywhere a top-level construct (e.g., `use`, `function`, `struct`, etc.) is allowed.
-However, for readability, it is advised to place friend declarations near the beginning of the module definition.
-
-Note that the concept of friendship does not apply to Move scripts:
-- A Move script cannot declare `friend` modules as doing so is considered meaningless: there is no mechanism to call the function defined in a script.
-- A Move module cannot declare `friend` scripts as well because scripts are ephemeral code snippets that are never published to global storage.
-
-### Friend declaration rules
-Friend declarations are subject to the following rules:
-
-- A module cannot declare itself as a friend.
-
- ```move=
- address 0x42 {
- module m { friend Self; // ERROR! }
- // ^^^^ Cannot declare the module itself as a friend
- }
-
- address 0x43 {
-    module m { friend 0x43::m; // ERROR! }
-    //                ^^^^^^^ Cannot declare the module itself as a friend
- }
- ```
-
-- Friend modules must be known by the compiler
-
- ```move=
- address 0x42 {
- module m { friend 0x42::nonexistent; // ERROR! }
- // ^^^^^^^^^^^^^^^^^ Unbound module '0x42::nonexistent'
- }
- ```
-
-- Friend modules must be within the same account address. (Note: this is not a technical requirement but rather a policy decision which *may* be relaxed later.)
-
- ```move=
- address 0x42 {
- module m {}
- }
-
- address 0x43 {
- module n { friend 0x42::m; // ERROR! }
- // ^^^^^^^ Cannot declare modules out of the current address as a friend
- }
- ```
-
-- Friend relationships cannot create cyclic module dependencies.
-
- Cycles are not allowed in the friend relationships, e.g., the relation `0x2::a` friends `0x2::b` friends `0x2::c` friends `0x2::a` is not allowed.
-More generally, declaring a friend module adds a dependency upon the current module to the friend module (because the purpose is for the friend to call functions in the current module).
-If that friend module is already used, either directly or transitively, a cycle of dependencies would be created.
- ```move=
- address 0x2 {
- module a {
- use 0x2::c;
- friend 0x2::b;
-
- public fun a() {
- c::c()
- }
- }
-
- module b {
- friend 0x2::c; // ERROR!
- // ^^^^^^ This friend relationship creates a dependency cycle: '0x2::b' is a friend of '0x2::a' uses '0x2::c' is a friend of '0x2::b'
- }
-
- module c {
- public fun c() {}
- }
- }
- ```
-
-- The friend list for a module cannot contain duplicates.
-
- ```move=
- address 0x42 {
- module a {}
-
- module m {
- use 0x42::a as aliased_a;
- friend 0x42::a;
- friend aliased_a; // ERROR!
- // ^^^^^^^^^ Duplicate friend declaration '0x42::a'. Friend declarations in a module must be unique
- }
- }
- ```
diff --git a/developer-docs-site/docs/move/book/functions.md b/developer-docs-site/docs/move/book/functions.md
deleted file mode 100644
index c518c456fc83b..0000000000000
--- a/developer-docs-site/docs/move/book/functions.md
+++ /dev/null
@@ -1,594 +0,0 @@
-# Functions
-
-Function syntax in Move is shared between module functions and script functions. Functions inside of modules are reusable, whereas script functions are only used once to invoke a transaction.
-
-## Declaration
-
-Functions are declared with the `fun` keyword followed by the function name, type parameters, parameters, a return type, acquires annotations, and finally the function body.
-
-```text
-fun <identifier><[type_parameters: constraint],*>([identifier: type],*): <return_type> <acquires [identifier],*> <function_body>
-```
-
-For example
-
-```move
-fun foo<T1, T2>(x: u64, y: T1, z: T2): (T2, T1, u64) { (z, y, x) }
-```
-
-### Visibility
-
-Module functions, by default, can only be called within the same module. These internal (sometimes called private) functions cannot be called from other modules or from scripts.
-
-```move=
-address 0x42 {
-module m {
- fun foo(): u64 { 0 }
- fun calls_foo(): u64 { foo() } // valid
-}
-
-module other {
- fun calls_m_foo(): u64 {
- 0x42::m::foo() // ERROR!
-// ^^^^^^^^^^^^ 'foo' is internal to '0x42::m'
- }
-}
-}
-
-script {
- fun calls_m_foo(): u64 {
- 0x42::m::foo() // ERROR!
-// ^^^^^^^^^^^^ 'foo' is internal to '0x42::m'
- }
-}
-```
-
-To allow access from other modules or from scripts, the function must be declared `public` or `public(friend)`.
-
-#### `public` visibility
-
-A `public` function can be called by *any* function defined in *any* module or script. As shown in the following example, a `public` function can be called by:
-- other functions defined in the same module,
-- functions defined in another module, or
-- the function defined in a script.
-
-There are also no restrictions on the argument types a public function can take or on its return type.
-
-```move=
-address 0x42 {
-module m {
- public fun foo(): u64 { 0 }
- fun calls_foo(): u64 { foo() } // valid
-}
-
-module other {
- fun calls_m_foo(): u64 {
- 0x42::m::foo() // valid
- }
-}
-}
-
-script {
- fun calls_m_foo(): u64 {
- 0x42::m::foo() // valid
- }
-}
-```
-
-#### `public(friend)` visibility
-
-The `public(friend)` visibility modifier is a more restricted form of the `public` modifier, giving more control over where a function can be used. A `public(friend)` function can be called by:
-- other functions defined in the same module, or
-- functions defined in modules which are explicitly specified in the **friend list** (see [Friends](./friends.md) on how to specify the friend list).
-
-Note that since we cannot declare a script to be a friend of a module, the functions defined in scripts can never call a `public(friend)` function.
-
-```move=
-address 0x42 {
-module m {
- friend 0x42::n; // friend declaration
- public(friend) fun foo(): u64 { 0 }
- fun calls_foo(): u64 { foo() } // valid
-}
-
-module n {
- fun calls_m_foo(): u64 {
- 0x42::m::foo() // valid
- }
-}
-
-module other {
- fun calls_m_foo(): u64 {
- 0x42::m::foo() // ERROR!
-// ^^^^^^^^^^^^ 'foo' can only be called from a 'friend' of module '0x42::m'
- }
-}
-}
-
-script {
- fun calls_m_foo(): u64 {
- 0x42::m::foo() // ERROR!
-// ^^^^^^^^^^^^ 'foo' can only be called from a 'friend' of module '0x42::m'
- }
-}
-```
-
-### `entry` modifier
-
-The `entry` modifier is designed to allow module functions to be safely and directly invoked much like scripts. This allows module writers to specify which functions can be invoked to begin execution. The module writer then knows that any non-`entry` function will be called from a Move program already in execution.
-
-Essentially, `entry` functions are the "main" functions of a module, and they specify where Move programs start executing.
-
-Note though, an `entry` function _can_ still be called by other Move functions. So while they _can_ serve as the start of a Move program, they aren't restricted to that case.
-
-For example:
-
-```move=
-address 0x42 {
-module m {
- public entry fun foo(): u64 { 0 }
- fun calls_foo(): u64 { foo() } // valid!
-}
-
-module n {
- fun calls_m_foo(): u64 {
- 0x42::m::foo() // valid!
- }
-}
-
-module other {
- public entry fun calls_m_foo(): u64 {
- 0x42::m::foo() // valid!
- }
-}
-}
-
-script {
- fun calls_m_foo(): u64 {
- 0x42::m::foo() // valid!
- }
-}
-```
-
-Even internal functions can be marked as `entry`! This lets you guarantee that the function is called only at the beginning of execution (assuming you do not call it elsewhere in your module).
-
-```move=
-address 0x42 {
-module m {
- entry fun foo(): u64 { 0 } // valid! entry functions do not have to be public
-}
-
-module n {
- fun calls_m_foo(): u64 {
- 0x42::m::foo() // ERROR!
-// ^^^^^^^^^^^^ 'foo' is internal to '0x42::m'
- }
-}
-
-module other {
- public entry fun calls_m_foo(): u64 {
- 0x42::m::foo() // ERROR!
-// ^^^^^^^^^^^^ 'foo' is internal to '0x42::m'
- }
-}
-}
-
-script {
- fun calls_m_foo(): u64 {
- 0x42::m::foo() // ERROR!
-// ^^^^^^^^^^^^ 'foo' is internal to '0x42::m'
- }
-}
-```
-
-Entry functions can take primitive types, `String`, and vector arguments, but cannot take structs (e.g., `Option`). They also
-must not have any return values.
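-
-For instance, this hypothetical entry function satisfies those restrictions (a sketch; the module and parameter names are illustrative):
-
-```move=
-address 0x42 {
-module registry {
-    use std::string::String;
-
-    // valid: primitive, String, and vector arguments; no return value
-    public entry fun register(_account: &signer, _name: String, _scores: vector<u64>) { }
-}
-}
-```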
-
-### Name
-
-Function names can start with letters `a` to `z` or letters `A` to `Z`. After the first character, function names can contain underscores `_`, letters `a` to `z`, letters `A` to `Z`, or digits `0` to `9`.
-
-```move
-// all valid
-fun FOO() {}
-fun bar_42() {}
-fun bAZ19() {}
-
-// invalid
-fun _bAZ19() {} // Function names cannot start with '_'
-```
-
-### Type Parameters
-
-After the name, functions can have type parameters
-
-```move
-fun id<T>(x: T): T { x }
-fun example<T1: copy, T2>(x: T1, y: T2): (T1, T1, T2) { (copy x, x, y) }
-```
-
-For more details, see [Move generics](./generics.md).
-
-### Parameters
-
-Function parameters are declared with a local variable name followed by a type annotation
-
-```move
-fun add(x: u64, y: u64): u64 { x + y }
-```
-
-We read this as `x` has type `u64`.
-
-A function does not have to have any parameters at all.
-
-```move
-fun useless() { }
-```
-
-This is very common for functions that create new or empty data structures
-
-```move=
-address 0x42 {
-module example {
- struct Counter { count: u64 }
-
- fun new_counter(): Counter {
- Counter { count: 0 }
- }
-
-}
-}
-```
-
-### Acquires
-
-When a function accesses a resource using `move_from`, `borrow_global`, or `borrow_global_mut`, the function must indicate that it `acquires` that resource. This is then used by Move's type system to ensure the references into global storage are safe, specifically that there are no dangling references into global storage.
-
-```move=
-address 0x42 {
-module example {
-
- struct Balance has key { value: u64 }
-
- public fun add_balance(s: &signer, value: u64) {
- move_to(s, Balance { value })
- }
-
- public fun extract_balance(addr: address): u64 acquires Balance {
- let Balance { value } = move_from<Balance>(addr); // acquires needed
- value
- }
-}
-}
-```
-
-`acquires` annotations must also be added for transitive calls within the module. Calls to these functions from another module do not need to be annotated with `acquires` because one module cannot access resources declared in another module, so the annotation is not needed to ensure reference safety.
-
-```move=
-address 0x42 {
-module example {
-
- struct Balance has key { value: u64 }
-
- public fun add_balance(s: &signer, value: u64) {
- move_to(s, Balance { value })
- }
-
- public fun extract_balance(addr: address): u64 acquires Balance {
- let Balance { value } = move_from<Balance>(addr); // acquires needed
- value
- }
-
- public fun extract_and_add(sender: address, receiver: &signer) acquires Balance {
- let value = extract_balance(sender); // acquires needed here
- add_balance(receiver, value)
- }
-}
-}
-
-address 0x42 {
-module other {
- fun extract_balance(addr: address): u64 {
- 0x42::example::extract_balance(addr) // no acquires needed
- }
-}
-}
-```
-
-A function can `acquire` as many resources as it needs to
-
-```move=
-address 0x42 {
-module example {
- use std::vector;
-
- struct Balance has key { value: u64 }
- struct Box<T> has key { items: vector<T> }
-
- public fun store_two<Item1: store, Item2: store>(
- addr: address,
- item1: Item1,
- item2: Item2,
- ) acquires Balance, Box {
- let balance = borrow_global_mut<Balance>(addr); // acquires needed
- balance.value = balance.value - 2;
- let box1 = borrow_global_mut<Box<Item1>>(addr); // acquires needed
- vector::push_back(&mut box1.items, item1);
- let box2 = borrow_global_mut<Box<Item2>>(addr); // acquires needed
- vector::push_back(&mut box2.items, item2);
- }
-}
-}
-```
-
-### Return type
-
-After the parameters, a function specifies its return type.
-
-```move
-fun zero(): u64 { 0 }
-```
-
-Here `: u64` indicates that the function's return type is `u64`.
-
-:::tip
-A function can return an immutable `&` or mutable `&mut` [reference](./references.md) if derived from an input reference. Keep in mind, this means that a function [cannot return a reference to global storage](./references.md#references-cannot-be-stored) unless it is an [inline function](#inline-functions).
-:::
-
-Using tuples, a function can return multiple values:
-
-```move
-fun one_two_three(): (u64, u64, u64) { (0, 1, 2) }
-```
-
-If no return type is specified, the function has an implicit return type of unit `()`. These functions are equivalent:
-
-```move
-fun just_unit(): () { () }
-fun just_unit() { () }
-fun just_unit() { }
-```
-
-`script` functions must have a return type of unit `()`:
-
-```move
-script {
- fun do_nothing() {
- }
-}
-```
-
-As mentioned in the [tuples section](./tuples.md), these tuple "values" are virtual and do not exist at runtime. So for a function that returns unit `()`, it will not be returning any value at all during execution.
-
-### Function body
-
-A function's body is an expression block. The return value of the function is the last value in the sequence
-
-```move=
-fun example(): u64 {
- let x = 0;
- x = x + 1;
- x // returns 'x'
-}
-```
-
-See [the section below for more information on returns](#returning-values)
-
-For more information on expression blocks, see [Move variables](./variables.md).
-
-### Native Functions
-
-Some functions do not have a body specified, and instead have the body provided by the VM. These functions are marked `native`.
-
-Without modifying the VM source code, a programmer cannot add new native functions. Furthermore, it is the intent that `native` functions are used for either standard library code or for functionality needed for the given Move environment.
-
-Most `native` functions you will likely see are in standard library code such as `vector`
-
-```move=
-module std::vector {
- native public fun empty<Element>(): vector<Element>;
- ...
-}
-```
-
-## Calling
-
-When calling a function, the name can be specified either through an alias or fully qualified
-
-```move=
-address 0x42 {
-module example {
- public fun zero(): u64 { 0 }
-}
-}
-
-script {
- use 0x42::example::{Self, zero};
- fun call_zero() {
- // With the `use` above all of these calls are equivalent
- 0x42::example::zero();
- example::zero();
- zero();
- }
-}
-```
-
-When calling a function, an argument must be given for every parameter.
-
-```move=
-address 0x42 {
-module example {
- public fun takes_none(): u64 { 0 }
- public fun takes_one(x: u64): u64 { x }
- public fun takes_two(x: u64, y: u64): u64 { x + y }
- public fun takes_three(x: u64, y: u64, z: u64): u64 { x + y + z }
-}
-}
-
-script {
- use 0x42::example;
- fun call_all() {
- example::takes_none();
- example::takes_one(0);
- example::takes_two(0, 1);
- example::takes_three(0, 1, 2);
- }
-}
-```
-
-Type arguments can be either specified or inferred. Both calls are equivalent.
-
-```move=
-address 0x42 {
-module example {
- public fun id<T>(x: T): T { x }
-}
-}
-
-script {
- use 0x42::example;
- fun call_all() {
- example::id(0);
- example::id<u64>(0);
- }
-}
-```
-
-For more details, see [Move generics](./generics.md).
-
-
-## Returning values
-
-The result of a function, its "return value", is the final value of its function body. For example
-
-```move=
-fun add(x: u64, y: u64): u64 {
- x + y
-}
-```
-
-[As mentioned above](#function-body), the function's body is an [expression block](./variables.md). The expression block can be a sequence of various statements, and the final expression in the block will be the value of that block.
-
-```move=
-fun double_and_add(x: u64, y: u64): u64 {
- let double_x = x * 2;
- let double_y = y * 2;
- double_x + double_y
-}
-```
-
-The return value here is `double_x + double_y`
-
-### `return` expression
-
-A function implicitly returns the value that its body evaluates to. However, functions can also use the explicit `return` expression:
-
-```move
-fun f1(): u64 { return 0 }
-fun f2(): u64 { 0 }
-```
-
-These two functions are equivalent. In this slightly more involved example, the function subtracts two `u64` values, but returns early with `0` if the second value is too large:
-
-```move=
-fun safe_sub(x: u64, y: u64): u64 {
- if (y > x) return 0;
- x - y
-}
-```
-
-Note that the body of this function could also have been written as `if (y > x) 0 else x - y`.
-
-However where `return` really shines is in exiting deep within other control flow constructs. In this example, the function iterates through a vector to find the index of a given value:
-
-```move=
-use std::vector;
-use std::option::{Self, Option};
-fun index_of<T>(v: &vector<T>, target: &T): Option<u64> {
- let i = 0;
- let n = vector::length(v);
- while (i < n) {
- if (vector::borrow(v, i) == target) return option::some(i);
- i = i + 1
- };
-
- option::none()
-}
-```
-
-Using `return` without an argument is shorthand for `return ()`. That is, the following two functions are equivalent:
-
-```move
-fun foo() { return }
-fun foo() { return () }
-```
-
-## Inline Functions
-
-Inline functions are functions whose bodies are expanded in place at the caller location during compile time.
-Thus, inline functions do not appear in Move bytecode as separate functions: all calls to them are expanded away by the compiler.
-In certain circumstances, they may lead to faster execution and save gas.
-However, users should be aware that they could lead to larger bytecode size: excessive inlining potentially triggers various size restrictions.
-
-One can define an inline function by adding the `inline` keyword to a function declaration as shown below:
-
-```move=
-inline fun percent(x: u64, y: u64): u64 { x * 100 / y }
-```
-
-If we call this inline function as `percent(2, 200)`, the compiler will replace this call with the inline function's body, as if the user had written `2 * 100 / 200`.
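-
-To make the expansion concrete, a hypothetical caller (shown only for illustration) compiles as if the body were written at the call site:
-
-```move=
-fun caller(): u64 {
-    percent(2, 200) // expanded at compile time, as if written: 2 * 100 / 200
-}
-```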
-
-### Function parameters and lambda expressions
-
-Inline functions support _function parameters_, which accept lambda expressions (i.e., anonymous functions) as arguments.
-This feature allows writing several common programming patterns elegantly.
-Similar to inline functions, lambda expressions are also expanded at call site.
-
-A lambda expression includes a list of parameter names (enclosed within `||`) followed by the body.
-Some simple examples are: `|x| x + 1`, `|x, y| x + y`, `|| 1`, `|| { 1 }`.
-A lambda's body can refer to variables available in the scope where the lambda is defined: this is also known as capturing.
-Such variables can be read or written (if mutable) by the lambda expression.
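-
-As a small sketch of capturing (the function and variable names here are hypothetical):
-
-```move=
-inline fun repeat_twice(f: ||) {
-    f();
-    f();
-}
-
-fun count(): u64 {
-    let counter = 0;
-    // The lambda captures the mutable local `counter` from the enclosing scope.
-    repeat_twice(|| counter = counter + 1);
-    counter
-}
-```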
-
-The type of a function parameter is written as `|<list of parameter types>| <return type>`.
-For example, when the function parameter type is `|u64, u64| bool`, any lambda expression that takes two `u64` parameters and returns a `bool` value can be provided as the argument.
-
-Below is an example that showcases many of these concepts in action (this example is taken from the `std::vector` module):
-
-```move=
-/// Fold the function over the elements.
-/// E.g, `fold(vector[1,2,3], 0, f)` is the same as `f(f(f(0, 1), 2), 3)`.
-public inline fun fold<Accumulator, Element>(
- v: vector<Element>,
- init: Accumulator,
- f: |Accumulator,Element|Accumulator
-): Accumulator {
- let accu = init;
- // Note: `for_each` is an inline function, but is not shown here.
- for_each(v, |elem| accu = f(accu, elem));
- accu
-}
-```
-
-The type signature of the elided public inline function `for_each` is `fun for_each<Element>(v: vector<Element>, f: |Element|)`.
-Its second parameter `f` is a function parameter which accepts any lambda expression that consumes an `Element` and returns nothing.
-In the code example, we use the lambda expression `|elem| accu = f(accu, elem)` as an argument to this function parameter.
-Note that this lambda expression captures the variable `accu` from the outer scope.
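-
-A caller could then use `fold` like this (a sketch, assuming the `std::vector` definitions above):
-
-```move=
-use std::vector;
-fun sum(v: vector<u64>): u64 {
-    // Computes f(f(f(0, v[0]), v[1]), v[2]) ... with f being addition.
-    vector::fold(v, 0, |acc, elem| acc + elem)
-}
-```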
-
-### Current restrictions
-
-There are plans to loosen some of these restrictions in the future, but for now,
-
-- Only inline functions can have function parameters.
-- Only explicit lambda expressions can be passed as an argument to an inline function's function parameters.
-- Inline functions and lambda expressions cannot have `return`, `break`, or `continue` expressions.
-- Inline functions or lambda expressions cannot return lambda expressions.
-- Cyclic recursion involving only inline functions is not allowed.
-- Parameters in lambda expressions must not be type annotated (e.g., `|x: u64| x + 1` is not allowed): their types are inferred.
-
-### Additional considerations
-
-- Avoid using module-private constants/methods in public inline functions.
- When such inline functions are called outside of that module, an in-place expansion at call site leads to invalid access of the private constants/methods.
-- Avoid marking large functions that are called at different locations as inline. Also avoid inline functions calling lots of other inline functions transitively.
- These may lead to excessive inlining and increase the bytecode size.
-- Inline functions can be useful for returning references to global storage, which non-inline functions cannot do.
diff --git a/developer-docs-site/docs/move/book/generics.md b/developer-docs-site/docs/move/book/generics.md
deleted file mode 100644
index 1e5364cd4a3e7..0000000000000
--- a/developer-docs-site/docs/move/book/generics.md
+++ /dev/null
@@ -1,482 +0,0 @@
-# Generics
-
-Generics can be used to define functions and structs over different input data types. This language feature is sometimes referred to as *parametric polymorphism*. In Move, we will often use the term generics interchangeably with type parameters and type arguments.
-
-Generics are commonly used in library code, such as in vector, to declare code that works over any possible instantiation (that satisfies the specified constraints). In other frameworks, generic code can sometimes be used to interact with global storage in many different ways that all still share the same implementation.
-
-## Declaring Type Parameters
-
-Both functions and structs can take a list of type parameters in their signatures, enclosed by a pair of angle brackets `<...>`.
-
-### Generic Functions
-
-Type parameters for functions are placed after the function name and before the (value) parameter list. The following code defines a generic identity function that takes a value of any type and returns that value unchanged.
-
-```move
-fun id<T>(x: T): T {
- // this type annotation is unnecessary but valid
- (x: T)
-}
-```
-
-Once defined, the type parameter `T` can be used in parameter types, return types, and inside the function body.
-
-### Generic Structs
-
-Type parameters for structs are placed after the struct name, and can be used to name the types of the fields.
-
-```move
-struct Foo<T> has copy, drop { x: T }
-
-struct Bar<T1, T2> has copy, drop {
- x: T1,
- y: vector<T2>,
-}
-```
-
-Note that [type parameters do not have to be used](#unused-type-parameters).
-
-## Type Arguments
-
-### Calling Generic Functions
-
-When calling a generic function, one can specify the type arguments for the function's type parameters in a list enclosed by a pair of angle brackets.
-
-```move
-fun foo() {
- let x = id<bool>(true);
-}
-```
-
-If you do not specify the type arguments, Move's [type inference](#type-inference) will supply them for you.
-
-### Using Generic Structs
-
-Similarly, one can attach a list of type arguments for the struct's type parameters when constructing or destructing values of generic types.
-
-```move
-fun foo() {
- let foo = Foo<bool> { x: true };
- let Foo<bool> { x } = foo;
-}
-```
-
-If you do not specify the type arguments, Move's [type inference](#type-inference) will supply them for you.
-
-### Type Argument Mismatch
-
-If you specify the type arguments and they conflict with the actual values supplied, an error will be given:
-
-```move
-fun foo() {
- let x = id<u64>(true); // error! true is not a u64
-}
-```
-
-and similarly:
-
-```move
-fun foo() {
- let foo = Foo<bool> { x: 0 }; // error! 0 is not a bool
- let Foo<address> { x } = foo; // error! bool is incompatible with address
-}
-```
-
-## Type Inference
-
-In most cases, the Move compiler will be able to infer the type arguments so you don't have to write them down explicitly. Here's what the examples above would look like if we omit the type arguments:
-
-```move
-fun foo() {
- let x = id(true);
- // ^ <bool> is inferred
-
- let foo = Foo { x: true };
- // ^ <bool> is inferred
-
- let Foo { x } = foo;
- // ^ <bool> is inferred
-}
-```
-
-Note: when the compiler is unable to infer the types, you'll need to annotate them manually. A common scenario is calling a function with type parameters that appear only in return positions.
-
-```move
-address 0x2 {
-module m {
- use std::vector;
-
- fun foo() {
- // let v = vector::new();
- // ^ The compiler cannot figure out the element type.
-
- let v = vector::new<u64>();
- // ^~~~~ Must annotate manually.
- }
-}
-}
-```
-
-However, the compiler will be able to infer the type if that return value is used later in that function:
-
-```move
-address 0x2 {
-module m {
- use std::vector;
-
- fun foo() {
- let v = vector::new();
- // ^ <u64> is inferred
- vector::push_back(&mut v, 42);
- }
-}
-}
-```
-
-## Unused Type Parameters
-
-For a struct definition,
-an unused type parameter is one that
-does not appear in any field defined in the struct,
-but is checked statically at compile time.
-Move allows unused type parameters so the following struct definition is valid:
-
-```move
-struct Foo<T> {
- foo: u64
-}
-```
-
-This can be convenient when modeling certain concepts. Here is an example:
-
-```move
-address 0x2 {
-module m {
- // Currency Specifiers
- struct Currency1 {}
- struct Currency2 {}
-
- // A generic coin type that can be instantiated using a currency
- // specifier type.
- // e.g. Coin<Currency1>, Coin<Currency2> etc.
- struct Coin<Currency> has store {
- value: u64
- }
-
- // Write code generically about all currencies
- public fun mint_generic<Currency>(value: u64): Coin<Currency> {
- Coin { value }
- }
-
- // Write code concretely about one currency
- public fun mint_concrete(value: u64): Coin<Currency1> {
- Coin { value }
- }
-}
-}
-```
-
-In this example,
-`struct Coin<Currency>` is generic on the `Currency` type parameter,
-which specifies the currency of the coin and
-allows code to be written either
-generically on any currency or
-concretely on a specific currency.
-This genericity applies even when the `Currency` type parameter
-does not appear in any of the fields defined in `Coin`.
-
-### Phantom Type Parameters
-
-In the example above,
-although `struct Coin<Currency>` asks for the `store` ability,
-neither `Coin<Currency1>` nor `Coin<Currency2>` will have the `store` ability.
-This is because of the rules for
-[Conditional Abilities and Generic Types](./abilities.md#conditional-abilities-and-generic-types)
-and the fact that `Currency1` and `Currency2` don't have the `store` ability,
-despite the fact that they are not even used in the body of `struct Coin`.
-This might cause some unpleasant consequences.
-For example, we are unable to put `Coin<Currency1>` into a wallet in the global storage.
-
-One possible solution would be to
-add spurious ability annotations to `Currency1` and `Currency2`
-(i.e., `struct Currency1 has store {}`).
-But, this might lead to bugs or security vulnerabilities
-because it weakens the types with unnecessary ability declarations.
-For example, we would never expect a resource in the global storage to have a field in type `Currency1`,
-but this would be possible with the spurious `store` ability.
-Moreover, the spurious annotations would be infectious,
-requiring many functions generic on the unused type parameter to also include the necessary constraints.
-
-Phantom type parameters solve this problem.
-Unused type parameters can be marked as *phantom* type parameters,
-which do not participate in the ability derivation for structs.
-In this way,
-arguments to phantom type parameters are not considered when deriving the abilities for generic types,
-thus avoiding the need for spurious ability annotations.
-For this relaxed rule to be sound,
-Move's type system guarantees that a parameter declared as `phantom` is either
-not used at all in the struct definition, or
-it is only used as an argument to type parameters also declared as `phantom`.
-
-#### Declaration
-
-In a struct definition
-a type parameter can be declared as phantom by adding the `phantom` keyword before its declaration.
-If a type parameter is declared as phantom we say it is a phantom type parameter.
-When defining a struct, Move's type checker ensures that every phantom type parameter is either
-not used inside the struct definition or
-it is only used as an argument to a phantom type parameter.
-
-More formally,
-if a type is used as an argument to a phantom type parameter
-we say the type appears in _phantom position_.
-With this definition in place,
-the rule for the correct use of phantom parameters can be specified as follows:
-**A phantom type parameter can only appear in phantom position**.
-
-The following two examples show valid uses of phantom parameters.
-In the first one,
-the parameter `T1` is not used at all inside the struct definition.
-In the second one, the parameter `T1` is only used as an argument to a phantom type parameter.
-
-```move
-struct S1<phantom T1, T2> { f: u64 }
-                  ^^
-                  Ok: T1 does not appear inside the struct definition
-
-
-struct S2<phantom T1, T2> { f: S1<T1, T2> }
-                                  ^^
-                                  Ok: T1 appears in phantom position
-```
-
-The following code shows examples of violations of the rule:
-
-```move
-struct S1<phantom T> { f: T }
-                          ^
-                          Error: Not a phantom position
-
-struct S2<T> { f: T }
-
-struct S3<phantom T> { f: S2<T> }
-                             ^
-                             Error: Not a phantom position
-```
-
-#### Instantiation
-
-When instantiating a struct,
-the arguments to phantom parameters are excluded when deriving the struct abilities.
-For example, consider the following code:
-
-```move
-struct S<T1, phantom T2> has copy { f: T1 }
-struct NoCopy {}
-struct HasCopy has copy {}
-```
-
-Consider now the type `S<HasCopy, NoCopy>`.
-Since `S` is defined with `copy` and all non-phantom arguments have `copy`
-then `S<HasCopy, NoCopy>` also has `copy`.
-
-#### Phantom Type Parameters with Ability Constraints
-
-Ability constraints and phantom type parameters are orthogonal features in the sense that
-phantom parameters can be declared with ability constraints.
-When instantiating a phantom type parameter with an ability constraint,
-the type argument has to satisfy that constraint,
-even though the parameter is phantom.
-For example, the following definition is perfectly valid:
-
-```move
-struct S<phantom T: copy> {}
-```
-
-The usual restrictions apply and `T` can only be instantiated with arguments having `copy`.
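-
-Continuing the example, the constraint is enforced at instantiation (a sketch; `has drop` is added so the locals can be discarded, and `HasCopy`/`NoCopy` are the illustrative types from above):
-
-```move=
-struct S<phantom T: copy> has drop {}
-struct HasCopy has copy {}
-struct NoCopy {}
-
-fun example() {
-    let _a = S<HasCopy> {};   // valid: `HasCopy` has `copy`
-    // let _b = S<NoCopy> {}; // error! `NoCopy` does not have `copy`
-}
-```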
-
-## Constraints
-
-In the examples above, we have demonstrated how one can use type parameters to define "unknown" types that can be plugged in by callers at a later time. This however means the type system has little information about the type and has to perform checks in a very conservative way. In some sense, the type system must assume the worst case scenario for an unconstrained generic. Simply put, by default generic type parameters have no [abilities](./abilities.md).
-
-This is where constraints come into play: they offer a way to specify what properties these unknown types have so the type system can allow operations that would otherwise be unsafe.
-
-### Declaring Constraints
-
-Constraints can be imposed on type parameters using the following syntax.
-
-```move
-// T is the name of the type parameter
-T: <ability> (+ <ability>)*
-```
-
-The `<ability>` can be any of the four [abilities](./abilities.md), and a type parameter can be constrained with multiple abilities at once. So all of the following would be valid type parameter declarations:
-
-```move
-T: copy
-T: copy + drop
-T: copy + drop + store + key
-```
-
-### Verifying Constraints
-
-Constraints are checked at call sites so the following code won't compile.
-
-```move
-struct Foo<T: key> { x: T }
-
-struct Bar { x: Foo<u8> }
-// ^ error! u8 does not have 'key'
-
-struct Baz<T> { x: Foo<T> }
-// ^ error! T does not have 'key'
-```
-
-```move
-struct R {}
-
-fun unsafe_consume<T>(x: T) {
- // error! x does not have 'drop'
-}
-
-fun consume<T: drop>(x: T) {
- // valid!
- // x will be dropped automatically
-}
-
-fun foo() {
- let r = R {};
- consume(r);
- // ^ error! R does not have 'drop'
-}
-```
-
-```move
-struct R {}
-
-fun unsafe_double<T>(x: T) {
- (copy x, x)
- // error! x does not have 'copy'
-}
-
-fun double<T: copy>(x: T) {
- (copy x, x) // valid!
-}
-
-fun foo(): (R, R) {
- let r = R {};
- double(r)
- // ^ error! R does not have 'copy'
-}
-```
-
-For more information, see the abilities section on [conditional abilities and generic types](./abilities.md#conditional-abilities-and-generic-types).
-
-## Limitations on Recursions
-
-### Recursive Structs
-
-Generic structs cannot contain fields of the same type, either directly or indirectly, even with different type arguments. All of the following struct definitions are invalid:
-
-```move
-struct Foo<T> {
- x: Foo<u64> // error! 'Foo' containing 'Foo'
-}
-
-struct Bar<T> {
- x: Bar<T> // error! 'Bar' containing 'Bar'
-}
-
-// error! 'A' and 'B' forming a cycle, which is not allowed either.
-struct A<T> {
- x: B<T>
-}
-
-struct B<T> {
- x: A<T>,
- y: A<u64>,
-}
-```
-
-### Advanced Topic: Type-level Recursions
-
-Move allows generic functions to be called recursively. However, when used in combination with generic structs, this could create an infinite number of types in certain cases, and allowing this would mean adding unnecessary complexity to the compiler, VM, and other language components. Therefore, such recursions are forbidden.
-
-Allowed:
-
-```move
-address 0x2 {
-module m {
- struct A<T> {}
-
- // Finitely many types -- allowed.
- // foo<T> -> foo<T> -> foo<T> -> ... is valid
- fun foo<T>() {
- foo<T>();
- }
-
- // Finitely many types -- allowed.
- // foo<T> -> foo<A<u64>> -> foo<A<u64>> -> ... is valid
- fun foo<T>() {
- foo<A<u64>>();
- }
-}
-}
-```
-
-Not allowed:
-
-```move
-address 0x2 {
-module m {
- struct A<T> {}
-
- // Infinitely many types -- NOT allowed.
- // error!
- // foo<T> -> foo<A<T>> -> foo<A<A<T>>> -> ...
- fun foo<T>() {
- foo<A<T>>();
- }
-}
-}
-```
-
-```move
-address 0x2 {
-module n {
- struct A<T> {}
-
- // Infinitely many types -- NOT allowed.
- // error!
- // foo<T1, T2> -> bar<T2, T1> -> foo<T2, A<T1>>
- // -> bar<A<T1>, T2> -> foo<A<T1>, A<T2>>
- // -> bar<A<T2>, A<T1>> -> foo<A<T2>, A<A<T1>>>
- // -> ...
- fun foo<T1, T2>() {
- bar<T2, T1>();
- }
-
- fun bar<T1, T2>() {
- foo<T1, A<T2>>();