TEA Party Code Walkthrough
In this section, we'll walk through the TEA Party application's sample code.
The steps are:
Clone the code to your local machine.
Install the build tools.
Understand the folder structure.
Understand the compile workflow.
Run it.
Start by cloning the following GitHub repo to your local machine: https://github.com/tearust/tapp-sample-teaparty
There are 4 folders (click the following links for more details):
: This is the .
: This is the .
party-share: This is the common data structure library that is shared by both the and the .
: This is the .
Note: This is a brief diagram. The real communication is more complicated than this.
Accounting information is stored in the state (e.g. when querying the balance of a user's TApp account.)
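The balance lookup described above can be sketched as a pure read against replicated in-memory state. This is only an illustration: the state container, the `query_balance` function, and the account names are all hypothetical stand-ins, not the real TEA state machine API.

```rust
use std::collections::HashMap;

/// Hypothetical in-memory "state": account address -> TEA balance.
/// A balance query only reads the replicated state; it never mutates it,
/// which is why it can skip the command (write) workflow.
fn query_balance(state: &HashMap<String, u64>, account: &str) -> u64 {
    // Unknown accounts simply read as a zero balance.
    *state.get(account).unwrap_or(&0)
}
```

The important property is that the function takes `&HashMap` (a shared reference), so the type system itself guarantees the query cannot change the state.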
Note: This is a brief diagram. The real communication is more complicated than this.
Running SQL queries is almost the same as running a query against the state. The only difference is that the state is replaced by the GlueSQL instance. Note: SQL queries are not allowed to change the state. Only Select statements are allowed in SQL queries.
Unlike Select, many SQL statements will change the database. These are all considered commands. The workflow is almost the same as the state command workflow, with the state replaced by the GlueSQL instance.
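The query/command split above can be sketched as a small router: `SELECT` statements take the read-only query path, while every mutating statement is treated as a command. The `route_sql` function and `SqlRoute` enum are illustrative names, not part of the actual TEA runtime.

```rust
/// Which execution path a SQL statement takes in this sketch.
#[derive(Debug, PartialEq)]
enum SqlRoute {
    Query,   // read-only, answered directly from the GlueSQL instance
    Command, // mutating, must go through the ordered command workflow
}

/// Route a statement: only `SELECT` is allowed on the query path;
/// everything else (INSERT, UPDATE, DELETE, DDL, ...) is a command.
fn route_sql(sql: &str) -> SqlRoute {
    if sql.trim_start().to_uppercase().starts_with("SELECT") {
        SqlRoute::Query
    } else {
        SqlRoute::Command
    }
}
```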
Because the state and GlueSQL are memory-based distributed databases, they're very expensive for storing large amounts of data. TApps that need to store large amounts of data should use either OrbitDB (structured data) or IPFS (blob data/files).
The diagram above shows a common use case that loads all messages. In many cases, though, the IDs (indexes) of the OrbitDB entries are stored in GlueSQL, so it's very common to first query GlueSQL to get the IDs. After successfully querying GlueSQL for the IDs, we can then query OrbitDB using those IDs to retrieve the actual data.
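The two-step read can be simulated with plain collections: a list of rows stands in for the GlueSQL index, and a key-value map stands in for the OrbitDB store. All names here (`query_ids_from_index`, `load_messages`, the "channel" column) are invented for the sketch; the real query APIs differ.

```rust
use std::collections::HashMap;

/// Step 1 (stand-in for GlueSQL): query the index table for the message
/// IDs that belong to a channel. Rows are (message_id, channel) pairs.
fn query_ids_from_index(index: &[(u64, String)], channel: &str) -> Vec<u64> {
    index
        .iter()
        .filter(|(_, c)| c.as_str() == channel)
        .map(|(id, _)| *id)
        .collect()
}

/// Step 2 (stand-in for OrbitDB): resolve each ID to its stored payload.
fn load_messages(store: &HashMap<u64, String>, ids: &[u64]) -> Vec<String> {
    ids.iter().filter_map(|id| store.get(id).cloned()).collect()
}
```

Keeping only the small index rows in GlueSQL and the bulky payloads in OrbitDB is exactly the cost trade-off the previous paragraph describes.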
The above diagram shows the combination of SQL and NoSQL.
Click on any of the following links for more details:
TODO:
Any user can launch a TApp by clicking on one of the s URLs (there's no domain used when launching TApps). Picking any of the URLs will work exactly the same, so you can choose the one with the least network latency. The URL is nothing but an IPFS CID.
Querying the state can return the result without having to wait in the conveyor queue. But the communication is still async, so additional queries for more results are still needed, which is not shown in the diagram. You can see the details on additional queries at .
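Because the channel is asynchronous, the client may receive several "not ready yet" responses before the actual result arrives, so it keeps issuing follow-up queries. The loop below is a stand-in for that polling pattern; the `QueryResult` type and `poll_until_ready` function are invented for illustration, not the real client API.

```rust
/// A response to one round-trip of the asynchronous query protocol.
enum QueryResult {
    Pending,    // result not ready yet: issue another query
    Ready(u64), // final result (e.g. a balance) has arrived
}

/// Keep consuming responses (one per follow-up query) until the result
/// is ready, or give up when the response stream ends.
fn poll_until_ready<I: Iterator<Item = QueryResult>>(responses: I) -> Option<u64> {
    for r in responses {
        if let QueryResult::Ready(v) = r {
            return Some(v);
        }
        // Pending: the caller would send one more query here.
    }
    None
}
```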
Commands are more complicated in that certain precautions must be taken before they're allowed to change the state. Like any other distributed state machine, we have to make sure the state in all the s is consistent. We use the algorithm to sort the commands by their timestamps so they are executed in an identical order across all replicas.
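The consistency argument boils down to a deterministic ordering rule: if every replica sorts the same set of commands by timestamp (with a unique ID as a tie-breaker), every replica executes them in the same order, regardless of arrival order. This sketch shows only that sorting step, not the timestamp agreement protocol itself; the `Cmd` struct is hypothetical.

```rust
/// A command as each replica sees it: a timestamp plus a unique ID
/// used to break ties deterministically when timestamps are equal.
#[derive(Debug, PartialEq)]
struct Cmd {
    ts: u64,
    id: u64,
}

/// Sort commands into the canonical execution order. Any two replicas
/// holding the same set of commands produce the same sequence.
fn deterministic_order(mut cmds: Vec<Cmd>) -> Vec<Cmd> {
    cmds.sort_by_key(|c| (c.ts, c.id));
    cmds
}
```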
The following diagram demonstrates the workflow of how a simple transfer txn command is handled. Note that this diagram is a simplified version. The full version can be found here: .
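Once a transfer command reaches the front of the ordered queue, applying it is a plain state transition. The sketch below shows only that final step against an in-memory balance map; signature checks, nonces, and the conveyor ordering are all omitted, and `apply_transfer` is an illustrative name, not the real handler.

```rust
use std::collections::HashMap;

/// Apply a simplified transfer command to the replicated balance state.
/// Fails (leaving the state untouched) if the sender's balance is
/// insufficient; otherwise debits the sender and credits the receiver.
fn apply_transfer(
    state: &mut HashMap<String, u64>,
    from: &str,
    to: &str,
    amount: u64,
) -> Result<(), String> {
    let from_bal = *state.get(from).unwrap_or(&0);
    if from_bal < amount {
        return Err("insufficient balance".to_string());
    }
    state.insert(from.to_string(), from_bal - amount);
    *state.entry(to.to_string()).or_insert(0) += amount;
    Ok(())
}
```

Because every replica applies the same commands in the same order, this deterministic function is enough to keep all copies of the balance map identical.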
OrbitDB and IPFS live inside the , so the s are not involved in this workflow.
Code walkthrough for .
Code walkthrough for .
Code walkthrough for .