Q&As_2018.5.30.md


Q1. Can you commit to a date for the main net token swap?

MATRIX will launch a test-net this September as per the original schedule. This is certain. Meanwhile, testing of the token swap will also begin, and all community members are welcome to participate in the testing during the three months before the main net goes live. The MATRIX Main Network will go online on 30 December 2018.

By the end of this month or early next month, we will push our P2P code (designed on the basis of the MATRIX network framework) and a beta version of the masternode election algorithm to GitHub. As for the cryptography component, we will introduce a new, enhanced version that is compatible with the current standard.

The English translation of the MATRIX masternode documentation is ongoing. The technical team will provide a detailed explanation of the setup scheme as well as the configuration of miner/validation nodes (including, but not limited to, the “why” and the benefits). Your valuable suggestions will be highly appreciated and taken into consideration.

Q2. I was wondering if we could get a progress update on the development of MATRIX. What are the next milestones to be achieved before the test-net launch, and are we still on track to release the test-net in September 2018?

As for a progress update on development, there will be a monthly report detailing what we have achieved, where we are, and what we will be doing next. Our experts will also take time to answer your questions each week.

You can find our current project plan here:

30 June: a Golang release supporting multi-machine interconnection and proof of concept;

30 July: a version that supports 32 SuperNodes running in a LAN, in which a single machine can reach 1K TPS. The design proposal is as follows:

The 50K TPS version currently in validation is based on a setup with three server clusters: two miner clusters and one validation cluster. The front-end network uses an FPGA version without ECC (Elliptic Curve Cryptography) validation, because we will use a standard PCIe cipher card for the final implementation.

As a matter of fact, the major bottleneck of the current design lies in achieving steady network transfer and protocol processing. To tackle this challenge, two new experts from HUAWEI have joined us; they will be responsible for the design work based on the Intel DPDK scheme, and we expect a proof-of-concept model by 30 July. As you may know, this will eliminate the need for special hardware or FPGAs; a standard x86 processor will suffice.

Q3. A detailed explanation of the delegate nodes (supernodes): how can MATRIX be more decentralized than EOS? How are delegate nodes chosen, and how is consensus achieved?

MATRIX is designed to have 21 delegate nodes, which act as the agents for transaction processing. Unlike EOS’s supernodes, the delegate nodes of MATRIX are randomly re-selected every period. This design maintains decentralization and fairness in the sense that the probability of a node remaining a delegate many times in a row is sufficiently low. A node has to put down a given deposit before it can enter the selection process. The selection period was chosen to be approximately 10 minutes.

The random delegate selection algorithm works as follows. First, the qualified nodes are ordered by their computing power. The nodes are then allocated sequentially to 32 clusters so that computing power stays balanced across clusters. Next, the algorithm iterates inside each cluster. Within a cluster, nodes are initially assigned to 3-node groups at random. The three nodes in each group choose a winner node; the winner can be selected by a random function or by a combination of computing power, communication bandwidth, network connectivity, and other metrics. The winners are randomly allocated to 3-node groups again in the next iteration and choose winners again. This process repeats until only one node is left, which becomes the delegate node of that cluster, while all other nodes become the delegate’s voter nodes. As there are 32 clusters, we obtain 32 delegate nodes: 21 of them handle the transactions, and the remaining 11 serve as audit nodes that verify whether the 21 delegates are working honestly.
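A minimal Go sketch of this tournament-style selection follows. All names are illustrative and the group winner here is chosen purely at random; the actual MATRIX implementation may weight computing power, bandwidth, and connectivity instead.

```go
package main

import (
	"fmt"
	"math/rand"
	"sort"
)

// Node is a hypothetical stand-in for a masternode candidate;
// the field names are illustrative, not taken from the MATRIX codebase.
type Node struct {
	ID    string
	Power float64 // computing power; bandwidth etc. could be added
}

const numClusters = 32

// assignClusters orders candidates by computing power and deals them
// out round-robin so total power stays balanced across the 32 clusters.
func assignClusters(candidates []Node) [][]Node {
	sort.Slice(candidates, func(i, j int) bool {
		return candidates[i].Power > candidates[j].Power
	})
	clusters := make([][]Node, numClusters)
	for i, n := range candidates {
		clusters[i%numClusters] = append(clusters[i%numClusters], n)
	}
	return clusters
}

// electDelegate runs the iterative 3-node tournament inside one cluster:
// nodes are shuffled into groups of three, each group keeps one winner,
// and the survivors repeat until a single delegate remains.
func electDelegate(cluster []Node, rng *rand.Rand) Node {
	round := append([]Node(nil), cluster...)
	for len(round) > 1 {
		rng.Shuffle(len(round), func(i, j int) { round[i], round[j] = round[j], round[i] })
		var winners []Node
		for i := 0; i < len(round); i += 3 {
			end := i + 3
			if end > len(round) {
				end = len(round)
			}
			group := round[i:end]
			// the winner is picked at random here; a weighted choice over
			// power/bandwidth/connectivity would slot in instead
			winners = append(winners, group[rng.Intn(len(group))])
		}
		round = winners
	}
	return round[0]
}

func main() {
	rng := rand.New(rand.NewSource(1)) // deterministic seed for the sketch
	candidates := make([]Node, 96)     // 96 qualified nodes -> 3 per cluster
	for i := range candidates {
		candidates[i] = Node{ID: fmt.Sprintf("node-%02d", i), Power: rng.Float64() * 100}
	}
	for c, cluster := range assignClusters(candidates) {
		fmt.Printf("cluster %2d delegate: %s\n", c, electDelegate(cluster, rng).ID)
	}
}
```

With 96 candidates the tournament finishes in one round per cluster; with more nodes the 3-node grouping simply repeats until a single survivor remains in each cluster.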

MATRIX introduces a novel hybrid consensus mechanism. At the level of delegate nodes, the consensus is still PoW, meaning that the 21 delegates compete for rewards. Such competition is essential to motivate the miners. Meanwhile, the delegate and its voters inside a cluster collaborate towards winning the reward: to raise its probability of winning the PoW by employing more computing power, the delegate distributes the transactions to be verified, as well as the mining work, among its voters, and all nodes then work together concurrently. This collaborative consensus mechanism has three major advantages. First, redundant work among different nodes is significantly reduced. Second, the transaction throughput and the computing power available for PoW are dramatically raised. Third, the MATRIX blockchain can be organized as a massive platform offering computing power.
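As a rough illustration of the collaborative step, the following Go sketch (all names hypothetical, not MATRIX code) shows a delegate fanning work units, e.g. transaction batches or mining shards, out to voter goroutines that process them concurrently:

```go
package main

import (
	"fmt"
	"sync"
)

type WorkUnit struct{ ID int }
type Result struct{ Unit, Voter int }

// processConcurrently models the voters of one cluster working in
// parallel on work units handed out by their delegate.
func processConcurrently(units []WorkUnit, numVoters int) []Result {
	jobs := make(chan WorkUnit)
	out := make(chan Result)
	var wg sync.WaitGroup

	// each voter consumes units from the shared jobs channel
	for v := 0; v < numVoters; v++ {
		wg.Add(1)
		go func(voter int) {
			defer wg.Done()
			for u := range jobs {
				// placeholder for transaction verification / a mining share
				out <- Result{Unit: u.ID, Voter: voter}
			}
		}(v)
	}
	go func() { wg.Wait(); close(out) }() // close out once all voters finish

	// the delegate feeds the work units and signals completion
	go func() {
		for _, u := range units {
			jobs <- u
		}
		close(jobs)
	}()

	var results []Result
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	units := make([]WorkUnit, 10)
	for i := range units {
		units[i] = WorkUnit{ID: i}
	}
	fmt.Println(len(processConcurrently(units, 3)), "units processed")
}
```

The point of the pattern is simply that verification and mining shares are processed in parallel across a cluster rather than redundantly repeated on every node.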

The mining algorithm of MATRIX no longer depends on a hash function. Instead, it is based on training deep neural networks (DNNs) and/or on Markov Chain Monte Carlo algorithms for Bayesian inference. In other words, MATRIX’s mining is directed towards the public good. MATRIX is equipped with portals for posting machine-learning jobs; the portals accept problem formulations, DNN models, and training data. The 21 delegate nodes fetch a posted job before the PoW computation. Each delegate then distributes its workload to its voters, which train the model with asynchronous updates. In the case of training DNNs, every voter node processes a given number of training samples and derives a local gradient, which is broadcast to the other nodes. Each node updates its gradient in a way similar to the federated learning algorithm. The delegate of the cluster that first finishes the training receives the reward.
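The following Go sketch illustrates the federated-style averaging described above in the simplest possible setting, a one-parameter model where each “voter” holds its own data shard. Everything here is illustrative, not MATRIX code:

```go
package main

import "fmt"

// localGradient stands in for one voter's training pass over its data
// shard: the gradient of mean squared error for a constant model y^ = w.
func localGradient(shard []float64, w float64) float64 {
	g := 0.0
	for _, y := range shard {
		g += 2 * (w - y)
	}
	return g / float64(len(shard))
}

func main() {
	shards := [][]float64{{1, 2}, {3, 4}, {5, 6}} // one shard per voter
	w, lr := 0.0, 0.1
	for step := 0; step < 50; step++ {
		sum := 0.0
		for _, s := range shards {
			sum += localGradient(s, w) // each voter computes locally
		}
		avg := sum / float64(len(shards)) // the "broadcast and average" step
		w -= lr * avg                     // every node applies the same update
	}
	fmt.Printf("converged weight = %.2f (the mean of all shard labels)\n", w)
}
```

Here the shared weight converges to the mean of all shard labels, mimicking how local gradients computed independently by voters combine into a single global update.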