Through blockchain technology and its underlying philosophy, we aim to provide suitable pervasive computing resources for each individual scenario, so that efficient computing resources are no longer limited to top research laboratories and large organizations, but become a driving force that turns those resources directly into practical, breakthrough innovations and popularizes a wide range of applications. The project aims to solve the resource shortages faced by ubiquitous computing and blockchain systems from three angles: underlying computing resources, ecosystem construction, and token-based incentives. Through the issuance and circulation of tokens, it builds a sufficiently powerful storage pool and computing pool while attracting more community users to contribute spare storage and idle computing resources to the ecosystem. Through the full application of cryptography, it provides storage and computing capabilities with high reliability, high scalability, and fine-grained privacy protection. At the same time, for specific applications, DSP, FPGA, and ASIC design enables chips tailored to particular AI algorithms, meeting the computing-power requirements of future applications at multiple links in the chain.
Distributed storage spreads data across multiple independent devices, sharing the storage load among them, using metadata servers to locate stored data, and scaling horizontally. It aggregates the distributed storage devices into a virtual storage pool for use by upper-layer applications, improving system reliability, availability, and access efficiency. Distributed storage systems are gradually replacing traditional storage architectures, especially in the field of unstructured data.
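One common way a distributed store locates data without a heavyweight central index is consistent hashing; the sketch below is illustrative only (the class name and node labels are hypothetical, not part of this project's design) and shows why adding or removing a node remaps only a small fraction of keys, which is what makes horizontal scaling cheap.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring mapping object keys to storage nodes.

    Each node is placed on the ring at many virtual points; a key is
    owned by the first node point found clockwise from the key's hash.
    Adding or removing a node only remaps keys in that node's arcs.
    """

    def __init__(self, nodes, vnodes=100):
        self._ring = []  # sorted list of (hash, node) points
        for node in nodes:
            for i in range(vnodes):
                bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    @staticmethod
    def _hash(key):
        # First 8 bytes of SHA-256 as an integer position on the ring.
        return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

    def locate(self, key):
        """Return the node responsible for `key`."""
        idx = bisect.bisect_right(self._ring, (self._hash(key), ""))
        if idx == len(self._ring):
            idx = 0  # wrap around the ring
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.locate("photos/cat.jpg"))
```

In a real system the metadata service would also track replica locations and node health; the ring above covers only the placement function.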
The core technologies of distributed storage systems include metadata management, elastic system scaling, and storage optimization.
(1) Large capacity. The system's nodes use general-purpose or custom-architecture storage servers as building blocks, and storage nodes can be scaled out horizontally without practical limit according to user needs, forming a unified shared storage pool.
(2) High performance. The system provides aggregate IOPS and throughput 10–15 times higher than traditional storage, and performance grows linearly as storage nodes are added. A dedicated metadata module provides fast, accurate data retrieval and location, meeting the front-end business's need for rapid response.
(3) High reliability. The system has no single point of failure, so data security and business continuity are guaranteed. Dedicated data-protection policies between node devices provide device-level redundancy and allow damaged hard disks or node devices to be replaced online.
(4) High scalability. The system supports seamless online horizontal expansion. Under a redundancy policy, bringing any storage node online or offline has no impact on front-end services and is completely transparent. When a new storage node is added, automatic load balancing can be enabled so that data pressure is evenly distributed across all storage nodes. In addition, the system's storage capacity and throughput grow in step with its size, and file-access performance is maintained throughout. Through its topology and data organization, the system adapts to dynamic growth in both node count and data scale.
(5) High integration. The system is compatible with general-purpose storage servers of any brand and can be deployed easily in a standard IP/IB network environment without changing the existing network architecture.
(6) Easy to manage. The entire system can be configured and managed through a web interface or mobile app, keeping operation and maintenance simple.
(7) High privacy. Cryptographic techniques guarantee that information stored by individuals and organizations in the distributed store receives the appropriate level of privacy protection.
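The device-level redundancy described above requires a deterministic rule for choosing which nodes hold each object's replicas. One technique that fits (rendezvous, or highest-random-weight, hashing) is sketched below; the function and node names are hypothetical illustrations, not this project's actual placement scheme. Its useful property is that when a node fails, only that node's replicas move, so recovery touches a minimum of data.

```python
import hashlib

def replica_nodes(key, nodes, k=3):
    """Pick k distinct nodes to hold replicas of `key` via rendezvous
    (highest-random-weight) hashing: every node gets a pseudorandom
    per-key weight, and the k highest-weighted nodes win."""
    def weight(node):
        h = hashlib.sha256(f"{key}|{node}".encode()).digest()
        return int.from_bytes(h[:8], "big")
    return sorted(nodes, key=weight, reverse=True)[:k]

nodes = ["n1", "n2", "n3", "n4", "n5"]
primary_set = replica_nodes("block-0042", nodes)

# Simulate a device failure: only the failed node's replica is
# reassigned; the surviving replica placements are unchanged.
failed = primary_set[0]
survivors = [n for n in nodes if n != failed]
new_set = replica_nodes("block-0042", survivors)
```

Because each node's weight for a key is independent of the other nodes, removing one node never reorders the rest, which is exactly the online-replacement behavior the reliability feature calls for.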
Computing power, as the name suggests, is the capacity to perform computation. In cryptocurrency it generally refers to the number of hash attempts a miner can make per second while mining Bitcoin, measured in hash/s. At present, the total network hash rate of Bitcoin has climbed to a historical peak of roughly 32 EH/s, a signal that new miners continue to join Bitcoin mining. Different currencies use different mining algorithms: Bitcoin uses SHA-256, Litecoin uses scrypt, and Ethereum uses Ethash. In supercomputing and scientific computing, floating-point speed is the more common measure of computing power. On June 8, 2018, the US Department of Energy's Oak Ridge National Laboratory announced Summit, then the world's fastest supercomputer, with a peak floating-point speed of 200 petaflops.
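To make the hash/s unit concrete, the toy benchmark below counts how many double-SHA-256 attempts (Bitcoin's proof-of-work hash) a single CPU thread can make per second. The function names are illustrative; a real mining ASIC performs this same operation many orders of magnitude faster, which is why the network total is quoted in EH/s (10^18 hash/s).

```python
import hashlib
import time

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's proof-of-work hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def measure_hashrate(seconds=0.5):
    """Count double-SHA-256 attempts per second on this CPU thread."""
    header = bytearray(80)  # a Bitcoin block header is 80 bytes
    count = 0
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        # Vary the nonce (last 4 header bytes), as a miner would, then hash.
        header[76:80] = count.to_bytes(4, "little")
        double_sha256(bytes(header))
        count += 1
    return count / seconds

print(f"{measure_hashrate():,.0f} hash/s")
```

Dividing the network's ~32 EH/s by a figure like this shows the gulf between general-purpose CPUs and the ASICs discussed later in this section.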
Because the systems that supply computing power differ in architecture and purpose, there is no single unified technical metric for computing power.
In terms of architecture, computing systems can be divided into homogeneous and heterogeneous computing. Homogeneous computing uses systems built from computing units that share the same type of instruction set and architecture, while heterogeneous computing composes systems from computing units with different instruction sets and architectures. Common computing units include CPUs and coprocessors such as GPUs, DSPs, ASICs, and FPGAs. Heterogeneous computing is a form of parallel and distributed computing: it can run on a single independent computer that supports both SIMD and MIMD execution, or on a set of computers interconnected by a high-speed network. Concretely, heterogeneous computing uses both a general-purpose processor and an accelerator such as a GPU or a many-core chip in its operation.
Artificial intelligence (AI) is the science of researching and developing theories, methods, techniques, and applications for simulating, extending, and expanding human intelligence. By understanding the essence of intelligence, it produces intelligent machines that respond in a manner similar to human intelligence. Research in this area includes robotics, speech recognition, image recognition, natural language processing, and expert systems. On the hardware side, GPUs are mainly used for the parallel computation of neural networks, while FPGAs and ASICs have the potential to emerge in the future. An AI chip is generally an ASIC designed for specific AI algorithms. Traditional CPUs and GPUs can execute AI algorithms, but they are slower and less efficient, making them impractical for many commercial uses. Compared with traditional terminal chips, cloud AI chips are larger in scale, more complex in structure, stronger in computing power, and more energy-efficient, characteristics that match the runtime requirements of the software running on the device.
As special-purpose chips, ASICs differ from traditional general-purpose chips: because an ASIC is custom-tailored to a specific need, its computing power and computational efficiency can be matched to the requirements of the algorithm. Compared with general-purpose chips, ASICs offer smaller size, lower power consumption, higher computing performance, and higher computational efficiency; and the larger the shipment volume, the lower the per-unit cost.
(1) High performance. In high-performance mode, the ASIC chip delivers an equivalent theoretical peak of 166.4 trillion fixed-point operations per second, with typical board power of only 80 watts and peak power of no more than 110 watts.
(2) High efficiency. AI chips are far more efficient than conventional chips. Built on Cambricon's latest MLUv01 architecture and TSMC's advanced 16 nm process, the chip reaches an equivalent theoretical peak of 128 trillion fixed-point operations per second in balanced mode.
(3) Strong computing power. AI chip architectures grew out of graphics processing and have powerful parallel computing capability. Thanks to their highly parallel structure, they achieve higher efficiency and stronger computing power when processing graphics data and complex algorithms.
(4) Small size. Compared with traditional chips, AI chips are smaller and well suited to special-purpose and proprietary applications.
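The performance and power figures quoted above imply an energy-efficiency ratio that is worth making explicit, since TOPS/W is the usual way AI accelerators are compared. This is a back-of-the-envelope calculation using only the numbers from points (1) and (2); the variable names are for illustration.

```python
# Figures quoted above for the high-performance mode of the ASIC chip.
peak_tops = 166.4      # trillion fixed-point operations per second
typical_watts = 80.0   # typical board power
peak_watts = 110.0     # peak board power

tops_per_watt_typical = peak_tops / typical_watts
tops_per_watt_peak = peak_tops / peak_watts

print(f"{tops_per_watt_typical:.2f} TOPS/W at typical power")  # 2.08 TOPS/W
print(f"{tops_per_watt_peak:.2f} TOPS/W at peak power")        # 1.51 TOPS/W
```

Roughly two trillion fixed-point operations per watt-second at typical power illustrates why ASICs dominate general-purpose CPUs and GPUs for fixed AI workloads.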