Architecture
GPGPU is a cloud protocol that aggregates GPU resources and delivers results by routing client requests to the appropriate computation. Achieving this requires a structural design that can efficiently manage GPU resources spread across different regions and with differing characteristics. The network, composed of containers, nodes, and queues, allocates GPU resources according to client requests and verifies correct outputs through continuous monitoring. The architecture is divided into four layers:
Service Layer: The website or application through which clients, GPU providers, and node operators perform and manage their tasks. Clients can hire as many GPU resources as they need and execute their workloads; GPU providers can manage device connections, monitor connection status, and handle rewards. The service is designed with a user-friendly UI/UX so that users can understand it intuitively and onboard easily without technical knowledge.
Backend Layer: The system behind the service layer, where its website or application actually runs. It plays a central role in the protocol by processing user requests and recording node operations.
Node Layer: Nodes manage the GPU resources contributed by GPU providers and maintain the queues. They are responsible for monitoring hired GPUs and recording their reputation scores upon task completion.
Computation Layer: Each client is allocated a storage space in this layer. Clients can hire GPU resources as needed and process the data they upload to the cloud environment into the desired output. This is the layer where GPU resources actually execute tasks based on client requests.
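The flow across these layers can be sketched as follows. This is a minimal, hypothetical model: all class and function names (`GPU`, `Node`, `Backend`, `submit`, etc.) are illustrative assumptions, not part of the actual GPGPU protocol implementation. It shows a backend routing a client request to a node, the node allocating GPUs and recording reputation, and the task itself standing in for the computation layer.

```python
from dataclasses import dataclass, field

# Illustrative sketch only -- names and structure are assumed, not taken
# from the real GPGPU protocol.

@dataclass
class GPU:
    gpu_id: str
    region: str
    busy: bool = False

@dataclass
class Node:
    """Node layer: manages provider GPUs, allocates them, records reputation."""
    gpus: list = field(default_factory=list)
    reputation: dict = field(default_factory=dict)

    def allocate(self, count):
        # Hand out up to `count` idle GPUs, or None if capacity is short.
        free = [g for g in self.gpus if not g.busy]
        if len(free) < count:
            return None
        for g in free[:count]:
            g.busy = True
        return free[:count]

    def complete(self, gpus, success=True):
        # Release GPUs and record a simple +/-1 reputation score per device.
        for g in gpus:
            g.busy = False
            self.reputation[g.gpu_id] = (
                self.reputation.get(g.gpu_id, 0) + (1 if success else -1)
            )

class Backend:
    """Backend layer: receives client requests and dispatches them to nodes."""
    def __init__(self, nodes):
        self.nodes = nodes

    def submit(self, task, gpu_count):
        for node in self.nodes:
            gpus = node.allocate(gpu_count)
            if gpus:
                result = task(gpus)   # computation layer runs the task
                node.complete(gpus)   # node monitors and records reputation
                return result
        return None  # no capacity; a real system would queue the request

# A client (service layer) submits a task needing two GPUs.
node = Node(gpus=[GPU("g0", "us-east"), GPU("g1", "eu-west")])
backend = Backend([node])
out = backend.submit(lambda gpus: f"ran on {len(gpus)} GPU(s)", 2)
print(out)  # ran on 2 GPU(s)
```

In this sketch the backend simply tries nodes in order; the real protocol's queues and monitoring described above would replace the naive `allocate`/`complete` pair.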