Get Started - IO Cloud
This document provides a high-level overview of IO Cloud and onboarding steps that allow you to create, configure, deploy, and manage clusters.
Introduction
IO Cloud allows you to deploy and manage on-demand decentralized GPU clusters. This gives users access to GPU resources without expensive hardware investments or infrastructure management. IO Cloud democratizes access to GPUs by providing ML engineers and developers with the same experience as any other cloud provider.
IO Cloud leverages distributed resources from a decentralized network of nodes called IO workers. Clusters are the building blocks of IO Cloud: fully meshed, self-healing groups of GPUs. With IO Cloud, you can leverage a decentralized network of GPUs/CPUs capable of executing Python-based ML code for your AI projects. The platform is natively powered by Ray, the distributed computing Python framework used by OpenAI to train GPT-3 and GPT-4 across 300K servers.
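As a rough illustration of how Ray scales Python code, the sketch below fans a trivial task out across whatever workers the Ray runtime can see. The function and workload are hypothetical; on a deployed IO Cloud cluster you would point `ray.init()` at the cluster rather than a local runtime.

```python
import ray

# Start (or attach to) a Ray runtime. On a deployed cluster you would
# connect to the cluster's head node instead of a local instance.
ray.init()

@ray.remote
def square(x):
    # Any Python-based ML workload can run inside a remote task like this.
    return x * x

# Fan the work out across the available workers and gather the results.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```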
Key Features
- A large, cost-efficient, on-demand GPU cloud with unlimited scalability for AI/ML training and inference.
- Smooth integration with IO SDK: effortlessly connect IO Network's GPU resources with IO SDK's distributed computing capabilities to create a unified, high-performance environment for your AI projects.
- Unparalleled affordability: up to 90% cheaper per TFLOP.
- Globally distributed GPU resources, functioning as the CDN of ML serving and inference, bringing GPUs closer to end users.
- Natively powered by and battle-tested on Ray, the distributed computing Python framework used by OpenAI to train GPT-3 and GPT-4 across 300K servers, offering an exceptionally easy way to scale Python applications to any size.
- Future access to the IO Models Store and advanced inference features such as serverless inference, cloud gaming, and pixel streaming.
Create Account
Go to cloud.io.net to create an io.net account. Currently, you must create an account with a service provider: X, Apple ID, Worldcoin or Google. Choose your preferred option, then click Sign Up. This creates an account and signs you in.
Payments
IO Cloud offers different payment methods, and users can pay at different points in the cluster deployment process. The two main ways to pay for clusters are Solana and credit cards. To use Solana, you must set up a wallet, either when you register your account or later in Account Settings. Users can add money to their account or pay for their clusters at the end of the configuration process.
To learn more about the types of payments we offer and step-by-step guides, see IO Cloud Payments.
App Guide
IO Cloud Home Page
The IO Cloud home page (tab) lets you deploy clusters, browse the GPU marketplace, add funds to your balance, and view and monitor your clusters.
Clusters
The Clusters tabs allow you to view and manage deployed clusters. Each tab lists clusters of a specific type with details such as name, accelerator (GPU), status, and remaining compute hours. You can rename, extend, or terminate clusters. Each tab also offers access to important links for management: Visual Studio, Jupyter Notebook, and the Ray Dashboard.
In the upper-left, select the tab that corresponds to your cluster type.
io.net offers three different types of clusters you can use for your AI projects.
- Ray - A cluster of machines (nodes) managed by the Ray framework. Ray is an open-source framework that provides a universal API for building and running distributed applications. The nodes in a Ray cluster form a single distributed computing environment and work together to execute tasks and manage resources efficiently (a minimal sketch of attaching to such a cluster follows this list).
- Mega-Ray - A cluster of machines managed by the Ray framework. A Mega-Ray cluster lets you select and hire all GPUs or CPUs available under certain filters, whereas a standard Ray cluster uses a fixed number of workers.
- Kubernetes - A platform that automates the deployment, scaling, and management of containerized workloads and services.
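Once a Ray or Mega-Ray cluster is deployed, your Python code attaches to it instead of a local Ray runtime. The sketch below assumes it runs on (or is tunneled into) the cluster's head node; it simply inspects what the cluster exposes and is not an io.net-specific API.

```python
import ray

# "auto" attaches to the running Ray cluster this process can reach
# (assumed here to be the deployed cluster's head node).
ray.init(address="auto")

# Inspect the cluster's aggregate resources: total CPUs, GPUs, and memory.
print(ray.cluster_resources())

# List the worker nodes and whether each one is alive.
for node in ray.nodes():
    print(node["NodeManagerAddress"], node["Alive"])
```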
To view detailed instructions for each cluster type, see:
Clusters Tab
Select the tab that corresponds with your cluster type. This page allows you to monitor clusters and track data such as:
- Current Status
- Cluster Name
- Compute Hours Remaining and Served
- The type & number of GPUs/CPUs used in the cluster
Cluster Detail
The cluster detail page displays the following information:
- List of workers and status
- Type of GPU/CPU & Device ID
- Cluster ID
- Remaining Compute Hours
- Cost
- Connectivity Tier
The cluster detail page also offers access to important links for management: Visual Studio, Jupyter Notebook, and the Ray Dashboard.
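If you prefer to work from your own machine, a Ray Dashboard endpoint can also accept work through Ray's job-submission API. The sketch below is a generic Ray workflow rather than an io.net-documented procedure; the dashboard URL and script name are placeholders you would replace with values from your cluster detail page.

```python
from ray.job_submission import JobSubmissionClient

# Placeholder URL: use the Ray Dashboard address shown on your cluster detail page.
client = JobSubmissionClient("http://<ray-dashboard-address>:8265")

# Ship the current directory to the cluster and run a hypothetical training script.
job_id = client.submit_job(
    entrypoint="python train.py",
    runtime_env={"working_dir": "./"},
)
print(client.get_job_status(job_id))
```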
For a detailed explanation of monitoring and managing your clusters, see Monitor and Manage Clusters.