At GridSentience we are building a grid computing network that accelerates deep learning with commodity, mobile, and edge devices. Anyone can join our computing grid with any device, benefit from our solution, and even earn money by renting out spare computing capacity. We want people to say goodbye to expensive cloud providers and hello to more cost-effective, efficient, and secure computing, harnessing the power of connected devices to train large models faster and with greater privacy. Sign up for our waiting list today to be the first to know when our service becomes available.
If you're involved in deep learning, you know that training large models can be a time-consuming and expensive process. Popular cloud providers like AWS, Google Cloud, and Microsoft Azure offer convenient platforms for running these computations, but they often come with hefty price tags. What if there was a better way? Enter the world of edge device clusters. An edge device cluster is a network of connected devices, such as computers, smartphones, and IoT devices, that work together to perform computations. By harnessing the combined power of these devices, edge device clusters can offer a more cost-effective and efficient way to train large deep-learning models.
Cost-effectiveness: Unlike cloud providers that charge by the hour or by the amount of data processed, edge device clusters offer a more flexible pricing model. By using devices you already own, you save on the cost of renting cloud computing resources.

Speed: By distributing computations across multiple devices, edge device clusters can perform deep learning computations faster than a single device. This is especially beneficial for time-sensitive projects.

Privacy: If you're dealing with sensitive data, you may not want to store it on a third-party server. Edge device clusters offer a more secure option, as data stays local to the devices in the network.

Sustainability: By using devices that are already in circulation, edge device clusters reduce the need for additional hardware, which helps lower the carbon footprint of deep learning computations.
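The speed benefit above rests on a standard idea from distributed training: split each batch of data across devices, let every device compute gradients on its own shard, then average the shard gradients. With equal-size shards, the average equals the full-batch gradient, so the cluster learns exactly what a single machine would. Here is a minimal pure-Python sketch of that averaging step for a toy 1-D linear model; the shard/device names are illustrative, not part of any GridSentience API.

```python
def grad(xs, ys, w):
    """Mean-squared-error gradient for the model y ≈ w * x on one data shard."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

# One training batch: 16 examples generated from the true weight 3.0.
xs = [float(i) for i in range(16)]
ys = [3.0 * x for x in xs]
w = 0.0  # current model weight

# Split the batch evenly across 4 simulated devices.
def shard(data, k, n_devices=4):
    size = len(data) // n_devices
    return data[k * size:(k + 1) * size]

# Each "device" computes a gradient on its own shard; the cluster averages them.
device_grads = [grad(shard(xs, k), shard(ys, k), w) for k in range(4)]
avg_grad = sum(device_grads) / 4

# With equal-size shards, the averaged shard gradient equals the
# full-batch gradient computed on one machine.
full_grad = grad(xs, ys, w)
assert abs(avg_grad - full_grad) < 1e-9
```

In a real cluster the averaging step would happen over the network (for example via an all-reduce operation), but the arithmetic is the same: each device only ever sees its own shard, which is also what makes the privacy benefit above possible.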