An anonymous reader quotes a report from TechCrunch: More and more companies are running large language models, which require access to GPUs. The most popular of those by far are from Nvidia, making them expensive and often in short supply. Renting a long-term instance from a cloud provider doesn't necessarily make sense when you only need these costly resources for a single job. To help solve that problem, AWS today launched Amazon Elastic Compute Cloud (EC2) Capacity Blocks for ML, enabling customers to buy access to these GPUs for a defined amount of time, typically to run an AI-related job such as training a machine learning model or running an experiment with an existing one.

The product gives customers access to Nvidia H100 Tensor Core GPU instances in cluster sizes of one to 64 instances, with eight GPUs per instance. They can reserve time for up to 14 days in one-day increments, purchasable up to eight weeks in advance.
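For readers who want to try this programmatically, here is a minimal sketch using boto3's EC2 client and the DescribeCapacityBlockOfferings / PurchaseCapacityBlock APIs that back this launch. The region, instance count, duration, and date window below are illustrative assumptions, not values from the article; p5.48xlarge is the 8x H100 instance type the article describes.

```python
# Minimal sketch: find and purchase an EC2 Capacity Block for ML with boto3.
# Assumes AWS credentials are configured; region, count, duration, and
# date window below are illustrative assumptions, not from the article.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Search for available Capacity Block offerings: a 2-instance p5.48xlarge
# cluster (8 H100s per instance) reserved for 24 hours in the next week.
now = datetime.now(timezone.utc)
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p5.48xlarge",
    InstanceCount=2,
    CapacityDurationHours=24,  # blocks are sold in one-day increments
    StartDateRange=now,
    EndDateRange=now + timedelta(days=7),
)["CapacityBlockOfferings"]

if offerings:
    # Pick the cheapest matching offering by upfront fee and purchase it.
    best = min(offerings, key=lambda o: float(o["UpfrontFee"]))
    print(f"Purchasing block {best['CapacityBlockOfferingId']} "
          f"for {best['UpfrontFee']} {best['CurrencyCode']}")
    reservation = ec2.purchase_capacity_block(
        CapacityBlockOfferingId=best["CapacityBlockOfferingId"],
        InstancePlatform="Linux/UNIX",
    )["CapacityReservation"]
    print("Reservation ID:", reservation["CapacityReservationId"])
else:
    print("No matching Capacity Block offerings found.")
```

Once the block's start time arrives, instances are launched into it by targeting the returned capacity reservation ID in the launch request.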

Link to original post https://slashdot.org/story/23/11/01/2025206/new-aws-service-lets-customers-rent-nvidia-gpus-for-quick-ai-projects from Teknoids News