Amazon Web Services announced a slew of new or updated offerings at its cloud-computing conference in Las Vegas, seeking to maintain its lead in the market for internet-based computing. Following is a rundown.

Amazon Elastic Inference is a new service that lets customers attach GPU-powered inference acceleration to any Amazon EC2 instance and reduces deep learning inference costs by up to 75 percent. From a report: “What we see typically is that the average utilization of these P3 instances’ GPUs is about 10 to 30 percent, which is pretty wasteful. With Elastic Inference, you don’t have to waste all that cost and all that GPU,” AWS chief executive Andy Jassy said onstage at the AWS re:Invent conference earlier today. “[Amazon Elastic Inference] is a pretty significant game changer in being able to run inference much more cost-effectively.”
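
In practice, an accelerator is requested at the moment the EC2 instance is launched. A minimal boto3 sketch of that flow is below; the AMI ID, key pair, instance type and accelerator size are placeholders chosen for illustration rather than values from the announcement, and the IAM role and VPC endpoint setup the service also needs are omitted.

```python
# Hypothetical sketch: launch a CPU instance with an Elastic Inference
# accelerator attached, instead of renting a full P3 GPU instance that
# would sit mostly idle between inference requests.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder deep learning AMI
    InstanceType="c5.xlarge",          # general-purpose host for the model
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair
    ElasticInferenceAccelerators=[
        {"Type": "eia1.medium"}        # fractional GPU sized for inference only
    ],
)

print(response["Instances"][0]["InstanceId"])
```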

While the majority of workloads in the cloud are Linux-based, Amazon Web Services (AWS) CEO Andy Jassy said he is well aware that Windows is still significant, and as a result his company launched a new fully managed Windows file system built on native Windows file servers. From a report: “What we were hoping to do was make this Windows file system work as part of EFS; it would have been much easier for us to layer on another file system … because it’s much easier if you’re trying to build a business at scale,” he explained. However, he said customers wanted a native Windows file system and they “weren’t being flexible.” “So we changed our approach,” he continued.
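
The service described here shipped as Amazon FSx for Windows File Server. A minimal boto3 sketch of provisioning such a file system follows; the subnet, security group and directory IDs are placeholders, and the storage and throughput values are illustrative assumptions rather than figures from the announcement.

```python
# Hypothetical sketch: create a fully managed Windows (SMB) file system
# backed by native Windows file servers joined to an Active Directory.
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                        # GiB, illustrative size
    SubnetIds=["subnet-0123456789abcdef0"],     # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
    WindowsConfiguration={
        "ActiveDirectoryId": "d-0123456789",    # managed AD the file servers join
        "ThroughputCapacity": 8,                # MB/s, illustrative
    },
)

print(response["FileSystem"]["FileSystemId"])
```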

Inferentia is the company’s own dedicated machine learning inference chip. From a report: “Inferentia will be a very high-throughput, low-latency, sustained-performance, very cost-effective processor,” AWS CEO Andy Jassy explained during the announcement. Holger Mueller, an analyst with Constellation Research, says that while Amazon is far behind, this is a good step for the company as cloud providers try to differentiate their machine learning approaches in the future. Inferentia supports popular data types such as INT8 and mixed-precision FP16. What’s more, it supports multiple machine learning frameworks, including TensorFlow, Caffe2 and ONNX.
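
The chip itself was only pre-announced, so there is no public toolchain to show yet. As a rough illustration of the interchange format it is said to accept, the sketch below exports a small model to ONNX and casts it to FP16; PyTorch is an assumption here (the announcement names TensorFlow, Caffe2 and ONNX), and the chip-specific compilation step is not represented.

```python
# Hypothetical sketch: produce an ONNX graph and an FP16 variant of a model,
# the kinds of artifacts a reduced-precision inference chip would consume.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
dummy = torch.randn(1, 128)

# Export the FP32 model to ONNX; a chip-specific compiler would take it from here.
torch.onnx.export(model, dummy, "model.onnx", input_names=["x"], output_names=["y"])

# Half-precision (FP16) copy of the same model, to show the reduced-precision
# data types the processor is advertised to support.
model_fp16 = model.half()
print(model_fp16(dummy.half()).dtype)  # torch.float16
```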

TechCrunch writes about SageMaker Ground Truth: You can’t build a good machine learning model without good training data. But building those training sets is hard, often manual work that involves labeling thousands and thousands of images, for example. With SageMaker, AWS has been working on a service that makes building machine learning models a lot easier. But until today, that labeling task was still up to the user. Now, however, the company is launching SageMaker Ground Truth, a training-set labeling service. Using Ground Truth, developers can point the service at the storage buckets that hold their data and let it label the data automatically. What’s nifty here is that you can set a confidence level for the fully automatic service and route anything the model isn’t sure about to human labelers.
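
A rough boto3 sketch of starting such a labeling job follows; every ARN, bucket and template path is a placeholder, and the pre-/post-processing Lambda and algorithm ARNs in a real job come from the Ground Truth documentation for the chosen task type, so treat this as an outline of the call rather than a copy-paste recipe.

```python
# Hypothetical sketch: label images sitting in an S3 bucket with Ground Truth,
# auto-labeling what the service is confident about and sending the rest to a
# human work team.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_labeling_job(
    LabelingJobName="product-images-labels",
    LabelAttributeName="category",
    RoleArn="arn:aws:iam::111122223333:role/GroundTruthRole",  # placeholder
    InputConfig={
        "DataSource": {
            "S3DataSource": {
                # Manifest listing the unlabeled objects in the bucket
                "ManifestS3Uri": "s3://my-bucket/manifests/unlabeled.manifest"
            }
        }
    },
    OutputConfig={"S3OutputPath": "s3://my-bucket/labels/"},
    # Optional automated labeling: high-confidence predictions are accepted,
    # low-confidence items fall through to the human work team below.
    LabelingJobAlgorithmsConfig={
        "LabelingJobAlgorithmSpecificationArn":
            "arn:aws:sagemaker:us-east-1:111122223333:labeling-job-algorithm-specification/image-classification"  # placeholder
    },
    HumanTaskConfig={
        "WorkteamArn": "arn:aws:sagemaker:us-east-1:111122223333:workteam/private-crowd/my-team",  # placeholder
        "UiConfig": {"UiTemplateS3Uri": "s3://my-bucket/templates/image-classification.liquid"},
        "PreHumanTaskLambdaArn": "arn:aws:lambda:us-east-1:111122223333:function:PRE-ImageClassification",  # placeholder
        "AnnotationConsolidationConfig": {
            "AnnotationConsolidationLambdaArn":
                "arn:aws:lambda:us-east-1:111122223333:function:ACS-ImageClassification"  # placeholder
        },
        "TaskTitle": "Classify the image",
        "TaskDescription": "Choose the label that best describes the image",
        "NumberOfHumanWorkersPerDataObject": 3,
        "TaskTimeLimitInSeconds": 300,
    },
)
```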

GeekWire writes about the self-driving racing league and DeepRacer: Amazon Web Services chief and big sports fan Andy Jassy on Wednesday in Las Vegas unveiled a first-of-its-kind global autonomous racing league built around DeepRacer, a 1/18th-scale self-driving car that developers train using reinforcement learning.
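
DeepRacer cars are trained with reinforcement learning, where developers supply a reward function that is evaluated during training. A hypothetical sketch of such a function is below; the parameter names reflect the DeepRacer interface and are an assumption here, not details from the excerpt above.

```python
# Hypothetical sketch of a DeepRacer-style reward function: reward the car for
# keeping all wheels on the track and staying close to the center line.
def reward_function(params):
    track_width = params["track_width"]              # assumed parameter keys
    distance_from_center = params["distance_from_center"]
    all_wheels_on_track = params["all_wheels_on_track"]

    if not all_wheels_on_track:
        return 1e-3  # near-zero reward for leaving the track

    # Reward shrinks as the car drifts away from the center line.
    if distance_from_center <= 0.1 * track_width:
        return 1.0
    if distance_from_center <= 0.25 * track_width:
        return 0.5
    if distance_from_center <= 0.5 * track_width:
        return 0.1
    return 1e-3
```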

Source: Slashdot