Microsoft launches Anomaly Detector and Custom Vision in general availability

In a bid for dominance in the multibillion-dollar AI software-as-a-service (SaaS) market, Microsoft is beefing up its suite of cloud-hosted machine learning tools — Azure Cognitive Services — with two new additions that were previously in preview. It today announced the general availability of Anomaly Detector, which aims to suss out unusual activity in millions of data transactions, and Custom Vision, which facilitates the training and deployment of object-detecting AI models.

“From using speech recognition, translation, and text-to-speech to image and object detection, Azure Cognitive Services makes it easy for developers to add intelligent capabilities to their applications in any scenario,” said Microsoft chief of staff Anand Raman, who added that more than a million developers have tried Cognitive Services to date. “As companies increasingly look to transform their businesses with AI, we continue to add improvements to Azure AI to make it easy for developers and data scientists to deploy, manage, and secure AI functions directly into their applications.”

Anomaly Detector is designed to identify unusual, rare, or irregular data patterns that might signal problems — like credit card fraud, for instance, or a compromised network node. It’s available through a single API and runs in real time, and over 200 teams across Azure and other “core” Microsoft products currently rely on it to “boost the reliability” of their systems, Raman says.

“Through a single API, developers can easily embed anomaly detection capabilities into their applications to ensure high data accuracy, and automatically surface incidents as soon as they happen,” he added. “Common use case scenarios include identifying business incidents and text errors, monitoring IoT device traffic, detecting fraud, responding to changing markets, and more.”
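To make the idea concrete, here is a minimal local sketch of what time-series anomaly detection does under the hood — a rolling z-score check that flags points far outside recent variation. This is an illustration of the concept only, not Microsoft’s Anomaly Detector API (in Azure, the equivalent job is a single REST call against a hosted model); the function name and thresholds below are assumptions for the example.

```python
# Illustrative rolling z-score anomaly check -- NOT the Azure API,
# just the concept it packages behind a single endpoint.
from statistics import mean, stdev

def detect_anomalies(series, window=10, threshold=3.0):
    """Flag indices whose value deviates more than `threshold`
    standard deviations from the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A noisy but steady signal with one obvious spike at index 15.
data = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9,
        10.0, 10.2, 9.8, 10.1, 9.9, 95.0, 10.0, 10.2]
print(detect_anomalies(data))  # → [15]
```

The hosted service goes well beyond this toy version — it selects detection algorithms per series and handles seasonality — but the input/output shape is the same: a time series in, a set of flagged points out.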

Today’s other big announcement? Custom Vision, Azure’s AI-driven image classification product, has exited preview. It allows developers to train their own real-time object classifiers and export them to run offline on iOS (with Apple’s Core ML toolkit), Android (in Google’s TensorFlow machine learning framework), and other edge devices, and the latest release packs key improvements.

Among them are “advanced training” with a new high-performance backend designed to better handle “challenging datasets” and “fine-grained” classification. Developers can now specify a compute time budget and experimentally identify the best training and augmentation settings, and take advantage of model export support for the ARM architecture (Raspberry Pi 3) and the Vision AI Dev Kit to deploy models to low-power devices.

Custom Vision pricing starts at $2 per 1,000 transactions, while training costs $20 per compute hour and image storage is $0.70 per 1,000 images. The free tier includes 5,000 training images per project (up to a maximum of two projects and an hour of training per month) and 10,000 predictions per month.
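Given those rates, estimating a monthly bill is simple arithmetic. The helper below uses the paid-tier prices quoted above ($2 per 1,000 transactions, $20 per compute hour of training, $0.70 per 1,000 stored images); the workload figures in the example are invented purely for illustration.

```python
# Back-of-the-envelope Custom Vision cost estimate, based on the
# paid-tier prices quoted in the article. Workload numbers are made up.
def custom_vision_monthly_cost(transactions, training_hours, stored_images):
    prediction = transactions / 1000 * 2.00   # $2 per 1,000 transactions
    training = training_hours * 20.00         # $20 per compute hour
    storage = stored_images / 1000 * 0.70     # $0.70 per 1,000 images
    return round(prediction + training + storage, 2)

# Example: 50,000 predictions, 3 hours of training, 20,000 stored images.
print(custom_vision_monthly_cost(50_000, 3, 20_000))  # → 174.0
```

A light workload that stays under the free tier’s 10,000 predictions and one hour of training per month would, of course, cost nothing.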

“Today’s milestones illustrate our commitment to make the Azure AI platform suitable for every business scenario, with enterprise-grade tools that simplify application development, and industry-leading security and compliance for protecting customers’ data,” Raman wrote.

Microsoft also announced today the expansion of Azure Stack, its hybrid cloud computing software solution, to 92 countries, and introduced support for hyperconverged infrastructure (HCI), allowing customers to run virtualized apps on-premises with direct access to Azure services such as backup and recovery. Last but not least, it made generally available Azure Data Box Edge, an Intel Arria 10 FPGA-powered appliance that processes and accelerates machine learning workloads at the edge, and the Azure Data Box Gateway virtual appliance.
