Azure Synapse Link, now in public preview, is a cloud-native HTAP (hybrid transactional/analytical processing) implementation that connects Azure Synapse Analytics (announced last November) with operational databases running in Azure.
It is initially available in Azure Cosmos DB, but will be added to other database services such as Azure SQL, Azure Database for PostgreSQL, and Azure Database for MySQL at a later date.
Once a link is established between a database and Azure Synapse Analytics, the data is automatically and continuously available to the latter in a columnar structure.
According to Microsoft technical fellow and CTO for data Raghu Ramakrishnan, no complex ETL pipelines or additional database compute resources are required, and customers can immediately and cost-effectively run analytics workloads on real-time data through Azure Synapse Analytics.
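On the Cosmos DB side, establishing a link amounts to enabling the analytical store rather than building a pipeline. A sketch using the Azure CLI (the account, group, database and container names here are invented for illustration, and exact flags may vary across CLI versions):

```shell
# Create a Cosmos DB account with Synapse Link (analytical storage) enabled.
az cosmosdb create \
    --name my-cosmos-account \
    --resource-group my-resource-group \
    --enable-analytical-storage true

# Create a container whose data is continuously synced into the columnar
# analytical store (a TTL of -1 means analytical records never expire).
az cosmosdb sql container create \
    --account-name my-cosmos-account \
    --resource-group my-resource-group \
    --database-name mydb \
    --name orders \
    --partition-key-path /customerId \
    --analytical-storage-ttl -1
```

Once the analytical store is populated, Synapse workspaces can query it directly without consuming the transactional store's throughput.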
A new Azure supercomputer built in collaboration with OpenAI ranks among the top five on the TOP500 list, according to Microsoft.
With more than 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server, the supercomputer was built to train OpenAI's models. It may serve as a prototype for similar systems that could be made generally available to Azure customers.
“As we’ve learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, ‘If we could design our dream system, what would it look like?’” said OpenAI CEO Sam Altman.
“And then Microsoft was able to build it.”
Microsoft has already developed its own family of large AI models – Microsoft Turing – and is using them to improve language understanding in Bing, Office, Dynamics and other products. Examples include Word's key insights feature, Outlook's suggested replies, and improved answers to Bing queries.
The Microsoft Turing model for natural language generation, which has 17 billion parameters, was released to researchers earlier this year.
Ultimately, these large AI models, training optimisation tools and supercomputing resources will be made available through Azure and GitHub.
“The exciting thing about these models is the breadth of things they’re going to enable,” said Microsoft CTO Kevin Scott.
“This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now.”
Microsoft also announced responsible ML tools in Azure Machine Learning and has made their source code available on GitHub.
These include the InterpretML toolkit, which helps explain model behaviour; the WhiteNoise toolkit, which helps incorporate differential privacy; and the Fairlearn toolkit (coming in June), which helps assess and improve the fairness of model behaviour, in the sense that a model works consistently well across groups of people characterised by factors such as gender, age or skin tone.
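Differential privacy, the technique WhiteNoise helps incorporate, answers aggregate queries with calibrated random noise so that no individual record can be inferred from the result. A minimal sketch of the classic Laplace mechanism in plain Python (this illustrates the general technique only, not WhiteNoise's API):

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon, rng=random):
    """Differentially private count: counting queries have sensitivity 1,
    so the noise is drawn from Laplace(0, 1/epsilon). Smaller epsilon
    means stronger privacy and a noisier answer."""
    return true_count + laplace_noise(1.0 / epsilon, rng)

random.seed(0)
print(private_count(42, epsilon=0.5))  # 42 plus noise on the scale of 1/epsilon
```

The key design point is that the noise scale depends only on the query's sensitivity and the privacy budget epsilon, not on the data itself.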
Professional services firm EY has already piloted Fairlearn with an ML model designed to automate lending decisions. The pilot revealed a 15.3 percentage point gap between positive decisions for male and female applicants, which was reduced to 0.43 percentage points after remediation, without loss of overall accuracy.
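The "percentage point difference" reported in the EY pilot is essentially a demographic parity difference: the gap in positive-decision rates between groups. A minimal sketch of that computation in plain Python (not Fairlearn's API; the toy lending data below is invented for illustration):

```python
def demographic_parity_difference(decisions, groups):
    """Gap, in percentage points, between the highest and lowest
    positive-decision rates across groups."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return (max(rates.values()) - min(rates.values())) * 100

# Invented toy lending decisions: 1 = approved, 0 = denied.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"]
print(demographic_parity_difference(decisions, groups))  # ~60 percentage points
```

Remediation techniques like those in Fairlearn aim to drive this gap towards zero while preserving overall accuracy.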
Microsoft Cloud for Healthcare, Microsoft's first industry-specific cloud service, is now in public preview with a free trial period running for the next six months.
Microsoft Cloud for Healthcare addresses several aspects of healthcare: a consumer-friendly patient experience (eg, via the Microsoft Healthcare Bot Service, self-service portals, and a new bookings app in Microsoft Teams to support telemedicine); referral management; outreach (eg, contacting patients about preventative and care management programs); continuous patient monitoring through IoT; and analytics to help gain insights from operational and health data.