Amazon Web Services has introduced cloud-based products and services to help businesses manage their data flow, improve performance and cut costs amid the current turbulent global economy.
At AWS re:Invent 2022, held in Las Vegas this week, the cloud computing giant announced four services to help organisations become more data-driven by unlocking the full potential of their data: Amazon DataZone, Amazon Aurora zero-ETL (extract, transform, load) integration with Amazon Redshift, Amazon Redshift integration for Apache Spark, and AWS Clean Rooms.
During the announcements, the Amazon subsidiary noted that, amid the tough economic times, it continues to introduce products and services that help organisations cut costs and improve their bottom line. Cloud services are already helping thousands of AWS customers across the globe to save 30% or more on their IT budgets, according to the company.
Amazon DataZone helps customers catalogue, discover, share and govern data stored across AWS, on-premises and third-party sources. The service allows administrators and data stewards who oversee an organisation’s data assets to manage and govern access to data, using controls to ensure it is accessed with the right level of privileges and in the right context.
After the catalogue is set up, data consumers can use the Amazon DataZone web portal to search and discover data assets, examine metadata and request access to datasets.
“Good governance is the foundation that makes data accessible to the entire organisation, but we often hear from customers that it is difficult to strike the right balance between making data discoverable and maintaining control,” said Swami Sivasubramanian, VP of databases, analytics and machine learning at AWS.
“With Amazon DataZone, customers can use a single service that balances strong governance controls with streamlined access to make it easy to find, organise and collaborate with data.”
Amazon Aurora zero-ETL integration with Amazon Redshift enables customers to analyse petabytes (a petabyte is a million gigabytes) of transactional data in near-real-time, eliminating the need to extract, transform and load data between the two services.
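Once an integration is live, the Aurora tables replicated into Redshift can be queried like any other Redshift data. The Python sketch below is a rough illustration using the boto3 Redshift Data API against a hypothetical replicated table; the workgroup, database, table and column names are placeholders, not part of the announcement.

```python
import time

import boto3

# Hypothetical scenario: a zero-ETL integration is already replicating an
# Aurora "orders" table into a Redshift database named "analytics".
client = boto3.client("redshift-data")

resp = client.execute_statement(
    WorkgroupName="analytics-wg",  # placeholder Redshift Serverless workgroup
    Database="analytics",          # placeholder database receiving replicated data
    Sql="SELECT order_status, COUNT(*) AS n FROM orders GROUP BY order_status;",
)

# The Data API is asynchronous: poll until the statement finishes.
while True:
    desc = client.describe_statement(Id=resp["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if desc["Status"] == "FINISHED":
    for record in client.get_statement_result(Id=resp["Id"])["Records"]:
        print(record)
```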
Amazon Redshift integration for Apache Spark makes it easier and faster for customers to run Apache Spark applications on data from Amazon Redshift, using AWS analytics and machine learning services.
Apache Spark is a multi-language engine for executing data engineering, data science and machine learning on single-node machines or clusters.
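As a rough sketch of the workflow the integration targets, the hedged PySpark example below reads a Redshift table into a Spark DataFrame using the open-source spark-redshift connector that the new integration builds on; the JDBC endpoint, S3 staging path, IAM role and table name are all placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("redshift-spark-sketch").getOrCreate()

# All connection details below are hypothetical placeholders.
orders = (
    spark.read.format("io.github.spark_redshift_community.spark.redshift")
    .option("url", "jdbc:redshift://example-cluster:5439/analytics")   # placeholder endpoint
    .option("dbtable", "public.orders")                                # placeholder table
    .option("tempdir", "s3://example-bucket/redshift-temp/")           # S3 staging area used by the connector
    .option("aws_iam_role", "arn:aws:iam::123456789012:role/example")  # role Redshift assumes to reach S3
    .load()
)

# Once loaded, the data participates in ordinary Spark transformations.
orders.groupBy("order_status").count().show()
```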
The two new integrations, according to Sivasubramanian, make it easier for organisations to connect and analyse data across data stores without having to move data between services.
He emphasised the importance of organisations becoming data-driven and using data to drive their businesses forward, connecting the dots between agility, digital transformation and continuous innovation.
AWS Clean Rooms allows customers to build a data clean room in minutes and collaborate with any other company in the AWS Cloud to generate unique insights from their data, while protecting sensitive information.
“Customers tell us they want to collaborate more safely and securely with their partners in areas like advertising, media, financial services and life sciences,” said Dilip Kumar, VP of AWS Applications.
“However, the data they need to do this is fragmented across data stores and applications belonging to different partners. AWS Clean Rooms helps customers and their partners to better analyse and collaborate on their data on AWS.
“With the launch of AWS Clean Rooms, we are making it easier, simpler and more secure for multiple companies to share and analyse combined datasets to generate new insights that they could not do on their own,” noted Kumar.