Back in the day, the theft and loss of backup tapes and laptops were the primary causes of data breaches. That all changed when systems were redesigned and data at rest was encrypted on portable devices. Not only did we use technology to mitigate a predictable human problem, we also increased our tolerance for failure.
A single lapse, such as leaving a laptop in a car, doesn’t have to compromise an organisation’s data. We need the same level of failure tolerance in the cloud, built into access controls and IT security.
In the cloud, all infrastructure is virtualised and runs as software. Services and servers are not fixed but can shrink, grow, appear, disappear, and transform in the blink of an eye. Cloud services aren’t the same as their on-premises counterparts. For example, an AWS S3 bucket has characteristics of both a file share and a web server, yet it is something else entirely.
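As a rough illustration of that dual nature, here is a minimal sketch in Python using boto3, with a hypothetical bucket name and object key: the same object is read through the API much like a file on a share, and then exposed over HTTPS via a presigned URL, the way a web server would serve it.

```python
import boto3

# Hypothetical bucket and key, used purely for illustration.
BUCKET = "example-reports-bucket"
KEY = "q1/summary.pdf"

s3 = boto3.client("s3")

# File-share-like access: read the object's contents through the API.
obj = s3.get_object(Bucket=BUCKET, Key=KEY)
data = obj["Body"].read()

# Web-server-like access: the same object can be fetched over HTTPS,
# here via a time-limited presigned URL (or publicly, if the bucket
# policy allows it).
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": KEY},
    ExpiresIn=3600,
)
print(url)
```

Neither view on its own captures what a bucket is, which is exactly why on-premises security habits map onto it so poorly.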
Practices differ too. You don’t patch cloud servers; you replace them with instances running the new software version. There is also a distinction between the credentials an operational instance (such as a virtual machine) runs as, and the credentials it can access (the services it is allowed to call).
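To make that distinction concrete, here is a minimal sketch, assuming a virtual machine (an EC2 instance) with an IAM role attached: the role supplies the temporary credentials the instance runs as, while the role’s policy determines which services those credentials can actually reach.

```python
import boto3

# On a virtual machine with an IAM role attached (e.g. via an EC2 instance
# profile), the SDK obtains temporary credentials from the instance metadata
# service automatically; no access keys are stored on the machine itself.

# The identity the instance runs *as* (its role):
identity = boto3.client("sts").get_caller_identity()
print("Running as:", identity["Arn"])

# What the instance can *reach* is a separate question, governed by the
# role's policy. This call succeeds only if the policy grants
# s3:ListAllMyBuckets; otherwise it fails with an AccessDenied error.
s3 = boto3.client("s3")
for bucket in s3.list_buckets().get("Buckets", []):
    print("Can see bucket:", bucket["Name"])
```

The machine’s identity and its reach are configured separately, and both need reviewing when you think about failure tolerance in the cloud.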
Cloud computing requires a distinct way of thinking about IT infrastructure.
Read the full article, published by DCR on March 17, 2020: https://datacentrereview.com/content-library/opinion/1602-is-the-cloud-safe-thinking-about-the-cloud-through-a-security-lens