Cloud computing in the form we understand today started around 10 years ago, with the launch of Amazon Web Services (AWS). AWS was the first commercially viable option for businesses to store data in the cloud rather than on-premises, acting as a shared service for anyone connecting to the platform.
Early-stage cloud computing was certainly more technical than it is now, but no more so than managing a data center, something IT departments at the time were well used to. Anyone who could successfully spin up virtual machines could set up a cloud environment.
The cloud started off by offering far more basic services: network, storage and compute. Over the last 10 years, the volume of data stored in the cloud has grown exponentially, with businesses frequently dealing with petabytes of data compared to the gigabytes of yesteryear. With data exploding around us, cloud services have had to become much more efficient.
The Cloud of Today
As data volumes and user numbers grew, cloud providers began to offer more cloud-based value-added services. Cloud-hosted analytics and machine learning capabilities are now frequently offered alongside other services oriented toward business objectives. As a result, the range of people purchasing cloud services has expanded and is no longer confined to the IT department.
This is because one of the great advantages of cloud and SaaS services is how easy they are to set up and how flexible and scalable they are. Business users without extensive IT knowledge became able to use services previously accessible only to the IT department. Of course, with this came a rise in "shadow IT": while access to cloud-based business tools was very helpful for achieving business objectives and reaching new services that could not run on legacy IT systems, it did open businesses up to greater data privacy risks.
Today, the IT and business functions increasingly collaborate to implement up-to-date, data-driven technologies that solve real business needs, making "shadow IT" less of a problem. By working together, the business function is better able to define the requirements of its cloud solutions, while the IT department has more flexibility to implement and test different technologies to find the best fit.
An important upside of this is that the IT department can keep a better overview of where data is stored with third-party services such as Salesforce. To ensure this throughout the business, the IT department needs to take on an educational role within the organization, teaching business users about the implications of new data privacy laws and about safe, compliant ways to use data. Considering the rapidly approaching GDPR deadline, this is good news for organizations, as the regulation will mandate much stricter approaches to data protection.
This strategy is being led from the top down, with new Data Protection Officers and C-suite positions such as the CDO (Chief Digital Officer) and CTO (Chief Technology Officer) straddling the IT and business functions. This is helping redefine the IT department as a home for technological creativity and innovation, rather than a function focused simply on solution implementation.
The Cloud of Tomorrow
With GDPR coming into effect in May 2018, the cloud will need to evolve and adapt. Increasingly, more security services will be attached to the cloud, along with greater oversight of what data is stored and where. This will be essential for obligations like the right to be forgotten, under which a person can ask a business to delete all personal data relating to them.
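To make the "right to be forgotten" concrete, the sketch below shows one way an erasure request might be fanned out across every store that holds personal data. The store names and the `delete_subject()` interface are hypothetical illustrations, not a real API; in practice each cloud service the business uses would need its own adapter behind the same interface.

```python
# Minimal sketch of handling an erasure ("right to be forgotten") request
# across multiple data stores. All names here are illustrative assumptions.
from typing import Protocol


class DataStore(Protocol):
    name: str

    def delete_subject(self, subject_id: str) -> int:
        """Delete all records for a data subject; return the count removed."""
        ...


class InMemoryStore:
    """Toy stand-in for a real cloud store (object storage, CRM, logs, etc.)."""

    def __init__(self, name: str, records: dict):
        self.name = name
        self._records = records  # maps subject_id -> list of personal-data records

    def delete_subject(self, subject_id: str) -> int:
        removed = self._records.pop(subject_id, [])
        return len(removed)


def erase_everywhere(stores, subject_id: str) -> dict:
    """Apply one erasure request to every registered store and return
    a per-store audit trail of how many records were removed."""
    return {store.name: store.delete_subject(subject_id) for store in stores}


crm = InMemoryStore("crm", {"user-42": [{"email": "a@example.com"}]})
logs = InMemoryStore("logs", {"user-42": [{"ip": "10.0.0.1"}, {"ip": "10.0.0.2"}]})

audit = erase_everywhere([crm, logs], "user-42")
print(audit)  # {'crm': 1, 'logs': 2}
```

The audit trail matters as much as the deletion itself: a business must be able to demonstrate that the request was honored in every store, which is exactly the "oversight of what data is stored and where" described above.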
For businesses, this will require significant re-architecting of the cloud environment to improve the ability to analyze and discover data across the storage landscape. The cloud is ready for these changes, but they will require a change of mindset within organizations: cloud usage cannot be determined on the basis of features and costs alone, but must also ensure that data is stored in a controlled, managed, compliant environment.
In the next 5-10 years, as data volumes continue to expand exponentially, there will be an increasing need to align this data in terms of format and quality. Organizations will need to pull together multiple data streams across multiple cloud environments into combined, high-quality insights. This is where an open-source, vendor-neutral management layer will become crucial, helping organizations bridge the gap between their vast data reserves and the insights offered by machine learning and AI technologies. All of this will contribute to a future where businesses can use data stored in the cloud for predictive analytics, such as predicting load requirements for peak shopping days, or market fluctuations to prepare investors.
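The alignment step described above can be sketched in a few lines: records arriving from different (hypothetical) cloud sources are mapped into one common schema before any combined analysis. The field names and the two sources are illustrative assumptions, standing in for whatever shapes real providers emit.

```python
# Sketch of a "management layer" aligning records from two hypothetical
# cloud sources into one normalized stream. Field names are assumptions.
from datetime import datetime, timezone

# Each source reports the same underlying fact (a sale) in its own shape.
aws_records = [{"ts": 1714521600, "amount_usd": 19.99}]       # Unix epoch seconds
gcp_records = [{"time": "2024-05-01T00:00:00+00:00", "value": 5.00}]  # ISO 8601


def from_aws(rec: dict) -> dict:
    """Normalize an epoch-timestamped record into the common schema."""
    return {
        "timestamp": datetime.fromtimestamp(rec["ts"], tz=timezone.utc),
        "amount": rec["amount_usd"],
    }


def from_gcp(rec: dict) -> dict:
    """Normalize an ISO-timestamped record into the common schema."""
    return {
        "timestamp": datetime.fromisoformat(rec["time"]),
        "amount": rec["value"],
    }


# One combined, time-ordered stream, regardless of which cloud it came from.
combined = sorted(
    [from_aws(r) for r in aws_records] + [from_gcp(r) for r in gcp_records],
    key=lambda r: r["timestamp"],
)

total = sum(r["amount"] for r in combined)
print(f"{len(combined)} records, total {total:.2f}")  # 2 records, total 24.99
```

Once every stream shares one schema, the downstream machine learning and analytics tooling no longer needs to know which cloud a record came from, which is precisely the vendor neutrality the management layer is meant to provide.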