We asked 90 CIOs from across North America about their key concerns and departmental statistics. The infographic below shows some of our findings. To get the full Intelligence report, register your interest here.
By Carmen Carey, CEO, ControlCircle
Carmen Carey, CEO of ControlCircle, highlights some of the challenges currently facing CIOs of large global businesses and gives examples of how technology can help manage business risk.
What technology risks are large global firms currently facing?
For a company of any size, technology risks can be introduced through poor standards and controls, or through inconsistent application of those standards and controls. For larger businesses, however, these issues pose more of a threat: the scale of the employee population and the corresponding technology footprint makes maintaining consistency and auditing accurately difficult – possibly even more so in global businesses, where controls can be further diluted or subject to inconsistencies.
Global businesses also have the additional challenge of differing regional regulatory compliance needs which, in turn, drive inconsistency in standards and controls. The logical downside to this is that security will fall to the lowest common denominator.
In addition, with the ever-growing prominence of big data and the analytics to mine vast data sets, more data is being retained with less knowledge and control over it. The aggregation of this data, particularly into a single system, makes data theft more attractive: such stores typically hold social data about individuals that is not covered by personal-information legislation but can still be mined effectively for illegal purposes.
What technology is currently available to assist disaster recovery and business continuity?
Business continuity is the act of protecting the business in the case of a disaster and covers people, process and technology. Typically disaster recovery (DR) is focused around the technology and its recoverability in the event of a disaster.
In DR, the most important asset to protect is the business data and to have it in a recoverable state. Critical applications come next, those that the business needs to have operational to protect its immediate revenue stream.
The tools available include data replication between storage at differing sites and off site backup for the less important data. If your data and applications reside in the cloud, you are reliant upon your service provider to ensure that you have protection and replication. For those that have their own private estate, the opportunity exists to replicate to cloud providers and leverage the cloud provider’s capabilities in the event of a disaster.
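To make the replication-and-recoverability idea concrete, here is a minimal sketch in Python. The paths and file names are hypothetical, and real DR tooling typically replicates at the storage or block level rather than file by file; the point is simply that replication means copying data to a second site and then verifying the replica, so you know it is actually recoverable:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum used to verify that the replica matches the source."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def replicate(source: Path, dr_site: Path) -> bool:
    """Copy a file to the DR location and confirm it is recoverable."""
    dr_site.mkdir(parents=True, exist_ok=True)
    replica = dr_site / source.name
    shutil.copy2(source, replica)  # copy2 preserves file metadata
    return sha256(source) == sha256(replica)

# Hypothetical paths standing in for primary storage and the DR site.
primary = Path("/tmp/dr_demo/primary")
primary.mkdir(parents=True, exist_ok=True)
data = primary / "orders.db"
data.write_text("critical business data")
print(replicate(data, Path("/tmp/dr_demo/dr_site")))  # True
```

The checksum step matters: a copy that cannot be verified is not a recovery plan, which is why off-site backups should be tested, not merely taken.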
How to know if your business is vulnerable to attack
Although large global businesses tend to be a bigger target for hackers, no business is safe from the threat. With the growing popularity of BYOD, businesses also need to be aware of the potential threats their own employees could pose. We’ve listed below a few of the types of organisations that should be particularly vigilant.
• Internet facing businesses
• Businesses that are not intrinsically data led businesses, but for whom data is now becoming more meaningful and valuable
• Global businesses, particularly those who have operating entities outside of Europe and North America
• Global businesses with regionalised IT responsibilities
• Organisations whose business model is built around the availability of the technology and the veracity of the stored data
To find out more about protecting your business from attack and putting in place continuity plans, contact us. Also, you may want to have a look at a short video from Check Point looking at some of the security risks your company may be exposed to.
By Mohammed Farooq Founder and CEO of Gravitant
VMware introduced x86 server virtualization and ushered in a radical change in the architecture, operations and economics of enterprise data centers. Virtualization improved x86 server utilization from below 20% to above 60% while significantly improving service delivery using ITIL automation tools.
This was a massive transformation from the first-generation data center architecture of IBM, HP and Sun (now Oracle), with its vertically integrated hardware/software stacks.
The VMware era moved the data center towards a hardware-agnostic environment where resources were pooled globally and shared among multiple applications. VMware’s breakthrough hypervisor technology pioneered a fundamental shift in data center economics and management.
But now we are in the cloud era and what got us here (virtualization) will not get us where we want to go, which is to true Infrastructure-as-a-Service, whether delivered by a multi-tenant (public) cloud or an internal private cloud. Furthermore, in the cloud era, applications are often engagement oriented and dynamic, as distinct from the static and predictable workloads of the virtual era, and require a different consumption paradigm.
There is a proliferation of clouds (e.g. Amazon Web Services, Google, Microsoft Azure, Terremark, VMware, Nebula, Eucalyptus, Piston, etc.) and cloud technologies (VMware, OpenStack, CloudStack). Each of these clouds has a different consumption model and different SLAs, charging and pricing, and management models. Enterprises are finding they need a mix of private and public clouds to meet their application, workload and business needs, which creates a management burden and ineffective resource consumption. Virtualization management tools can’t help us across multiple internal and external clouds; they run out of steam in the cloud era.
Fortunately there is a new technology emerging called cloud middleware that can help us address the complexity of the cloud era. This cloud middleware forms the basis of what is known as a cloud services brokerage and management platform.
A well-designed cloud services brokerage and management platform can deliver technology agnostic and cloud provider optimized consumption models. The cloud broker can match the application to the best-fit cloud platform and continuously optimize usage by workload demand and goals. This can greatly simplify the complexity of managing multiple cloud providers as well as achieve greater utilization and agility.
To handle the dynamic, cloud-centric apps, the cloud services broker manages cloud resource consumption from the perspective of IT supply chain management in which demand and supply are continuously optimized across a network of suppliers. The manufacturing industry has successfully implemented this model using Just-in-Time techniques to reduce inventory and, hence, cost in the supply chain.
The cloud services broker is the next-generation ITIL automation for the cloud and it takes the resource pooling capability provided by virtualization to the next level — to resource matching and optimization. In this way, the cloud services broker can provide a step-function improvement in agility and cost reduction over what virtualization gives us.
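As a rough illustration of the resource-matching step described above (all provider names, prices and workload requirements here are invented for the example), a broker can filter the providers that satisfy a workload’s constraints and then select the cheapest fit:

```python
# Illustrative cloud-broker matching: the providers, prices and workload
# requirements below are hypothetical, not real offerings.
providers = [
    {"name": "public-a", "price_per_hour": 0.12, "regions": {"us", "eu"}, "dedicated": False},
    {"name": "public-b", "price_per_hour": 0.09, "regions": {"us"}, "dedicated": False},
    {"name": "private",  "price_per_hour": 0.20, "regions": {"us", "eu"}, "dedicated": True},
]

def match_workload(workload, providers):
    """Filter providers that satisfy the workload, then pick the cheapest."""
    candidates = [
        p for p in providers
        if workload["region"] in p["regions"]
        and (not workload["needs_dedicated"] or p["dedicated"])
    ]
    return min(candidates, key=lambda p: p["price_per_hour"])["name"] if candidates else None

web_tier = {"region": "us", "needs_dedicated": False}
regulated_db = {"region": "eu", "needs_dedicated": True}
print(match_workload(web_tier, providers))      # "public-b": cheapest US option
print(match_workload(regulated_db, providers))  # "private": only dedicated EU fit
```

A real broker would re-run this matching continuously as demand and prices change, which is the optimization loop the supply-chain analogy describes.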
What changes in data center architectures, economics, management and technology have you seen as you move to the cloud?
Mohammed Farooq is Founder and CEO of Gravitant, Inc. He is a business and technology veteran with 16 years of IT experience and a seasoned executive with more than 12 years of experience building early-stage companies and managing technology strategy for multi-billion-dollar enterprises.
By Christophe Lemaire, CIO at Eurostar
By Mohammed Farooq Founder and CEO of Gravitant
In the last several months, interest in the concept of cloud brokerage has been growing and was recently punctuated by Accenture’s self-proclaimed desire to be the IT industry “cloud broker” as part of a larger $400 million cloud investment.
Given the rising interest, I want to share thoughts on three areas where cloud brokerage can bring about practical benefits: legitimizing shadow IT, enhancing hybrid cloud management and rationalizing IT outsourcing.
Bringing Shadow IT into the Light
While many enterprise IT organizations believe they have limited public cloud spend today, Gartner has estimated that by 2014 up to 35% of IT spending will occur on public cloud services outside the central IT budget. For the average company with $5 billion in revenue, that off-budget IT spend could be north of $25 million.
To respond to their business users’ need for IT services that are more cloud-like, most large enterprise IT organizations have embarked on some sort of private cloud initiative. But until that initiative is complete, end users will happily continue to use public cloud services – with the business taking on the consequences of higher overall IT costs as well as security and compliance holes and risks.
A cloud services brokerage enables end-users to continue using public cloud services while allowing the IT organization to offer value-added services for the business: spend consolidation, alternative sourcing across multiple public clouds, solution-architecture guidance to ensure that proper security services such as VPNs are in place, and consolidated billing management. Once a private cloud initiative has been completed, the private cloud can be added to the brokerage alongside the public cloud providers, with the added benefit of a public cloud-like experience.
Filling the Gaps in Hybrid Cloud Management
Most private cloud management software has morphed over time to hybrid cloud management solutions. Virtually all of these platforms provide end-user benefit through self-service portal catalogs offering fixed VM/workload configurations as well as automation on the back end to speed up the deployment of the VM/workload on either private or public clouds. However, gaps still exist for an IT organization looking to deploy a true hybrid cloud experience. Today’s public cloud services landscape is highly fragmented from an offering standpoint. Each provider has different capabilities, packaging and pricing models. Thus, many public cloud users end up defaulting to a single public cloud provider, setting themselves up for the cardinal sin of long-term lock-in.
A brokerage capability can help the end-user navigate this complexity by enabling collaboration with IT to design the proper application-infrastructure solution using either private or public cloud infrastructure, including additional services such as security or backup. A brokerage can also provide an estimated bill of IT for that solution across different providers, internal and external, that reflects the latest pricing. And it can enable true chargeback to the business for the IT services actually consumed, while automating and consolidating the complex billing process across multiple providers on the back end.
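A minimal sketch of the consolidated-billing and chargeback step (the usage records, provider names and rates below are hypothetical): the broker aggregates metered usage from each provider’s billing feed into a single bill per business unit:

```python
from collections import defaultdict

# Hypothetical usage records as a broker might collect them from each
# provider's billing feed; providers and hourly rates are invented.
usage = [
    {"provider": "public-a", "business_unit": "marketing", "hours": 120, "rate": 0.12},
    {"provider": "public-b", "business_unit": "marketing", "hours": 300, "rate": 0.09},
    {"provider": "private",  "business_unit": "finance",   "hours": 200, "rate": 0.20},
]

def chargeback(usage):
    """Consolidate multi-provider usage into one bill per business unit."""
    bill = defaultdict(float)
    for record in usage:
        bill[record["business_unit"]] += record["hours"] * record["rate"]
    return dict(bill)

print(chargeback(usage))  # per-unit totals across all providers
```

The same aggregation, keyed by provider instead of business unit, gives the consolidated invoice view; either way the business sees one bill rather than one per cloud.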
Rationalizing IT Outsourcing
While IT outsourcing (ITO) contracts specify minimum spend levels, the reality is that actual spend runs higher as additional services are requested of the ITO provider.
Thus, the savings that businesses hope to realize from ITO are diminished. In fact, 38% of customers cite lack of innovation or continuous improvement as the greatest challenge with their ITO vendor.
One way to reduce ITO spend is to leverage public clouds to cut infrastructure costs. The Everest Group has highlighted a case study of a leading global power generation company that found that using public cloud models for only 25% of workloads would drive a greater than 30% reduction in overall infrastructure costs.
However, the challenge is to preserve these estimated public cloud savings. As with every new technology, hidden costs of adoption appear after the fact. The Texas Cloud Offering (PTCO) has documented 10 lessons learned from using public clouds to reduce infrastructure costs. Key sources of cloud cost leakage include the need to re-architect infrastructure solutions to maximize the benefits of public cloud, the risk of lock-in, billing complexity and cloud VM sprawl. By implementing a cloud brokerage capability to mitigate these cost leakages, the State of Texas achieved a 41% saving by moving a web services application to a public cloud.
What do you think about using a cloud brokerage model as a bridge to the cloud? Do you think these use cases are applicable? What other use cases come to mind?
Mohammed Farooq is Founder and CEO of Gravitant, Inc. He is a business and technology veteran with 16 years of IT experience and a seasoned executive with more than 12 years of experience building early-stage companies and managing technology strategy for multi-billion-dollar enterprises.
By Randy Spratt, EVP, CIO, CTO, McKesson Corporation
What would happen if your health information were in a language that only your general physician understood? A similar disconnect actually happens in U.S. healthcare today when fragmented information systems are unable to talk to one another. Doctors lack a holistic view of their patients’ medical histories, while patients have to unnecessarily repeat information and are left without access to their very own health data. Ultimately, vital information is trapped, resulting in reduced quality and increased cost across the system.
While some large healthcare organizations have made progress in capturing electronic healthcare records and making these records available to their own physicians and patients, there are few examples of making these records “liquid” enough to be able to flow to other provider organizations.
If you have the misfortune of requiring care outside of your primary healthcare provider system, it is highly unlikely that your treating physician will have access to your previous clinical records, medication history, and other critical healthcare information. From my own perspective as the CIO of McKesson, America’s largest healthcare services company, I see healthcare suffering from the high costs and needless complexity of a fragmented technology landscape. As healthcare goes mobile and data migrates increasingly to the cloud, the ability to seamlessly transfer healthcare records will prove a key lever in transforming the efficiency of our healthcare system.
As CIOs, regardless of our industry, we have insight into a broad spectrum of information as it travels across our organizations. Yet in many ways, our own companies suffer from the same lack of connectivity and interoperability that plagues today’s healthcare system. As in healthcare, technology plays a crucial role in improving efficiency and quality in virtually every business. Yet, according to the Gartner 2013 CIO Survey, “CIOs on average report that their enterprises realize only 43% of technology’s business potential.”
To deliver on the true potential of IT and remain relevant, it is imperative that we serve as enablers in overcoming the silos that exist within today’s organizations, acting as our companies’ chief drivers of efficiency as well as key enablers of business strategy. The CIO of tomorrow will need to acquire and manage the right portfolio of technologies and develop IT talent in order to positively impact the bottom line. Unpaid technology debt – failure to take advantage of the capabilities and cost differential of appropriate new technologies – will be the CIO killer of tomorrow. And failure to develop capable, high-performance teams current with those technologies and fluent in the business will be the primary cause of that debt.