IT’s been talking about the CapEx/OpEx benefits of cloud for years now – but that’s just a tiny fraction of the enterprise story. The economics of cloud computing go far beyond monetary costs, spanning the entire mechanism of IT delivery, from as-a-service models to just-in-time provisioning for urgent business needs. However, many CIOs have yet to grasp the basic principles of cloud economics, and they’ll need to swot up fast if they’re to maintain their rightful seat at the C-suite table.

Without an understanding of cloud economics, CIOs will struggle to justify their budgets to the rest of the business. The old adage that CIOs spend 70 percent of their budgets on “lights-on” maintenance has yet to change across the majority of industries. In fact, much of the other 30 percent goes into new technologies that don’t have a clear business case – further distancing IT from executive decision-makers and making it that much harder to maintain budgets.

This is because the true value of IT budgets isn’t transparent to the rest of the business. In the world of cloud economics, IT’s utility as a supplier isn’t judged solely in terms of costs: factors like speed, agility, and scalability are of far greater value to the business decision-makers generating demand. Yet most CIOs continue to represent their value solely in terms of new hardware, software, and capabilities, which don’t resonate at all with business priorities. And because they measure value in this way, IT tends to be far less agile and responsive than business units need – one of the main drivers behind shadow IT, which is growing far faster than IT can react to.

CIOs can turn this around, and reclaim control of enterprise technology, by measuring their results according to new units of “currency”. They should aim to represent IT’s value as a unit cost of any business service. In a bank, for example, you’d report on how much IT spend the business would need to manage a single mortgage.
This lets the bank’s CIO prove what value she’s providing to the business. It also allows her to demonstrate improvement: for example, driving the IT cost per mortgage down with greater infrastructure efficiencies. And it helps her justify her budget’s necessity, as happens when overall IT costs increase as a result of growth in mortgage sales.

Cloud economics operates in a number of different currencies: the most common are time, convenience, and scalability. These are the things business units are willing to pay for, because they directly impact an enterprise’s ability to go to market and compete effectively. And when CIOs measure their infrastructure and services in these currencies, they can also see which areas of their operations they need to focus on improving. A huge cost per unit for one particular service, for example, may indicate that operational silos are causing large inefficiencies in provisioning.

If IT can’t break down its silos, take ownership of the processes that cross between business units, and prove that its services are the best choice for the enterprise, individual decision-makers will continue to go rogue with shadow IT. The security, privacy, and even cost issues of this are immense: we’ve seen some business units in the region purchase cloud services with their credit cards, then suddenly hit massive diseconomies of scale when their applications ramp up to peak demand. That’s the point when these decision-makers realise IT’s value as an end-to-end solutions provider, and run back to private or hybrid cloud – but by then, it’s often too late.

Every CIO today has to think of IT not as a cost centre but as a service provider. Your choice of infrastructure still matters: converged infrastructure solutions like Vblocks force IT teams to break their silos and streamline the hand-off between different areas of technical expertise, improving service levels and freeing up resources in the process.
But that’s just the starting point: the CIO’s focus should be adopting the currencies and models of cloud economics, so that they and the business can see IT’s true value.
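The unit-cost “currency” described above is simple arithmetic, but it is worth making concrete. The sketch below uses the mortgage example; all figures are invented for illustration.

```python
# Hypothetical illustration of IT unit-cost reporting; all figures are invented.

def unit_cost(it_spend: float, units_delivered: int) -> float:
    """IT cost per unit of business service (e.g. per mortgage managed)."""
    return it_spend / units_delivered

# Year 1: $12M of IT spend attributed to the mortgage service, 40,000 mortgages.
year1 = unit_cost(12_000_000, 40_000)   # $300 per mortgage

# Year 2: overall IT spend rises to $13M, but mortgage volume grows to 52,000.
# The unit cost actually falls -- the larger budget is justified by growth.
year2 = unit_cost(13_000_000, 52_000)   # $250 per mortgage

print(f"Year 1: ${year1:.2f} per mortgage")
print(f"Year 2: ${year2:.2f} per mortgage")
```

The point of reporting this way is visible in the second figure: total IT spend went up, yet the cost per business transaction went down, which is a story business decision-makers can act on.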
Presumably you’ve heard the news that something very exciting is on the horizon. We can’t wait to share that news with you in just a few days! But before we dive into September, and all the incredibly exciting change that brings, let’s take a quick look back at what happened in August. Read on to see what the teams have been up to.

VMworld 2016

VMworld is just wrapping up in Las Vegas. During the first day’s keynote, Michael Dell took the stage for a brief conversation with Pat Gelsinger to discuss, among other things, the future of VMware. “The open ecosystem of VMware,” he said, “is absolutely critical to its success.”

Jim Ganthier spoke with SiliconANGLE’s The Cube about the private cloud, Dell’s HPC business, and how the impending EMC acquisition will alter it.

In the booth we had a number of presentations, including a talk from our customer, Honeywell, on their very powerful solution leveraging the PowerEdge FX2 modular infrastructure platform together with VMware VSAN for use in oil refineries and other mission-critical industries.

Validated System for Virtualization

On the second day of VMworld, we announced the Dell Validated System for Virtualization, developed to meet business needs quickly, at high efficiency, and with rock-solid reliability. That flexibility is key to delivering on the promise of a service-defined infrastructure that will enable and accelerate the new service-oriented IT.

Make Smarter IT Decisions

In a recent blog post, Dell highlighted its Dell Performance Analysis Collection Kit (DPACK), which allows customers to visualize their infrastructure requirements and then compare them directly against some of the most popular reference architectures. DPACK is agentless, non-disruptive software that runs remotely to gather core metrics such as disk I/O, throughput, CPU utilization, memory utilization, free and used capacity, and network throughput.
Using this tool, Dell found that combining Dell’s innovative FX2 chassis and VMware VSAN created the most powerful VSAN cluster in the world. The FX2 chassis, populated with four FC430 dual-socket servers and two FD332 storage sleds with all-flash drives, delivered 16X the performance of the legacy solution. The versatility of VSAN licensing and the flexibility of the FX2 chassis networking also stood out, enabling the solution to double its performance to 32X the legacy solution by adding an SC4020 all-flash array.

The Path Is Open to Software-Defined Networking

The Dell Networking team recently published an eBook on how organizations are using Dell Open Networking solutions to get amazing results. The new eBook, “The Path Is Open to Software-Defined Networking,” highlights how Dell Networking customers Cornell University, U2 Cloud, Netsystems, and Midokura are protecting their investments, accelerating innovation velocity, and increasing business agility.

HPC Leads Manufacturing Innovation

In a blog series, the Dell HPC team is highlighting the top industries where high performance computing provides the most value. In the second and most recent post, on the manufacturing industry, Ed Turkel explains that the goal of the HPC System for Manufacturing is to make HPC accessible and seamless for manufacturers of all sizes, allowing them to develop more competitive products with faster time to market, higher quality, and lower cost. According to the National Center for Manufacturing Sciences (NCMS), of the more than 300,000 manufacturers in the US, 95 percent are categorized as small or medium-sized (fewer than 500 employees), and 94 percent of them have not fully adopted HPC. By opening up HPC resources to more and more manufacturers, Dell is helping to enable an unprecedented surge in innovation.

As Dell buckles down for the fall conference season ahead, don’t forget to register for our main conference, Dell World (October 18-20 in Austin).
Registration is open. We hope you can make it!
We are again excited to be hosting the Field Day crew for Tech Field Day 13 (that’s #TFD13 for those of you following along on Twitter) later this week. For those who aren’t familiar with Field Days, I recommend you take a quick look at their website before coming back to learn what we’ve got planned when the delegates visit us at the Round Rock campus this Thursday.

Now that Dell EMC is a united force, we wanted to make sure we were showcasing products and solutions from across the legacy Dell and EMC teams, while also giving delegates some insight into how those legacy technologies are now being used together.

Topics and demos this week will include a demonstration from Chad Dunn on VxRail, a discussion from Armando Acosta on data analytics, a talk from Alan Brumley on our OEM strategy, and a tour of the Modular Data Center from David Hardy. Be sure to tune in to the livestream starting around 2:30 PM CT on Thursday to catch the full presentations, and check back next week to see all the coverage on our dedicated Tech Field Day page.

Bonus content: here’s one of the most popular presentations we’ve ever had at a Field Day. This is Carol Pflueger demonstrating the modular FX2 system to delegates two years ago. Enjoy!
Dell Financial Services Expands the Flexible Consumption Opportunity for Dell EMC Storage Customers

When it comes to public cloud adoption, customers expect acquisition flexibility, reduced risk, and cost savings. With our OpenScale flexible payment solutions, Dell Financial Services (DFS) is helping customers acquire modernized technology and achieve their transformational goals at a cost that compares favorably with the public cloud.

On-premises IT run with flexible consumption is more cost-effective than the public cloud for most workloads

Growing amounts of data continue to drive the need for diversified IT environments. By using flexible consumption models to manage predictable workloads, customers can experience cloud-like flexibility within their on-premises environment without the cost impact of the public cloud. In a recent VMware-commissioned survey of 150 IT decision makers, 41 percent of respondents reported that they currently operate private clouds at lower unit costs than public cloud. The survey went on to explain that these cost efficiencies were achieved through a combination of automation, improved capacity planning, diligent cost management, and the use of flexible licensing agreements.

Announcing a Flex On Demand entry point to make flexible consumption available to more customers

The market for flexible consumption solutions has previously been limited to enterprise customers or those with larger deployments. To make flexible consumption solutions available to more customers, we are excited to announce a lower threshold for consumption-based All-Flash storage through the DFS OpenScale Flex On Demand payment solution.
Flex On Demand Velocity pricing models for Dell EMC Unity All-Flash and XtremIO X2 storage arrays will offer price points of less than $1,000 per month, and customers can run consumption-based All-Flash storage without needing custom configuration, thereby improving time to installation for storage deployments.

Flex On Demand reduces costs by enabling customers to pay only for capacity as it is used. Customers can take advantage of All-Flash storage on a consumption basis across the Dell EMC storage portfolio and enjoy the operational and cost benefits of running on-premises workloads with a lower capacity commitment and a more flexible payment period.

We believe flexible consumption models will become the standard for infrastructure investments as customers look to achieve transformation goals with more control at a more cost-effective rate. DFS is committed to expanding our payment solutions in this area, focusing on offering industry-leading flexibility and more choice for customers across our innovative financial services portfolio.

Continue your transformation journey with flexible consumption today. Contact your local Dell EMC or DFS account representative to learn more about Flex On Demand or to get your Flex On Demand Velocity quote for Dell EMC Unity All-Flash or XtremIO X2.
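To make the pay-for-what-you-use idea concrete, here is a minimal sketch of a commit-plus-burst billing model. The rates, capacities, and formula are invented for illustration; they are not DFS’s actual Flex On Demand terms.

```python
# Hypothetical sketch of consumption-based storage billing. The rates and the
# commit-plus-burst formula are invented for illustration only.

def monthly_charge(committed_tb: float, used_tb: float,
                   committed_rate: float, burst_rate: float) -> float:
    """Bill a fixed rate on the committed baseline capacity, plus a metered
    rate on any capacity actually used above that baseline."""
    burst = max(0.0, used_tb - committed_tb)
    return committed_tb * committed_rate + burst * burst_rate

# Quiet month: usage stays within a 10 TB commitment, so only the baseline bills.
print(monthly_charge(10, 8, committed_rate=60, burst_rate=90))   # 600.0

# Busy month: 4 TB of burst above the commitment is billed only as it is used.
print(monthly_charge(10, 14, committed_rate=60, burst_rate=90))  # 960.0
```

The appeal for predictable-but-spiky workloads is visible in the second call: the customer keeps a small committed baseline and pays for peak capacity only in the months it is actually consumed.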
Moore’s law postulates that the number of transistors on a chip doubles every two years. As such, it serves as a wonderful example of exponential growth. We can grasp this principle intuitively when, for instance, we observe the advances in the performance of smartphones and other devices within the past few years alone.

Moreover, this law covers only a small portion of overall technological development, which is growing just as exponentially – although it was unbearably slow in its early days, it is now progressing at lightning speed. In the past, paradigm shifts took millennia (as was the case with stone tools and the wheel), whereas today they take only a few years (as with the Internet or, once again, the smartphone). We can safely assume that this rate of acceleration will continue to increase – and not just in the realm of IT, but also in other scientific fields, such as physics and biology. It is interesting to note the parallel between technological growth and global population growth: both are exponential.

AI will play a key role in this development, and here, too, experts such as Ray Kurzweil and Jay Wheeler expect exponential growth. There isn’t just one ‘kind’ of AI: today we still speak of ‘weak’ AI, whereas several years from now we will see ‘strong’ AI, and some people even refer to artificial superintelligence when describing the next technological revolution. Technological singularity will come at the end of this development: at that point, humans will no longer be able to follow the pace of development, even approximately (the Harvard Science Review has contemplated the possibility of an artificial mega-brain with an IQ of 34,597 – a far cry from the average human IQ of 100).
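Before moving on, it is worth seeing how quickly the doubling stated by Moore’s law compounds. The starting figure below is a hypothetical round number, not a specific chip.

```python
# Moore's law as stated above: transistor counts double every two years.

def transistors(initial: int, years: int) -> int:
    """Project a transistor count forward, doubling once per two-year period."""
    return initial * 2 ** (years // 2)

# Starting from a hypothetical 1-billion-transistor chip:
for years in (2, 10, 20):
    print(f"after {years:2d} years: {transistors(1_000_000_000, years):,}")
```

After just 20 years, ten doublings multiply the count by 1,024 – the intuitive meaning of “exponential” that linear extrapolation from daily experience tends to miss.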
In the latter half of this century, there will likely no longer be any clear differentiation between artificial and human intelligence.

We haven’t reached that point yet, though – not by a long shot. Strictly speaking, AI is still in its infancy. Laws need to be put in place governing the technology’s potential and its risks, related ethical issues need to be resolved, and there is a lack of sufficient standardization – even though work is already being done to this end, for example in China. AI expertise is scarce on the job market, and millions of AI developers and companies worldwide are struggling with an inadequate IT infrastructure that falls far short of the requirements of AI development: network bandwidth is too low, storage is insufficient, and specialized AI solutions are just as scarce. AI applications consume vast volumes of data and usually need to process it in real time; otherwise, a great advantage of AI is lost. Existing solutions tend to fall short of these requirements.

Specialized hardware and software solutions are required, as is an adapted, AI-oriented infrastructure – not only to process data quickly, but also to develop and implement new AI applications quickly. In the global competitive arena, time-to-market is essential for artificial intelligence. Eighty percent of companies surveyed by market researcher ESG on behalf of Dell EMC believe that their new developments around AI and machine learning will take under two years to yield significant business advantages.

Building up an AI-specific infrastructure requires AI expertise as well as general IT expertise. IT departments need to work hand in hand with data scientists to select the right servers, graphics processors, storage solutions, and networks with sufficient and scalable bandwidth.
Next, they need to carry out the construction and testing phases, plus time-consuming fine-tuning of the AI frameworks and libraries and of the software interplay. Finally, the data scientists must validate and approve the entire system. Only then can they start to develop the initial models.

The cloud does not necessarily represent a faster route for this kind of project; on the contrary. Although many public cloud providers offer AI computing power and libraries, they are unable to supply reference configurations or solution centers for customers, to say nothing of sufficient consulting; in this respect, they leave data scientists out in the cold. Moreover, there are performance problems inherent to the concept, such as those caused by data transfer, which typically make an in-house solution preferable.

Dell EMC knows that, too. We recently introduced the Dell EMC Ready Solutions for AI, which we designed together with NVIDIA specifically for machine learning with Hadoop and for deep learning. These solutions make AI rollout easier and quicker, and deliver comprehensive findings from data faster. Companies no longer have to procure their AI solutions in individual components, combine them, and spend precious time fine-tuning them: instead, they can rely on a package of best-of-breed software that has been designed, validated, and fully integrated by Dell EMC, including AI frameworks and libraries as well as the required compute, network, and storage capacities.

Our solutions increase data scientists’ overall productivity by 30 percent and cut the time to productive use of an AI solution by up to 12 months, compared with a DIY approach.
Moreover, new services from Dell EMC Consulting fully support companies with AI, from implementing and commissioning ready-solution technologies and AI libraries through to providing architecture recommendations and industry consulting.

Even the most modest AI development will hardly get off the ground without the right infrastructure, and an ambitious AI roadmap that puts the technology to use across the company at scale requires a highly specialized AI infrastructure. Companies can no longer afford to do without one if they wish to remain competitive over the long term.

Technological and AI development are advancing at an exponential pace. Without an adequate AI infrastructure, though, nothing will come of it.