Every so often I check my blog stats to see what you, the reader, find most interesting - my goal is to continue to bring you great content in both my blog and my research. While I was looking back over my blog stats I thought you might like to see the top ten blog posts in case you missed any of them. But just how should I assess the top ten? Like all outcome metrics, this one is open to interpretation.
I could take the simple route and just count which posts have the most reads. But that would fail to take into account how many days each post has been online - it stands to reason that older blog posts have had more time to garner reads. So a ranking based on the number of reads divided by the number of days the post has been online yields a more accurate picture of the most-read posts (See Table 1 - Top ten most read posts).
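The reads-per-day ranking described above can be sketched in a few lines of code; the post titles, read counts, and dates here are purely hypothetical, for illustration only:

```python
from datetime import date

# Hypothetical posts: (title, total reads, publication date)
posts = [
    ("Post A", 9000, date(2013, 3, 1)),
    ("Post B", 4500, date(2014, 11, 1)),
    ("Post C", 7200, date(2014, 2, 15)),
]

today = date(2015, 2, 1)  # assumed "as of" date for the ranking

# Rank by reads divided by days online, not by raw reads,
# so older posts don't win simply by having been up longer.
ranked = sorted(
    posts,
    key=lambda p: p[1] / (today - p[2]).days,
    reverse=True,
)

for title, reads, published in ranked:
    days = (today - published).days
    print(f"{title}: {reads / days:.1f} reads/day")
```

Note how a post with fewer total reads can outrank an older one once the read count is normalized by days online.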
With Amazon Web Services and Microsoft Azure now at annual run rates greater than $2 billion and expanding their application services nearly weekly, it’s starting to look tougher than ever for traditional hosters, enterprise cloud players, and managed service providers to compete against them. When you just can’t see how to win, the better option might just be not to try.
That seems to be the new trend in enterprise cloud vendor strategies as evidenced this week in moves by Datapipe, Google, and VMware. These moves follow similar shifts in strategy taken by Accenture, Rackspace, and others in the past quarter. The strategies acknowledge a reality that is redefining what they hoped hybrid cloud meant.
Last year I published a reasonably well-received research document on Hadoop infrastructure, “Building the Foundations for Customer Insight: Hadoop Infrastructure Architecture”. Now, less than a year later, it’s looking obsolete - not so much because it was wrong for traditional Hadoop (and yes, it does seem funny to use a word like “traditional” for a technology that is itself still rapidly evolving and has been in mainstream use for only a handful of years), but because the universe of analytics technology and tools has been evolving at light speed.
If your analytics are anchored by Hadoop and its underlying MapReduce processing, then the mainstream architecture described in the document - clusters of servers, each with its own compute and storage - may still be appropriate. On the other hand, if, like many enterprises, you are adding analysis tools such as NoSQL databases, SQL on Hadoop (Impala, Stinger, Vertica), and particularly Spark, an in-memory analytics technology well suited to real-time and streaming data, you may need to reassess the supporting infrastructure to build something that continues to support Hadoop while also catering to the differing access patterns of these other tool sets. This need to rethink the underlying analytics plumbing was brought home by HP’s recent demonstration of a reference architecture for analytics, publicly referred to as the HP Big Data Reference Architecture.
Between 2012 and 2014, mobile BI adoption shot up: Forrester survey data shows that the percentage of technology decision-makers who make some BI applications available on mobile devices has nearly quadrupled, and the percentage who state that BI is delivered exclusively via mobile devices has risen from 1% in 2012 to 7% in 2014. While this clearly demonstrates that mobile BI is gaining traction, the actual mobile BI adoption picture is rather more nuanced. Our ongoing research and client interactions show that mobile BI adopters fall into three overall groups; some organizations
Really ‘get’ the transformational potential of mobile BI. They are the ones who understand that mobile BI is about much more than liberating reports and dashboards from the desktop. They focus on how data can be leveraged to best effect when in the hands of the right person at the right time. If necessary, they’re prepared to change their business processes accordingly. For those companies, mobile BI is an enabler of strategic goals, and deployment is a journey, not an end in itself.
Make mobile BI available because it’s the right thing to do, or they’ve been asked to. Many of these organizations are reaping considerable benefits from their mobile BI implementations, and the more far-sighted of them are working on how to move from the tactical to the strategic. Equally, many are trying to figure out where to go from here, in particular if the initial deployment doesn't show a clear benefit, let alone return on investment.
At the China Hadoop Summit 2015 in Beijing this past weekend, I talked with various big data players, including large consumers of big data China Unicom, Baidu.com, JD.com, and Ctrip.com; Hadoop platform solution providers Hortonworks, RedHadoop, BeagleData, and Transwarp; infrastructure software vendors like Sequotia.com; and Agile BI software vendors like Yonghong Tech.
The summit was well-attended — organizers planned for 1,000 attendees and double that number attended — and from the presentations and conversations it’s clear that big data ecosystems are making substantial progress. Here are some of my key takeaways:
Telcos are focusing on optimizing internal operations with big data. Take China Unicom, one of China’s three major telcos, for example. China Unicom has completed a comprehensive business scenario analysis of related data across each segment of internal business operations, including business and operations support systems, Internet data centers, and networks (fixed, mobile, and broadband). It has built a Hadoop-based big data platform to process trillions of mobile access records every day within the mobile network to provide practical guidelines and progress monitoring on the construction of base stations.
The movement to the cloud is fast changing how companies deploy and consume security services. The number one driver of the adoption of managed security services (MSS) and of the business of managed security service providers (MSSPs) is complexity reduction. As companies replace premises-based data centers with virtual cloud data centers, their expectations will change as well: they will look for elastic ways to purchase security services, as well as methods that allow for the active defense of both cloud-based and premises-based workloads. Consider the following:
We have heard that the perimeter is dead, and in many ways it is. The usual assassins include outsourcing, mobile solutions, and the cloud.
Another truism is that companies never wanted to be in the information technology business in the first place. Information technology has brought real productivity improvements but it has also brought significant costs.
Moving information technology to the cloud provides companies the opportunity to reallocate costs from capital expenditures to operational expenditures and reassign operations staff to other roles.
Formula One has gotten us all used to amazing speed. In as little as three seconds, F1 pit teams replace all four wheels on a car and even load in dozens of liters of fuel. Pit stops are no longer an impediment to success in F1 — but they can be differentiating to the point where teams that are good at it win and those that aren’t lose.
It turns out that pit stops not only affect speed; they also maintain and improve quality. In fact, prestigious teams like Ferrari, Mercedes-Benz, and Red Bull use pit stops to (usually!) prevent bad things from happening to their cars. In other words, pit stops are now a strategic component of F1 racing; they enhance speed with quality. But F1 teams also continuously test the condition of their cars and the external conditions that might influence the race.
My question: Why can’t we do the same with software delivery? Can fast testing pit stops help? Today, in the age of the customer, delivery teams face a challenge like none before: a business need for unprecedented speed with quality — quality@speed. Release cycle times are plummeting from years to months, weeks, or even seconds — as companies like Amazon, Netflix, and Google prove.
Well, it’s now been about nine months, and time to check in on the gestation of the DATA Act. But before we start on what’s happened since the law passed on May 9, 2014, let’s take a quick look at what it is, and what government organizations have to work with.
This bipartisan legislation – jointly sponsored by two Democrats and two Republicans – is an effort to modernize the way the government collects and publishes spending information – in particular by establishing standard elements and formats for the data. The new law assigns responsibility for the task, sets out a four-year timetable for implementation, and establishes a strict oversight regime to measure compliance in the adoption of the standards and the subsequent quality and timeliness of the published spending data. That oversight is the big difference between the DATA Act and the previous legislation to improve funding transparency. This time someone is watching, and the law has teeth.
Telefónica entered into an exclusivity agreement with Hutchison Whampoa regarding Hutchison’s potential acquisition of the Telefónica subsidiary O2 UK for £10.25 billion in cash, valuing the deal at an estimated 7.5 times 2014 EV/EBITDA. The Hutchison-O2 UK deal — should it complete — will entirely redraw the telco landscape in the UK in terms of market shares. The acquisition of O2 UK will transform Hutchison from the smallest mobile operator with 7.5 million customers to the largest with 31.5 million customers and reduce the number of mobile operators in the UK from four to three.
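As a quick sanity check on the multiple quoted above, the 2014 EBITDA implied by the headline price can be backed out directly (the figures are the deal terms as reported; the calculation itself is just the EV/EBITDA identity rearranged):

```python
enterprise_value = 10.25e9  # £10.25 billion cash price for O2 UK
ev_ebitda_multiple = 7.5    # estimated 2014 EV/EBITDA multiple

# EV / EBITDA = multiple  =>  implied EBITDA = EV / multiple
implied_ebitda = enterprise_value / ev_ebitda_multiple
print(f"Implied 2014 EBITDA: £{implied_ebitda / 1e9:.2f} billion")
```

In other words, a 7.5x multiple on a £10.25 billion price implies roughly £1.4 billion of 2014 EBITDA for O2 UK.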
This development follows on the heels of the announcement by Orange and Deutsche Telekom that they have entered into exclusive negotiations with BT Group regarding a potential divestment of 100% of their shares in EE, their joint venture in the UK. The increased merger activity is not surprising, and we predicted as much in our report Predictions 2015: Telecoms Will Struggle To Align To The CIO's BT Agenda. Still, these deals raise important questions for the European telecoms markets:
Customers are using more communication channels for customer service than ever before. They are also contacting customer service organizations more frequently. Companies are rising to this challenge, as overall satisfaction with the quality of service across all communication channels is trending upward.
Moreover, customers have little appetite for long or difficult service interactions - such as navigating arduous interactive voice response (IVR) menus or waiting in a queue to reach a phone agent - and are increasingly turning to self-service as the easiest path to resolution. Here are some key takeaways from our latest consumer survey about channel usage for customer service.
For the first time in the history of our survey, respondents reported using the FAQ pages on a company's website more often than speaking with an agent over the phone. Use of the help/FAQ pages on a company's website for customer service increased from 67% in 2012 to 76% in 2014, while phone interactions have remained constant at a 73% usage rate.
Other self-service channels have also seen increased usage since 2012. For example, use of communities and of virtual agents each jumped by more than 10 percentage points. We also see robust uptake of speech and mobile self-service channels.
Self-service adoption increased across all generations from 2012 to 2014, with the largest increases attributable to older boomers (ages 59-69) and the golden generation (ages 70+).
Online chat adoption continues to rise – from 38% in 2009 to 43% in 2012 to 58% in 2014. Screen sharing, co-browsing, and SMS are other channels that are increasing in popularity among young and old alike.