Why Big Data Projects Fail: Top 3 Reasons
Big Data is all around us, and I don't just mean the data itself. Big Data marketing is everywhere: on the sides of buses, on our television screens, in banner ads bombarding us online and in our inboxes. The marketing saturation is such that you'd be forgiven for thinking yours is the only company that struggles with data. If you needed a reality check, Gartner offered it this year when they announced that 60% of Big Data projects fail. But what are the top reasons why Big Data projects fail?
Top 3 Reasons Why Big Data Projects Fail:
1. Overinflated Expectations
We have to start with the obvious. If you've bought into the hype and expect one solution to fix all of your data problems, you're in for a rude awakening. As with many things, the truth is far more complex and nuanced. There are no "one-stop shops". The point of a Big Data project is to use data to make smarter, faster decisions, and to get there you will need several different pieces of software, one for each step along the way.
2. Confusing Results
Sometimes the final step in a process is both the most important and the most overlooked. The same holds true for Big Data projects. One of the biggest reasons Big Data projects fail is that the results of the data analysis are not understandable to the end users. Fundamentally, none of us speaks data natively. Some of us are more proficient than others, but often the business users who are supposed to be empowered by Big Data projects are not the most data-savvy.
Academic research backs up this hypothesis as well. Daniel Kahneman, a renowned researcher in judgment and decision-making, argues that when humans are confronted with something they cannot immediately understand, or a conclusion they can't follow, they are likely to respond with the evolutionary "fight or flight" mechanism. For businesses, this means users will fall back on instinctual decisions instead of reasoned, data-driven ones. The solution is software that automatically explains the results of Big Data projects in plain written language.
3. Unnecessary Complexity
Again, we can blame the marketing here. All too often, companies fall into the trap of buying the "Cadillac Model" when they really only need the "Toyota Corolla". You don't need the most expensive Big Data tool; you need the right set of tools for the problem you actually have, which, to be honest, might not even be a Big Data problem. The dirty little secret of database management is that, often, the majority of data is worthless and unusable. That data costs money to store but can't deliver value, no matter the tool. The best way forward is to begin with the business problem you are trying to fix and work backward to identify the software tools you need to solve it.
In conclusion, why do Big Data projects fail? They fail because of human error, overexcitement, and, strangely, because the human factor is forgotten in the final output. Unfortunately, we don't have the statistics to show what percentage of the 60% of failed projects actually needed a Big Data solution. Nor do we have the data to show what percentage of the successful 40% adopted last-mile solutions like Natural Language Generation. But anecdotally, I would be willing to bet that the vast majority of those successful projects are using software to automatically explain the results of their analysis.