Solve your BI on Big Data Challenges like Bell Canada did
A forward-looking data strategy is essential if your organization wants to derive business value from its data lake. As more and more data gets generated, it becomes challenging to meet the reporting needs of business stakeholders who want reports with more dimensions, metrics, and measures. Existing systems cannot handle such massive amounts of data because they were never built for it, so you should explore technologies designed specifically to solve your BI on Big Data challenges.
In this article, we will look at the challenges that Bell Canada faced with Big Data and the solution they adopted to achieve instant BI at a massive scale.
BI on Big Data Challenge #1: Inability to ingest high velocity and volume of data
Bell, Canada's largest telecommunications company, follows a data-driven culture in which thousands of users across the organization use data to make business decisions. Their traditional data warehouse architecture worked well until they hit their first big data challenge: their existing systems could not keep pace with new data sources that were streaming massive volumes of data at high velocity.
Soon they realized that their existing systems could ingest only half of the data being generated. This meant the loss of valuable data, and meaningful reports could not be built from incomplete data. No amount of optimization of the existing infrastructure helped, and they knew they had to build a Big Data architecture that could handle their massive, high-velocity data.
To overcome this challenge, Bell Canada implemented a Hadoop platform, and within a few weeks of implementation they could ingest all of their data. The consolidated data lake could store all of their current data and scale up to meet future needs.
BI on Big Data Challenge #2: Inability to consume their Big Data effectively
Once the data came into Hadoop, the next big BI on Big Data challenge Bell Canada faced was making this rich data consumable for business users. They were using advanced BI tools such as MicroStrategy and Tableau, which delivered response times in seconds on smaller datasets. But when they tried to analyze larger datasets, response times degraded drastically, making it difficult to meet the reporting expectations of their business users.
Besides slow response times, each BI query required raw data access and consumed significant resources. This often overloaded the Hadoop cluster and led to inconsistent response times for BI queries. Reporting often became cost-prohibitive, and they knew they had to design a data model for consumption that would allow easy, quick access to massive data.
To solve these problems, they used Kyvos to create a BI Consumption layer on their Big Data platform. It pre-aggregated massive volumes of data into OLAP cubes that supported multi-dimensional analysis, providing deeper insights into Big Data with instant response times. Reporting became easy and cost-effective, allowing thousands of users to access Big Data in easily consumable formats. Besides this, the BI Consumption layer allowed them to scale their Big Data BI platform easily by adding more nodes, just as they did for their Hadoop cluster.
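The core idea behind a pre-aggregated OLAP layer like the one described above is that aggregates for every useful combination of dimension values are computed once, up front, so that BI queries become simple lookups rather than scans over raw data. The sketch below illustrates that principle in miniature with pure Python; the fact rows, dimension names, and the `"*"` rollup convention are all illustrative assumptions, not Kyvos's actual data model or API.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical fact rows: (region, product, revenue). In a real deployment
# these would be billions of rows in the data lake, not a short list.
facts = [
    ("East", "Mobile", 100),
    ("East", "Internet", 150),
    ("West", "Mobile", 200),
    ("West", "Internet", 250),
]

DIMENSIONS = ("region", "product")

def build_cube(rows):
    """Pre-aggregate revenue for every combination of dimension values,
    including rollup cells where a dimension is collapsed to '*' (all)."""
    cube = defaultdict(int)
    for region, product, revenue in rows:
        values = {"region": region, "product": product}
        # Emit a cell for every subset of dimensions kept at detail level;
        # the remaining dimensions are rolled up to '*'.
        for r in range(len(DIMENSIONS) + 1):
            for kept in combinations(DIMENSIONS, r):
                key = tuple(values[d] if d in kept else "*" for d in DIMENSIONS)
                cube[key] += revenue
    return cube

cube = build_cube(facts)

# BI-style queries become constant-time lookups instead of raw-data scans.
print(cube[("East", "*")])    # total revenue for East -> 250
print(cube[("*", "Mobile")])  # total revenue for Mobile -> 300
print(cube[("*", "*")])       # grand total -> 700
```

The trade-off is the same one a production OLAP-on-Big-Data platform makes at scale: extra storage and build-time work for the cube in exchange for consistent, fast query response, independent of the size of the underlying raw data.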
Bell Canada chose OLAP on Big Data to achieve phenomenal results with their BI on Big Data platform. If you want to understand how OLAP on Big Data technology works and what it can do for you, read our blog.
To get further details on how Bell Canada used Kyvos to solve their BI on Big Data challenges and achieve multi-dimensional analytics on their Big Data, listen to our webinar recording “BI at Exponential Scale at Bell Canada.”
In this webinar, our speakers Mark Huang, Director of Data Engineering at Bell Canada, and Ajay Anand, VP of Products & Marketing at Kyvos Insights, discuss the solution and its benefits in detail.