
An Invisible Framework for Predictive Analytics

Predictive Analytics has gained tremendous significance over the last few years, and its importance is only increasing by the day. Powered by Artificial Intelligence tools and techniques, it is being applied to almost every industry and every field today. And these tools are being super-charged by real-time data. As you very likely know, the world has created 90% of all its data in the last two years. But is this data really organized, and how do we make sense of it?

After spending a substantial amount of time talking to businesses and solving their problems, here are what I think are the critical problems preventing organizations from adopting Predictive Analytics:

Data, data everywhere, not a byte relevant:

Many organizations today are well equipped with Business Intelligence and Data Warehousing, concepts that gained tremendous commercial traction in the early 2000s. But today, despite all of that (seemingly) well-organized data, most organizations are facing some unanticipated issues:
 
– Are we really collecting relevant data for solving our problems?
– How do we connect various data points?
– How do we know which data is relevant for solving which problem?

 
Mathematical Models – a Random Forest:
 
On top of the data problems, there is a plethora of mathematical models that can be applied in Predictive Analytics. Applying the right model to the right data to solve the right problem is extremely important – simply because all models will give some results! Selecting the right model not only needs a sound mathematical background, but also demands a deep understanding of the problem and the business domain. A lot of the time, this combination of skills is tough to find.
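The point that "all models will give some results" is easy to demonstrate. A minimal sketch using scikit-learn, assuming a generic tabular classification problem (the synthetic dataset and the three candidate models here are illustrative choices, not anything prescribed by the framework): every model trains and produces predictions without complaint, and only an evaluation step such as cross-validation reveals how much their quality differs.

```python
# Every model "gives some results"; cross-validation shows which results
# are actually worth trusting on this particular data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a real business dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}

results = {}
for name, model in models.items():
    # 5-fold cross-validated accuracy for each candidate model.
    scores = cross_val_score(model, X, y, cv=5)
    results[name] = scores.mean()
    print(f"{name}: mean accuracy {results[name]:.3f}")
```

The ranking that comes out of a comparison like this is only one input; as the paragraph above notes, domain understanding still decides whether the "winning" model is answering the right question.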
 
The Platform Conundrum:
 
Beyond data and models, there are literally dozens of platforms on which solutions can be delivered – from open-source and pay-per-use, to higher-end customized versions. Combine that with front-end technologies and the big data tech stack, and your alphabet soup is complete. These platforms allow us to store our data, run models (some allow more customization of models than others) and get results with visualizations. Some of these platforms do seem inexpensive at first, until you realize the in-built exponentially scaling revenue model! Besides, some platforms have more community support and a brighter future than others – so which ones do we go with?
 
Where do we start?
 
These are the broad categories into which most questions fall, in my experience – and none of the organizations I've dealt with to date has shown problems in all three categories! And while most organizations do seem to have a good idea of this domain, the most common question in each category across industries is: where do we start?
 
This question is not without merit, given the technical terms, differing viewpoints on experiences with technologies and platforms, resource availability, support, costs, scalability and, of course, jargon! My two cents on this question is always – “start with your biggest problem”.
 
The Invisible Framework
 
It is quite easy to get lost in all the data, math and tech, each of which is a huge field in itself. And that's where we need a framework to walk alongside. The framework has to be flexible, as each problem is unique, and has to be unintrusive, allowing us to move across data, models and technologies. It should be simple, so that many more can understand and adopt it, yet be complete with all its complex models and solutions.
 
This is where the concept of the Invisible Framework comes in. The Invisible Framework for Predictive Analytics is Essence over Complexity. It is a framework where solving the problem is chosen over technical and domain complexity. It allows people to be comfortable with the solution rather than worry about the terms and technicalities involved at each stage of arriving at the solution. Only by removing the complexity can the framework really be made Invisible.
 
The solution should be such that the complexity is masked behind ease of use, and jargon is simplified for business users, so they can easily communicate with modelers and data analysts alike. The only way this is possible is by keeping a laser focus on the problem, and by having everyone involved, to a certain extent, in the end result. It should not really matter whether or not the organization is happy with the result; the only thing that should matter is how we can make it better.
 
Here’s how it works!
 
As an example, this is the exact approach we follow at Risk Edge. We build solutions that allow experts from multiple domains to come together and discuss the solution. Much of the time we don’t follow strict rules of selection for model, technology or platform, but let the requirements dictate those. We switch between technologies, databases, operating systems, models, etc. with relative ease, and increasing interoperability between various techs only helps! This allows us to focus on the problem, and nothing else. In rare instances, we do end up changing the technology mid-way, but even that usually happens when the problem itself evolves into a very different one mid-way.
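One way to picture how switching between models and technologies with "relative ease" becomes possible is to keep the business question fixed behind a thin, stable interface while the machinery behind it stays swappable. This is a hypothetical sketch of that idea (the `Predictor` protocol, `answer_business_question`, and the toy baseline model are all made up for illustration; this is not Risk Edge's actual code):

```python
# Sketch of "essence over complexity": the question the business asks stays
# fixed, while the model/technology answering it can be swapped freely.
from typing import List, Protocol, Sequence


class Predictor(Protocol):
    """Anything that can be fitted and queried counts as a backend."""
    def fit(self, X: Sequence, y: Sequence) -> None: ...
    def predict(self, X: Sequence) -> List[float]: ...


def answer_business_question(predictor: Predictor,
                             X_train: Sequence, y_train: Sequence,
                             X_new: Sequence) -> List[float]:
    """The caller sees only the question and the answer, never the machinery."""
    predictor.fit(X_train, y_train)
    return predictor.predict(X_new)


class MeanBaseline:
    """Toy backend: always predicts the training mean."""
    def fit(self, X: Sequence, y: Sequence) -> None:
        self.mean = sum(y) / len(y)

    def predict(self, X: Sequence) -> List[float]:
        return [self.mean for _ in X]


predictions = answer_business_question(MeanBaseline(), [[1], [2]], [2, 4], [[3]])
```

Because callers depend only on the `Predictor` shape, a scikit-learn model, a remote service, or a different tech stack entirely can replace `MeanBaseline` without the business-facing code changing – which is what keeps the framework invisible.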
 
In times of accelerating change, solutions (at least for Predictive Analytics) designed to be deployed and run unchanged for many years to come should be seen as loss-making propositions – since they’ll make us lose out on more opportunities. For the next few years at least, the only mantra to follow for such solutions should be “Forever Beta”!