Thursday, April 26, 2012 by Helena Schwenk
If you haven’t already heard on Twitter or the blogosphere, IBM yesterday announced its intention to acquire Vivisimo – a data discovery and navigation vendor focused primarily on unstructured data – for an undisclosed sum. The company has around 140 customers in industries such as government, life sciences, consumer goods and financial services, including Airbus, Procter & Gamble, Bupa, and LexisNexis among others.
Not surprisingly, given the current market buzz, this deal is being framed as a Big Data acquisition and therefore sits within the company’s Information Management division. Vivisimo is a small but nonetheless interesting player in the federated discovery, navigation and analytics space. In other words, its software is good at scanning unstructured (as well as structured) data, automating its discovery and then putting it in a format that can be navigated and analysed by users to provide a single view of their disparate information. One particular use case is the Johns Hopkins University Applied Physics Laboratory, which is using Vivisimo to discover, search, and navigate multiple federated data sources to provide its employees with a unified search tool for content, project and expertise information.
These capabilities are enabled through the software’s federated architecture, which pushes the queries, exploration and analysis out to the data at source in its respective repositories, rather than trying to bring data together in a centralised repository such as a data warehouse. There is of course a solid argument for federating data discovery and search, as it’s not always practical, timely or necessary to lug large amounts of data across the network and consolidate it for analysis.
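To make the federated pattern concrete, here is a minimal sketch of query push-down in Python. This is purely illustrative – the connector and function names are hypothetical and bear no relation to Vivisimo’s actual APIs – but it shows the core idea: the query is dispatched to each source in place and only the matching results are merged, with no central copy of the data.

```python
# Illustrative sketch of federated query dispatch (hypothetical names;
# not Vivisimo's API). Each connector searches its own source in place;
# results are merged into a single view without centralising the data.

from concurrent.futures import ThreadPoolExecutor

def make_connector(source_name, documents):
    """Build a toy connector that searches one in-memory source."""
    def search(term):
        return [(source_name, doc) for doc in documents
                if term.lower() in doc.lower()]
    return search

def federated_search(term, connectors):
    """Push the query out to every source in parallel and merge the hits."""
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(lambda connector: connector(term), connectors)
    return [hit for hits in result_lists for hit in hits]

# Two toy repositories standing in for, say, a wiki and a CRM system.
connectors = [
    make_connector("wiki", ["Project Alpha kickoff notes", "Expense policy"]),
    make_connector("crm", ["Project Alpha customer list", "Renewal dates"]),
]
print(federated_search("alpha", connectors))
# -> [('wiki', 'Project Alpha kickoff notes'),
#     ('crm', 'Project Alpha customer list')]
```

The point of the design is that only result sets cross the network; the source documents themselves stay where they live, which is exactly the trade-off a warehouse-style consolidation avoids making.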
Under the hood, Vivisimo’s platform includes data source connectors as well as capabilities for indexing, text analytics and metadata support (such as tagging and building taxonomies), all of which underpin its application workbench for developing custom discovery applications. As mentioned in our Outlook 2012 blog post, analytic search is one of the key technology trends we see in the market, helping more end users gain access to and insight from their data. It’s also behind similar moves by HP and Oracle, which in 2011 snapped up Autonomy and Endeca respectively for similar types of capabilities.
This is a smart move by IBM as it gives its customers flexibility in the design of specific Big Data applications. However, the big question on everyone’s lips is not necessarily why IBM bought Vivisimo but how it is going to integrate yet another technology piece into its (perhaps one would say all-encompassing) Big Data platform. IBM already has text analytics capabilities within BigInsights (its platform for building Hadoop-based applications). On top of this, the company already has similar search capabilities from OmniFind as part of its Content Analytics product line, which currently sits outside of its Big Data technology offerings. To add to this, the company also has the challenge of trying to unify the different development environments that currently sit within its Big Data portfolio.
Despite these overlaps and integration challenges, there are some natural synergies and use cases for integrating the Vivisimo technology with other parts of IBM’s Big Data platform. For instance, it could be used for the discovery and profiling of content before it’s loaded into the platform; equally, it could serve as a single point of access across a range of repositories – not only those based on Hadoop – thereby helping customers get to grips with more of their Big Data in a faster timeframe by leaving it within its source system. How IBM will achieve these aims and how fast it can manage the integration remains to be seen. The company expects the deal to close in the second quarter of 2012.