
SAP HANA – betting big on in-memory

Tuesday, October 19, 2010 by Helena Schwenk

Last week I attended SAP TechEd in Berlin, the business application vendor’s premier European developer conference. During an event packed with NetWeaver 7.3 announcements, SAP chose to highlight its ongoing commitment to in-memory computing with HANA, its High-performance Analytic Appliance. HANA is designed to process data from business applications in real time, without impacting the underlying source system. We believe HANA will bring new and interesting opportunities to SAP; however, its introduction raises some important questions about the impact on data quality and the appliance’s relationship with other SAP data warehousing technologies.

HANA can work as a “non-disruptive” in-memory operational BI appliance

TechEd Berlin kicked off with SAP CTO Vishal Sikka’s keynote, in which he outlined the company’s three strategic technology priorities: on-demand, mobility and in-memory computing. Alongside these themes, Sikka reiterated the company’s guiding principle of “innovation without disruption” – the ideal of continuously evolving the technology landscape while leveraging innovations in a way that does not disrupt existing investments. This principle was best illustrated during the keynote slot on HANA.

HANA’s key value proposition is its ability to support operational BI and production reporting in a non-disruptive way. It manages this by leveraging Sybase’s Replication Server for real-time replication and synchronisation of data from a SAP ERP system, and by loading that data into memory for query and reporting. By running in parallel to the source SAP ERP application, HANA enables business users to query large volumes of operational data in real time. The approach is designed to speed time to value – as there is no need to pre-aggregate data in a warehouse – while accelerating query performance and minimising the load on the operational system. HANA’s support for standard SQL and MDX means it can also work with a BusinessObjects client (such as Web Intelligence or Crystal Reports) or similarly compliant tools.
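
To make that reporting path more concrete, the sketch below shows roughly what querying an in-memory operational data mart over a standard SQL interface could look like from a client tool’s perspective. It is an illustration only: the ODBC data source, credentials, table and column names are hypothetical, and SAP did not discuss HANA connectivity at this level of detail during TechEd.

    import pyodbc  # generic ODBC client library; any SQL-capable client would do

    # Connect through an ODBC data source configured for the appliance.
    # The DSN, credentials, table and columns below are hypothetical.
    conn = pyodbc.connect("DSN=HANA_SANDBOX;UID=report_user;PWD=secret")
    cursor = conn.cursor()

    # An operational-reporting style query: aggregate live sales order lines
    # directly, with no pre-aggregation into a warehouse.
    cursor.execute(
        """
        SELECT region, SUM(net_value) AS revenue
        FROM sales_order_items
        WHERE order_date >= ?
        GROUP BY region
        ORDER BY revenue DESC
        """,
        "2010-10-01",
    )

    for region, revenue in cursor.fetchall():
        print(region, revenue)

    conn.close()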

HANA is a core part of the company’s Business Analytics in-memory computing strategy, previously announced at Sapphire in May of this year. The appliance comprises the SAP Business Analytic Engine (BAE), an in-memory columnar data store with compression technology, combined with optimised hardware from partner HP (with other partners, such as IBM, planned for the future). This follows a general trend in the market towards leveraging in-memory technology to increase the speed and performance of BI systems. The prospect of “real” real-time BI querying and reporting is becoming more practical thanks to hardware advances such as multi-core chips, parallel processing and 64-bit computing, as well as falling memory costs: lightning-fast analytic systems are consequently coming within the reach of more and more users.
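
To illustrate why an in-memory columnar store with compression suits this kind of workload, here is a minimal sketch of dictionary encoding, one common column-store compression technique: repeated column values are replaced by small integer codes, shrinking the memory footprint and letting scans and aggregations run over compact arrays. This shows the general idea only and is not a description of SAP’s actual implementation.

    from array import array

    def dictionary_encode(column):
        """Return (dictionary, codes) for a list of column values."""
        dictionary = []        # distinct values, in first-seen order
        value_to_code = {}     # value -> integer code
        codes = array("I")     # compact vector of unsigned integer codes
        for value in column:
            code = value_to_code.setdefault(value, len(dictionary))
            if code == len(dictionary):
                dictionary.append(value)
            codes.append(code)
        return dictionary, codes

    def count_matching(dictionary, codes, predicate):
        """Scan the encoded column: evaluate the predicate once per distinct
        value, then count matches over the integer codes."""
        matching = {i for i, v in enumerate(dictionary) if predicate(v)}
        return sum(1 for c in codes if c in matching)

    # A low-cardinality column such as 'country' compresses very well.
    country_column = ["DE", "FR", "DE", "UK", "DE", "FR"] * 1000
    dictionary, codes = dictionary_encode(country_column)
    print(len(dictionary), "distinct values for", len(codes), "rows")
    print("rows for DE:", count_matching(dictionary, codes, lambda v: v == "DE"))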

Data quality issues can pervade all BI systems, whether in-memory or not

While HANA’s sweet spot is high-performance operational reporting and analysis, it does raise question marks about data quality. By replicating ERP transactional data within HANA, companies also stand to replicate the data quality issues that may reside within that data. As BI practitioners will know, the success of any BI system, whether in-memory or not, depends on the level of trust users place in the data. If they don’t believe the information surfaced within the BI environment, they will find ways to work around it. While SAP points to the benefits of using HANA to highlight and identify data quality issues, which can then be corrected at source, this is not a perfect solution – especially since any time lag between identifying data issues in HANA and rectifying them in the source ERP system (as data governance kicks in) may eradicate some of the time advantage of having a “real” real-time system. Equally, HANA is unlikely to be the platform of choice for resolving data quality problems, since it utilises a column-centric data store, which isn’t ideally suited to applications that require good update performance. Another classic resolution to the data quality problem is to “fix” the data further downstream in a data warehouse – a view that also chimes with SAP’s current thinking around data warehouses.
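
The update-performance point can be illustrated with the write path many read-optimised column stores adopt: changes land in a small write-friendly delta buffer and are periodically merged into the compressed main store, and that merge effectively rewrites the column. The sketch below shows the general pattern only – the class and method names are illustrative, not a real HANA API – but it suggests why correcting bad records one by one sits awkwardly with a store tuned for scans.

    class ColumnWithDelta:
        """Toy model of a read-optimised column plus a write-optimised delta buffer."""

        def __init__(self, values):
            self.main = sorted(values)   # read-optimised representation (stand-in for a compressed column)
            self.delta = []              # append-only buffer for incoming changes

        def insert(self, value):
            # Cheap: changes only touch the small delta buffer.
            self.delta.append(value)

        def merge(self):
            # Expensive: the read-optimised representation is rebuilt wholesale,
            # which is why frequent point corrections are costly here.
            self.main = sorted(self.main + self.delta)
            self.delta = []

    col = ColumnWithDelta(["DE", "FR", "UK"])
    col.insert("FR")   # fast – delta only
    col.merge()        # periodic – rewrites the whole column
    print(col.main)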

Understanding the relationship between HANA and BW & BWA is not straightforward

Interestingly, despite extolling the power and performance of in-memory computing for BI and analytics, the company still believes there is a role for data warehousing, especially where there is a need for an integrated, harmonised, cleansed and consolidated view of the business. This is not altogether surprising given the company’s investment in BW (its data warehousing platform) and its 10,000-strong customer base. In this sense BW remains the company’s go-forward strategy for customers that want a persistent data warehouse, whereas HANA supports a real-time, high-performance “virtual” view onto ERP data – something that may complement or partially replace the functionality of an enterprise data warehouse.

This position helps to partially explain the relationship between HANA and Business Warehouse Accelerator (BWA), SAP’s other in-memory product. However, the distinction in use cases is rather subtle, especially since much of the underlying technology is shared between the two systems. From the information I gleaned at TechEd, there appear to be two main usage scenarios. BWA is used to accelerate the query performance of SAP BW implementations by loading an entire BW InfoCube (SAP’s star schema-like format) into memory and leveraging BWA indexes to enable faster response times. HANA, on the other hand, is currently targeted at SAP ERP customers who want high-performance operational reporting data marts without impacting source system platforms. SAP positions HANA as an evolution of BWA, but also one which could open up in-memory options to non-BW users. The challenge for SAP moving forward will be to carefully articulate how HANA fits into the overall SAP system landscape – especially when compared to BW and BWA – without further confusing the customer base.

This is a 1.0 release of HANA; however, SAP doesn’t lack in-memory computing ambitions

Above all, it’s worth remembering that HANA is a 1.0 release, something reflected in the appliance’s current scope. In this first iteration the appliance works well against SAP ERP data, but isn’t geared up to work with other operational data sources such as Siebel CRM system data. That doesn’t mean the company lacks ambitious plans for in-memory computing. It hopes to use in-memory technology as a basis for developing new types of faster, more agile applications such as financial planning, simulations, and real-time inventory and price optimisation. Likewise, the core in-memory computing engine used within the HANA appliance will power future versions of SAP NetWeaver, allowing in-memory technology to be used across all of its product development roadmaps, both analytic and transactional. SAP has placed its in-memory bets and is hoping that with HANA it will win big.

Posted in Analytics, Information Management

4 Responses to SAP HANA – betting big on in-memory

  1. Jon Russell says:

    Nice intro into HANA.

  2. Tom S says:

    Any idea how Hana will be priced? By CPU? Machine?

    • Helena Schwenk says:

      No pricing information for HANA was made available during SAP TechEd – that said, I expect its pricing will take into account the ability for HANA to leverage hardware improvements such as multi-core processing.
      HANA is due to be released to customers at the end of this month (November 30th) during a “ramp up” phase before a broader rollout, so expect to hear more about pricing in the coming weeks and months.
