Solix Empower Conference Invite


You're invited to Solix EMPOWER

When I look back at when we started Solix, I can’t help but marvel at how much has changed. When we began this journey, ERP databases hovered around a few terabytes; now they run into the hundreds. Data security, at that time, meant protecting copies of production data used for testing, training and quality control; the goal was simply to keep those copies safe while keeping capital costs down. Now, we must not only mask test copies of data, we must also be able to predict who is trying to steal it.

Today, we’re in a time of great change. In the last decade the amount of data in the world has increased exponentially and is now well into the zettabytes. Data is coming at industries in greater volume, velocity and variety. Enterprises are relying on these massive amounts of data for BI and looking to mine it with advanced analytics. The ERP and the Enterprise Data Warehouse are no longer sufficient to deal with this onslaught of data. Hadoop has emerged as the solution for data at petabyte scale. But even Hadoop cannot address all of the challenges facing the enterprises of today that want to ensure they are also the enterprises of the future. Data has become the most important asset an organization can have in our increasingly digital world. The world is data driven and enterprises must be as well, or risk failure.

The Solix Common Data Platform combines the strengths of the Data Lake with the Enterprise Archive and Information Governance. With this architecture, deep analytics can span the entire archive. Fears of data swamps can be erased, and the stress of figuring out how to pay for Tier 1 storage evaporates, because the CDP is built on commodity storage and commodity compute. Solix CDP makes the Data Lake truly enterprise ready for the first time and opens up vast possibilities for creating an advanced analytics platform.

At Solix, we are uniquely positioned to empower our customers to become data driven organizations. That is why we’ve announced our inaugural Solix EMPOWER conference, Sept. 18 at Northeastern University in San Jose, California. Solix EMPOWER is an education and networking conference where we will bring together our customers, analysts, industry experts and partners to discuss Analytics, Big Data and Cloud technologies. We’re thrilled with the lineup of speakers we’ve put together, which includes:

Herb Cunitz, President of Hortonworks.
Eli Collins, Chief Technologist at Cloudera.
Solomon Darwin, Executive Director of the UC Berkeley Haas School of Business.
Jnan Dash, former Senior Executive at Oracle.
Rafiq Dossani, Economist and Educationist at RAND Corporation.

And many more. To see the full list of speakers and the developing agenda, visit http://www.solixempower.com. The proceeds of every registration will also go to the Touch-A-Life Foundation, which benefits homeless high school students.

We’re excited about Solix EMPOWER, and the chance to explore together the possibilities for enterprises in our increasingly digital world. Please, register for the event at http://www.solixempower.com/registration/order/. We look forward to seeing you.

Introducing Solix Common Data Platform


For companies to be data driven, they have to be able to process the new types of enterprise data, be it social, IoT, or machine data, and make real-time decisions. If companies do not evolve into data-driven organizations, they risk serious business disruption from competitors and startups.

Current data warehouses are simply not able to handle the volume, velocity and variety of this new data. New Data Lakes based on Apache Hadoop provide a low-cost answer to the problem of capturing this high-volume structured and unstructured data using commodity infrastructure and open source software. The Data Lake is a storage repository that holds a vast amount of raw data in its native format until it is needed, without imposing a schema or other requirements at write time. When a business question arises, the data lake can be queried for relevant data, and a schema tailored to the question is applied to that smaller set of data.
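This "schema-on-read" idea can be sketched in a few lines. The records, field names, and query below are purely illustrative, not part of any particular Hadoop tooling:

```python
import json

# A data lake keeps records in their raw, native form: no schema is
# imposed at write time. These sample records are invented for the sketch.
raw_lake = [
    '{"ts": "2017-06-01", "user": "a12", "event": "login", "ip": "10.0.0.5"}',
    '{"ts": "2017-06-01", "sensor": "t-7", "temp_c": 21.4}',
    '{"ts": "2017-06-02", "user": "b44", "event": "purchase", "amount": 19.99}',
]

def query(lake, schema):
    """Apply a question-specific schema only to the records that fit it."""
    results = []
    for line in lake:
        record = json.loads(line)
        if schema <= record.keys():          # record has every required field
            results.append({k: record[k] for k in schema})
    return results

# When a business question arises, project just the relevant fields;
# the sensor reading simply does not match this schema and is skipped.
user_events = query(raw_lake, {"ts", "user", "event"})
print(user_events)
```

The schema lives with the question, not with the storage, which is exactly why ungoverned lakes drift toward swamps: nothing forces two analysts to read the same raw record the same way.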

But, there has been significant opposition to this concept. According to Gartner, a Data Lake accepts any data, without oversight or governance. Without descriptive metadata and a mechanism to maintain it, the data lake risks turning into a data swamp. And without metadata, every subsequent use of data means analysts start from scratch. Without appropriate governance measures, Data Lakes can create a ‘data free-for-all’ that exacerbates issues of data quality and data lineage.

It is essential that semi-structured and unstructured data adhere to metadata conventions that have been formally defined by governance principles, so that meaning can be ascertained from the data. Ensuring that data has uniform metadata standards enables users to understand how data relates to other data (for example, how proprietary CRM data relates to sentiment data). The danger with Data Lakes is that individual end users are liable to ascribe to an entire data set the attributes they need within the context of their particular business problem, attributes which may not follow governance conventions.
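As a hypothetical illustration of such a convention, an ingestion gate might refuse any data set whose metadata omits governance-mandated fields. The field names and data sets below are invented for the example:

```python
# Hypothetical governance convention: every data set entering the lake
# must carry these descriptive metadata fields. Names are illustrative.
REQUIRED_METADATA = {"source_system", "owner", "ingested_on", "sensitivity"}

def validate_metadata(metadata: dict) -> list:
    """Return the governance-mandated fields a data set is missing."""
    return sorted(REQUIRED_METADATA - metadata.keys())

crm_accounts = {
    "source_system": "CRM",
    "owner": "sales-ops",
    "ingested_on": "2017-06-01",
    "sensitivity": "confidential",
}
clickstream = {"source_system": "web"}   # would be rejected at the gate

print(validate_metadata(crm_accounts))   # → []
print(validate_metadata(clickstream))    # → ['ingested_on', 'owner', 'sensitivity']
```

A gate this simple already prevents the "data free-for-all": an analyst can always discover who owns a data set, where it came from, and how sensitive it is before ascribing attributes of their own.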

Another risk is security and access control. The security capabilities of central data lake technologies are still immature. Metadata and semantics are essential for ensuring compliance with regulations governing the security, use, and location of specific kinds of data, such as personally identifiable medical information. In theory, independent data marts are no longer necessary as Data Lakes enable the enterprise to distance itself from a silo-based culture while emphasizing sharing and integration. In practice, without Metadata, data marts remain the best way to ensure regulatory compliance and adequate data security.

We believe that with a well-defined business process for data ingestion and with Information Governance, we can address the issues and make Data Lakes Enterprise-ready. Further, if you add the Enterprise Archive to the Data Lake, we can vastly expand the reach of analytics and create an Advanced Analytics Platform. We don’t believe this is a replacement strategy for the data warehouse; in fact it is more a complement to the existing investment. With that as a goal, we have been working on our new offering – Solix Common Data Platform.

Solix Common Data Platform = Enterprise Archive + Enterprise Data Lake + Information Governance

Here is how the Solix Common Data Platform differs from a traditional data warehouse and a Data Lake:


Leveraging ILM Software to Accelerate Application Migrations to Oracle Fusion Apps


It is October, and for Oracle users and industry watchers that means Oracle OpenWorld Conference 2009. In recent years Oracle has become a power in the IT industry through its acquisition spree, most recently its acquisition of Sun Microsystems. The major question on everyone’s mind is what Larry Ellison will announce at the Moscone Center this year.

Every year Oracle OpenWorld is the stage for at least one major Oracle announcement. Three years ago it was Oracle’s move into Linux support, and the next year the move into virtualization software. Last year it was the HP/Oracle appliance offering, a massively parallel processing computer for data warehousing. What will it be this year? Could there be a surprise involving Java or Salesforce.com (Marc Benioff is delivering a keynote at Oracle OpenWorld)?

What Oracle really needs is a positive announcement about Oracle Fusion Applications (OFA), its much-promised and much-delayed next-generation ERP/CRM system built on a service-oriented architecture (SOA) platform and delivered as software-as-a-service (SaaS), originally announced in 2006 (as “already half finished”) and now promised for 2010.

Oracle’s promise at that initial announcement was to “be the first company on the planet to build a full suite of applications for large and small companies based on standards.” Fusion Apps will combine the best functionality from its ERP/CRM systems (Oracle Applications, PeopleSoft, JD Edwards and Siebel) on a new, advanced technology base, with a new user interface, new process model, and new data model. As Oracle’s unified application platform going forward, this new generation of service-oriented, Web 2.0-based applications is to have business intelligence integrated as a pervasive element and is designed to integrate easily into SOA architectures. It will mark a clear shift from application silos to supporting end-to-end business processes.

Oracle critics have raised questions about when exactly Oracle will introduce Fusion Apps, and whether the new platform will be enough when it does arrive. The strongest reply to these questions would be the announcement of a market launch date, or at least a positive progress report.

However, even when OFA does arrive, it will present serious challenges for users. One of the largest of these will be data migration from present-generation applications to a new and as yet not publicly defined database structure. Data migrations are fraught with issues, often exceed timelines and budgets, and can jeopardize user acceptance of the migrated application. Understanding exactly what needs to be migrated and determining the rules for mapping to the target environment will require significant time and effort. Best practice is to ensure that the scope of the initiative is limited to data sources that are required by, or add value to, the target application or data structures. Just because data sources related to the target are available does not mean there is business value in migrating them. Often, not all historical data needs to be migrated to the target. By archiving or disposing of older, non-value-added data, the level of effort and the timeframes can be minimized.

For most organizations, the lack of concern about, and clear understanding of, the magnitude of data quality issues will create problems. In many cases, substantial amounts of rework will be required to address data quality issues encountered during the development of the migration processes. Again, the more data involved in the migration, the more likely, more numerous, and more severe these problems will be.

ILM software such as Solix EDMS can play a vital role in preparing for the massive data migration that moving to Fusion Apps will require, and the time to think about that is now, before the migration begins. The purpose of ILM software is to identify data no longer in active use and archive it out of the production database automatically, in an organized manner that preserves access to that data when it is needed. The obverse of that coin is that it also identifies the subset of data that is active and that therefore needs to be migrated. By reducing the size of the production databases that will be migrated, it can reduce the time, effort, and number of data quality issues that will be involved, while preserving access to the archived data through its own or other industry-standard data viewers.
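The archive-then-purge pattern described above can be sketched as follows. The table names, cutoff policy, and in-memory SQLite storage are assumptions made for the example, not details of Solix EDMS:

```python
import sqlite3

# Sketch of the ILM pattern: identify data no longer in active use, move it
# out of the production table into an archive, and keep it queryable.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, placed_on TEXT, total REAL)")
db.execute("CREATE TABLE orders_archive (id INTEGER, placed_on TEXT, total REAL)")
db.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "2005-03-14", 120.0), (2, "2008-11-02", 75.5), (3, "2009-09-21", 210.0)],
)

CUTOFF = "2008-01-01"   # hypothetical policy: archive anything older than this

# Archive and purge in one transaction so no row is lost or duplicated;
# the `with` block commits on success and rolls back on error.
with db:
    db.execute(
        "INSERT INTO orders_archive SELECT * FROM orders WHERE placed_on < ?",
        (CUTOFF,),
    )
    db.execute("DELETE FROM orders WHERE placed_on < ?", (CUTOFF,))

active = db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
archived = db.execute("SELECT COUNT(*) FROM orders_archive").fetchone()[0]
print(active, archived)   # → 2 1
```

The production table now holds only active rows, which is the subset a Fusion Apps migration would actually need to move, while the archived rows remain accessible through ordinary queries.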

As with everything involving very large business databases containing vital business information, however, ILM cannot be installed, implemented, and run in a day. As the deadline approaches for the introduction of Fusion Apps, this means that Oracle’s several hundred thousand enterprise customers should seriously consider ILM now. This will give them time to select a product, install and test it, and then start a gradual archiving process for their major databases that will leave only the active data in the production database when the time comes to migrate to Fusion Apps. That will give them a major head start on migration, and the good news is that in the meantime archiving with a strong ILM tool such as Solix EDMS will improve the performance of their present architecture while delaying and decreasing capital expenditure on expensive Tier 0 and Tier 1 storage systems.


© Solix Technologies, Inc.