Recently we encountered an interesting problem with one of our webinars, an event we sponsored in collaboration with Forrester. Our speakers, distinguished Forrester analyst Noel Yuhanna and Solix Executive Chairman John Ottman, rehearsed and were ready to go. The event started and a decent number of attendees logged in. And then we discovered that, for some reason, our guest speaker, Mr. Yuhanna, could not be heard. Everyone else had good connections, and Mr. Yuhanna was able to log in and hear us, but no one could hear him. For a few minutes we panicked; we had never experienced a problem like this before, and it appeared to be a technical glitch with Cisco Webex. Then my staff quickly set up a three-way conference call and patched him in through his mobile phone. We found a way around the problem and had a successful event.
The lesson here is that creative thinking can find ways around problems, and the webinar, “Enterprise Data Management on a Zero-Based Budget,” was actually about just that. Sensitive data is under increasing risk in database copies used for test and development, training, and other non-production activities. Yet according to Forrester, only 25% of enterprises have adopted data masking. I talk to many CIOs who understand the ROI of ILM, yet overall market penetration remains small. I believe this is mostly a budget and/or priority issue. Many enterprises are struggling to keep the lights on with slashed budgets and staff cuts, and they simply do not have the resources to implement ILM, regardless of the benefit.
For that reason, we have developed the Solix EDMS Standard Edition (SE), a FREE download that allows users to begin deployment with zero out-of-pocket expense. I hope that those of you reading this who have not yet implemented ILM in your shops will take this opportunity to download and try out Solix EDMS (SE) and experience what it can mean to your organization. This can be the first step in deploying ILM at your organization and a way to launch your enterprise data management projects immediately, free of cost.
In recent years IT has been rocked by the advent of Big Data: new kinds of data coming from the Internet, handled with new technologies such as Hadoop and MapReduce. So far IT has treated this largely as an exotic technology from the outside, one with a lot of promise but separate from traditional company data and the systems and people generating it.
But is that really true? Today increasing numbers of work groups and small companies are using online systems like Google Drive, Dropbox, etc., to share and collaborate on data of all kinds, from written documents to photographs and videos, both for work and personal use. Google, one of the chief drivers of this often bottom-up movement in organizations, has a specific mission: to organize the world’s information and make it universally accessible and useful. We all know the power of this vision; it is changing our lives. What we seldom think about is that these services are based on those same Big Data technologies and concepts.
So why can’t CIOs do the same thing with the large amounts of data that their organizations generate? Enterprises are experiencing dramatic growth in data, but often much of that data is stored inefficiently, which wastes resources and time. Clearly, enterprises are continuing to invest in more storage and infrastructure every year. For me, the case for Big Data as a repository for records and retention management is made. Think of the power of having an internal system that makes all company documents, videos, photos, etc., as well as traditional structured data, instantly available to whoever needs it (and has proper authorization to access it) from a central place accessible anywhere the Internet or corporate network reaches, on any device the user wants, at any time, while simultaneously protecting that data from loss and ensuring a single master version of the truth. It can be done with Big Data technologies.
The advantages of creating this kind of repository based on Big Data technology include:
- No database licensing and maintenance costs: Imagine the money being spent on Oracle, SQL Server, DB2, etc. Open source technology eliminates that.
- No Tier 1 storage costs: You can use standard SATA storage, even white box storage as the hyperscale installations do, instead of high-cost storage from EMC, NetApp, IBM, HP, etc.
- Choice of public or private cloud: If you choose you can eliminate CAPEX entirely and host your repository on Amazon or any of several other public cloud services. Or if you prefer, you can put it in a private cloud in-house.
- No backup/DR costs or issues: Because Big Data platforms replicate data across nodes by design, they eliminate much of the need for separate backup infrastructure and the administrative costs that go with it. And because the repository is available across the Internet, it supports work from alternative locations in an emergency, as well as routine remote work.
- Extended analytics: Once corporate data, including semi-structured and unstructured documents, is in a Big Data repository, it becomes easy to add third-party data such as weather data, link it to your sales and marketing data, and extend analytics beyond your enterprise data.
- And one final important advantage: Building and running this repository will allow IT to gain valuable experience with the Big Data technologies that clearly will be a big part of the IT future.
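The backup point above comes down to replication. Assuming HDFS as the storage layer (the most common choice), every block of every file is kept on multiple nodes, so the loss of a disk or server does not mean data loss. The replica count is a single setting in `hdfs-site.xml`:

```xml
<!-- hdfs-site.xml: keep three copies of every block (the HDFS default). -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```

With three replicas spread across racks, the cluster itself provides the durability that a separate backup system would otherwise supply, though many shops still keep an off-cluster copy of truly critical data.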
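To make the extended-analytics point concrete, here is a minimal sketch of enriching sales records with third-party weather data once both live in a common repository. All of the field names and values are hypothetical, invented for the example:

```python
# Hypothetical illustration: join third-party weather data onto sales
# records by (date, city). Data values are made up for the example.

sales = [
    {"date": "2014-01-06", "store": "NYC", "units": 120},
    {"date": "2014-01-07", "store": "NYC", "units": 45},
]

weather = [
    {"date": "2014-01-06", "city": "NYC", "condition": "clear"},
    {"date": "2014-01-07", "city": "NYC", "condition": "snow"},
]

# Index the weather data by (date, city), then attach a condition
# to every sale; unmatched sales get "unknown".
by_key = {(w["date"], w["city"]): w["condition"] for w in weather}
enriched = [
    dict(s, weather=by_key.get((s["date"], s["store"]), "unknown"))
    for s in sales
]

for row in enriched:
    print(row["date"], row["units"], row["weather"])
```

In a real Big Data repository the same join would run at scale (for example as a MapReduce job or a Hive query), but the idea is identical: external data becomes just another table to link against.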
Imagine the enterprise data center of 2025. It will be vastly different. Clearly, Big Data will seep into every enterprise, and this is a perfect use case for getting a head start on that transition. If you are a CIO who believes in this vision, reach out to us. We can help you.
Big Data, Cloud, Application Upgrades, Application Retirement, Data Warehouse, Disk Backup – Why I Love “Data Validation”
Earlier this year, we moved our office to a new location. The move was painful. We realized just how much stuff we had accumulated over the years, including many things we never used but that were sitting around, collecting dust. We had to look at every piece of furniture, every server, every software CD, and decide whether we needed it. The best part of that exercise was that we used it to teach our non-technical staff about database archiving and application retirement, and how we do a similar exercise for data centers and applications.
Now that we are through the move, the year looks very encouraging. Multiple indicators are giving us good vibrations about an economic resurgence. We also started the year very well, announcing Solix EDMS SE – Data Validation.
I love this tool. It provides tremendous value to DBAs and application owners. Data is always moving. Think of Big Data, Cloud, Data warehousing, Application Upgrades, Migrations, Hardware Upgrades, Application Backups – you are constantly moving data. How do you ensure that the data has indeed moved correctly, and how do you automate the process of comparing a copy of a data set with the original to determine the accuracy of the copy?
Invalidated or poorly validated data is a huge issue. Imagine the inaccuracies in your analytical reporting and the decisions based on it. Imagine a disaster leaving you trying to recover your core database from backup, only to realize that the backup is incomplete or garbled. The impact on the organization could be enormous.
Data validation is the first step in assessing data quality, an analytic and domain-specific process for determining the quality of a data set. In general, IT departments handle this with custom scripts and custom programming. The news with our new announcement is that you no longer need that. Check out Solix EDMS SE: http://www.solix.com/Solix-Technologies-Announces-Free-Data-Validation-Download.htm. This is a FREE tool that you can download now and start using across your enterprise today.
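Under the hood, validating a moved data set generally comes down to comparing row counts and checksums between the copy and the original. Here is a minimal, hypothetical sketch of that general technique in Python; it illustrates the idea, not the Solix EDMS implementation:

```python
# Hypothetical sketch: fingerprint a table as (row count, checksum) so a
# copy can be compared against the original. The XOR of per-row hashes
# makes the checksum independent of row order.
import hashlib

def table_fingerprint(rows):
    """Return a (row_count, order-independent digest) pair for a table."""
    digest = 0
    for row in rows:
        h = hashlib.sha256(repr(sorted(row.items())).encode()).hexdigest()
        digest ^= int(h, 16)  # XOR so row order does not matter
    return len(rows), digest

original = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
good_copy = [{"id": 2, "amount": 250}, {"id": 1, "amount": 100}]  # reordered
bad_copy = [{"id": 1, "amount": 100}]  # a row was lost in transit

print(table_fingerprint(original) == table_fingerprint(good_copy))  # True
print(table_fingerprint(original) == table_fingerprint(bad_copy))   # False
```

Real validation tools add refinements such as column-level comparison and sampling for very large tables, but the count-plus-checksum comparison above is the core of the exercise.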