Why Data Agility Is Key to Successful Big Data Technology Development

As technology advances at breakneck speed, our lives are becoming increasingly digitized. From Twitter feeds to sensor data to medical devices, companies are drowning in big data yet starving for actionable information. You've probably heard plenty of talk about the volume, variety, and velocity of big data and how difficult it is to keep up with that surge of data.

For many businesses, the ability to collect data has outpaced the ability to organize it quickly enough for analysis and action. Executives, IT staff, and analysts alike have been frustrated with rigid, conventional data-processing procedures that require a series of steps before data is ready for analysis. Relational databases and data warehouses have served companies well for collecting and normalizing relational data from point-of-sale (POS), ERP, CRM, and other data sources where the data format and structure are known and don't change frequently. However, the relational model and its process of defining a schema up front cannot keep pace with the rapidly evolving variety and formats of data.

Sometimes an analyst just wants to start exploring the data to understand what's in it and what new insights it can reveal before the data is modeled and added to the data warehouse schema. Sometimes you're not even sure what questions to ask. This drives up the cost of using conventional relational databases and data warehouses because DBA resources are required to flatten, summarize, and fully structure the data, and these DBA costs can delay access to new data sources. Legacy databases are simply not agile enough to meet the growing needs of most companies today.

What Is Data Agility and Why Is It Important?

Hadoop has become a mainstream technology for storing and processing large amounts of data at low cost, and now the conversation has shifted. Today it's not about how much data you can store and process. Instead, it's about data agility: how fast can you extract value from your mountains of data, and how quickly can you translate that information into action? After all, someone still needs to apply structure or schema to the data before it can be analyzed. Just because you can get data into Hadoop quickly doesn't mean an analyst can quickly get it out.

Executives want their teams to focus on business impact, not on how they have to store, process, and analyze their data. How does the ability to process and analyze data affect their operations? How quickly can they adapt and respond to changes in customer preferences, market conditions, competitive actions, and operations? These questions will guide the investment and scope of big data projects in 2015 as enterprises shift their focus from simply capturing and managing data to actively using it.

This concept applies not just to your big data infrastructure; it can be used across all business activities, from risk management to marketing campaigns to supply chain optimization.

When the concept of data agility was first discussed, the conversation focused on an organization's ability to rapidly gather business intelligence. However, the concept of data agility can also apply to data warehouse architecture. With traditional data warehouse architectures built on relational database systems, the data schema must be carefully designed and maintained. If the schema needs to change, it can sometimes take up to a year to make the change in an RDBMS. Even the process of extracting data from a data store and loading it into a data warehouse can take an entire day before the data is available for analysis.
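To make that friction concrete, here is a minimal, hypothetical sketch (the table and column names are invented for illustration) of what adding a single attribute can involve in a traditional warehouse:

-- Hypothetical warehouse change: the business wants to track device type.
-- The DDL itself is one line...
ALTER TABLE sales_fact ADD device_type VARCHAR(32);

-- ...but the surrounding work is not: every ETL job that loads
-- sales_fact must be updated to populate the new column, historical
-- rows must be backfilled (or explicitly defaulted), and dependent
-- views and reports must be revalidated before analysts can use it.
UPDATE sales_fact SET device_type = 'unknown' WHERE device_type IS NULL;

The one-line DDL is the easy part; it's the coordination around it that consumes the weeks or months.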

With Hadoop, storing a new type of data doesn't mean redesigning the schema. It's as easy as creating a new folder and moving the new kind of files into it. By using Hadoop to store and process data, teams can deliver products in a much shorter timeframe.

The Real Hindrance to Data Agility

Conventional databases require a schema before data can be written. Couple that with the time required to get the data into the database, and the process can no longer be considered agile. Worse yet, there are times when DBAs must perform complex procedures that involve dropping foreign-key constraints or exporting data, altering table designs, and reloading data in a particular order to satisfy the table design. Some big data technologies such as Apache Hive get around schema-on-write, but they still require defining a schema before users can ask their first question.
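A minimal Hive sketch makes the point (the table, columns, and path are hypothetical): even though the files already sit in Hadoop, someone must declare a schema before the first query will run.

CREATE EXTERNAL TABLE clicks (
  user_id  STRING,
  url      STRING,
  ts       STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/clicks';

-- Only after the DDL above exists can an analyst ask the first question:
SELECT url, COUNT(*) AS hits
FROM clicks
GROUP BY url;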

You Will Know Data Agility When You See It

New data discovery and data exploration technologies are being developed to offer greater flexibility. Apache Drill is a great example of "the" business enabler for data agility. Inspired by Google's Dremel, Apache Drill is an open source, low-latency SQL query engine for Hadoop and NoSQL that can query across data sources. It can handle flat, fixed schemas and is purpose-built for semi-structured and nested data.
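For example, assuming a hypothetical file at /data/clicks.json, Drill can query the raw JSON in place, with no CREATE TABLE and no load step beforehand:

-- Query the file directly through Drill's file-system storage plugin.
SELECT t.user_id, t.url
FROM dfs.`/data/clicks.json` t
LIMIT 10;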

What does it mean to be "the" business-enabling technology? Think real-time business intelligence. Drill is opening the door to this inevitable future of reduced cycle times for data processing to support faster responses to opportunities and threats. Ultimately, the faster you can ask a question and get the right answer, the better for your business.

Drill implements schema-on-the-fly. This means that when a new data format arrives, nothing needs to be done before Drill can process it. No DBAs are needed to build and maintain schema definitions. Commercial off-the-shelf business intelligence tools can communicate with Drill because Drill implements standards: it is ANSI SQL:2003-compliant and ships with JDBC and ODBC drivers. The business doesn't need to adopt new tools to work with all the data from all data sources.
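As a rough sketch of what schema-on-the-fly looks like in practice (the directory and field names are invented): suppose the application team adds a device_type field to the JSON it emits. The same Drill query picks up the new field immediately, and for records written before the change, Drill simply returns NULL:

-- No DDL, no reload: the new field is queryable as soon as it appears.
-- Pointing Drill at the directory queries every file inside it.
SELECT t.user_id, t.device_type
FROM dfs.`/data/clickstream` t
WHERE t.device_type = 'mobile';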

Naturally, for any new technology, an opposing view can always be considered. The question that may arise is: what developments are fueling the need for these new technologies? The dominant change in the industry is the adoption of data interchange formats such as JSON. Data that originates from applications publishing JSON doesn't need a DBA to structure the incoming data because it arrives already structured, eliminating the personnel and process bottleneck.
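JSON's self-describing structure is what makes this possible. As an illustrative sketch (the record layout and path are invented), suppose each record in an orders file nests a customer object and an items array; Drill reads the nested structure directly:

-- Each record looks like:
-- {"id": 1, "customer": {"name": "Ann"}, "items": [{"sku": "A1", "qty": 2}]}
SELECT o.id,
       o.customer.name AS customer_name,
       FLATTEN(o.items) AS item   -- one output row per array element
FROM dfs.`/data/orders.json` o;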

Drill fuels data agility by allowing users to perform self-service data exploration and data source management, whether that means adding a new data source or adapting to a change in the structure of incoming data.

Agility in Your Enterprise

Data agility should be a key element of all your big data efforts going forward. People can analyze and explore data directly. Self-service data exploration removes the dependency on IT to set up data definitions and structures, and frees IT staff to perform more valuable, higher-leverage activities.

By incorporating agile technologies such as Hadoop and Apache Drill into your enterprise and its existing data management and analytics capabilities, you'll be able to channel your organization's agility into real-time business impact.

Source: tdwi