Since the 1980s, computers have grown in importance not just in a personal sense but in a business one as well. While technology has made life easier, it’s still powered by people (for now!) and therefore not infallible. You simply can’t trust your insights when you can’t trust your inputs.
How does this concept relate to the education industry? Mainly through hardware, sales software, and marketing analytics tools: the leap from sales binders to Excel spreadsheets may have made enrollment and sales data more streamlined and convenient, but the results ultimately depend on the data entered rather than the vehicle.
With human error occurring more often than we want to admit, false or faulty data can still leak into a document or calculation and contaminate outcomes, resulting in misaligned marketing strategies, increased costs, and business instability. The problem is amplified when large and varied sets of big data must be analyzed to help an organization make informed business decisions. This is often a complex process of examining large and varied data sets to uncover hidden patterns, unknown correlations, market trends, and buyer preferences that help administrations gain valuable insights, improve decisions, and create new products. The relationship between bad input and bad output can be summed up in a single phrase: garbage in, garbage out.
The evolution from Rolodex to spreadsheet, or even smartphone app, has certainly streamlined the collection of information, but it hasn’t entirely eliminated user error. Innovations in hardware and software have made it simple and cost-effective to amass, store, and evaluate copious amounts of sales and marketing data. If good information goes in, good data comes back out, and vice versa, which can significantly affect planning, buying, and selling decisions. In education marketing, user error makes it harder to know the client. In essence, bad data is as good as no data, and perhaps even worse.
So, what can we do? Careful data entry, attention to data integrity, and correct set-up ensure the best and most accurate results, but human error will always be with us. Bad data will always slip in; controlling for it, and engineering procedures that supervise data integrity, will help eliminate problems in decision making and avoid increased costs and organizational miscues. The best solution is to detect the ‘bad’ early and locate the problem before it gets worse. No one wants to find out a pipe is clogged only once the basement is flooded. Fortunately, we can do something about data quality, and admitting that you have a data quality problem is the first step toward a solution.
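To make the “detect the bad early” idea concrete, here is a minimal sketch of a validation gate that flags suspect records before they ever reach a spreadsheet or report. This is illustrative only: the field names (`name`, `email`, `enrollment_date`) and formats are assumptions, not taken from any particular enrollment system.

```python
# Hypothetical example: screen enrollment records for common entry errors
# before they contaminate downstream analysis. Field names are assumptions.
import re
from datetime import datetime

def validate_record(record):
    """Return a list of problems found in one enrollment record."""
    problems = []
    if not record.get("name", "").strip():
        problems.append("missing name")
    email = record.get("email", "")
    if not re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        problems.append("malformed email")
    try:
        # Expect ISO-style dates, e.g. 2019-09-03 (an assumed convention).
        datetime.strptime(record.get("enrollment_date", ""), "%Y-%m-%d")
    except ValueError:
        problems.append("bad enrollment date")
    return problems

records = [
    {"name": "Dana", "email": "dana@example.edu", "enrollment_date": "2019-09-03"},
    {"name": "", "email": "not-an-email", "enrollment_date": "09/03/19"},
]
for r in records:
    issues = validate_record(r)
    if issues:
        print("Reject:", r.get("email"), "->", ", ".join(issues))
```

A simple gate like this won’t catch every mistake, but it surfaces the clogged pipe long before the basement floods.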
Tune in to my next article to find out how segmenting data by audience, building a system of controls, implementing a tiered tracking system, and maintaining management oversight can help keep data on track. I’ll also share an important warning about overanalyzing data that can save you a great deal of turmoil and stress.