
When was the last time you opened a customer record to handle an inquiry, only to find the comment field populated with abbreviations and codes that made the documentation unintelligible?  Or, while quality checking the data side of a call, found entries in fields that “are not allowed” according to procedures but certainly are allowed by the database?  If you have, then you probably agree with Howard (2007): “Data quality isn’t just a data management problem, it’s a company problem.”  And it is certainly a customer contact problem we must address sooner rather than later. 

In his article, Howard describes how he was integrating three new data sources into his enterprise customer database when he discovered the sad truth about the lack of data quality management.  The first two files had the state code field correctly defined as a two-byte field; however, the first file had 64 distinct values defined for state codes, and the second had 67.  The third file had the state field defined as 18 bytes, with 260 distinct state codes.  At this point he began to ask, “whose problem is data quality anyway?” 

To work through his dilemma, Howard first looks to the data modeler, who could have defined a domain table of state and commonwealth codes that would force anyone using the database to enter a common code set.  He then considers the application development team, whose “application edit checks failed to recognize the 50 valid state codes or provide any text standardization conversions.”  He notes that whether a company has a quality assurance team depends largely on its size, but adds that even with such teams, data quality may be no better.  In his example, the fact that the state code should be only two bytes in length and conform to the USPS standard was overlooked.  Because there was no specific requirement to test, these data sources passed QA with flying colors.  Howard says, “More than likely, someone assumed everyone knew the 50 state codes and that writing validation code was a waste of time. After all, everyone knows the state abbreviations for Michigan, Minnesota and Missouri. (Don’t feel bad if you have to check – I did.)”  Finally, he turns to the business users.  An executive at one of Mr. Howard’s clients told him that data quality at their company was an afterthought: “Bad information was captured and passed on to the next application that assumed the first application had done its job.  Bad data is persistently stored in multiple data stores.” 
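The fix Howard points to is straightforward: normalize each entry and check it against a domain table before it reaches the database.  The sketch below is purely illustrative (it is not code from the article) and shows one way such an edit check might look:

```python
# Illustrative sketch of the edit check Howard describes as missing:
# normalize the raw entry, then validate it against a USPS domain table.

USPS_STATE_CODES = {
    "AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DE", "FL", "GA",
    "HI", "ID", "IL", "IN", "IA", "KS", "KY", "LA", "ME", "MD",
    "MA", "MI", "MN", "MS", "MO", "MT", "NE", "NV", "NH", "NJ",
    "NM", "NY", "NC", "ND", "OH", "OK", "OR", "PA", "RI", "SC",
    "SD", "TN", "TX", "UT", "VT", "VA", "WA", "WV", "WI", "WY",
}  # the 50 states; a production table would also cover DC and the territories

def validate_state(raw: str) -> str:
    """Trim and upper-case the entry, then reject anything outside the
    two-letter USPS code set."""
    code = raw.strip().upper()
    if code not in USPS_STATE_CODES:
        raise ValueError(f"invalid state code: {raw!r}")
    return code

print(validate_state(" mi "))  # a sloppy Michigan entry normalizes to MI
```

Had a check like this sat in front of Howard’s three source files, the 260 distinct “state codes” in the third file would have been rejected at capture time instead of persisting in the data store.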

Mr. Howard’s article presents a valid question: because data integrity is critical to success in today’s corporations, who is responsible for it?  Enterprise systems contain critical product, customer, and employee data.  This data is integrated into management reports, including dashboards and scorecards, and managers and executives use it to make both tactical and strategic decisions.  If no one takes responsibility for managing data quality, critical data elements and metrics may be incorrect or unusable, jeopardizing the success of an organization.  We all know what happens when a customer contact agent inputs undefined abbreviations and nonsense data into customer interaction fields.  What can we do as customer contact professionals to ensure the quality of this precious data? 

I propose we take the reins in the contact center by designing automated workflows and unified desktop interfaces that drive consistently accurate, best-practice data capture.  If we don’t, Mr. Howard suggests, no one else will (or is).


Howard, W. (2007). Data Quality Isn’t Just a Data Management Problem. DM Review, 17(10), 16.   


Learn about the “whats, whys, and hows” of using simulation in the contact center.  In addition, you’ll hear how simulation-based training can dramatically improve your performance.

A research study completed by the Georgia Institute of Technology found that using simulation technology in the training of customer contact agents significantly improves their productivity and effectiveness.  

The research project was independently designed and conducted by Dr. Goutam Challagalla and Dr. Nagesh Murthy of Georgia Tech’s DuPree College of Management. The resulting study compared conventional training with a simulation-based training approach. Simulation training uses the agent’s actual working environment – telephone, computer, system and workspace – to teach agents to effectively listen, think, talk and type at the same time.   

“Traditional training in contact centers puts management in a Catch 22 situation,” said Dr. Challagalla. “Historically, these centers use their best agents to help train new agents, resulting in serious productivity penalties. Our research clearly demonstrates that contact centers have much to gain by using simulation based training to build new hire skills. New agents get superior training without robbing days or weeks of productive time from experienced agents.” 

“We were extremely conservative in designing the experiments,” said Dr. Murthy. “For example, at Firm ‘A,’ the training content for both groups over eight days was identical except for 1 to 1.5 hours near the end, when one group used StarPerformer[1] and the other used role playing. If simulation training had been used throughout the curriculum, the superiority of this method might have shown even greater effects.” 

The professors secured the cooperation of two Fortune 50 companies to participate in the study. The participating companies worked closely with Dr. Challagalla and Dr. Murthy to develop valid call scenarios. Specific methodologies and metrics were designed for each company to adjust for its unique circumstances. 

Applied Study – Firm “A”

Simulation-based training reduces agents’ call handling time. Post-training call duration, on average, was at least 13% shorter for agents trained with StarPerformer than for the group trained with conventional role playing. Among the six call scenarios measured, the maximum mean reduction was 22% – from 215 seconds for the conventional group to 168 seconds for the simulation group – a reduction of 47 seconds.

Simulation-based training provides more uniform results when perceived usefulness is taken into account. In both groups, agents who rated the usefulness of their training favorably processed calls faster after training than agents who gave less favorable evaluations. In the StarPerformer group, even the agents giving the lowest “usefulness” ratings performed faster than agents from the traditional group who gave high marks to their training.  

The simulation group handled post-training calls more pleasantly. The simulation group, whose members could listen to how they responded to calls, scored higher after training on Firm A’s “pleasantness” metric, a customer satisfaction measure. 

Experimental Study – Firm “B”

The Firm “B” experiment focused on a single performance measure – accuracy. Participants in the StarPerformer group gave 8% more correct responses than participants trained with role-playing methods. The “accuracy gap” between the two groups widened as the complexity of the call scenario increased. In addition, the StarPerformer group rated the usefulness of their training higher than the traditional group did.

[1] StarPerformer was developed by Advertech, Ltd. StarPerformer is the next generation of simulation-based training.