
A Gorilla lounging around.

Image via Wikipedia

Have you heard about the study "Gorillas in Our Midst"? It is based on an experiment in which people are asked to watch a video of a basketball game and count the number of passes one of the teams makes. A minute or so into the tape, while the viewers are busy counting passes, a woman in a gorilla suit walks onto the screen, stops, faces the camera, and beats her fists on her chest. Fifty percent of the people who watch the video don't see the gorilla. I think life in the contact center is much like this experiment.

Each day, we focus on the task of “counting the passes” in our own “basketball game.” We know what we have to do to get through the day, and we do it efficiently.  But, we miss a lot by being so focused.  What would someone without so much intentional focus see? Does our harried pace keep us from observing something as obvious as a gorilla in a basketball game?  And, might the solution to some menacing problem be found in what we are not seeing?

A good way to check whether we are “missing the gorilla” is to ask the new people on the team what they see.  These individuals haven’t been indoctrinated into our carefully orchestrated day, and therefore often observe what we miss.

Keep an open mind. You may hear some things you think are impossible.  I’ll bet there were several people who didn’t believe they had missed something as obvious as a gorilla in the middle of the video. When we are hyper-focused, we become blind to everything in our peripheral vision. Believe me, that’s where those gorillas love to dwell.

In which fifty percent are you – do you spot the gorillas in your midst every day, or have some slipped right past you?


Can you change people's habits and attitudes simply by making a task seem fun? Watch what a group of scientists did, using fun as the incentive, to get people to use a long staircase with a moving escalator right next to it. At first almost no one took the stairs; roughly 97% of people rode the escalator. After the scientists intervened, 66% more people took the stairs than before. This is not a gimmick; it has practical value. In the video, you can observe exactly what the scientists did and how they reversed human behavior by inserting fun.

http://www.youtube.com/watch?v=2lXh2n0aPyw&feature=player_embedded

See other examples of the fun theory in action at:  http://thefuntheory.com/

“This site is dedicated to the thought that something as simple as fun is the easiest way to change people’s behaviour for the better. Be it for yourself, for the environment, or for something entirely different, the only thing that matters is that it’s change for the better.”

Simulations can be constructed to present a “game-like” atmosphere.  People learn while having fun.

I’ll bet we could come up with more ways to change behavior in the contact center by making it more fun!

When was the last time you opened a customer record to handle an inquiry, only to find the comment field populated with abbreviations and codes that made the documentation unintelligible? Or, while quality checking the data side of a call, found entries in fields that are "not allowed" according to procedures but certainly are allowed by the database? If so, then you probably agree with Howard (2007): "Data quality isn't just a data management problem, it's a company problem." And it is certainly a customer contact problem we have to address sooner rather than later.

In his article, Howard describes how he was in the process of integrating three new data sources into his enterprise customer database when he discovered the sad truth about the lack of data quality management. The first two files had the state code field correctly defined as a two-byte field; however, the first file had 64 distinct values defined for state codes and the second had 67. The third file had the state field defined as 18 bytes, with 260 distinct state codes. At this point he begins to ask, "Whose problem is data quality anyway?"

To untangle his dilemma, Howard first looks to the data modeler, who could have defined a domain table of state and commonwealth codes that would force anyone using the database to enter a common code set. He then considers the application development team, whose "application edit checks failed to recognize the 50 valid state codes or provide any text standardization conversions."

Next he turns to quality assurance. He notes that whether a company has a quality assurance team depends largely on its size, but adds that data quality may be no better even with one. In his example, the requirement that the state code be only two bytes long and conform to the USPS standard was overlooked; because there was no specific requirement to test, these data sources passed QA with flying colors. Howard says, "More than likely, someone assumed everyone knew the 50 state codes and that writing validation code was a waste of time. After all, everyone knows the state abbreviations for Michigan, Minnesota and Missouri. (Don't feel bad if you have to check – I did.)"

Finally, he turns to the business users. An executive at one of Mr. Howard's clients told him that data quality at their company was an afterthought: "Bad information was captured and passed on to the next application that assumed the first application had done its job. Bad data is persistently stored in multiple data stores."
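The edit check Howard describes is simple to implement at capture time. Here is a minimal sketch in Python of the "domain table" idea – the function name and the name-to-code map are illustrative, not from Howard's article – which normalizes free-text input to a valid USPS two-letter code or rejects it:

```python
# Domain table: the 50 USPS two-letter state codes plus DC.
VALID_CODES = {
    "AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DE", "FL", "GA",
    "HI", "ID", "IL", "IN", "IA", "KS", "KY", "LA", "ME", "MD",
    "MA", "MI", "MN", "MS", "MO", "MT", "NE", "NV", "NH", "NJ",
    "NM", "NY", "NC", "ND", "OH", "OK", "OR", "PA", "RI", "SC",
    "SD", "TN", "TX", "UT", "VT", "VA", "WA", "WV", "WI", "WY",
    "DC",
}

# A few full-name spellings seen in free-text fields (illustrative, not exhaustive).
NAME_TO_CODE = {
    "MICHIGAN": "MI", "MINNESOTA": "MN",
    "MISSOURI": "MO", "MISSISSIPPI": "MS",
}

def normalize_state(raw):
    """Return a valid two-letter USPS code, or None if the value fails the check."""
    value = raw.strip().upper().rstrip(".")   # text standardization conversion
    value = NAME_TO_CODE.get(value, value)    # map known full names to codes
    return value if value in VALID_CODES else None
```

A check like this, applied when the data is captured, would have collapsed the 260 distinct values Howard found in his third file down to the valid code set.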

Mr. Howard's article presents a valid question: because data integrity is critical to success in today's corporations, who is responsible for it? Enterprise systems contain critical product, customer, and employee data. This data is integrated into management reports, including dashboards and scorecards, that managers and executives use to make both tactical and strategic decisions. If no one takes responsibility for managing data quality, critical data elements and metrics may be incorrect or unusable, jeopardizing the success of the organization. We all know what happens when a customer contact agent enters undefined abbreviations and nonsense data into customer interaction fields. What can we do as customer contact professionals to ensure the quality of this precious data?

I propose we take the reins in the contact center by designing automated workflows and unified desktop interfaces that drive consistently accurate, best-practice data capture. If we don't do it, Mr. Howard suggests, no one else will (or currently is).

Reference

Howard, W. (2007). Data Quality Isn’t Just a Data Management Problem. DM Review, 17(10), 16.   

Learn about the “whats, whys, and hows” of using simulation in the contact center.  In addition, you’ll hear how simulation-based training can dramatically improve your performance.

http://www.screencast.com/t/5XqIa4riZ

A research study completed by the Georgia Institute of Technology found that using simulation technology in the training of customer contact agents significantly improves their productivity and effectiveness.  

The research project was independently designed and conducted by Dr. Goutam Challagalla and Dr. Nagesh Murthy of Georgia Tech's DuPree College of Management. The resulting study compared conventional training with a simulation-based training approach. Simulation training uses the agent's actual working environment – telephone, computer, system, and workspace – to teach agents to effectively listen, think, talk, and type at the same time.

“Traditional training in contact centers puts management in a Catch-22 situation,” said Dr. Challagalla. “Historically, these centers use their best agents to help train new agents, resulting in serious productivity penalties. Our research clearly demonstrates that contact centers have much to gain by using simulation based training to build new hire skills. New agents get superior training without robbing days or weeks of productive time from experienced agents.”

“We were extremely conservative in designing the experiments,” said Dr. Murthy. “For example, at Firm ‘A,’ the training content for both groups over eight days was identical except for 1 to 1.5 hours near the end, when one group used StarPerformer[1] and the other used role playing. If simulation training had been used throughout the curriculum, the superiority of this method might have shown even greater effects.” 

The professors secured the cooperation of two Fortune 50 companies to participate in the study. The participating companies worked closely with Dr. Challagalla and Dr. Murthy to develop valid call scenario situations. Specific methodologies and metrics were designed for each company to adjust for their unique circumstances. 

Applied Study – Firm “A”

Simulator-based training reduces agents' call handling time. Post-training call duration, on average, was at least 13% shorter for agents trained with StarPerformer than for the group trained with conventional role playing. Among the six call scenarios measured, the maximum mean reduction was 22% – from 215 seconds for the conventional group to 168 seconds for the simulation group – a reduction of 47 seconds.

Simulation-based training provides more uniform results when perceived usefulness is taken into account. In both groups, agents who rated their training as useful processed calls faster after training than agents who gave less favorable evaluations. But in the StarPerformer group, even the agents giving the lowest “usefulness” ratings performed faster than agents from the traditional group who gave their training high marks.

The simulation group handled post-training calls more pleasantly. This group, which could listen to how they responded to calls, scored higher after training on Firm A's “pleasantness” metric, a customer satisfaction measure.

Experimental Study – Firm “B”

The Firm “B” experiment focused on a single performance measure: accuracy. Participants in the StarPerformer group scored 8% more correct responses than participants trained with role-playing methods, and this accuracy gap between the two groups widened as the complexity of the call scenario increased. In addition, the StarPerformer group rated their training as more useful than the traditional group rated theirs.


[1] StarPerformer was developed by Advertech, Ltd. StarPerformer is the next generation of simulation-based training.