May 25, 2021

Millions of Arizona Citizens Receive Benefits With the Help of an AI-powered Chatbot



Key Challenges

   Improve Program Service Evaluator training
   Enable Program Service Evaluators to obtain policy information without searching the entire policy manual
   Deliver conversational responses to Program Service Evaluators

Policies Prompt a Need for More Efficient Training

Our client, the Arizona Department of Economic Security (DES), needed to improve its Program Service Evaluator (PSE) training. PSEs are responsible for administering benefits and guiding applicants through the application process. During the application process, PSEs refer to an online policy manual covering the state’s various benefit programs. To ensure that qualified Arizona residents receive benefits, the policy manual includes specific guidelines and procedures. PSEs search the manual using keywords and then communicate the relevant policy information to benefit recipients.

PSEs can search the policy manual using keywords, but their search terms may not match the specific language of the manual. To find information, PSEs regularly contact experienced coworkers. Because PSEs often need help, senior members of the DES policy team realized they could save time by providing standard responses to common questions. The policy team asked us to use advances in artificial intelligence to propose solutions that would save time for their senior staff members. After a detailed analysis, we proposed an innovative solution built on Microsoft Azure Cognitive Services with a chatbot interface.

A chatbot offered several advantages over the PSEs’ previous methods of gathering information. A chatbot would reduce the time commitments of senior employees by answering common questions with consistent replies. It would also allow PSEs to obtain information from the DES policy manual without using the manual’s search function: PSEs would ask the chatbot questions in everyday language, and the chatbot would return information validated against the manual. A chatbot that understood the contents of the policy manual would reduce the time PSEs spent searching for policy information. Ultimately, the chatbot would enable the PSEs to evaluate benefits applications more efficiently.
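The case study does not name the specific Cognitive Services component, but Azure’s QnA Maker service fits the interaction described here: a PSE asks a question in everyday language, and the knowledge base returns candidate answers with confidence scores. The sketch below is illustrative only; the endpoint, knowledge base ID, and key are placeholders, not DES values.

```python
import requests

# Placeholder values -- the real endpoint, knowledge base ID, and key
# belong to the DES deployment and are not part of this case study.
QNA_ENDPOINT = "https://<your-qnamaker-resource>.azurewebsites.net"
KNOWLEDGE_BASE_ID = "<knowledge-base-id>"
ENDPOINT_KEY = "<endpoint-key>"

def ask_policy_bot(question: str, top: int = 3) -> list:
    """Send a natural-language question to a QnA Maker-style knowledge
    base and return the candidate answers with their confidence scores."""
    url = f"{QNA_ENDPOINT}/qnamaker/knowledgebases/{KNOWLEDGE_BASE_ID}/generateAnswer"
    headers = {
        "Authorization": f"EndpointKey {ENDPOINT_KEY}",
        "Content-Type": "application/json",
    }
    response = requests.post(url, headers=headers, json={"question": question, "top": top})
    response.raise_for_status()
    return [
        {"answer": a["answer"], "score": a["score"]}
        for a in response.json().get("answers", [])
    ]

# Example: a PSE asks a question in everyday language.
if __name__ == "__main__":
    for candidate in ask_policy_bot("Does a college student qualify for nutrition assistance?"):
        print(f'{candidate["score"]:.0f}%  {candidate["answer"][:80]}')
```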

Incremental Improvements with Agile Approach

We divided the chatbot development into four stages: Preview, MVP, MVP+, and Pilot (see Figure 1). We released the first Preview build of the chatbot within three weeks of starting the project. The initial build allowed us to gather early feedback.

Figure 1: Project Stages

The Preview build of the chatbot responded to PSE questions using a knowledge base of stored questions and responses. Early testing showed that we still needed to refine the chatbot.

The first challenge with the Preview build was that it did not adequately address the size and detail of the policy manual. The Preview build’s knowledge base covered 500 of the most common PSE questions and responses, but it did not contain enough information to address the intricacies of the manual. During the Preview stage, PSEs frequently asked common questions the chatbot was unable to answer, and the chatbot often returned answers unrelated to the PSEs’ queries.

A second challenge was that the initial build’s search function did not meet client requirements. The old policy manual search engine prioritized the frequency of typed keywords, while our chatbot’s search function prioritized question stems. As a result, experienced PSEs did not find our search function intuitive. To meet their expectations, the chatbot needed to return a broad field of results when given a single keyword and a specific result when multiple keywords were used.
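The ranking logic is not described in detail, but the requirement can be illustrated with a toy keyword-overlap score: a single keyword matches many stored questions and returns a broad field of results, while additional keywords narrow the field to a specific entry. The question and response pairs below are invented for illustration.

```python
# Illustrative only: a toy keyword-overlap ranking over question/response
# pairs, showing how one keyword returns a broad field of results while
# several keywords narrow the field to a specific entry.
KNOWLEDGE_BASE = [
    {"question": "What are the income limits for nutrition assistance?",
     "response": "See the income eligibility tables in the policy manual."},
    {"question": "What documents verify income for cash assistance?",
     "response": "Pay stubs, employer statements, or tax records."},
    {"question": "How is self-employment income calculated?",
     "response": "Average gross receipts minus allowable business expenses."},
]

def search(query: str, knowledge_base=KNOWLEDGE_BASE):
    """Rank entries by how many query keywords appear in the stored question."""
    keywords = set(query.lower().split())
    scored = []
    for entry in knowledge_base:
        overlap = keywords & set(entry["question"].lower().split())
        if overlap:
            scored.append((len(overlap), entry))
    # More matching keywords -> higher rank; ties keep manual order.
    return [entry for _, entry in sorted(scored, key=lambda pair: -pair[0])]

print(len(search("income")))                              # broad field: 3 results
print(search("income limits nutrition")[0]["response"])   # narrowed to one entry
```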

The challenges we encountered in the Preview stage defined the MVP build. The chatbot needed to mimic the behavior of modern search engines and provide conversational responses to questions phrased in natural language. Its knowledge base also needed to grow, so we expanded it from 500 question and response pairs to 5,000.

Producing Refined Results

During the MVP phase, we automated the generation of questions and responses by having the chatbot crawl policy content. Whenever content was added to, revised in, or deleted from the manual, the chatbot automatically crawled the manual's pages and updated the knowledge base. The new build also allowed users to narrow their search by category to further refine results.
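The case study does not detail the crawl mechanism. One plausible shape, sketched here with assumed page URLs and helper names, is to hash each manual page to detect changes and turn section headings and their body text into candidate question and response pairs.

```python
import hashlib
import requests
from bs4 import BeautifulSoup

# Hypothetical manual pages; the real DES policy manual URLs are not listed here.
MANUAL_PAGES = ["https://example.org/policy/nutrition", "https://example.org/policy/cash"]

page_hashes: dict[str, str] = {}      # last-seen content hash per page
knowledge_base: list[dict] = []       # generated question/response pairs

def crawl(pages=MANUAL_PAGES):
    """Re-crawl manual pages and regenerate pairs for any page whose content changed."""
    for url in pages:
        html = requests.get(url, timeout=30).text
        digest = hashlib.sha256(html.encode("utf-8")).hexdigest()
        if page_hashes.get(url) == digest:
            continue  # page unchanged since the last crawl
        page_hashes[url] = digest

        soup = BeautifulSoup(html, "html.parser")
        # Drop any stale pairs generated from this page, then rebuild them.
        knowledge_base[:] = [p for p in knowledge_base if p["source"] != url]
        for heading in soup.find_all(["h2", "h3"]):
            body = heading.find_next("p")
            if body:
                knowledge_base.append({
                    "question": heading.get_text(strip=True),
                    "response": body.get_text(strip=True),
                    "source": url,
                })
```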

Results were further refined through continuous user feedback. If users struggled to find the information they needed, a new question and response pair was generated with help from the policy team. This structured approach to adding question and response pairs offered a substantial advantage over the previous build, in which additions were unstructured, and it significantly improved the speed at which the bot learned.
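The feedback workflow is not spelled out; a common pattern, sketched below with an assumed confidence threshold and helper names, is to queue any query whose best answer scores too low so the policy team can author the missing question and response pair.

```python
from datetime import datetime, timezone

CONFIDENCE_THRESHOLD = 50.0    # assumed cutoff; the real value is not stated
review_queue: list[dict] = []  # low-confidence queries awaiting the policy team

def record_feedback(query: str, best_score: float) -> None:
    """Queue low-confidence queries so the policy team can author a new pair."""
    if best_score < CONFIDENCE_THRESHOLD:
        review_queue.append({
            "query": query,
            "best_score": best_score,
            "logged_at": datetime.now(timezone.utc).isoformat(),
            "status": "needs_policy_review",
        })

def approve_pair(item: dict, response: str, knowledge_base: list) -> None:
    """Policy team supplies the authoritative response; the pair joins the knowledge base."""
    knowledge_base.append({"question": item["query"], "response": response})
    item["status"] = "published"
```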

We conducted weekly review meetings and progressively increased the audience size (Figure 1). In these meetings, we examined specific chatbot queries and identified mismatched keywords. As we addressed concerns raised by PSEs, they felt ownership of the outcome, and PSEs and supervisors became champions for the chatbot. Through this extensive training, the chatbot learned quickly and eventually returned results with greater than 90 percent accuracy.

Considering Users First

The chatbot is currently used by over 1,800 PSEs with varying degrees of expertise. PSEs access the chatbot through a web interface and Skype for Business. Administrators can view the question and response database, manually edit the questions and responses if needed, and manually trigger the crawl function if the database needs to be updated.

We designed a web interface featuring a welcoming, friendly avatar named Sean. The interface provides options to track case numbers, resize the bot, and export conversations. When users type an ambiguous question, the chatbot offers multiple possible responses (with references).
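The disambiguation behavior might look something like the following sketch: when the top candidates score too closely to pick a single winner, the bot lists each candidate with its policy manual reference. The margin, answers, and reference codes are illustrative, not taken from the DES deployment.

```python
AMBIGUITY_MARGIN = 10.0  # assumed: scores within this margin are "too close to call"

def build_reply(candidates: list[dict]) -> str:
    """Return one answer when a clear winner exists, otherwise list the
    close contenders with their policy-manual references."""
    if not candidates:
        return "I couldn't find that in the policy manual. Try rephrasing your question."
    best = candidates[0]
    close = [c for c in candidates if best["score"] - c["score"] <= AMBIGUITY_MARGIN]
    if len(close) == 1:
        return f'{best["answer"]} (see {best["reference"]})'
    lines = ["Your question could refer to several policies:"]
    lines += [f'{i}. {c["answer"]} (see {c["reference"]})' for i, c in enumerate(close, start=1)]
    return "\n".join(lines)

# Illustrative candidates, sorted by score as the knowledge base would return them.
print(build_reply([
    {"answer": "Income limits for nutrition assistance", "score": 62.0, "reference": "Section A"},
    {"answer": "Income limits for cash assistance", "score": 58.5, "reference": "Section B"},
]))
```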

PSEs can also use our chatbot via Skype for Business. Skype interactivity posed significant challenges, as the interface had to be entirely text-based. We created intuitive menu options that users selected via number input. The completed Skype interface possessed all the functionality of the web interface.
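A minimal sketch of that numbered-menu pattern, with illustrative menu items rather than the production option set, could look like this:

```python
def render_menu(options: list[str]) -> str:
    """Render choices as a numbered, text-only menu suitable for a Skype message."""
    lines = ["Reply with a number to choose an option:"]
    lines += [f"{i}. {label}" for i, label in enumerate(options, start=1)]
    return "\n".join(lines)

def handle_reply(reply: str, options: list[str]):
    """Interpret a numeric reply; return None so the caller can re-send the menu on bad input."""
    if reply.strip().isdigit() and 1 <= int(reply) <= len(options):
        return options[int(reply) - 1]
    return None

# Illustrative menu items, not the production option set.
menu = ["Search the policy manual", "Track a case number", "Export this conversation"]
print(render_menu(menu))
print(handle_reply("2", menu))   # -> "Track a case number"
```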

We also created an easy-to-use admin portal. The admin portal allows users to customize chatbot responses, manually trigger policy database crawls, track case numbers, and view response metrics.

The chatbot interface and admin portal resulted in a user-friendly solution. PSEs unfamiliar with the implementation can interact with it, understand it, and use it proficiently within minutes. As the DES project director observed, the chatbot integrated seamlessly into the PSEs’ workflow.

Going Live: Distributing Benefits with AI-driven Technology

The DES chatbot has increased evaluation efficiency for over 1,800 PSEs and improved processing time for millions of Arizona benefits recipients. The chatbot provides PSEs with speedy responses, successfully answering hundreds of queries per day.

Reflecting on the project, our team lead identified four significant factors that differentiated the DES chatbot from others. First, the policy manual was large and dense, with wording that resembled legal statutes; the bot simplifies references to the manual with results that are more than 90 percent accurate. Second, unlike most chatbots, ours auto-trains from site content, and we enhanced that content further through user feedback loop training. Third, the intuitive user interface offers multiple responses to ambiguous questions, which drastically reduces the number of interactions required to find the desired result. Finally, the incremental review cycle allowed us to tailor the chatbot to client requirements and drove user acceptance and adoption.

Feedback from DES has been overwhelmingly positive. DES Chief Information Officer Sanjiv Rastogi is optimistic. He anticipates the chatbot’s role will expand to suit the department’s future needs: “MAQ Software helped us decide on and implement a solution built on Azure with cognitive services, which provides us the grow-as-you-go infrastructure, platform, SaaS, and AI integration DES needs. The level of confidence we have in this solution allows us to build, not just for today, but as an evergreen platform that will bring DES into the future.”