Tuesday, June 18, 2019

Case Study: Machine Learning Drives Support Ticket Prioritization

Key Challenges

   Respond to 30 complex support tickets daily.
   Prioritize urgent tickets from key stakeholders automatically.
   Automatically forward support tickets to the correct team member.

Timely Support Requires Prioritization

User support is a key DataOps function. Large data systems generate hundreds of support requests per month. Teams typically respond to support requests in the order received. But adhering to the order received risks losing urgent requests in the ticket backlog. To ensure that teams address urgent requests quickly, they must implement prioritization systems.

Every day, 15,000 users depended on our client’s reporting infrastructure for business metrics. To support those users, our nine-member team tracked up to 30 complex support tickets per day. The support tickets ranged from feature requests to urgent data needs. Answering the tickets in the order received wouldn’t work: the team couldn’t risk ignoring urgent requests or overlooking tickets from key stakeholders. The DataOps team needed a system that would automatically prioritize the tickets, analyze their content, locate similar but previously resolved tickets, and categorize each ticket.

Instant Analysis with Azure Databricks

The first version of the support ticket prioritization system used on-premises servers. The system improved prioritization but required manual intervention for processing. To improve efficiency, the team began developing a cloud-based solution.

Azure Databricks was the key component of the cloud-based prioritization solution. Azure Databricks allowed the team to run machine learning models directly on cloud-based data. The system starts with emails that users send to a support alias. Every five minutes, the system collates and processes the emails through Azure Data Factory. Azure Databricks notebooks then run an AI model that analyzes the support tickets. The team trained the AI model for one month using responses from a 9,000-ticket database. The AI model compares the content of each ticket to previous tickets, identifying high priority topics and users. The model recommends the best support team to answer each ticket, categorizing the ticket into an issue and sub-issue. The system also pairs each support request with previously resolved reference tickets. Finally, a Power BI report displays the ticket priorities, categories, sub-categories, and reference tickets.
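The case study does not publish the model itself, but the core idea of comparing each new ticket's text against previously resolved tickets to surface reference tickets can be sketched with a simple bag-of-words cosine similarity. The helper names and sample tickets below are illustrative, not the team's production code:

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Convert ticket text into a bag-of-words term-frequency vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two term-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_reference_tickets(new_ticket, resolved_tickets, top_n=3):
    """Return the previously resolved tickets most similar to a new ticket."""
    new_vec = vectorize(new_ticket)
    scored = [(cosine_similarity(new_vec, vectorize(t)), t) for t in resolved_tickets]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [t for score, t in scored[:top_n] if score > 0]

resolved = [
    "report fails to refresh with timeout error",
    "request access to sales dashboard",
    "urgent: revenue data missing from daily report",
]
print(rank_reference_tickets("daily report missing revenue numbers", resolved))
```

A production system would train on the full 9,000-ticket history and add priority and routing signals, but the similarity ranking above captures the reference-ticket pairing step.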

Fast Responses, Increased Satisfaction

The cloud-based ticket prioritization system is extremely efficient. Previously, the DataOps team required two and a half days to resolve a ticket. Now, tickets are resolved within one and a half days, despite the user base increasing by 50%. The prioritization system also improved training efficiency. Newly onboarded DataOps team members refer to the reference tickets to create accurate responses.

The quick ticket response time increased user satisfaction. Before the team implemented the system, there were significant backlogs. Now, tickets are categorized and prioritized within five minutes of creation. There is a minimal backlog with only 10 to 15 open tickets at a time. The DataOps team plans to incorporate the ticket prioritization system into future DataOps projects. As the support team lead states, “The ticket prioritization system gets tickets to the right people, with the right priority, in the right order. It will greatly benefit our clients for a long time.”

Friday, June 14, 2019

Case Study: Real-Time Emotion Analysis Using AI

Key Challenges

   Analyze video footage to identify faces in a crowd.
   Detect emotions by analyzing facial expressions.
   Track emotions on a real-time graph to gauge overall attendee sentiment.

A Better Way to Collect Feedback

One of our core values is adopting the latest technology. To deliver the best software for our clients, we grow our capabilities through internal initiatives. Recent initiatives used Artificial Intelligence (AI) to improve business efficiency. In one such initiative, we used AI to improve audience feedback evaluation.

Traditionally, audience sentiment is collected using surveys. But surveys are costly and yield inconsistent results. To improve audience feedback, we created a product called Media Analytics. Media Analytics reveals audience sentiment through an intuitive Power BI dashboard. Media Analytics relies on unstructured video data input. To generate useful insights, we needed to create a data structure that would accommodate video data.

Powered by Microsoft Cognitive APIs

We used a host of video technologies to record audience reactions. We tested the video technologies using subjects ranging from conference keynote speakers to marketing focus groups. We then used Microsoft’s Cognitive Face API to identify sentiment person by person. Face API analyzed the sentiment by examining the video input frame by frame. Next, we used Microsoft’s Emotion API to match facial patterns to emotions. Emotion API plotted emotions ranging from anger to happiness to surprise. We extracted the emotion data using Microsoft’s Vision API. Power BI visuals delivered actionable insights based on the results. To protect the privacy of audience members, we did not store facial recognition data.
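The exact API payloads are not reproduced here, but the aggregation step, which turns per-frame, per-face emotion labels into an overall distribution like the one in Figure 1, can be sketched as follows. The frame data is invented for illustration:

```python
from collections import Counter

def emotion_distribution(frame_results):
    """Aggregate per-frame, per-face emotion labels into overall percentages.

    frame_results: a list of frames, each a list of emotion labels,
    one label per detected face (labels here are illustrative).
    """
    counts = Counter(label for frame in frame_results for label in frame)
    total = sum(counts.values())
    return {emotion: round(100 * n / total, 1) for emotion, n in counts.items()}

frames = [
    ["happiness", "neutral"],
    ["happiness", "surprise"],
    ["neutral", "neutral"],
]
print(emotion_distribution(frames))
# → {'happiness': 33.3, 'neutral': 50.0, 'surprise': 16.7}
```

Plotting these percentages over time yields the real-time sentiment graph described in the key challenges.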

Figure 1: Emotion distribution analysis

Quick, Custom Analysis

Media Analytics allows speakers to observe emotional trends in audiences. Speakers can review lectures and easily evaluate audience satisfaction without conducting surveys. Media Analytics removes bias from feedback, capturing audience sentiment in real time.

Media Analytics is customizable for many industries. We are currently adding features including multiple camera angles (audience and speaker) and multi-language support. If you would like to see a demonstration of Media Analytics or are interested in purchasing it, please contact sales@maqsoftware.com.

Thursday, June 13, 2019

Case Study: DataOps Practices Create Incremental Business Value

Key Challenges

   Ensure the partner portal delivers accurate data.
   Improve report reliability.
   Respond to user support questions quickly.

Building a Team to Support 440,000 Partners

In 2017, our client revolutionized its partner services. 440,000 partners relied on our client’s analytics platform, but the platform inefficiently sourced data from silos. To improve performance, the client consolidated their business reporting into a single partner portal. Partner reporting, business insights, and performance improved significantly. At the center of the new portal, a Power BI dashboard provided insights to over 13,000 users. The dashboard integrated data from over thirty sources, handling millions of queries annually. At the time, no other business had a larger Power BI implementation. Over the next year, dashboard and partner portal use continued to increase. The portal’s growing popularity spurred the need for a support team.

To support the portal, we assembled a 40-member team. We divided the team into three domains: infrastructure, data refresh, and user support. The infrastructure team monitored data pipelines and ensured that reports loaded quickly. The data refresh team refreshed the data from upstream sources and transaction systems. The user support team answered questions about reports, managed data queries, and located reports. Together, the teams bridged the gap between development and operations and delivered continuous incremental value to our client. The principles the teams followed are codified as DataOps.

DataOps is the convergence of agile methodologies, DevOps practices, and statistical process control. This article examines the roles our DataOps teams fulfill for our client’s partner portal, offering a glimpse into the benefits of DataOps practices.

Report Reliability

A primary responsibility of DataOps teams is ensuring report uptime. Reports drive critical business decisions, so uptime is crucial. To improve uptime, we created a monitoring framework. The framework simplifies maintenance, monitoring, and report validation at all stages of development. The framework also tracks basic functions (such as page rendering) by capturing snapshots and delivering verification emails. Tracking the emails and examining the snapshots allows our DataOps teams to quickly detect failures. The framework even validates report data against backend data, pinpointing errors in the data pipeline.

The DataOps teams further increased reliability by implementing supplementary monitoring and control systems. The teams created an infrastructure health monitoring system that detects potential failure scenarios and corrects them. The teams automated alerts to monitor CPU, memory usage, and disk space and detect spikes outside of acceptable limits. With automated alerts, the portal requires less active monitoring. Less time spent actively monitoring freed team resources to address more complex challenges.
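As a minimal sketch of the alerting logic described above (the metric names and limits are assumptions, not the client's actual configuration):

```python
# Acceptable limits for each monitored metric (illustrative values).
LIMITS = {"cpu_percent": 85, "memory_percent": 90, "disk_used_percent": 80}

def check_metrics(sample):
    """Return an alert message for every metric outside its acceptable limit."""
    return [
        f"ALERT: {metric} at {value} exceeds limit {LIMITS[metric]}"
        for metric, value in sample.items()
        if metric in LIMITS and value > LIMITS[metric]
    ]

# A single monitoring sample; CPU and disk usage have spiked.
sample = {"cpu_percent": 97, "memory_percent": 62, "disk_used_percent": 88}
for alert in check_metrics(sample):
    print(alert)
```

In practice such checks run on a schedule and route alerts to email or an incident system, which is what lets the portal run with less active monitoring.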

One early complex challenge was addressing the sheer number of data refresh jobs. Data refreshes must occur on schedule so users can base business decisions on the latest data points. Because of the partner portal’s large scale, the teams could not manually check each job. To improve the data refresh reliability, the DataOps teams built a data refresh tracking and monitoring system. The tracking system displayed the historical statuses of data refreshes at subsystem levels. The system triggered alerts if it detected delays in the operational data refresh pipeline. The tracking and monitoring system assured users that data points were never stale.
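The delay-detection rule can be sketched as a comparison of each job's last successful run against its schedule plus an acceptable grace period. The job names, intervals, and timestamps below are illustrative:

```python
from datetime import datetime, timedelta

def detect_delays(jobs, now, max_delay=timedelta(hours=1)):
    """Flag refresh jobs whose last successful run is older than the
    scheduled interval plus an acceptable delay."""
    stale = []
    for name, (interval, last_success) in jobs.items():
        if now - last_success > interval + max_delay:
            stale.append(name)
    return stale

now = datetime(2019, 6, 13, 12, 0)
jobs = {
    # job name: (scheduled refresh interval, last successful refresh)
    "sales_subsystem":   (timedelta(hours=4), datetime(2019, 6, 13, 9, 30)),
    "partner_subsystem": (timedelta(hours=4), datetime(2019, 6, 13, 2, 0)),
}
print(detect_delays(jobs, now))  # → ['partner_subsystem']
```

A check like this, run at subsystem level, is what allows the system to trigger alerts before users notice stale data.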

Cost Reduction

As the partner portal supported more users, the DataOps teams needed to monitor usage of cloud-based resources. In pay-as-you-go models, optimizing the usage of cloud-based resources is essential.

Resource monitoring revealed discrepancies from the client’s estimated usage. The usage varied depending on:

   The usability of the applications.
   The user base’s awareness of the available data and report assets.
   The user base’s training in using those assets for day-to-day business activities.
   The users’ actual usage of the assets.

Identifying the sources of usage variations enabled the DataOps teams to tailor the resources to the client’s needs.

Resource usage also varied depending on our client’s business cycles. By predicting spikes and drop-offs in activity, the DataOps teams could optimally provision cloud-based resources. The ability to review usage patterns also proved useful when rationing provisioned assets. The teams could detect deviations from planned usage and scale assets depending on actual usage. The system alerted the teams about unexpected peaks in usage. The alerts allowed the teams to detect improperly functioning applications and correct the issues quickly. The teams further reduced costs by scheduling virtual machines to shut down during periods of low demand.

Data Quality

Erroneous data reduces users’ trust in information systems, resulting in reduced data asset usage. To ensure users receive high-quality data, the DataOps teams adopted statistical process control (SPC) techniques. SPC emphasizes early detection of nonconforming data and ensures rapid responses. To detect errors, the team developed a trend monitoring tool that calculates rolling averages and movement trends of key data points. If key data points deviate beyond an acceptable threshold, the system generates alerts. The alerts allow the DataOps teams to correct the data in a timely manner.
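A trend check of this kind can be sketched as a rolling mean with control limits. The three-sigma threshold below is a common SPC convention, not necessarily the team's exact rule, and the data is invented:

```python
import statistics

def spc_alerts(values, window=7, sigmas=3):
    """Flag points that fall outside mean ± sigmas * stdev of the
    preceding `window` observations (a basic SPC control-limit check)."""
    alerts = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev and abs(values[i] - mean) > sigmas * stdev:
            alerts.append((i, values[i]))
    return alerts

# Daily row counts for a key data point; the sudden drop signals bad data.
daily_row_counts = [1000, 1010, 990, 1005, 995, 1002, 998, 400, 1001]
print(spc_alerts(daily_row_counts))  # → [(7, 400)]
```

Because the limits adapt to each data point's own history, the same check works across metrics with very different scales.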

User Support

Responding to user requests is a large part of the DataOps teams’ work. Prompt support responses are essential for a good user experience. Originally, when a user required access to certain assets, a DataOps team member had to validate the request. Requests were often pressing, and manual approval slowed the process. In response, the DataOps teams created a self-service tool for access requests. The tool, a simple request form, improved the user experience and significantly reduced the DataOps teams’ workload. A rules-based system automatically forwarded each request to an authorized approver, who could grant access immediately if the user had sufficient permissions. Alternatively, the approver could collate approved requests, allowing the operations team to provide access later.

DataOps Creates Operational Efficiencies

DataOps is essential for successfully planning, developing, and executing data solutions for all enterprises. DataOps tools and practices allowed our client to:

   Rapidly develop and enhance reports.
   Gather data optimally.
   Validate data.
   Ensure data availability for a large user base, aiding business decisions.

Our client’s partner portal allows 440,000 business partners to define goals according to uniform data points. Over the past year, the DataOps teams managed a 50% increase in reports and a tenfold increase in users without increasing team size. The teams sustained this growth by continually delivering incremental value, improving the availability, reliability, and quality of the data delivered.

Thursday, April 25, 2019

Case Study: Improving Power BI Premium Performance

Key Challenges

   Determine the cause of slow Power BI Premium load times.
   Create a list of steps to improve report load times.
   Teach the client how to assess future Power BI Premium load time inefficiencies.

Power BI Premium Expertise

Power BI report load times drive businesses. Slow load times result in unused reports and undiscovered insights. Fast load times result in frequent report use and actionable insights. We work with the Power BI team and build enterprise-grade Power BI Premium implementations. Because of our experience, we are experts in DAX and tabular models. Our knowledge—and the in-house tools we’ve created—enables us to improve Power BI implementations by pinpointing inefficiencies.

Assessing Our Client’s Report Load Times

Recently, a client asked us how to improve the load times of their Power BI Premium implementation. Power BI Premium offers the ability to share data with anyone without purchasing per-user licenses. Power BI Premium also adds configuration complexities beyond those associated with standard Power BI implementations. The client, a medium-sized airline, used Power BI Premium to track passenger and flight statistics. Their reports had loaded slowly for the previous six months.

Determining the cause of the slow load times required some analysis. We organized multiple rounds of discussion with the client. Before our first round of discussion, we analyzed all dedicated capacities. While there can be numerous causes for slow load times, we often find that report design needs improvement. Reports with many visuals and slicers take much longer to load. Data refresh times also need to be considered: are scheduled refreshes occurring during peak traffic times? How do user queries interact with scheduled refreshes? In this case, the client’s DAX queries were poorly written and caused significant load times when multiple queries executed concurrently.

After identifying the primary bottleneck in the reports, we examined the client’s partner reports and models. We reduced load times by performing the following steps:

   Disabled the auto date and time function.
   Used out-of-the-box visuals when possible.
   Used a collapsed drill-down hierarchy in matrix visuals.
   Reduced report-level filters.
   Reduced the number of slicers.
   Checked for query folding.
   Fixed erroneous measures.

Empowering the Client to Address Future Challenges

We presented our findings to the client in a report, but our work did not end there. We demonstrated how they could better utilize their Power BI Premium implementation in the future. We reviewed Power BI Premium behavior and capacity metrics. We also explained how metrics could be used to identify future load issues.

Ultimately, the client’s report performance improved by 57%. More importantly, our client could now independently identify and resolve performance issues. The project concluded with another happy client: “The optimizations made a huge difference! Our reports load faster than ever.”

Tuesday, April 16, 2019

Case Study: Millions of Arizona Citizens Receive Benefits Efficiently Using AI-powered Chatbot

Key Challenges

   Improve Program Service Evaluator training.
   Enable Program Service Evaluators to obtain policy information without searching the entire policy manual.
   Deliver conversational responses to Program Service Evaluators.

Policies Prompt a Need for More Efficient Training

Our client, the Arizona Department of Economic Security (DES), needed to improve its Program Service Evaluator (PSE) training. PSEs are responsible for administering benefits and guiding applicants through the application process. During the application process, PSEs refer to an online policy manual of various state benefit programs. To ensure that qualified Arizona residents receive benefits, the policy manual includes specific guidelines and procedures. PSEs search the manual using keywords. PSEs then communicate with benefit recipients.

PSEs can search the policy manual using keywords, but the search terms may not match the specific language of the manual. To find information, PSEs regularly contact experienced coworkers. Because PSEs often need help, senior members of the DES policy team realized they could save time by providing responses to common questions. The policy team asked us to use advances in artificial intelligence to propose solutions that would save time for their senior staff members. After a detailed analysis, we proposed an innovative solution built on Microsoft Azure Cognitive Services with a chatbot interface.

A chatbot offered several advantages over the PSEs’ previous methods of gathering information. A chatbot would reduce the time commitments of senior employees and could respond to common questions with stored replies. A chatbot would also allow PSEs to obtain information from the DES policy manual without using the manual's search function: PSEs would be able to ask the chatbot questions in everyday language, and the chatbot would return information validated against the manual. A chatbot that understood the contents of the policy manual would reduce the time PSEs spent searching for answers. Ultimately, the chatbot would enable the PSEs to evaluate benefits applications more efficiently.

Incremental Improvements with Agile Approach

We divided the chatbot development into four stages: Preview, MVP, MVP+, and Pilot. (See Figure 1). We released the first preview build of the chatbot within three weeks of starting the project. The initial build allowed us to get early feedback.

Figure 1: Project Stages

The Preview build of the chatbot responded to PSE questions with a knowledge base of stored questions and responses. Because of our earlier testing, we knew we still needed to refine our chatbot.

The first challenge with the Preview build was that it didn’t adequately address the size and detail of the policy manual. The Preview build’s knowledge base covered 500 of the most common PSE questions and responses. Still, the knowledge base did not contain enough information to address the intricacies of the manual. During the Preview stage, PSEs frequently asked common questions the chatbot was unable to answer, and the chatbot often returned answers unrelated to the PSEs’ queries.

A second challenge was that the initial build's search function did not meet client requirements. The old policy manual search engine prioritized the frequency of typed keywords, whereas our chatbot's search function prioritized question stems. As a result, experienced PSEs did not find our search function intuitive. To improve, the chatbot needed to return a broad field of results for a single keyword and a specific result when multiple keywords were used.
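The required behavior, a broad field of results for a single keyword that narrows toward a specific entry as keywords are added, can be sketched as keyword-hit scoring. The knowledge-base entries below are invented for illustration:

```python
def search(query, knowledge_base):
    """Rank knowledge-base entries by how many query keywords they contain.

    A single keyword matches broadly; each added keyword narrows the field.
    """
    keywords = set(query.lower().split())
    scored = []
    for question, answer in knowledge_base:
        hits = sum(1 for kw in keywords if kw in question.lower())
        if hits:
            scored.append((hits, question, answer))
    scored.sort(key=lambda item: item[0], reverse=True)
    return [(question, answer) for hits, question, answer in scored]

kb = [
    ("income eligibility limits", "see the eligibility policy section"),
    ("income verification documents", "see the verification policy section"),
    ("household income reporting deadline", "see the reporting policy section"),
]
print(len(search("income", kb)))            # broad: all three entries match
print(search("income deadline", kb)[0][0])  # narrow: best match ranks first
```

A production search would also weight question stems and synonyms, but the narrowing behavior comes from this intersection scoring.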

The challenges we encountered in the Preview stage defined the MVP build. Our chatbot needed to mimic the behavior of modern search engines. Our chatbot also needed to provide conversational responses to questions phrased in natural language. Finally, our chatbot's knowledge base also needed to grow. We expanded the chatbot's knowledge base from 500 question and response pairs to 5,000.

Producing Refined Results

During the MVP phase, we automated the generation of questions and responses when the chatbot crawled policy content. Whenever content was added, revised, or deleted from the manual, the chatbot automatically crawled the manual's pages and updated the knowledge base. The new build even allowed users to narrow the search categories to further refine results.
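One way to implement the crawl-and-update step is to hash each page and regenerate question and response pairs only for pages whose content changed. This is a sketch under assumed data shapes, not the production pipeline:

```python
import hashlib

def update_knowledge_base(pages, stored_hashes):
    """Return the pages whose content changed since the last crawl, plus
    the refreshed hash index. Changed pages would then be fed to the
    question-and-response generator."""
    changed = []
    new_hashes = {}
    for url, content in pages.items():
        digest = hashlib.sha256(content.encode()).hexdigest()
        new_hashes[url] = digest
        if stored_hashes.get(url) != digest:
            changed.append(url)
    return changed, new_hashes

# Crawled pages (illustrative) and the hash index from the previous crawl.
pages = {"policy/benefits": "Eligibility rules text", "policy/appeals": "Appeal steps text"}
stored = {"policy/benefits": hashlib.sha256(b"Eligibility rules text").hexdigest()}
changed, stored = update_knowledge_base(pages, stored)
print(changed)  # → ['policy/appeals']
```

Hashing keeps the recurring crawl cheap: unchanged pages are skipped, so only additions, revisions, and deletions trigger knowledge-base updates.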

Results were further refined through continuous user feedback. If users struggled to find the information they needed, a question and response pair was automatically generated with help from the policy team. The automatic generation of questions and responses offered a substantial advantage over the previous build. Earlier, the addition of question and response pairs was unstructured. Providing a structured approach for question and response pairs also significantly improved the speed at which the bot learned.

We conducted weekly review meetings and progressively increased the audience size (Figure 1). In review meetings, we acknowledged specific chatbot queries and identified mismatched keywords. As we addressed concerns raised by PSEs, they felt ownership for the outcome. PSEs and supervisors then became champions for the chatbot. Through our extensive training, the chatbot learned quickly. Eventually, the chatbot returned results with greater than 90 percent accuracy.

Considering Users First

The chatbot is currently used by over 1,800 PSEs with varying degrees of expertise. The PSEs access the chatbot through a web interface and Skype for Business. Administrators can view the question and response database, manually edit the questions and responses if needed, and manually trigger the crawl function if the database needs to be updated.

We designed a web interface featuring a welcoming, friendly avatar named Sean. The interface provides options to track case numbers, resize the bot, and export conversations. When users type an ambiguous question, the chatbot offers multiple possible responses (with references).

PSEs can also use our chatbot via Skype for Business. Skype interactivity posed significant challenges, as the interface had to be entirely text-based. We created intuitive menu options that users selected via number input. The completed Skype interface possessed all the functionality of the web interface.

We also created an easy-to-use admin portal. The admin portal allows users to customize chatbot responses, manually trigger policy database crawls, track case numbers, and view response metrics.

The chatbot interface and admin portal resulted in a user-friendly solution. PSEs unfamiliar with the implementation can interact with it, understand it, and use it proficiently within minutes. As the DES project director observed, the chatbot integrated seamlessly into the PSEs’ workflow.

Going Live: Distributing Benefits with AI-driven Technology

The DES chatbot has increased evaluation efficiency for over 1,800 PSEs and improved processing time for millions of Arizona benefits recipients. The chatbot provides PSEs with speedy responses, successfully answering hundreds of queries per day.

Reflecting on the project, our team lead identified four significant factors that differentiated the DES chatbot from others. First, the policy manual was large and dense; words and sentences in the manual resembled legal statutes. The bot simplifies references to the manual with results that are over 90 percent accurate. Second, unlike most chatbots, our chatbot auto-trains from site content, and we enhanced the content further through user feedback loops. Third, the intuitive user interface offers multiple responses to ambiguous questions, drastically reducing the number of interactions required to find the desired result. Finally, the incremental review cycle allowed us to tailor the chatbot to client requirements and drove user acceptance and adoption.

Feedback from DES has been overwhelmingly positive. DES Chief Information Officer Sanjiv Rastogi is optimistic. He anticipates the chatbot’s role will expand to suit the department’s future needs: “MAQ Software helped us decide on and implement a solution built on Azure with cognitive services, which provides us the grow-as-you-go infrastructure, platform, SaaS, and AI integration DES needs.”

Thursday, March 14, 2019

Case Study: A Better Way to Access and Organize Legal Documents

Key Challenges

   Improve document organization and collaboration without third-party tools.
   Support on-premises, cloud, and hybrid environments.
   Create a solution accessible from any device.

A Profession That Revolves Around Documents

Our client, the legal department of a large software company, needed a better way to organize their documents. Their existing solution, a large SharePoint implementation, had two disadvantages. First, finding the right documents and organizing the related materials required significant amounts of time. Second, because so much time was spent finding the correct documents, collaborative work progressed slowly. Third-party tools that improved functionality did exist, but our client did not want a third-party solution that would incur additional costs or maintenance.

Our Process: Add-in vs Add-on

Our client knew there was an easier way to find the information they needed. Previously, we had helped create a policy portal. Based on our past work with the client, they knew we would deliver a useful solution quickly. Our initial analysis showed that either a Microsoft Office add-in or add-on was necessary. An add-in or add-on would allow lawyers to search for and store documents within SharePoint without leaving the program where they created or received a document (such as Word, Outlook, Excel, PowerPoint, or the web). An add-in or add-on would also improve the efficiency of collaborative work. By creating a centralized repository, lawyers could manage documents more efficiently. We set a goal to create a solution accessible from any device that would support on-premises, cloud, and hybrid environments.

Our team began work by researching whether an add-in or add-on solution was more appropriate. Add-ons presented an insurmountable challenge for our client: each user would have to install the application and check for updates, and add-ons required on-premises resources. In contrast, add-ins pulled code from online resources, which made them more suitable for synchronizing a legal department of several hundred people and for access from any device.

We designed the initial version of the add-in around improving the search functionality of the database. When users opened Word or Outlook, they were presented with a thin pane that allowed them to search and contribute to SharePoint repositories without opening another window. In Outlook, legal department team members now dragged and dropped documents from their emails directly into a library of related documents. Libraries improved the effectiveness of collaborative work and ease of access. OneDrive and Delve integration also improved collaborative work. Most importantly, documents were automatically tagged, and metadata was automatically generated. Metadata dramatically improved searchability.

Expanding the Scope to Outside Organizations

As the development of the add-in progressed, our client realized they could market the add-in to others in the legal industry. At the time, many law firms relied on multiple third-party document management systems. Our add-in offered a central location to access and organize documents, simplifying workflows. Our add-in also allowed lawyers to collaborate on documents from any device—an industry first.

After numerous feedback sessions with attorneys from our client’s legal department, we completed the project. The client implemented the add-in in their legal department and encouraged adoption by numerous outside law firms. Few products offer the convenience of access to documents directly from Outlook or Word.

Thursday, March 7, 2019

Case Study: Custom Gantt Chart Improves B2B Communication

Key Challenges

   Create an advanced Gantt chart visual that displays all relevant construction details at a glance.
   Create an image carousel to display images of construction site progress.
   Enable the visual to scale smoothly to large projector screens.

Finding Communication Middle Ground

Our client, a local construction company, was building an office complex for a large software company. The construction company needed to present their progress to the software company’s executives, who needed to monitor construction progress. The construction managers deemed visits to the physical site inefficient, and presenting over-sized paper Gantt charts every day was impractical. We developed an innovative Power BI Gantt chart visual to easily display construction updates.

Going Beyond Out-of-Box Features

Power BI provides an out-of-box Gantt chart, but it was not adequate for the needs of the construction firm. Their physical Gantt charts showed information that couldn’t be easily transferred to the out-of-box (OOB) Power BI Gantt chart. We created a Gantt chart that was large enough to identify all relevant details at a glance. We also provided features unavailable in the OOB Power BI Gantt chart. Our Gantt chart allowed construction managers to display project milestones (as is typical of Gantt charts) and other details. Managers were able to indicate whether certain milestones were flexible or had hard deadlines. A small completion bar showed plan progress and milestone progress. Our client also wanted to include actual images of the construction site with plan overviews. To meet requirements, we created a separate custom visual called Image Carousel and linked the two visuals.

Screenshot of our custom Gantt chart

Our initial design did not account for form factors used for review presentations. About one week into the development of the custom visual, the construction company informed the engineering team the Gantt chart custom visual was constantly blurry. When our team asked about the screen resolution, the construction manager replied, “very large screens.” The construction manager's response prompted our engineering team to adopt the use of SVGs, which scale for resolution. Even the adoption of SVGs was insufficient. Our team was not able to determine a workaround until we realized that the "very large screens" were projectors.

Quick Delivery and Continuing Innovation

The project required that we create the Gantt chart and Image Carousel custom visuals in three weeks. One week was added at the end of the project for UI changes and user feedback. We completed the entire project in four weeks, prompting our client to comment, “The pace of your work was really outstanding!” One week later, a different client approached us asking for our Gantt chart custom visual. The visual proved so popular that we created an open source version to share on Microsoft AppSource. So far, our generalized version of the Gantt chart has been downloaded 50,000 times.