Tuesday, April 16, 2019

Case Study: 2.9 Million Arizona Citizens Receive Benefits Efficiently Using AI-powered Chatbot



Key Challenges

   Improve Program Service Evaluator training by providing responses to common questions.
   Enable evaluators to obtain policy information without searching the entire policy manual.
   Respond to policy manual questions in everyday language.

Complex Policies Prompt a Need for More Efficient Training

Our client, the Arizona Department of Economic Security (DES), needed to improve its Program Service Evaluator (PSE) training. PSEs are responsible for administering benefits and guiding applicants through the application process. As a part of this process, PSEs refer to an online policy manual that outlines guidelines and protocols for various state programs. After years of legislative and policy amendments, the manual is dense with legal terminology and technical language. PSEs are tasked with searching the manual using keywords, then translating the results into everyday language to communicate with their clients.

Although the PSEs can search the policy manual using keywords, their search terms often do not match the manual’s complex language. PSEs regularly have to turn to coworkers or senior employees for relevant information, which delays evaluations. Although less experienced team members will always need guidance, the senior members of the policy team realized that they could save a lot of time if PSEs had direct access to responses to common questions. From this realization, the mandate to create a chatbot intelligent enough to decipher the complex policy manual was born.

A chatbot offered several advantages over the PSEs’ previous means of gathering information. It would reduce time commitments for senior employees by answering common PSE questions with preformulated replies. It would also allow PSEs to obtain information from the DES policy manual without explicitly using the search function: PSEs could ask questions in everyday language, and the chatbot would return information validated against the manual. Lastly, a chatbot would let PSEs continue referencing the policy manual as they always had, with the benefit of a supplementary resource. An implementation that successfully “understood” the manual’s contents would dramatically reduce the time spent poring over it, ultimately enabling PSEs to evaluate benefits applications more efficiently.

Incremental Improvements and the Iterative Process

We divided chatbot development into four user acceptance testing (UAT) stages: Preview, MVP, MVP+, and Pilot (see Figure 1). We pushed out the first preview build of the chatbot within three weeks of starting the project. This initial build allowed us to gather early feedback and make course corrections.

Figure 1: Project UAT Stages

The first build of the chatbot responded to PSE questions by referencing a manually compiled knowledge base of stored questions and replies. From early user testing, however, we knew we needed to refine the approach.

One challenge with the first build was that it didn’t adequately address the size and complexity of the policy manual. Although the initial build’s knowledge base covered 500 of the most common PSE questions and responses, it simply did not contain enough information to address the intricacies of the manual. During this first stage, PSEs frequently asked questions that both our client’s policy team and our chatbot team thought the chatbot would be able to answer. Instead, the chatbot often returned answers unrelated to the PSEs’ queries.

Additionally, the first build often required the PSEs to phrase their questions in a manner that ran counter to how they searched the policy manual. The old policy manual search engine prioritized the frequency of typed keywords. For example, if a PSE searched for “earned income,” the search engine would return the result with the highest number of occurrences of the phrase. Experienced PSEs had therefore come to expect results in a certain order. Our chatbot needed to return a broad set of results when presented with a single keyword and a specific result when presented with multiple keywords, all while ordering results in the manner the PSEs expected.
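For illustration, here is a minimal sketch of the kind of keyword-frequency ranking the old search engine performed. The code and sample data are ours, not the DES implementation:

```python
# Illustrative sketch of frequency-based ranking; not the actual DES search code.
from collections import Counter

def rank_pages(pages, query):
    """Rank policy pages by how often the query keywords occur in them."""
    keywords = query.lower().split()
    scores = {}
    for title, text in pages.items():
        counts = Counter(text.lower().split())
        scores[title] = sum(counts[k] for k in keywords)
    # Highest keyword frequency first, matching the order PSEs expected.
    return sorted(scores, key=scores.get, reverse=True)

pages = {
    "Earned Income": "earned income limits ... earned income is counted when ...",
    "Unearned Income": "unearned income includes benefits such as ...",
}
print(rank_pages(pages, "earned income"))  # ['Earned Income', 'Unearned Income']
```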

With these challenges in mind, it became clear that the chatbot would need to move past a manually compiled question and response bank, and even reach beyond typical chatbot capabilities. Our chatbot needed to mimic the behavior of modern search engines yet provide conversational responses to questions phrased in natural language. The chatbot also needed to understand questions relating to all parts of the manual, which would ultimately result in the chatbot’s knowledge base expanding from roughly 500 question and response pairs to well over 5,000.

Producing Refined Results

To achieve such a significant change in behavior, we developed a method to generate questions and responses automatically when the bot crawled policy content. If DES added new content to the policy manual, the bot would automatically crawl the new pages and update its database. This ability to auto-update the question and response database was unique to this project and crucial to meeting DES’s workflow needs. This ensured that the chatbot could always access all the content from the manual. The new build even allowed users to narrow the search categories to further refine results.
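The generation logic itself isn’t detailed here, but a minimal sketch of the crawl-and-generate idea, assuming an HTML manual and a simple heading-based rule, might look like this:

```python
# Illustrative sketch only; the production crawler and generation rules are not
# described in the case study. Requires the requests and beautifulsoup4 packages.
import requests
from bs4 import BeautifulSoup

def crawl_policy_page(url):
    """Fetch one policy page and derive question/response pairs from its sections."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    pairs = []
    for heading in soup.find_all(["h2", "h3"]):
        paragraph = heading.find_next("p")
        if paragraph is None:
            continue
        # One simple generation rule: turn each section heading into a question
        # and use the section's first paragraph as the stored response.
        question = f"What does the policy say about {heading.get_text(strip=True)}?"
        pairs.append((question, paragraph.get_text(strip=True), url))
    return pairs

# Re-running the crawl on new or changed pages keeps the question/response
# database in sync with the manual, as described above.
```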

By automatically generating question and response pairs, the chatbot team was better able to incorporate the policy team’s knowledge to improve the bot through user feedback loop training. If users struggled to find the information they needed, we could now directly influence the chatbot’s machine learning process by connecting a user’s question with the exact page they were looking for. This offered a substantial advantage over the previous build, in which the inclusion of question and response pairs was unstructured. Additionally, providing a structured process for question and response pairs significantly improved the speed at which the bot learned.
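To make the feedback loop concrete, here is a minimal sketch under an assumed data structure: a reviewed question the bot missed becomes a direct mapping to the page it should have returned.

```python
def record_feedback(knowledge_base, user_question, correct_page):
    """Map a reviewed 'missed' question directly to the page it should return.

    knowledge_base maps normalized questions to policy pages; this structure is
    our illustration of the feedback-loop training described above.
    """
    knowledge_base[user_question.strip().lower()] = correct_page
    return knowledge_base

kb = {}
record_feedback(kb, "How is earned income verified?", "policy/earned-income#verification")
```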

We conducted weekly UAT meetings, progressively increasing the audience size (Figure 1). In these meetings, we reviewed specific chatbot queries and used them to identify mismatched keywords. This was crucial to improving the chatbot and to gaining acceptance and adoption within the PSE community. As PSEs and supervisors saw their concerns addressed, they felt ownership over the outcome and became champions for the chatbot. Through the testing process, the chatbot learned quickly, eventually returning results with 90+ percent accuracy.

Putting the Users First

User friendliness was of the utmost importance when creating the chatbot. The chatbot is used by over 1,800 PSEs with varying degrees of technical expertise. The PSEs need to access the chatbot both through a web interface and through Skype for Business. Also, administrators must be able to view the question and answer database at a glance, manually edit the questions and answers if needed, and manually trigger the crawl function if the database needs updating outside of the regularly scheduled crawls.

We designed a welcoming web interface that resembles a smartphone text message window, complete with a friendly avatar. This puts less tech-savvy users at ease by building on an already familiar user experience. The window also provides options to track case numbers, resize, and export conversations. When users type an ambiguous question, the chatbot offers multiple possible responses (with references), helping users narrow down the result without asking a series of follow-up questions.
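That disambiguation behavior can be sketched as follows; difflib similarity is a stand-in for the real matching model, and all names are illustrative:

```python
# Illustrative sketch; difflib similarity stands in for the real matching model.
import difflib

def answer(question, kb, top_k=3, clear_margin=0.15):
    """Return one response when confident, or several candidates when ambiguous.

    kb maps stored questions to (response, reference) pairs.
    """
    if not kb:
        return []
    scored = sorted(
        ((difflib.SequenceMatcher(None, question.lower(), q.lower()).ratio(), q)
         for q in kb),
        reverse=True,
    )
    if len(scored) > 1 and scored[0][0] - scored[1][0] < clear_margin:
        # Ambiguous: offer the top candidates, each with its manual reference.
        return [kb[q] for _, q in scored[:top_k]]
    return [kb[scored[0][1]]]
```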

In addition to the web interface, the PSEs needed to access the chatbot via Skype for Business. Skype interactivity posed significant challenges, as the interface had to be entirely text-based. Our engineers, however, rose to the challenge, creating intuitive menu options that users select via number input. Despite the limitations, the team successfully implemented a Skype interface with all the functionality of the web interface.

Finally, we created an admin portal that is simply designed, yet powerful enough to customize chatbot responses, manually trigger policy database crawls, track case numbers, and view response metrics.

The effort the team put into designing the chatbot interface and admin portal resulted in a chatbot solution where PSEs who have never encountered the implementation can interact with it, understand how it works, and use it proficiently within minutes. As the DES project director observed, the chatbot integrated seamlessly into the PSEs’ workflow.

Going Live: Distributing Benefits with AI-driven Technology

The DES chatbot has increased evaluation efficiency for over 1,800 PSEs and improved processing time for over 2.9 million Arizona benefits recipients. The chatbot provides users with speedy responses, successfully answering hundreds of queries per day.

Reflecting on the project, our team lead identified four factors that set it apart. First, the policy manual is large and complex; its wording resembles that of legal statutes. The bot simplifies the chore of referencing the manual, returning results with 90+ percent accuracy. Second, the project stands out among chatbots because the bot auto-trains from site content, which we enhanced further through user feedback loop training. Third, the intuitive user interface offers multiple responses to ambiguous questions, drastically reducing the number of interactions required to find the result users are looking for. Finally, the incremental UAT cycle not only allowed us to tailor the chatbot to end users’ expectations, it also drove user acceptance and adoption.

Feedback from DES has been overwhelmingly positive. DES Chief Information Officer Sanjiv Rastogi is optimistic, anticipating that the chatbot’s role will expand to suit the department’s future needs: “MAQ Software helped us decide on and implement a solution built on Azure with cognitive services, which gives us the grow-as-you-go infrastructure, platform, SaaS, and AI integration that DES needs.”

Thursday, March 14, 2019

Case Study: A Better Way to Access and Organize Legal Documents



Key Challenges

   Improve document organization and collaboration without third-party tools.
   Include support for on-premises, cloud, and hybrid environments.
   Create a solution that is accessible from any device.

A Profession That Revolves Around Documents

Lawyers’ work involves more than just court appearances. Lawyers spend significant time preparing clients, engaging in negotiations, and carrying out painstaking research. But ask lawyers to describe their profession in just one word, and most would probably give you a blank look and then a small smile before simply saying “documents.” Documents dominate every facet of a lawyer’s professional existence. Lawyers search for documents for case research, and they write documents to support their cases. Court reporters even write documents to capture lawyers’ every move and utterance within courtrooms. With so many documents, document management quickly becomes a problem.

Our client, the legal department of a large software company, needed a better way to organize their documents. Their existing solution, a vast SharePoint implementation, had two major problem areas. First and foremost, lawyers within the department spent an inordinate amount of time searching for the right documents and organizing related materials. Second, because so much time was spent finding the correct documents, the progress of collaborative work was slow. Third-party tools to improve functionality existed, but the legal department was wary of the additional costs and maintenance these tools would require. Adding third-party software would increase inefficiency and result in more ways for work to fall through the cracks.

Our Process: Add-in vs Add-on

The legal department knew there had to be an easier way to get the information they needed, so they reached out to us. We’d previously worked with the legal team to create a policy portal, so they knew that we would deliver a useful solution quickly. As we started work on the project, we determined that either a Microsoft Office add-in or an add-on would be the most effective solution for the lawyers’ needs. An add-in or an add-on would give the lawyers the ability to search for and store documents within SharePoint without leaving the program where they created or received a document (such as Word, Outlook, Excel, PowerPoint, or even the web). This, in turn, would improve the efficiency of collaborative work, and with the creation of a centralized repository, lawyers could manage documents more efficiently. We set a goal to create a solution that would support on-premises, cloud, and hybrid environments and would be accessible from any device.

The first decision our team had to make was whether an add-in or an add-on was more appropriate. The logistical problems surrounding add-ons quickly made it clear that an add-in would be required. Add-ons presented an insurmountable challenge for our client: each user would have to install the application and check for updates, and add-ons also require on-premises resources. In contrast, add-ins pull code from online resources, making them far better suited to keeping a legal department of several hundred people in sync and to access from any device.

Our initial version of the add-in focused strictly on improving the search functionality of the database. When users opened Word or Outlook, they were presented with a thin pane—our add-in—which allowed them to search and contribute to SharePoint repositories without opening another window. In Outlook, legal department team members were now able to drag and drop documents from their emails directly into a library of related documents, improving the effectiveness of collaborative work and ease of access. Collaboration was further improved with OneDrive and Delve integration. Most importantly, these documents were automatically tagged. Metadata was automatically generated for all documents in the repository regardless of how they were entered, dramatically improving searchability.
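As an illustration of the automatic tagging idea (the actual rules the add-in used are not described here), a sketch might look like this:

```python
# Illustrative sketch; the add-in's actual tagging rules are not described here.
import re
from datetime import datetime, timezone

LEGAL_TAGS = ["contract", "nda", "deposition", "patent", "compliance"]  # assumed

def generate_metadata(filename, text):
    """Derive uniform metadata for a document, however it entered the repository."""
    lowered = text.lower()
    return {
        "file": filename,
        "tags": [t for t in LEGAL_TAGS if t in lowered],
        "case_numbers": re.findall(r"\b\d{2}-\d{5}\b", text),  # assumed format
        "indexed_at": datetime.now(timezone.utc).isoformat(),
    }
```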

Expanding the Scope to Outside Organizations

As the development of the add-in progressed, our client realized that complexities related to document organization were common throughout the legal world. Understanding that other legal departments and law firms could benefit from improved file access and collaboration, our client decided to sell the add-in implementation as a product. At the time, many law firms relied on multiple third-party document management systems. Our add-in offered a central location to access and organize documents, simplifying workflows. Our add-in also allowed lawyers to collaborate on documents from any device—an industry first.

After numerous feedback sessions with attorneys from our client’s legal department, we completed the project. The client implemented the add-in in their legal department and encouraged adoption by numerous law firms they associated with. Today, collaborative legal software has become more commonplace, yet few products offer the convenience of access to documents directly from Outlook or Word.

Thursday, March 7, 2019

Case Study: Custom Gantt Chart Improves B2B Communication



Key Challenges

   Create an advanced Gantt chart visual that displays all relevant construction details at a glance.
   Create an image carousel to display images of construction site progress.
   Enable the visual to scale smoothly to all resolutions.

Finding Communication Middle Ground

B2B projects face communication difficulties that aren’t present in intrateam or even intraorganizational work. The importance of avoiding these difficulties is obvious simply from the amount of time businesses spend promoting concepts like alignment and synergy. What businesses really strive for with these buzzwords is good communication practices, and the benefits of effective communication across teams and organizations don’t need to be extolled. At the core of all effective B2B communication, however, is a firm grasp of how to communicate project goals and outcomes in a way that clients understand.

Our client, a local construction company, was contracted to build office buildings for a large software company. The construction company needed to be able to present their progress to the executives of the software company, and the executives of the software company needed to be able to monitor construction progress in a medium they understood. Visits to the physical site were deemed inefficient by the construction managers, and the construction workers’ own oversized paper Gantt charts were considered cumbersome and ineffective by the executive team. The eventual compromise reflected both parties’ need for a familiar source of information; we were hired by the construction company to create a custom Power BI Gantt chart in order to present construction progress to the executive team.

Going Beyond Out-of-Box Features

While Power BI provides an out-of-box Gantt chart, the construction workers didn’t find it suited to their needs. Their physical Gantt charts showcased a tremendous amount of information that couldn’t easily be transferred to the out-of-box visual. We created a Gantt chart big enough to show all relevant details at a glance, with features unavailable in the default Power BI Gantt chart. Our Gantt chart allowed our client to display project milestones (as is typical of Gantt charts), but also a host of contextual details: construction managers could indicate whether milestones were flexible or had hard deadlines, and a small completion bar showed plan and milestone progress. Our client also wanted to include actual images of the construction site alongside plan overviews, so we created a separate custom visual called Image Carousel and linked the two visuals.
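For reference, the per-milestone data the visual displayed might be modeled like this (an illustrative shape, not the project’s actual schema):

```python
# Illustrative shape of the per-milestone data the visual displayed; not the
# actual schema used in the project.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Milestone:
    name: str
    start: date
    end: date
    hard_deadline: bool                 # fixed date vs. flexible milestone
    progress: float                     # drives the small completion bar (0.0-1.0)
    site_images: list = field(default_factory=list)  # shown via the linked Image Carousel

plan = [
    Milestone("Foundation", date(2019, 1, 7), date(2019, 2, 15),
              hard_deadline=True, progress=0.8),
]
```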

Ironically, the major hurdle our engineering team had to overcome on this project arose out of miscommunication. About one week into development of the custom visual, the construction company informed the engineering team that the Gantt chart was constantly blurry. When our team asked about the screen resolution being used to display the visual, the construction manager replied, “very large screens,” prompting our engineering team to adopt SVGs, which scale cleanly to any resolution. Even this initially proved unsuccessful. It was not until our engineering team learned that the “very large screens” were in fact projectors that they were able to devise a workaround.
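The underlying point is that SVG is resolution-independent: a drawing defined against a viewBox scales to any output size, including a projector. A tiny illustrative example:

```python
def progress_bar_svg(progress):
    """Emit a completion bar as resolution-independent SVG (illustrative).

    The viewBox defines drawing coordinates independent of output size, so the
    same markup renders crisply on a monitor or a projector.
    """
    filled = int(100 * progress)
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 10">'
        '<rect width="100" height="10" fill="#ddd"/>'
        f'<rect width="{filled}" height="10" fill="#2b7"/>'
        '</svg>'
    )

print(progress_bar_svg(0.6))
```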

Quick Delivery and Continuing Growth

Our team had just three weeks to create the Gantt chart and Image Carousel custom visuals for the construction company. One more week was added at the end of the project to implement UI and user feedback, prompting our client to comment, “The pace of your work was really outstanding!” The project’s success didn’t end with fulfilled client expectations and improved communication, though. Just one week later, a separate client approached us asking for our Gantt chart custom visual. The visual proved so popular that we created a nonproprietary version to share on Microsoft AppSource. To date, the nonproprietary version of the Gantt chart has been downloaded by 50,000 users.

Thursday, February 28, 2019

Case Study: Business Intelligence Makes a Travel Company More Efficient



Key Challenges

   Create a custom visual that shows points of departure, total flights, and flight sources.
   Enable airlines to easily view and compare key performance indicators (KPIs).
   Create custom animation and straightforward interactivity for the visual.

A Visual That Would Make or Break the Sale

In 2018, one of our client’s technical service providers approached us with a simple request: “Can you create a custom flight visual?” Our client, a large software provider, was in the middle of sales negotiations with an online travel company. The negotiations had broken down because the travel company didn’t think our client’s business intelligence (BI) service offered a significant advantage over a custom HTML solution. Even though our client’s BI service allowed users to drill down into the data, interact with the data, and run queries, the travel company insisted that it needed a custom visual that could display flight routes and flight route KPIs simultaneously. Without this custom visual, the sales deal could not be finalized.

Our Process

We accepted our client’s challenge and immediately got to work. We were tasked with creating a custom visual that showed points of departure and arrival, the total number of flights from an airport, and (when applicable) the multiple destinations of flights from a single source. Prior to our custom visual, the online travel company had no central repository for their data. The new visual ultimately allowed the travel company to sell their data to partner airlines, who could navigate to the visual and examine flight route health at a glance. The availability of this data also enabled partner airlines to compare their KPIs to those of their competitors. Partner airlines could now examine their own key metrics and their competitors’ (such as when rates ran higher than average, or what percentage of flights arrived on time) for any location they wanted.

A Slow Rollout but a Huge Impact

The travel company was slow to roll out the new custom visual. They first released it to just two partners and received overwhelmingly positive feedback. They then demoed the visual at an internal event for ten partners. Everyone loved it! Even at this early stage, all parties involved knew that our client’s business intelligence service and our custom visual would transform how the travel company and their partner airlines carried out daily business. This success led to further wins for our client (the large software company). Following the implementation of our custom visual, other business divisions within the travel company reached out to purchase the business intelligence service. Our client even succeeded in selling their cloud computing infrastructure, bringing them into a closer relationship with the online travel company and generating a bevy of sales leads across the travel company’s partner network.

A Better Experience for Our Client and the Travel Company

It’s rare for such a small contribution to a project to lead to such outsized results. Because of our custom visual, our client was able to sell their business intelligence service to one of the largest travel companies in the world, secure the sale of their cloud computing infrastructure, and generate sales leads across a vast partner network. The travel company was able to centralize their data and make it visible to their partners, improving everyday business performance and long-term planning. The custom visual touches the lives of thousands of businesspeople every day. It is humbling to realize that both our contribution to our clients’ success and our reach into the lives of the visual’s users were made possible by the travel company’s decision to make custom visualization a go/no-go item.

Monday, February 25, 2019

DataOps at MAQ Software




For nearly a decade, we have specialized in building, delivering, and operating business intelligence and data analytics solutions. During this time, we have seen significant growth in the volume and complexity of data. The increased complexity results from two primary causes:

   Metric definitions are increasingly complex and involve calculations with multiple data points (compared to the simple metrics of the past).
   The variety of data points has increased over time.

We’ve also seen an increasing number of interdependencies between projects. These interdependencies are a result of solutions that are combined or built on top of one another.

Despite the increased volume, complexity, and number of interdependencies, modern business practices demand that developers must create and deploy data-based solutions faster than ever. To address the challenges of delivering projects in such a complex environment, we have adopted agile methodologies, DevOps practices, and statistical process control. Together, these processes have come to form what we now call DataOps.

We have embraced agile methodologies since our inception, and we continue to deliver business intelligence and data analytics projects using these methodologies. Incremental delivery—one of the core agile practices—has enabled us to deliver business value to our customers early in the development cycle, allowing them to immediately unlock the potential of their data assets. Agile methodologies have proven practical in projects where early requirements are often difficult to ascertain. Incremental delivery allows our customers to continuously develop their requirements as they begin to better appreciate the story their data can tell. We have found that the close customer collaboration afforded by agile practices is vital in ensuring the success of our data projects.

We also have long embraced DevOps practices. These practices hasten, automate, and streamline the development, validation, integration, and deployment of data solutions. By introducing automation at all stages of the development life cycle, we have shortened the time it takes for data solutions to reach production. This means we can push changes to production on demand, with minimal human intervention and minimal risk of error. Automation has significantly reduced the cost of releasing incremental changes, making it possible to issue several releases to production every day. From code check-in to code quality checks, continuous integration, automated validations, and automated deployments, automation has streamlined the entire release process. In many cases—due to the ever-increasing complexity and interdependency of our projects—automation is not just a convenience but a necessity.

More recently, we improved the efficiency of live data pipelines by creating ongoing alerts. These monitoring mechanisms are a set of automated test cases that run at each processing stage of the data pipeline. Because data is processed at various stages of the pipeline on an ongoing basis, it is crucial that check-gates along the pipeline prevent incorrect data from flowing through the system. Missing data, excessive data volumes, and wide variations in the average values of key metrics are all red flags that statistical process control detects; each prompts a timely alert to DevOps team members and triggers mechanisms that stop the flow of data through the system. These monitoring and control mechanisms help maintain the quality and integrity of the data in live pipelines. Because of these processes, customers can manage their day-to-day business operations with confidence that they are receiving accurate insights from their data.
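A minimal sketch of one such check-gate follows; the thresholds and 3-sigma control limits are illustrative choices, not our production configuration:

```python
# A minimal sketch of one check-gate; thresholds and 3-sigma limits are
# illustrative choices, not a production configuration.
import statistics

def spc_alerts(history, today_value, today_rows, expected_rows, sigma=3):
    """Flag the red-flag conditions described above for one pipeline stage."""
    alerts = []
    if today_rows == 0:
        alerts.append("missing data")
    elif today_rows > 2 * expected_rows:
        alerts.append("excessive data volume")
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if abs(today_value - mean) > sigma * stdev:
        alerts.append("metric outside control limits")
    return alerts  # non-empty: alert the DevOps team and halt downstream flow

print(spc_alerts([100, 104, 98, 101, 99], today_value=250,
                 today_rows=5000, expected_rows=5200))
# ['metric outside control limits']
```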


Using agile methodologies to develop data solutions, DevOps to build and deploy them, and statistical process control techniques to monitor and operate the data pipelines has led to tremendous benefits for our workflow and, more importantly, our customers. Agile methodologies gave us the flexibility and speed required to compete effectively in dynamic business environments. The ability to incrementally build data solutions, use them early, and use the feedback from their usage to define further requirements has been instrumental in ensuring that solutions remain relevant from conception to deployment. DevOps practices helped teams overcome the traditional bottleneck of deploying solutions to production, shortening the time from conception to deployment and improving the ease of integration and deployment. DevOps practices also made it possible to move small changes to production more frequently, minimizing the risk of regression issues and the resulting downtime. Statistical process control techniques ensured that live solutions continue to operate as expected: data now flows through the solutions reliably, ensuring the ongoing delivery of value.

The combination of agile methodologies, DevOps, and statistical process control techniques has evolved over time into DataOps. DataOps is the logical combination of highly proven methods of software development, delivery, and operations. DataOps is driven by the need of businesses to unlock the value of their data assets in a timely, reliable, consistent, and continuous manner. With DataOps practices, the time it takes for development has decreased despite increases in volume, complexity, and the number of interdependencies in modern data solutions.

DataOps, however, is not a revolution, nor is it groundbreaking. It’s what we’ve been doing all along. It is the result of methodologies designed to handle complex data requirements with ever-increasing efficiency. By adopting these methods, we have incrementally improved our processes and increased the value that we deliver to our customers. Whether we think of the workflow as a combination of agile, DevOps, and statistical process control—or as DataOps—the resulting delivery benefits are undeniable. As data processing demands become more complex, we will continue to pursue the most efficient means of data processing, support, delivery, and operations.

Friday, February 15, 2019

Case Study: Improve Feedback Analysis with Azure Databricks



Key Challenges

   Transition feedback analysis architecture from VMs to Azure Databricks.
   Improve analytics execution speed and scalability.
   Add entity recognition and key phrase extraction services.

Fast and Accurate Feedback Analysis Is Crucial

Tracking customer sentiment is an essential business activity. Customer feedback lets businesses know which efforts are working and highlights customer difficulties. More significantly, understanding consumer desires enables predictive action. If every customer who enters a store asks for a certain product, the store owner knows that she should stock more of the product for the following week. But by tracking customer feedback, the store owner can dig deeper and understand why customers are demanding the product. If the store owner determines the all-important “why,” she will know whether the increased demand was due to global consumer trends, a marketing campaign, a celebrity endorsement, or any number of other reasons. In other words, customer feedback allows businesses to pursue insights they would otherwise not be aware of.

Our client, the voice of the customer team for a large software company, wanted to improve their text analytics system. The client’s system relied on VMs to compile online customer feedback and perform sentiment analysis. To improve execution speed and increase scalability, the client wanted to move the system to a serverless architecture. The client also wanted to incorporate two new features: entity recognition and key phrase extraction.

Our Process: Benefits of Azure Databricks

The client’s previous feedback architecture used Python scripts to process customer feedback. During processing, contractions were expanded, inflectional endings were removed, HTML tags were removed, punctuation marks were removed, characters were rewritten in lowercase, spelling mistakes were corrected, and junk words were removed from the feedback. Sentiment analysis was then performed on the cleaned data. The system was functional but slow and non-scalable compared to modern serverless solutions.
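For illustration, here is a heavily simplified version of this kind of cleaning pipeline. Real spelling correction and lemmatization would use NLP libraries; every rule below is reduced to its crudest form:

```python
# Heavily simplified sketch; real spelling correction and lemmatization would
# use NLP libraries. Every rule below is reduced to its crudest form.
import re

CONTRACTIONS = {"don't": "do not", "it's": "it is", "can't": "cannot"}  # abbreviated
JUNK_WORDS = {"umm", "asdf"}  # illustrative

def clean_feedback(text):
    text = re.sub(r"<[^>]+>", " ", text)    # strip HTML tags
    text = text.lower()                     # lowercase
    for contraction, full in CONTRACTIONS.items():
        text = text.replace(contraction, full)          # expand contractions
    text = re.sub(r"[^\w\s]", " ", text)    # drop punctuation
    words = [w for w in text.split() if w not in JUNK_WORDS]  # drop junk words
    # Crude stand-in for removing inflectional endings:
    words = [w if len(w) <= 3 else re.sub(r"(ing|ed|s)$", "", w) for w in words]
    return " ".join(words)

print(clean_feedback("<p>It's NOT working!! The checkout keeps failing...</p>"))
# "it is not work the checkout keep fail"
```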

We knew that Azure Databricks would offer our client exactly the kind of speed and flexibility they were looking for. Azure Databricks allows users to run robust analytics algorithms and drive real-time business insights. It offers one-click, autoscaling deployment that meets enterprise users’ scaling and security requirements. Azure Databricks also features optimized connectors, which we used to run Microsoft Cognitive Services APIs. This allowed our team to quickly implement entity recognition and key phrase extraction. And because the Azure Databricks solution was managed from a single notebook, our teams could collaborate more effectively across office locations: when our India team finishes its day working on contraction processing, our team in Redmond can continue with lemmatization without missing a beat.
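As a rough sketch of what such a call can look like from a notebook cell, the snippet below posts text to a Cognitive Services key phrase endpoint. The endpoint placeholder, API version, and key handling here are assumptions for illustration, not the client’s actual configuration:

```python
# Sketch of a key phrase call from a notebook cell. The endpoint, API version,
# and key handling are illustrative assumptions.
import requests

ENDPOINT = "https://<region>.api.cognitive.microsoft.com/text/analytics/v2.1"
KEY = "<subscription-key>"  # in practice, read from a secret scope, not the notebook

def key_phrases(texts):
    documents = [{"id": str(i), "language": "en", "text": t}
                 for i, t in enumerate(texts)]
    response = requests.post(
        f"{ENDPOINT}/keyPhrases",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"documents": documents},
        timeout=30,
    )
    response.raise_for_status()
    return [doc["keyPhrases"] for doc in response.json()["documents"]]
```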

Immediately Put to the Test

We completed our client’s new Azure Databricks-based feedback analysis implementation just in time for the biggest shopping test of the year: Black Friday. As it turned out, feedback analysis was crucial in ensuring a smooth purchasing process for customers.

Shortly after Black Friday sales started, the client’s online checkout tool began having technical difficulties. Due to the large number of online transactions, the checkout tool failed. No matter how many times customers reloaded the checkout page, customer transactions could not be completed. Customer support lines were inundated with calls from holiday shoppers, so the client was initially unaware of the problem. Fortunately, the customer feedback tool immediately compiled, analyzed, and made information about the checkout issues available. The voice of the customer team forwarded the feedback information to the technical team, and the problem was addressed quickly.

Improved Feedback Analysis Leads to a Better Customer Experience

Our Azure Databricks feedback analysis tool improved speed and brought a new level of scalability to our client’s business operations. As evidenced by the Black Friday feedback, the tool’s speed and accuracy were a success right from the start. The technical improvements from the Azure Databricks architecture produced the ideal business outcome: actionable business insights, discovered faster. Ultimately, improved insights mean a better customer experience, which is crucial to any business’s success.

Thursday, February 14, 2019

SAFe at MAQ Software




Since our inception, we’ve used agile methodologies to deliver software solutions that address our clients’ business needs. These methodologies—in their purest form—work perfectly for small projects. Agile methodologies deliver high business value by ensuring that projects respond to changing business needs. These methodologies result in functional software, delivered early and frequently, with short release cycles.

As the complexity and size of our projects have grown, however, we’ve recognized an increased need to accommodate interdependencies between projects. Agile methodologies work best with small teams (five to nine members). Projects with larger teams—especially interdependent projects—increase the complexity of execution. With larger teams, the benefits of agile methodologies (such as osmotic communication and uniform knowledge within teams) are difficult to achieve. Agile ceremonies (daily scrums, sprint planning, etc.) are also often difficult to manage when team sizes increase.

As our projects grew in size and complexity, we knew we needed to avoid relying on centralized planning to execute interdependent projects; doing so would have meant falling back to the waterfall methodology, which makes responding to change difficult. Businesses provide a high-level roadmap for projects, and under the waterfall method, project teams attempt to nail down specific business requirements from the start. That approach is ill-suited to dynamic environments. To efficiently handle increasingly complex projects while avoiding waterfall methodologies, we needed an optimal level of central planning and coordination that retained the benefits of agile teams.

As we sought a balance between central planning and agile methodologies, we quickly realized that teams need to coordinate at various points during the sprint to eliminate redundancies and better cater to interdependencies among deliverables. With interdependent projects, it is crucial to align with the common business goals of the organization, portfolio, or program they belong to. While team-level autonomy is important, it is equally important to align teams to an organization’s objectives and to deliver projects that are part of a long-term roadmap. We determined that teams must:

   Take advantage of common infrastructure.
   Identify common requirements that can be implemented once and used by multiple teams.
   Identify dependent requirements so that hand-offs from producer to consumer happen in a timely manner.
   Synchronize deployments so that integrated solutions work seamlessly.

Our customers often create 3 to 18-month release roadmaps that broadly define the objectives of their organizations, portfolios, or programs. Our individual agile teams then plan sprints for the next two to four weeks, aiming to incrementally achieve the objectives of the planned roadmap. Project owners collaborate closely with the agile teams during the duration of the sprints to help them achieve broad goals while implementing course corrections as necessary. This close collaboration is the primary distinction between our methodologies and the waterfall model. We encourage collaboration between project owners and agile teams, whereas waterfall planning often occurs at a detailed granularity that minimizes collaboration with stakeholders.

These evolved methodologies closely resemble the Scaled Agile Framework (SAFe). The practices of cross-team cadence and synchronization help multiple agile teams within a portfolio or program advance together. Lean-agile principles minimize waste by avoiding replicated effort and promote the reuse of capabilities built by any team in the portfolio.

Retaining the characteristics of agile teams ensures that the teams are self-organized and self-managed. They are also able to release software in shorter cycles with continuous customer collaboration.

The SAFe framework retains many benefits of agile methodologies while scaling to accommodate complex, interdependent projects. This is accomplished through the following:

   DataOps practices help deliver valuable software to customers faster and more effectively.
   System demos ensure that the software developed by the constituent agile teams all work together as a system, thereby delivering value to the portfolio.
   Agile release trains ensure that the synchronized release of software across agile teams is smooth and efficient.

Figure 1: A representation of SAFe DataOps practices in our context.

We have found SAFe to be a viable framework for supporting the business needs of our customers in a dynamic business environment. The ability to balance the autonomy of the agile teams while ensuring the alignment of projects with the organizational objectives has been the key differentiator of SAFe. The union of systems thinking, agile principles, and lean principles results in a highly efficient and scalable framework for delivering software at scale for our large enterprise customers.