Agile Indy 2024 Trip Report

October 22, 2024
Jeff Patton presenting at AgileIndy 2024

Agile Indy 2024

From the AgileIndy site: AgileIndy is a user group devoted to raising awareness, acceptance, and support to people who explore and apply agile values, principles, and practices to make building software solutions more effective, humane, and sustainable.

This year’s conference took place on Friday October 18th at 502 E Carmel Dr, Carmel, IN 46032.

A little bit about myself: I am a Staff Software Engineer and a repeat AgileIndy attendee. In fact, I wrote a very similar article about last year’s conference: AgileIndy is Back! A 2023 Conference Experience Recap.

09:00 Opening Keynote: Build to Learn & Build to Earn by Jeff Patton

Opening Keynote

Jeff opened with a callback to the history of the software industry: in the ’90s, almost everyone used the Waterfall model, and the consensus was that it sucked. In 2024, almost everyone uses Agile, and the current consensus is that Agile also sucks. Agile is often combined with DevOps in order to optimize for delivering working software in smaller and smaller increments.

Studies show that high-performing organizations are twice as effective as the average (see the book Accelerate). High-performing organizations are doing Agile and DevOps, so why isn’t that working for us?

To answer this conundrum, Jeff showed us the underpants gnomes skit from South Park. The parallel is:

1. Agile + DevOps

2. …

3. Profit

The problem is that, very often, Agile + DevOps does not focus on value. It focuses on delivering working software quickly. Making more crap faster doesn’t create value for the customer.

Then Jeff polled the audience: Think about products you would recommend and say why you would recommend them. The common answers:

  • solves my problem
  • inexpensive
  • easy to use
  • saves time
  • [company] creates revenue
  • [company] high demand

Notably, no one said:

  • on time
  • stakeholders pleased
  • releases quickly

The next topic was a dive into step 2 of the underpants scheme. Solving a customer need with software involves either a new product or new features, which leads to software requirements. The next question is: how much does it cost? This is typically expressed as a triangle:
Cost by Scope by Time by Quality

You can pick two of Cost, Scope, and Time to optimize for. If you try to pick all three, you’ll find that Quality was a hidden fourth factor.

What really matters for your software is what the customers do with it and what they say about it. On customers, there is a difference between users and choosers. For enterprise software, most of the users did not choose to use the software product(s) they use on a daily basis. So for those products, the focus is typically on efficiency rather than user feedback. Lots of people using the product generates revenue and people saying good things about it builds the brand.

Businesses are seldom content to sustain themselves: most want to grow. However, this is a problem that the business cannot solve internally. The business needs to look for unsolved problems or needs in their customer base. Addressing these needs creates positive outcomes for customers. Note that output is not the same as outcome. Building more things faster doesn’t necessarily create positive outcomes for customers. The thing we want to happen faster is positive outcomes, not output. The outcomes are step 2 of the underpants scheme.

Alright, so how do we get a positive outcome? It starts with ideas, which are problematic for three main reasons:

  1. Too many – everyone has an idea
  2. Most ideas suck – 90% of startups fail, and 80% of software features are rarely or never used
  3. Bias toward one’s own ideas

Building things faster doesn’t increase profit, but building the right thing does. To know whether the idea will generate positive ROI, it is useful to be able to answer the following questions:

  1. Do you really understand the problems you are solving?
    – Talk to customers, users
    – Read reviews, social media
    – 404 test (leave a link to the feature on your site that resolves to 404 – monitor user traffic to that link)
  2. When they see your solution, do they want it?
    – Show solution to people, then wait and listen
  3. Can we build this predictably?
    – Technical research
    – Prototype
  4. Can people easily learn how to use it?
    – Expert user interview with paper / rough prototypes
    – Paper prototypes are not necessarily low fidelity. Consider fidelity as a set of three: visual, data, and functional. Paper prototypes can have high data and functional fidelity with low visual fidelity.
  5. Will they actually use it?
    – Release an MVP to a limited percentage of the audience or to early adopters. The MVP should lack features, scale, and performance.
    – Need to build it
    – Release to learn (the goal here is not to increase revenue, but to learn what the users will actually use)
    – Target specific users
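
The 404 test mentioned above can be as simple as counting hits on the placeholder link in your web server’s access logs. Here is a minimal sketch, assuming a Common Log Format access log and a hypothetical `/new-feature` fake-door path (both are illustrative, not from the talk):

```python
import re
from collections import Counter

# Hypothetical access-log lines in Common Log Format; /new-feature is
# the fake-door link that intentionally resolves to a 404.
LOG_LINES = [
    '10.0.0.1 - - [18/Oct/2024:09:01:00] "GET /home HTTP/1.1" 200 512',
    '10.0.0.2 - - [18/Oct/2024:09:02:10] "GET /new-feature HTTP/1.1" 404 0',
    '10.0.0.3 - - [18/Oct/2024:09:03:30] "GET /new-feature HTTP/1.1" 404 0',
    '10.0.0.1 - - [18/Oct/2024:09:04:45] "GET /pricing HTTP/1.1" 200 256',
]

REQUEST_RE = re.compile(r'"GET (\S+) HTTP/[\d.]+" (\d{3})')

def fake_door_interest(log_lines, fake_path="/new-feature"):
    """Return (clicks on the fake-door link, total requests seen)."""
    hits = Counter()
    total = 0
    for line in log_lines:
        match = REQUEST_RE.search(line)
        if not match:
            continue
        total += 1
        path, status = match.group(1), match.group(2)
        if path == fake_path and status == "404":
            hits[path] += 1
    return hits[fake_path], total

clicks, total = fake_door_interest(LOG_LINES)
print(f"{clicks} of {total} requests hit the fake-door link")
```

The ratio of fake-door clicks to overall traffic gives a rough read on demand for the feature before a line of it is built.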

To sum up, the goal of Agile is not to build more crap faster. It is to learn faster. Nail it before you scale it.

My thoughts

I have been on a few projects that did not go so well. This keynote did a great job of explaining why that was. In almost all of those cases, the problem was a lack of understanding as to how we were producing value for the customer. Furthermore, I have noticed a common theme among those projects: we’re doing Agile, but we’re not willing to change direction or even dynamically re-order the backlog. And I’m left thinking “aren’t we then discarding somewhere between 80% and 99% of the value of Agile?” So Agile, on these projects, is just a kind of flavoring applied to our meetings.

The point of releasing smaller increments is to have more frequent meaningful interactions with the customer. To find out the differences between what we think is necessary and what is actually necessary. To learn faster, change faster, and deliver less. But what we do deliver is what the customer actually wants. Agile processes can deliver solutions faster not because everyone magically becomes better at their jobs, but because we realize that 80% of what we were planning to build was useless.

10:15 FAST, OKRs, modern product management, and software teaming bundled up in a 6-week dojo…a way to create positive change at warp speed by Ryan McCann

FAST, OKRs, modern product management, and software teaming bundled up in a 6-week dojo…a way to create positive change at warp speed

Ryan started us off with the following quote from W. Edwards Deming:

“Every system is perfectly designed to get the result that it does.”

Typically bad outcomes are not the result of bad actors, but indicate a flaw in the system. The rest of the presentation concerned a case study of a system that was producing bad outcomes.

Phase 1: Fight Cloud

The team was fully remote, cameras were off during meetings, and people were jerks to each other. It took three months of consulting to turn the attitude around. However, the team then grew from 9 to 25, and the infighting started up again. After 5 months, they knew they had a lot of dependencies and that team members’ capabilities were, overall, lacking. The solution was to run a 6-week dojo.

Phase 2: Dojo

The goals of the dojo were to:

  • Increase release frequency
  • Reduce change failure rate
  • Reduce dependencies

At the beginning of the dojo, all 25 people worked on the same task. Toward the end, they split into two groups working on separate tasks. Dependencies were reduced as people learned things that previously only one or a few people knew how to do. When the dojo was finished, the release frequency had gone up by a factor of three, and the failure rate had dropped from 40% to 0%.
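
Metrics like these are straightforward to compute from a deployment log. A minimal sketch with made-up numbers (the talk did not share raw data, so the dates and outcomes below are illustrative only):

```python
from datetime import date

# Hypothetical deployment log entries: (deploy date, succeeded?).
# Illustrative numbers only; chosen to echo the 40% -> 0% failure
# rate and the tripled cadence described in the talk.
before_dojo = [
    (date(2024, 1, 15), True), (date(2024, 2, 19), False),
    (date(2024, 3, 18), True), (date(2024, 4, 15), False),
    (date(2024, 5, 20), True),
]
after_dojo = [
    (date(2024, 7, 1), True), (date(2024, 7, 11), True),
    (date(2024, 7, 21), True),
]

def change_failure_rate(deploys):
    """Fraction of deployments that caused a failure in production."""
    return sum(1 for _, ok in deploys if not ok) / len(deploys)

def releases_per_month(deploys):
    """Average deployments per calendar month covered by the log."""
    months = {(d.year, d.month) for d, _ in deploys}
    return len(deploys) / len(months)

print(f"failure rate before: {change_failure_rate(before_dojo):.0%}")
print(f"failure rate after:  {change_failure_rate(after_dojo):.0%}")
print(f"cadence: {releases_per_month(before_dojo):.1f} -> "
      f"{releases_per_month(after_dojo):.1f} releases/month")
```

Tracking these two numbers before and after an intervention is the simplest way to make a claim like “release frequency tripled” verifiable.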

The mobbing was enforced from above, and people had to power through it. The FAST methodology was used, and cameras-on was enforced. However, 5 people did not make it through the dojo and had to be let go. Also, representative customers were brought into these sessions to work with the developers.

Phase 3: UI Reduction

During this phase, the UX team walked customers through the application’s 100 screens; the customers kept only 3 as useful. This was tough feedback, but the team was able to roll out a release in 4 months’ time that had exactly what the customers needed. During this phase the team also absorbed another team, growing to a size of 32. They were able to maintain release frequency and team cohesion. Dependencies went up at first, but came back down after the new team acclimated.

My thoughts

I had some trouble determining what I should take away from this presentation. I really wanted to know what specifically happened in that dojo. How did you keep 25 people focused on the same task? It seems like it would be really easy to have 3 people actively working on it and 22 people on Reddit. What parts of FAST made the biggest difference? What was being used beforehand? How much was getting done before and during the dojo? Release frequency went up by a factor of three, but did the amount being released stay consistent?

I did learn about FAST during this talk; I wasn’t aware of it previously. During the presentation, I was able to piece some things together about how it works. And I learned more by reading about it afterwards.

11:15 Boost your Effectiveness with Microsoft CoPilot by Dan Neumann

Boost your Effectiveness with Microsoft CoPilot

The first thing Dan noted is that there are many flavors of Copilot, and they had to work with Microsoft to select the correct one for them. The selection was Microsoft 365 Copilot.

So given that context, the rest of the presentation was examples of how Copilot could be used to increase productivity.

Teams Meetings (transcription must be turned on)

  • Auto-generate the meeting notes
  • Meeting summaries – ask Copilot to summarize the last three sprint reviews
  • Recovering from distractions during a meeting – ask Copilot to summarize what your coworker just said

PowerPoint

  • Generate a PowerPoint presentation based on a given outline
  • Create a presentation based on another presentation

Word

  • Can generate project status reports based on sprint reviews
  • Can provide summaries of long documents
  • Can provide outlines of documents, which can then be used to generate PowerPoint presentations

General

  • Search – ask for the location of a file and Copilot will search locally, OneDrive, email, Teams, and SharePoint
  • Prepare for a meeting with a specific person about a topic – Copilot will look through previous emails, Teams conversations, etc., then generate an agenda. Note: definitely fact-check this before sending the agenda
  • Create personas by asking Copilot to interview you as if it were an expert on user personas
  • Copilot does have connectors to Jira and other Atlassian products
  • Cost of Copilot is easily justified by time savings on initial (draft) content generation and creating summaries

Downsides

  • Copilot changes constantly; you can get wildly different results with the same prompts on different days
  • Connecting to Azure DevOps needs to happen through the admin dashboard graph connector. The integration still leaves much to be desired. The most reliable way to get DevOps data into the prompts is to give Copilot the URL to the project board as part of the prompt.
  • Copilot can’t run Monte Carlo simulations on a CSV-exported project backlog to give estimated completion dates. ChatGPT can do this.
  • Difficult to determine which flavor of Copilot you need
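
For what it’s worth, that kind of Monte Carlo backlog forecast is simple to run yourself. A minimal sketch, assuming you have historical weekly throughput counts (items completed per week) pulled from the exported backlog; the `history` numbers below are hypothetical:

```python
import random

def forecast_weeks(remaining_items, weekly_throughput_history,
                   trials=10_000, seed=42):
    """Forecast completion time by resampling historical throughput.

    Each trial repeatedly draws a random historical week's throughput
    until the remaining backlog is burned down, and records how many
    weeks that took. Returns percentile estimates over all trials.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    results = []
    for _ in range(trials):
        left, weeks = remaining_items, 0
        while left > 0:
            left -= rng.choice(weekly_throughput_history)
            weeks += 1
        results.append(weeks)
    results.sort()
    # "85% of trials finished within N weeks", etc.
    return {p: results[int(trials * p / 100) - 1] for p in (50, 85, 95)}

# Hypothetical throughput: items completed in each of the last 8 weeks.
history = [3, 5, 2, 4, 6, 3, 4, 5]
print(forecast_weeks(60, history))
```

Reporting the 85th or 95th percentile (“very likely done within N weeks”) is usually more honest than a single-point estimate.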

My thoughts

I found this quite enlightening. I liked seeing the specific examples of how to use Copilot. Not that much else to say, other than looking forward to trying some of these out myself.

13:30 CTO/CIO Panel Discussion by Tim Coleman, Matt Etchison, John Macrina, Dave Todaro, and Diana Williams

CTO/CIO Panel Discussion

Diana conducted the panel, which followed a question-and-answer format. Unfortunately, I was sitting far enough away that I had trouble keeping track of who was answering which question. My apologies there, but I will summarize the questions and answers from my notes. You’ll notice some of the answers don’t much relate to the questions, but that is pretty normal; sometimes the conversation flows away from the topic a little bit.

What are the Pros/Cons of Agile?

  • Pro: Learning faster how to do the right things for the customers
  • Con: Biggest challenge to implementing Agile is the cultural shift
  • Pro: From the business side, it’s great to get involved with the process and not be surprised by the end product
  • Con: Scope creep is a real concern
  • Con: Brought Agile to a college environment and faced a similar cultural impediment
  • Con: It is difficult to get people to change what they are doing, to fight against human tendencies

AI – how are your organizations dealing with it?

  • Not changing much in terms of business interactions, but AI can help with interactive design
  • Data needs to be in a good place prior to using AI. It is likely to be disruptive, but will enhance the experience by eliminating toil
  • Hoping that AI will increase productivity massively. Using AI for a chat based service desk rather than a call based service desk
  • Helps with eliminating toil in the development process, not by replacing developers. Humans always need to be involved in turning a business problem into a solution. AI will raise the level of abstraction just as C raised it from assembly and managed languages raised it from C.
  • Most people have a fear of AI, but are unaware of how they are currently using it (GPS, Amazon shopping, etc.)

What are some manual processes that AI has improved?

  • Take every high school transcript and OCR it into the database
  • Text to voice automation for training modules
  • AI is at its best when you have a model specific to your organization or problem domain. The purchasable AI solutions are for common, general problems; you will need to invest in AI and data to get the best returns with a model customized to your domain.

What new technologies are you excited about?

  • What various cloud vendors are doing to increase productivity
  • Data management capabilities for enhancing AI, IoT, Quantum Computing
  • Application platform AI offerings, Blockchain for storage solutions and contract automation
  • Upgrade legacy tech stacks to enable the organization to receive innovations
  • Using a probability index to identify students who may be struggling, in order to focus on helping those students

My Thoughts

I find the argument that AI will increase developer productivity rather than replace developers to be flawed. Clearly, if everyone is twice as productive, you need half the people to do the same amount of work. Sometimes the amount of well-defined work to do will increase correspondingly, but silently assuming that to be the majority case is incorrect.

However, the above would be more likely to apply if the productivity increase is small. But, if we are to believe the hype train, the productivity increase will be massive. Is the reality going to match the hype? Probably not. But we shouldn’t be saying both that AI will massively increase productivity and that it won’t result in lost jobs.

That being said, a massive productivity increase is a very good thing. But we shouldn’t pretend that it won’t cause unemployment in the short term; we have historical examples showing that it will.

14:30 Technical Agility 101: Deliver Like the Tech Giants by Brad Nelson

Technical Agility 101: Deliver Like the Tech Giants

Brad began by polling the audience to define Agile. The definition: a mindset focused on flexibility and adaptation. Next we took a look at the Agile Manifesto, which emphasizes delivering working software frequently and holds that working software is the measure of progress. In fact, the manifesto mentions software quite frequently. Most Agile projects use Scrum, so let’s contrast the manifesto with the Scrum Guide, which mentions software only once, in order to say we’ve moved beyond it.

The typical team makeup is a Product Owner, a Scrum Master, and 3-7 developers. If you look at what training is available, it is almost all focused on the Product Owner or Scrum Master, who make up a minority of the team. The focus on software seems to have been lost: there should be training for the developers. It seems that the Agile industry is focused more on processes and tools than it is on people.

Brad then shifted focus to how to go about delivering smaller increments of value. The first thought is to have user stories rather than requirements. It’s nice to have a simple user story. But, the legacy code base can easily make that simple user story take weeks or months to complete. This can be due to a high degree of coupling between components and many instances of repeated code. Consequently, the code also needs to be broken down into smaller chunks.

That said, we can’t stop and fix everything in the code base before developing any new features. We need to clean up as we code.

Here are some specific tips on how to write better software:

  • Follow the DRY principle
  • Follow the YAGNI principle
  • Write code that humans can understand
  • Standardize the format of the code
  • Make the code readable or even self documenting
  • Use version control
  • Instill these good habits in programming

In addition to the above, create standard operating procedures. This is where DevOps comes in: DevOps automates parts of those standard operating procedures. Developers spend less than a third of their time writing code, so automating tasks (such as deployment) can disproportionately increase the time available for writing code.

Finally, shift tests left in the process. Write a test plan as part of the requirements. On TDD, studies have shown that whether tests are written before or after the code is pretty much irrelevant. What is important is that automated tests are written during the development process by the developer.
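
Whether written before or after the production code, the developer-owned automated test looks the same. A minimal sketch in pytest style, with a hypothetical `apply_discount` function standing in as the code under test (nothing here is from the talk):

```python
# Hypothetical code under test: apply a percentage discount to a price.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Developer-written automated tests (pytest style: run with `pytest`).
def test_typical_discount():
    assert apply_discount(80.0, 25) == 60.0

def test_zero_discount_is_identity():
    assert apply_discount(19.99, 0) == 19.99

def test_invalid_percent_rejected():
    try:
        apply_discount(10.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass  # rejected as expected
```

The point Brad made holds here: whether `test_typical_discount` existed before or after `apply_discount` matters far less than the fact that the same developer wrote both during development.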

My Thoughts

I think Brad made a good point about the dearth of developer-related training. The specific points were all things we’ve been doing for years at SEP, but we’ve all seen legacy code bases that suffer from these issues.

The TDD point matches my experience with it. One thing I will add is that introducing automated unit tests into a legacy codebase can be a massive undertaking.

15:30 Closing Keynote: Own Your Move to Product by Pete Anderson

Closing Keynote

Pete started off with a description of Product Culture. Product culture is based in humility. We always assume we are wrong in some way.

  • Delivery: bias toward action
  • Personal: empathy, passion, and curiosity
  • Trust and Accountability are key

A product is something we create that adds value for our customers. We should align strategy with persistent teams, and understand the value stream from idea to delivery to release. Create a set of curated tools for each phase of the value stream.

Many change initiatives use pilot teams. However, these teams can be misleading in their results, because by pulling them off to some isolated area we remove most of what makes delivery hard: dependencies.

The goal isn’t to get an organization to change, but to get an organization to be capable of continuous change. Start the transformation with the why and the expected outcome, then focus on tools and processes. Talk to leaders to find out what is on and off the table in regard to change practices. What cannot be changed?

Apply the change at all levels of the organization. Training is a partnership that is complete once the coach is no longer needed. Partner with more than just the team: partner with leaders across the organization. Measure improvements by doing surveys.

Partial commitment to change is not going to be productive. Agility is getting a bad rap: many change initiatives fail because of a lack of business strategy. The business strategy needs to be clear from the start. Change initiatives need buy-in and a focus on the full value stream. Furthermore, coaching has not proven its value over the last 5 years; because of that, they are currently exploring retainer agreements as an alternative to embedded coaching.

My Thoughts

The most interesting point for me was the one about pilot teams. It is very natural when trying something new to attempt to test it in a controlled situation with as few variables as possible. It is good to realize that those variables tend to be the ones that most hamper productivity.

One way you could account for that is with two pilot teams: one testing the current process and the other testing the new process. I suggest this because, if you have any doubt about the new process being superior, you are probably not going to want to change the entire organization to see how it works. That said, if you do have that confidence, then applying the change simultaneously at all levels of the organization seems like a good plan.