Value Stream Mapping: Lead Time

Lead Time: A Metric Used in Lean

One of the joys of writing a book is interacting with readers. Many authors experience forehead-thumping “duh!” moments as they hear questions and realize that they left out an important detail or could have provided an example to clarify. Now that Value Stream Mapping has been released, great questions are starting to roll in—along with the need to clarify a few points.

This post addresses a question about Lead Time (LT) that has been asked by several readers who work in manufacturing. The question has taken several forms:

  • Why do you treat lead time differently in your book?
  • Why don’t you convert inventory (note: we prefer to call it work-in-process) that has accumulated between process blocks to time? Similarly, how does your version of lead time differ from converting work-in-process into time?
  • Why don’t you place lead time on the peak of the sawtooth (square wave) timeline?

WARNING: If you have never mapped a manufacturing value stream, you may find this post a bit confusing and may be better off not reading it.  :-)

First, a little background…

As we mention in the Acknowledgements section of Value Stream Mapping, “Very few people know how many times and for how many years we considered writing this book and then decided against it. We felt that the value stream mapping ship had sailed.” We go on to explain that we wrote it to close the significant gap that existed about how best to apply value stream mapping in office, service, and knowledge work environments.

We also wanted to help those in manufacturing learn how to get deeper results by using value stream mapping as a management practice versus viewing it solely as a workflow design tool. (More about this in a webinar I’m giving on Wednesday, April 23, hosted by Gemba Academy. Click here to register: www.gembaacademy.com/webinars/martin-vsm.)

Therefore, in the mapping “mechanics” chapters of Value Stream Mapping that address how to physically create current and future state value stream maps (chapters 3 & 4), we intentionally didn’t include manufacturing mapping icons, terminology, and metrics, because that book had already been written (Mike Rother & John Shook’s Learning to See).

In hindsight, perhaps we should have included at least a few footnotes to highlight some of the major differences between office- and manufacturing-based mapping conventions. The treatment of lead time is one of those differences.

Lead Time:  Office vs. Manufacturing

In any value stream there are multiple measures of lead time (also known as throughput time, turnaround time, and response time), including:

  • the lead time to fulfill a customer request (the “customer experience”)
  • the material lead time from receipt to shipment (the “material experience”)
  • the lead time for each process block in the value stream

The customer experience lead time is the elapsed time from receiving a customer request to delivering on that request. The lead time for extended value streams can also include the lead time for processes before a customer request is received (supply chain, sales and marketing processes, etc.) and after delivery (warranty work, invoicing, etc.).

In manufacturing, the lead time between process blocks is typically calculated based on the days of demand of the observed WIP that has accumulated between process blocks: Lead Time = Observed WIP/Daily Customer Demand. This may or may not represent how long it takes the following process to consume the materials.
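
To make that arithmetic concrete, here’s a minimal Python sketch of the manufacturing convention; the function name and the WIP/demand figures are mine, purely for illustration:

```python
# Manufacturing convention: convert the WIP observed between two
# process blocks into days of demand. All numbers are hypothetical.

def lead_time_days(observed_wip: float, daily_demand: float) -> float:
    """Lead Time = Observed WIP / Daily Customer Demand."""
    return observed_wip / daily_demand

# e.g., 450 units staged between two blocks; the customer pulls 150/day
print(lead_time_days(observed_wip=450, daily_demand=150))  # -> 3.0 days
```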

In office, service, and knowledge work environments, we define the lead time for each process in the value stream as the elapsed time from the moment work is made available to a person, work team, or department until it has been completed and made available to the next person, work team, or department in the value stream.

We use this approach for a number of reasons:

  1. Office, service, and knowledge work environments often have high degrees of variation in workloads and accumulated WIP.
  2. These environments are often staffed with “shared resources” who support many value streams and juggle many priorities, and therefore, are not always available to perform work when it arrives.
  3. We believe it’s the most sensible way to reflect workflow in low volume environments—e.g., month-end close only occurs 12 times a year, its duration is just several days, and there’s only one month-end close being done at a time.

In office, service, and knowledge work environments, we recommend that you follow a single “work item” as it passes through the value stream, whether it’s verbal information, electronic information, or a physical item (which may include people, as in healthcare patients, restaurant customers, etc.).

(Keep in mind that work items in office and service settings typically transform as they pass through a value stream, just like a product does in manufacturing. For example, the work items in a software development value stream mapping activity I recently facilitated are: email request → request for quote → quotation → purchase order → work order → beta code → final code → invoice.  Every value stream has its own version of work item transformation.)

To repeat, the process block lead time for a single work item is the elapsed time from the moment it’s received until it’s handed off to the next process in the value stream. It includes the process time (the time it takes to actually do the work, also referred to as touch time and cycle time), as well as any waiting/delays that may occur:

  • before anyone begins working on it
  • during the work (e.g., waiting for clarification)
  • after the work is complete, but hasn’t yet been passed on to the next process (as can be the case with batching, interruptions, and shifting priorities)

Example: The lead time for a work item that arrives in Work Area A at 1 pm and is passed to Work Area B at 3 pm is two hours. If the work takes 20 minutes to complete, the lead time is still two hours, but only 20 minutes of it is process time; the work sits idle for 1 hour and 40 minutes.
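
If it helps to see that decomposition spelled out, here’s a tiny Python sketch of the same example (the date and variable names are mine, for illustration only):

```python
from datetime import datetime

# The 1 pm -> 3 pm example above, decomposed. The date is arbitrary.
received   = datetime(2014, 4, 1, 13, 0)   # work arrives in Work Area A
handed_off = datetime(2014, 4, 1, 15, 0)   # work passed to Work Area B

lead_time_min = (handed_off - received).total_seconds() / 60
process_time_min = 20                       # touch time actually worked
idle_min = lead_time_min - process_time_min

print(lead_time_min, process_time_min, idle_min)  # 120.0 20 100.0
```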

Note, too, that WIP can accumulate during any of the three stages bulleted above. We typically include WIP on our value stream maps to see where the largest queuing and constraints lie versus using those numbers to calculate the lead time. (However, in high volume office “production” areas with dedicated resources, the lead time will approximate WIP/daily demand).

As you can see on the sample value stream maps in our book’s Appendices and on the VSM segment shown later in the post, the WIP for a particular process is shown to the left of the process block and includes WIP from any of the three stages listed above:

WIP image

Timeline Treatment

Sadly, there is no industry-wide standard for value stream metrics and timeline conventions. Some people break process time into value-adding and non-value-adding time and show the sum for each category. Some people separate pure waiting time from the lead time rather than viewing it as a single throughput time metric. Some people use the traditional square-wave timeline shape (commonly referred to as a “sawtooth” timeline), while others use a straight line. And so on.

When we first learned value stream mapping 15 years ago, we used the square-wave type timeline and placed the lead time on the “peak” of the timeline to the left of the process block it referred to. And we placed process time in the “trough” of the timeline directly below the process block it referred to.

Outpatient imaging simulation future state map

But over the years, we found that teams consistently got tripped up with which metric went where. And the metrics placement didn’t make intuitive sense to many teams when they had to reverse the placement in their minds in order to calculate Activity Ratio. [Activity Ratio (AR) is a summary metric we use to reflect the degree of flow, which we've described in Value Stream Mapping (p. 90), as well as our two earlier books, The Kaizen Event Planner and Metrics-Based Process Mapping: AR = (Sum of timeline process times/Sum of timeline lead times) x 100.] In the formula, process time is on top. On a sawtooth timeline, it’s on the bottom.
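
To see the Activity Ratio formula in action, here’s a small Python sketch; the process and lead times below are illustrative, not taken from an actual map:

```python
# AR = (sum of timeline process times / sum of timeline lead times) x 100
# Illustrative per-process-block values, all in minutes.

process_times = [20, 45, 10, 30]     # touch time at each process block
lead_times    = [120, 480, 60, 240]  # elapsed time at each process block

activity_ratio = sum(process_times) / sum(lead_times) * 100
print(f"AR = {activity_ratio:.1f}%")  # -> AR = 11.7%
```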

For these reasons, we eventually moved to a single-line timeline, with process time on the top and lead time on the bottom. In the example below, process time is above the line and expressed in minutes, and lead time is below the line and expressed in hours:

Straight VSM timeline

Because our preferred software for creating electronic versions of maps (which aid in distribution and storage), iGrafx® FlowCharter, doesn’t yet have a straight line option for the timeline, we created a workaround that enables us to place both metrics directly below the process block, which approximates a manually drawn straight line.

However, due to iGrafx’s hard-coded conventions, the lead time remains on top. I share this because it answers the question we’ve gotten about why the timelines on the sample value stream maps we included in the Value Stream Mapping appendices look the way they do:

Segment from Appendix E VSM 2

So that’s the story. I hope this clarifies why and how we treat lead time the way that we do. I invite your comments and will continue to periodically post responses to the questions we’re getting, so please keep them coming.

I wish you the best as you experiment with value stream mapping, or refine your use of it, to transform your value streams.

As a reminder, I offer free monthly webinars where I cover topics such as this. You can learn about them by subscribing: www.ksmartin.com/subscribe.

You can also listen to the recordings for past webinars on our website, YouTube, Vimeo, and SlideShare (which also includes the slides). I’ve given six webinars on value stream mapping in the past six months, which you may find helpful.

Click here to register for my next value stream mapping webinar, hosted by Gemba Academy on Wednesday April 23 at 9 am Pacific Time.

And for more information on the book, visit www.ksmartin.com/value-stream-mapping. We give in-house value stream mapping workshops as well.


The Power of Hope in Improvement

Hope Street

I love how conversations can challenge one’s thinking and spark new ideas. Interviews—for a new job, a board position, or with the media—are particularly rich opportunities to stretch your mental muscles and discover what you really believe. And sometimes you can be very surprised by where the conversation takes you.

One such surprise occurred for me during a recent podcast with LeanBlog’s Mark Graban about my latest book, Value Stream Mapping (with co-author Mike Osterling). Around 26 minutes into the podcast, the conversation turned to how powerful value stream maps are in illuminating the truth about current work systems and how invigorating it is when people see that they can actually fix the problems that have been creating organizational drag.

This led me to share a revelation I had had a day earlier about the vital role that hope plays in the improvement process: “Part of what the transition phase between the current state and future state is about is giving people hope. We don’t talk about hope in business circles. But when people are beaten down and frustrated with the amount of chaos that they deal with day in and day out, hope is a great antidote to resistance [to change]. And hope is the way forward.”

Now lest you think that topic’s excessively “squishy,” I went on to say: “Of course, it [hope] has to be followed by execution, but I think hope is a good place to start.” Mark added: “When people start to see the possibility, it’s great to see how people start to turn from despair to optimism.”

Indeed. I’ve long believed that improvement results are largely dependent on establishing success-oriented mindsets and preconditions, but considering the role of hope and optimism in solving problems and transforming organizations creates rich new territory for us to explore.

It’s been two months since we recorded the podcast and, in that time, I’ve visited six different clients. I’ve been paying particular attention to mindsets and looking for patterns around degrees of hope. I’ve also experimented with using hope to stimulate more innovative thinking. It seems to be working. Improvement teams at the last three clients have designed future states that far exceed those of any teams I’ve led in the past 20 years of being in business.

In the most dramatic case, a team has designed a future state that’s projected to deliver the following results:

  • Lead time reduction from 17 months to 7.5 months (56% improvement)
  • Freed capacity equivalent to 22 FTEs (full-time equivalents). Note: No layoffs will occur.
  • $25 million in freed working capital (annualized)

Time will tell how close this client comes to their projections (see Chapter 4 in Value Stream Mapping for information on calculating projected results), but I’m placing money on them. Their ability to achieve this dramatic level of improvement exists, in large part, because they had high levels of hope going into the transformation cycle they’re now in.

They also possessed high levels of three additional psychological levers that I’ve found are preconditions to making significant improvement:

  • Will — To succeed in improvement, you have to WANT to improve. All too often I see organizations whose actions don’t match their words. If you want to lose weight, but you’re not willing to become more active or alter your food intake, you’re not going to see any results. And merely checking a box (yay, I thought about weight loss!) isn’t going to move the scale’s needle at all. You either want it or you don’t. It’s disrespectful to all parties involved to approach improvement with no demonstrable will.
  • Belief — You have to BELIEVE that improvement is possible. While facilitating teams, I often sense their waning belief that they can create the future condition they desire. But as Theodore Roosevelt said, “Believe you can and you’re halfway there.” (Hat tip to @dirkhinze for Tweeting this quote over the weekend.) Obviously the organization needs to commit to improvement upfront (including the proper resources to make improvement happen) or a team’s lack of belief will be well-earned. But a skilled facilitator can build belief in a team that would otherwise fall prey to disbelief.
  • Courage — Making change of any sort is difficult. The more complex the improvement and the more people it touches, the more difficult it is. It takes a healthy dose of COURAGE and intestinal fortitude to successfully transform a culture and its work systems.  You need courageous leadership and courageous team members. You have to be willing to let leaders with outdated paradigms and management styles go. You have to be willing to have your Board, Congress, or Wall Street breathing down your neck when you opt for a measured approach to improvement. You have to be willing to go against the grain.

But underlying all of these is HOPE. Hope for a better tomorrow. Hope for less stress and frustration. Hope for shorter work days and more time with family. Hope for processes that don’t require heroics to succeed. Hope for customers who want to come back again and again and sing your praises to prospective customers.

It’s our job as leaders, improvement professionals, business management consultants, and academics to take a hard look at hope and do all we can to deliver on it. When you feel the visceral shift in a team as hope emerges, see physical changes that reflect that shift, and hear the verbal evidence that hope has indeed arrived, THAT’S when the magic happens.

It certainly takes a lot more than hope to get results. But hope is a damn good place to start.

To listen to the podcast referenced: www.leanblog.org/190.

Photo by Tracey Clark. Reprinted with permission.

 


7 Reasons Why Most Organizations Don’t Know Their Customers

Voice of the customer

It’s difficult to say when and where the concept of “business” was born. It’s often attributed to ancient Roman law and to British law in the early 1500s. The Dutch East India Company, chartered in 1602, is often viewed as the first multinational company and the first company to issue stock. In the U.S., business as we know it today was arguably developed at the dawn of the Industrial Revolution.

Regardless of when the concept of business was formed, it’s pretty evident that every business—without exception—has always been established to provide a good or service to a customer. Which means we’ve been serving customers for at least four centuries, and likely far longer. So why is obtaining and considering the voice of the customer—a business’s raison d’être—monumentally difficult for so many?

The #1 goal in the Lean management approach is to provide greater value to customers. Adding value is accomplished through a variety of means: better product design, better pricing, less operational waste, faster delivery, better quality, better post-sales service (often more important than the product itself),  and so on. But, as I described in my recent book, The Outstanding Organization, businesses must gain impeccable clarity about who their customers are and what they value. In other words, what—very specifically—are their needs and preferences? It is only by doing the heavy lifting to answer this core question that businesses have any chance at all of providing greater value and, therefore, becoming a Lean enterprise.

For the sake of brevity, I’ll skip the part about defining who one’s customers are. (But skipping the topic doesn’t diminish its importance. Some businesses are extremely clear, while others are shockingly unaware of who they’re actually serving. More about this in my book.)

Time and time again when I work with clients, I find that even leaders who oversee customer service have difficulty answering basic questions about what their customers value. Why? I’ve found the answers fall into seven buckets:

1. They don’t ask.

The comic strip below (shared with permission from Gordon Pritchard) depicts a shockingly common problem that typically stems from lack of interest, lack of time, fear of the truth, or lack of skill. But there’s zero chance of achieving any level of excellence if you don’t ask. “Don’t ask, don’t tell” doesn’t work in life, and it doesn’t work in business.

Voice of the Customer comic (Gordon Pritchard)

2. They rely solely on surveys. 

While surveys (of any sort) provide an efficient means to gather data from large numbers of people, there’s a big difference between data and information, and effectiveness trumps efficiency any day of the week. The biggest problem with surveys is that data interpretation and the resulting conclusions depend on sound surveys and survey processes to begin with—a requirement that is shockingly difficult to achieve.

3. They discount the importance of qualitative data. 

Getting to know one’s customers is best done in their environment, as they’re interfacing with an organization’s goods or services. Gaining a deep understanding about variation in needs and preferences is best achieved by observation and conversation, neither one of which can be accomplished via surveys. I like the term “thick data” (as opposed to “big data”), which I just learned while reading a well-written piece on the subject of qualitative data in this weekend’s Wall Street Journal.

4. They ask the wrong questions. 

Yesterday I received three customer surveys. Two of them were the wildly popular and, in my opinion, woefully ineffective Net Promoter Score (NPS) surveys that presumably measure customer loyalty:

Joss & Main survey

I’ve long questioned the cause-and-effect conclusion that recommendations necessarily translate into long-term customer loyalty. The minute a better product comes along, customers flock to it, so today’s success isn’t necessarily a good predictor of future success.

Even worse, customer loyalty today isn’t necessarily a strong indicator that organizations are providing high value. I’ve interfaced with many organizations that receive decent NPS scores but have significant operational problems that frustrate customers. In nearly every case, their NPS scores provided a false sense of security to senior leaders and slowed the desire for and pace of improvement. I’m not the only one with concerns. In response to one of my Tweets yesterday, Mark Graban shared a well-written analysis.

Don’t get me wrong. In concept, I like the simplicity of a single-question survey. But if that’s your goal, I believe the question to customers should be: “What can we do better that would improve your experience?” The answers to this question provide actionable information (which NPS lacks) and get far closer to truly understanding customer value. The “downside”: You need to actually do something with the answers, or your customers will quickly catch on that you’re merely checking a box and have no real interest in what they think.

5. They survey too much.

If I get another automated pop-up asking me to complete a customer survey, I’m going to scream. While it’s good news that seeking customer feedback is on the rise, it’s both lazy and disrespectful to program in a pop-up survey with every single interaction a customer has with a business. Nor is it wise to send an email with a survey link after each and every order a customer places.

At best, survey overkill breeds cynicism (“they don’t really care about me”), and at worst it erodes the customer experience. Plus, this practice often gives skewed results that reflect someone’s tolerance for the intrusion rather than their actual customer experience.

6. They attempt to influence the results.

You either want to know the truth or you don’t. Companies that attempt to influence their ratings are better off not asking for feedback at all. It’s insulting to a customer and the resulting data may bear little resemblance to reality.

Influencing takes many forms, from overt face-to-face begging (“please, our store will look better if you rate us a 5”) to more subtle means, such as timing a survey shortly after a “good news” event.

Whether intentional or not, pre-selecting the highest ratings is a form of influencing that can cloud the truth, as I experienced yesterday with Delta’s onboard survey, pictured below. If organizations don’t want the unvarnished truth, they should stop wasting their customers’ time.

Delta onboard survey

7. They draw the wrong conclusions.

Drawing the wrong conclusions, which can lead to poor decisions, is the greatest risk with quantitative data. A good example appears in the Lego story in the Wall Street Journal article I mentioned above. Only when Lego took the time and effort to truly get to know its customers did their business turn around. The information was clear, which resulted in better decisions, which led to better results.

When you go to the gemba (the real place—in this case, to the customer) and talk and observe, you get far richer information than even a well-written, well-administered survey can yield. Does it take more time and effort? Yes. But as I asserted earlier, effectiveness trumps efficiency. If you want to get to know your customers and what they truly value, talking with them directly is the only way. Aim for “thick data” over “big data.”

(Note: For efficiency’s sake, email is a viable option as long as you ask well-constructed open-ended questions, you carefully analyze their responses, and you ask follow-up questions to clarify, if needed.)
