

Automated Value Stream Maps

9 May 2020 by Robert Falkowitz

The previous article in this series gave an overview of visualization types useful for managing services but rarely seen. In this article, I will examine in detail a key visualization, the value stream map (VSM). I do not intend to explain how to use VSMs. This article assumes a basic understanding of value streams and of value stream maps. Instead, I will examine how you might automatically create and update that visualization within service and operations management tools.1

What is a value stream map?

Fig. 1: An example of a value stream map

A value stream map is one of the key visualizations used in lean management. It describes the flow of information and materials through a value stream. Many readily available sources describe these maps, so I will not go into any detail here. I will only note the iterative use of the maps in value stream mapping. This activity supports the continual improvement of an organization. It especially concerns identifying waste in the flow of materials and information.

Tools for creating value stream maps manually

Fig. 2: Tools for the manual design of value stream maps

Many different tools are capable of creating value stream maps. Virtually all these tools provide a VSM template, icons and drawing tools to enter text, position icons and draw connections.

I might mention in passing the simplest of tools: pencil, eraser and paper, or whiteboard, marker and eraser. Using these tools, especially in a group activity, allows for personal interactions like body language and tones of voice. Automated tools have no channels to communicate those interactions.

However useful such manually created diagrams might be, they have no built-in intelligence. They do not connect automatically to any underpinning data. Users may validate the accuracy of the diagram only manually. Maintaining the maps is labor-intensive. In short, such tools cannot create automated value stream maps.

Partially automated value stream maps

Fig. 3: Some tools allow for automatic update of data in value stream map labels

Certain tools go a step beyond this sort of simple drawing. They allow shapes in the VSM to be related to data in spreadsheets. As the data in spreadsheets changes, managers may need to alter the diagram. In some cases, this synchronization may be automated.

In their simplest form, such tools remain essentially drawing tools. The user must manually create the objects on the VSM. In the more sophisticated form, these tools can draw complete VSMs based on data in the spreadsheet. To my knowledge, such tools hard-code the style and layout of the resulting VSM. They represent the simplest form of the automated value stream map.

Integrating VSM creation with service system management tools

The next step in the creation and maintenance of automated value stream maps would be to bypass the spreadsheets. Service management or operations management tools may directly provide the data to VSMs from the operational data they manage.

We may divide the setup of such automation into seven areas:

  • the design of a VSM template
  • the definition of the value stream
  • the identification of the data sources
  • the linking of the data sources to the VSM object attributes
  • the identification of thresholds to trigger alerts
  • the definition of analyses of the VSM data
  • the program for updating and communicating the VSMs

Once the designers complete this setup, the system may create VSMs in a largely automated way. As we will see, we may also automate some of the uses of VSMs, once delivered to the desired audience.

Design the VSM Template

A VSM template may define the default attributes for a VSM. These attributes might include the shapes and icons to use, the color palette, fonts and so forth. Technically, the template might take the form of an XSL style sheet applied to XML data.
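To make the idea concrete, here is a minimal sketch, in Python rather than XSLT, of what such a template's default attributes might look like. All names, shapes and color values are illustrative assumptions, not features of any existing tool:

```python
from dataclasses import dataclass, field

# A hypothetical VSM template: the default presentation attributes
# applied to every map generated from operational data. In an
# XSLT-based implementation these defaults would live in the style
# sheet rather than in code.
@dataclass
class VsmTemplate:
    process_shape: str = "rectangle"
    inventory_shape: str = "triangle"
    font: str = "Helvetica"
    palette: dict = field(default_factory=lambda: {
        "normal": "#2a6f4e",  # value within thresholds
        "alert": "#c0392b",   # threshold breached
    })

    def color_for(self, breached: bool) -> str:
        """Choose the display color according to the alert state."""
        return self.palette["alert"] if breached else self.palette["normal"]

template = VsmTemplate()
```

The point of separating these defaults from the data is that the same operational feed can be rendered with different house styles.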

The manual choices made by designers prevent the automation of template creation. Of course, some future and particularly sophisticated AI might be capable of executing this task.

Define the Value Stream

Value stream managers may define the value stream in a map either visually or by direct coding. Designers already do such work using business process automation tools or BPMN notation. They might find it easier to define the value stream phases and other VSM components using a visual tool. Theoretically, designers could directly write, or tune, the underpinning XML code. We might dub this technique “value stream as code”, analogous to “infrastructure as code”.
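A “value stream as code” definition might look like the following sketch. The phase names and data source identifiers are invented for illustration; a real definition would more likely be XML or YAML consumed by the mapping tool:

```python
# "Value stream as code": the phases and their linked data sources
# declared in a machine-readable structure, analogous to
# infrastructure as code. All names here are hypothetical.
value_stream = {
    "name": "laptop provisioning",
    "phases": [
        {"name": "order",   "cycle_time_source": "ticket_log"},
        {"name": "prepare", "cycle_time_source": "kanban_board"},
        {"name": "deliver", "cycle_time_source": "courier_api"},
    ],
}

def phase_names(stream):
    """Return the ordered phase names, e.g. for laying out the map."""
    return [p["name"] for p in stream["phases"]]
```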

Lean management calls for gemba walks at the workplace to identify the phases of the value stream used in practice. How shall we conceive of a gemba walk when an IT system performs the service or process?

Certain tools can sniff network packets and trace other system artifacts. They add the intelligence needed to the flow of these virtual materials. Using such tools, it might be possible to identify flow based on the reality of how the service system processes information. If possible, we should prefer this approach to basing the value stream on the theoretical architectural designs of a service.

For example, an electronic mail delivery service attaches unique identifiers to messages, allowing the real processing of those messages to be traced. We could apply a similar approach to other services if they had the necessary identifiers. There might be other methods to identify automatically how a system processes data.

Among the factors influencing the usability of such methods are:

  • the degree to which nodes are shared
  • the complexity of the processing
  • the design of the information packet
  • the technologies in use

Automating the identification of the value stream phases might be possible if the service system were designed to allow the necessary tracing.2

Identify the Data Sources

Data maintained in automated management tools may supply most of the object attributes displayed on a VSM. I note below the exceptions that depend on manual updates.

You will see in the diagrams below that I suggest automated updates based on data in log files. In principle, those data represent the reality of what happens in a service system. This reality may well be different from what we find in normative configuration records, agreements and other such sources.

Cycle Times

Fig. 4: Cycle times may be captured from many sources, most, but not all, automatically

Cycle times may be measured and reported using various sources. Computer inputs and outputs might be timestamped. Kanban boards, whether physical or virtual, might record start and end times. Executors of purely manual tasks might report their cycle times.

In some cases, designers might calculate mean cycle times using Little’s Law:

Mean Lead Time = Mean work items in progress / Mean work items per unit time

Make sure that the measured times do not include non-value-adding time.
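Little's Law as stated above can be applied in a few lines; the numbers below are invented for illustration:

```python
def mean_lead_time(mean_wip: float, mean_throughput: float) -> float:
    """Little's Law: mean lead time equals mean work items in
    progress divided by mean work items completed per unit time."""
    return mean_wip / mean_throughput

# Example: an average of 12 work items in progress and 3 items
# completed per day imply a mean lead time of 4 days.
lead = mean_lead_time(12, 3)
```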

When machines perform work, we can distinguish between value-adding time and non-value-adding time in a straightforward way. When people perform work, only the executor of the task can really distinguish what was value-adding from what was not. Consider the issues associated with completing a weekly timesheet, recording the amount of work done on each assigned project.

Who knows what percentage of the time spent on a single task was value-adding? In general, only the person performing a task knows that. Note that the mere fact of recording such information is, itself, non-value-adding. Furthermore, worker biases and other forms of error depreciate the reliability of such time estimates. Consequently, you may wish to collect these data only periodically, not continuously. Also, independent controls on the data recorded could help reduce bias and improve accuracy.

Take care to avoid high levels of measurement overhead. Random sampling may help to reduce that overhead, especially for a high volume of work items during the measurement period.
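Random sampling of work items for measurement might be sketched as follows; the sampling rate and the fixed seed are illustrative choices, not recommendations:

```python
import random

def sample_for_measurement(work_item_ids, rate=0.1, seed=42):
    """Select roughly `rate` of the work items for cycle-time
    measurement, keeping measurement overhead low."""
    rng = random.Random(seed)  # fixed seed only for reproducibility
    return [i for i in work_item_ids if rng.random() < rate]

# Sample about 10% of 1000 work items for detailed measurement.
sampled = sample_for_measurement(range(1000), rate=0.1)
```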

Queue/Inventory Sizes

A value stream map should report aggregated values of queue size. Instantaneous measurements of queue size support proactive allocation of resources and unblocking activities. However, they do not support optimization activities based on value stream maps. Instead, we seek such statistics as mean inventory size and standard deviation over the sample period.

If computerized agents perform services, monitoring agents can measure queue sizes. For example, a message transfer agent (MTA) will have an input and an output queue. Standard agents can measure the size of those queues and report those data to the event management system.

For manual work, designers may derive queue sizes from kanban boards. The board designer may split each value stream phase into an “in progress” sub-column and a “completed” sub-column. In that case, the queue sizes may be viewed directly from the “completed” sub-columns. Otherwise, the “Ready” or “Backlog” columns to the left side of kanban boards display such work. Portfolio kanban software would be particularly useful for gathering such data. Furthermore, it can help ensure the same data are not counted multiple times.
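The aggregation described above might be as simple as this sketch, using invented hourly snapshots of a “completed” sub-column:

```python
from statistics import mean, stdev

# Hourly snapshots of the number of cards in a "completed"
# sub-column, i.e. work queued for the next value stream phase.
# The figures are illustrative.
queue_snapshots = [4, 6, 5, 7, 6, 8, 5, 7]

mean_queue = mean(queue_snapshots)   # mean inventory size
queue_sd = stdev(queue_snapshots)    # variability over the period
```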

For physical materials, the machines that automate the handling of materials may provide inventory sizes. Supply chain data may also provide the data needed for their calculation.

In an information technology context, inventories of goods might include computers, spare parts and other devices. These components may be in storage, awaiting use or in the intermediate phases of a value stream. For example, a technician may clone a batch of disks to prepare computers for deployment to the desktop. After preparation, but before installation in the computers, they form part of an intermediate inventory.

The diagram for cycle times (Fig. 4) is also mostly relevant to capturing queue sizes.

Availability

Fig. 5: The availability of service system components (at the functional level) may be captured automatically, for the most part

In an automated value stream map, we should consider the availability of the whole system required for each value stream phase. Drilling down to the individual components becomes important only to define specific actions to improve availability.

Analysts may measure the availability of computing devices and other machinery in many ways. For value stream mapping, the most appropriate way is to subtract the downtime due to incidents from the planned service time, then divide the result by the planned service time. However, I would not generalize the use of this metric for availability.3
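That calculation is straightforward to automate once incident downtime is known; the figures below are invented for illustration:

```python
def availability(planned_minutes: float, downtime_minutes: float) -> float:
    """Availability for VSM purposes: planned service time minus
    incident downtime, divided by planned service time."""
    return (planned_minutes - downtime_minutes) / planned_minutes

# Example: 48 minutes of incident downtime in a 2400-minute
# planned service period gives 98% availability.
a = availability(2400, 48)
```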

The service management tool should understand the relationship of system components to the successful completion of each phase of the value stream. Incident tracking needs to be able to identify when any of those components have failed. It further needs to relate those failures to the components. In this way, the service management tool can automatically calculate availability for the value stream maps.

Resource and Capacity Use

The service management tool should detect system component unavailability. It should also know how much of their theoretical capacity the service or process uses over any given period, and how capacity use relates to performance.

Measuring the use of non-IT machines is more straightforward. Some machines are either on or off. Others can function at different speeds. Agents can generally measure the percentage of processing cycles used on computing processors. Combine this statistic with the processing capacity of a single cycle. Storage use, too, is very simple to measure.

The management tool should also have an idea of how capacity use affects performance. For example, running a machine faster might increase its failure rate and hasten the time before the next preventive maintenance. The use of a computer processor might have some logarithmic relationship of capacity use to performance. Similarly, working people to exhaustion generally increases the error rate and lowers throughput. The over-use of resources generally provokes some form of waste. Inversely, the under-use of resources is another form of waste.
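As an illustration of why over-use provokes waste, consider a textbook queueing-theory relationship. The M/M/1 approximation below is my own choice for illustration; the article does not prescribe any particular model:

```python
def relative_lead_time(utilization: float) -> float:
    """Expected lead time relative to an unloaded system, using the
    M/M/1 queueing approximation 1 / (1 - utilization). A textbook
    illustration only: real systems need empirical calibration."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return 1.0 / (1.0 - utilization)

low = relative_lead_time(0.5)   # about 2x the unloaded lead time
high = relative_lead_time(0.9)  # about 10x: over-use creates waste
```

The qualitative lesson matches the text: as capacity use approaches 100%, delay, and therefore waste, grows without bound.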

Defect Rates

Being able to measure defect rates at each phase of the value stream implies that:
  • each phase has distinct criteria for successful completion
  • these criteria are tested at handover time to the next phase
  • the results of such tests (at least, the negative results) are logged
Logs may record the failures to meet those success criteria. The relevant automated value stream maps derive data directly from those logs. Application developers may include in their applications the capability to report intermediate failures to respect success criteria. There is increasing pressure on all developers to enhance in this way the observability of how software works.

When workers detect defects manually, such as via visual inspection of an intermediate product, they should maintain a corresponding manual log. The tool creating the automated value stream maps may process this log for reporting those defects on the maps.

Customer reports are also a source of information about defects. Customer support request records may contain numeric defect data. Records of the return of merchandise (if applicable) may also contain such data. Channels such as complaints to sales personnel may contain anecdotal defect information.

Take care to avoid the double-counting of defects. Stopping production upon detection of a defect and not passing defective products down the value stream serve to prevent miscounting.
Fig. 6: Defects in goods or services may be captured via various channels
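Deriving defect rates from handover logs might be sketched as follows. The log format and phase names are invented for illustration:

```python
from collections import Counter

# Hypothetical handover log lines in "phase,result" form. In
# practice these would come from the service system's own logs.
log_lines = [
    "assemble,ok", "assemble,defect", "test,ok",
    "assemble,ok", "test,defect", "test,ok",
]

def defect_rates(lines):
    """Count defects per phase, divided by total handovers per phase."""
    totals, defects = Counter(), Counter()
    for line in lines:
        phase, result = line.split(",")
        totals[phase] += 1
        if result == "defect":
            defects[phase] += 1
    return {p: defects[p] / totals[p] for p in totals}

rates = defect_rates(log_lines)
```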

Batch Sizes

The size of a batch of work can have a very significant effect on the flow of that work. Consequently, it can have a significant impact on throughput and lead times. Despite this impact, service management tools do not generally provide a structured way of defining and managing batch sizes. Therefore, it might be difficult to automate the reporting of batch sizes in a VSM.

In a retail store, batch size might be the quantity of items ordered from a distributor when it is time to restock. In a factory, batch size might be the number of components to assemble when it is time to fetch more to a station on that line. But what do we mean by “batch size” in the context of services delivered via software applications?

Software applications might manage the flow and processing of information in batches, as distinct from handling every transaction separately. A daily accounts closing that aggregates the day's transactions and balances exemplifies this. Responding to queries in blocks of data, rather than delivering all results at once, is another example. Thus, you might see the results of a query in, say, 25 lines at a time. If you want to see more, you click on the “See more” button.
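The query-pagination example can be sketched directly; the batch size of 25 matches the example in the text:

```python
def paginate(results, batch_size=25):
    """Yield query results in fixed-size blocks, as a "See more"
    interface does, rather than delivering everything at once."""
    for start in range(0, len(results), batch_size):
        yield results[start:start + batch_size]

# 60 query results delivered as batches of 25, 25 and 10.
pages = list(paginate(list(range(60)), batch_size=25))
```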

Batching of work also occurs in the management of technology components. For example, when a user in your company needs a new computer, do you prepare just a single computer and deliver it, or do you prepare a batch of computers? Technicians use the logic that larger batches of computers prepared in advance permit more rapid deliveries. Of course, doing such work in batches may also lead to various forms of waste, such as overproduction and rework.

Therefore, there is a case for knowing and reporting the sizes of batches. Tuning batch size is part of the incremental changes you might make to optimize the flow of work.

Data about the sizes of batches might hide in various places in management tools. Work orders instructing someone to prepare x number of components might contain batch sizes. Application configuration files or database records might contain them. Or they might be implicit in the capacity of the infrastructure used.

For example, the size of a batch of goods delivered by truck might be “as many as can fit”. The number of new disks in a batch might be “the number of connections to the ghosting apparatus”. Remember, though, that a gemba walk might reveal actual batch sizes that differ from the planned or theoretical sizes.

Fig. 7: Data about batch sizes may be sourced in many places, some of which workers record manually

Changeover and Maintenance Times

Changeover times might have a high impact on the flow of work on assembly lines. However, software systems, by their very nature, do not have such issues. Or, at least, they perform changeovers rapidly. The waste of such changeovers may become noticeable only when value stream managers eliminate far more important sources of waste.

We may consider two types of software changeovers. First, system managers might stop some software running on a platform to free up resources for a different software. Shutting down a virtual machine and starting up another virtual machine on the same platform exemplifies this need. Another example is shutting down one application followed by starting up another application.

The second case is typical of any operating system supporting pre-emptive multitasking. The processor cycles dedicated to process context switching are a form of changeover and waste. Monitoring the number of context switches, as opposed to their duration, is generally possible.
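On Linux, for example, the cumulative context-switch count appears on the ctxt line of /proc/stat. The sketch below parses sample text rather than reading the real file, so that it is self-contained; the numbers are illustrative:

```python
# On Linux, the cumulative number of context switches since boot
# appears on the "ctxt" line of /proc/stat. We parse sample text
# here instead of reading the real file.
SAMPLE_PROC_STAT = """\
cpu  2255 34 2290 22625563
ctxt 1990473
btime 1062191376
"""

def context_switches(proc_stat_text: str) -> int:
    """Extract the cumulative context-switch counter."""
    for line in proc_stat_text.splitlines():
        if line.startswith("ctxt"):
            return int(line.split()[1])
    raise ValueError("no ctxt line found")

count = context_switches(SAMPLE_PROC_STAT)
```

A monitoring agent could sample this counter periodically and report the rate of context switching as a changeover-waste indicator.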

Whether a system is hardware or software, it may require shutdowns for maintenance purposes. Technicians often perform manual maintenance tasks according to work orders generated by the production control system. However, derive the data for the VSMs from the aggregate of the actual maintenance times rather than from the expected times that work orders might indicate. Log and report automated maintenance tasks (which are generally non-value-adding activities). Examples include the periodic rebooting of servers or the shutdown of applications during the reorganization of indexes.

Similarly, virtually all software batch operations are non-value-adding actions. Think of importing daily exchange rates, adding the day's transactions to a data warehouse or the periodic closing of books. These are not forms of maintenance, however. Report these activities as phases of the value stream, especially if they are performed frequently.

Fig. 8: Many changeover and maintenance activities automatically write to logs, but manual activities require the technician to record the execution times

Link the Data Sources to the VSM Objects

We have seen that a VSM may contain automatically reported data derived from various management tools. Some data, however, might be difficult to obtain automatically. Other data might reflect planned or expected values rather than the actual operational values.

The VSM designer must link the identified data sources to objects in the value stream map. For example, link each inventory shape to the calculation of its inventory size. Link mean cycle times to the segments in the VSM’s timeline, and so forth.

Identify Alert Thresholds and Algorithms

Managers might use value stream maps to visualize how various components of a service operate. But they use them principally to identify forms of waste and potential improvement activities. So, let's also try to automate the VSM's use in identifying issues and improvements.

The automatic identification of issues depends, obviously, on first determining the criteria indicative of an issue. These criteria might be simplistic thresholds or more sophisticated algorithms, such as those used by AI analytics. To the extent that thresholds are used, a service management tool might already record their definitions. The most obvious sources would be the agreements with customers and suppliers to respect certain lead times. They might also contain records of capacity thresholds for various service system components. Older approaches may have defined performance criteria in OLAs. (OLAs may be deprecated in methods focusing on the customer and using multidisciplinary teams responsible for entire services.)

Other sources of data might include industry benchmarks. For example, flow efficiency is a standard metric for flow management. It is defined as value-added time divided by total cycle time, expressed as a percentage. It is commonly reported on value stream maps. Knowledge work activities like software engineering commonly have a flow efficiency of 5% to 15%. In other words, flow is abysmally poor.
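Flow efficiency and a simplistic threshold alert might be computed as follows. The 15% threshold is illustrative, drawn from the knowledge-work range just cited:

```python
def flow_efficiency(value_added_time: float, total_cycle_time: float) -> float:
    """Flow efficiency as a percentage: value-added time divided by
    total cycle time."""
    return 100 * value_added_time / total_cycle_time

def breaches_threshold(efficiency_pct: float, threshold_pct: float = 15.0) -> bool:
    """A simplistic alert rule; real tools might use richer
    analytics. The 15% default is an illustrative assumption."""
    return efficiency_pct < threshold_pct

# Example: 2 value-added days in a 20-day cycle is 10% flow
# efficiency, below the illustrative threshold.
eff = flow_efficiency(2, 20)
alert = breaches_threshold(eff)
```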

Define Visual Analytics

Value stream maps should visually indicate issues worthy of further investigation and action. Only the imagination limits the visual techniques that VSM designers might use to highlight such issues. Examples of visualization techniques might include:
  • special colors within the color scheme in use
  • special fonts
  • changes to backgrounds around the objects or the labels concerned
  • text annotations
  • fish-eye display of map objects worthy of closer attention
Fig. 9: Illustration of various techniques used to highlight issues on an automated value stream map

Update and Communicate the Automated Value Stream Maps

Value stream maps need to be kept up to date. Value stream managers must have timely access to the updated versions. We shall want to automate these updates and map distributions as much as possible.

Service management tools commonly have the capability to automate the update of visualizations. No innovations would be required to implement this function for value stream maps. Similarly, the communication of service management information is already well advanced. Tools support pushing information (generally by some electronic messaging) and pulling information (making it available via some information portal). More sophisticated tools also allow for subscribing to specific reports.

Validate and Decide

The simplest of drawing tools allow for group interactions and types of non-verbal communication. Unfortunately, electronic and automated tools provide no good channels for this type of communication. (Do you believe that adding a smiley to the end of a written message has the same force as a genuine smile from the person looking at you?)

Furthermore, we should not underplay the value of struggling with building a visualization manually to enhance learning and acceptance. Are you more apt to understand the visualization in whose creation you have participated, or the visualization that a machine has created for you?

It is not a good idea to apply the information displayed in an automated value stream map without further analysis or challenge. If that were acceptable, such visualizations would be pointless: we could just let the automated creation process take the necessary improvement steps on its own! Therefore, value stream managers need to view the maps analytically. They need to discuss them and decide for themselves how to benefit from the information they display. They should attempt to discover their own insights. Only then should they decide which improvements to implement.

Implement Improvements

Implementation of the changes intended to improve the value stream concludes an iteration of the value stream mapping activity. What role could the automated value stream map play in this implementation activity? For many, the map would play no role at all during the implementation of the change.

An automated value stream map may conceivably act as a sort of operational control panel for a value stream. In other words, there could be a two-way relationship between value stream operations and the map. On the one hand, the map is drawn directly from operational data. On the other hand, changes to the map could automatically change the parameters of the flow of work. For example, batch sizes, shift duration and resource counts could be altered within an electronic automated value stream map. With such a technology, the map might also be used to test hypotheses about the impact of changing flow parameters. Most organizations, however, have a very long way to go before they develop such capabilities.

Summary of Benefits of Automated Value Stream Maps

We assume that the members of a service delivery organization have achieved consensus on what a value stream map should display. In this case, automation will vastly decrease the time needed to generate an acceptable and useful map.

As continual improvement tools, automated value stream maps may be useful in creating simulations of proposed improvements. Value stream managers may visually compare the situation of the recent past to a simulation of a proposed future. Visual simulations would be especially beneficial if the proposed changes were to alter the phases of the value stream itself.

Furthermore, automating the calculation and display of operational values removes the risk of certain errors in the map. When a person types or pens in a value (e.g., a cycle time) there is the risk of misreading or mishearing the value. That person might misplace the decimal point or write the wrong number of zeros. Automation also leads to consistency of output, which enhances the comparability of maps. This consistency is especially important in the algorithms used to calculate the numeric statistics reported on value stream maps. Two different persons might have different views on how to calculate availability; a single software instance for creating a map does not.

Summary of Drawbacks of Automated Value Stream Maps

I have already alluded above to the benefit of creating a value stream map manually. The creators struggle together in finding the best ways to present the information on the map. They might decide to adapt the map for the particular purposes of a given service. In the end, they understand the details of the map because they created each part themselves. Merely being presented with an image created by a third party makes learning from the map harder.

I described above how an automated value stream map might include visual indicators of factors that lead to waste. While they enhance map usability, they also present the risk of ignoring factors that are not visually dominant. Compare this situation to the bias of assuming all is well if the light on the control panel is green.

Setting up the automation of value stream map creation is itself a labor-intensive activity. It makes sense only if the resulting system will create value stream maps regularly. This would be the case if value stream maps were being used as intended. However, some immature organizations might think of value stream maps as one-off types of documentation. They might create them once and then file them. In such cases, automation makes little sense.

As with any form of automation, it makes sense if it solves one or more problems an organization is facing. But if the organization cannot identify the problems it is trying to solve, it cannot understand the value of automation. Such automation efforts are likely to be misguided and wasteful.

The article Automated Value Stream Maps by Robert S. Falkowitz, including all its contents, is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Notes

1 This is not a tutorial on how to use any particular service management tool. To my knowledge, no service management tool currently has the capability to automatically create and maintain value stream maps. However, if users are aware of what is possible, without very much effort on the part of the tool designers, they might start to request such capabilities.

2 Various tools exist that can track the flow of events through a service system. I am thinking of products from companies such as Dynatrace, New Relic, Amplitude or Splunk (no endorsement intended). The trick is to relate those events to the much higher-level value stream phases. It is unlikely that such relationships can be identified automatically.
3 When measuring the availability of an IT-based service, I generally recommend defining the metric as a percentage of customer requests that the system can fulfill. In this customer-oriented way, we avoid considering a system to be unavailable when no one wants to use it. However, the traditional use of value stream maps in a manufacturing context understands the availability of machinery as the percentage of planned time that equipment is functioning correctly. This interpretation corresponds to the IT definition of availability as measurable in terms of the percentage of time a component is down.
Credits

Unless otherwise indicated here, the diagrams are the work of the author.
