24 Feb 2023

The magical effectiveness of chunking work into smaller batches


“There once were two watchmakers, named Hora and Tempus, who manufactured very fine watches. Both of them were highly regarded, and the phones in their workshops rang frequently; new customers were constantly calling them.

However, Hora prospered, while Tempus became poorer and poorer and finally lost his shop.

What was the reason?

The watches the men made consisted of about 1,000 parts each. Tempus had so constructed his that if he had one partly assembled and had to put it down — to answer the phone say — it immediately fell to pieces and had to be reassembled from the elements. The better the customers liked his watches, the more they phoned him, the more difficult it became for him to find enough uninterrupted time to finish a watch.

The watches that Hora made were no less complex than those of Tempus. But he had designed them so that he could put together subassemblies of about ten elements each. Ten of these subassemblies, again, could be put together into a larger subassembly; and a system of ten of the latter subassemblies constituted the whole watch. Hence, when Hora had to put down a partly assembled watch in order to answer the phone, he lost only a small part of his work, and he assembled his watches in only a fraction of the man-hours it took Tempus.

It is rather easy to make a quantitative analysis of the relative difficulty of the tasks of Tempus and Hora:

Suppose the probability that an interruption will occur while a part is being added to an incomplete assembly is p.

Then the probability that Tempus can complete a watch he has started without interruption is (1-p)^1000 — a very small number unless p is .001 or less. Each interruption will cost, on average, the time to assemble 1/p parts (the expected number assembled before interruption).

On the other hand, Hora has to complete one hundred eleven sub-assemblies of ten parts each. The probability that he will not be interrupted while completing any one of these is (1-p)^10, and each interruption will cost only about the time required to assemble five parts.

Now if p is about .01 – that is, there is one chance in a hundred that either watchmaker will be interrupted while adding any one part to an assembly — then a straightforward calculation shows that it will take Tempus, on the average, about four thousand times as long to assemble a watch as Hora.”

— Excerpt from the paper ‘The Architecture of Complexity’ by Herbert A. Simon
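For the curious, here's a quick Python sketch of the "straightforward calculation" Simon alludes to, using only the numbers from the excerpt. This is my own back-of-the-envelope reconstruction, not code from the paper:

```python
# Reconstructing Simon's estimate from the quantities in the excerpt above.
p = 0.01  # probability of an interruption while adding any one part

# Tempus: a single assembly of 1,000 parts.
p_tempus_finishes = (1 - p) ** 1000   # ~4.3e-5, i.e. roughly 44 in a million
work_lost_tempus = 1 / p              # ~100 parts of work lost per interruption

# Hora: 111 subassemblies of about 10 elements each.
p_hora_finishes = (1 - p) ** 10       # ~0.90
work_lost_hora = 5                    # ~5 parts of work lost per interruption

# Simon's estimate multiplies three ratios: attempts needed per completed
# assembly, work lost per interruption, and assemblies needed per watch.
attempts_ratio = p_hora_finishes / p_tempus_finishes   # ~20,000
loss_ratio = work_lost_tempus / work_lost_hora         # ~20
assemblies_ratio = 1 / 111                             # Tempus builds 1, Hora builds 111

print(round(attempts_ratio * loss_ratio * assemblies_ratio))
# -> 3775, i.e. "about four thousand times as long"
```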

I've written about the benefits of practicing modularity in life and systems before. Recently, I also wrote about batch vs. flow processing, and the many advantages the latter has over the former.

It is such an important principle because it is fundamentally linked to many of the failures we see in our lives and the world around us: failures that happen due to brittleness, lack of redundancy, and centralized dependencies that make a system fragile.

Imagine two complex adaptive systems, one organized modularly and one not.

At one moment, both might be able to exploit their environments equally and thus be equally "adapted" to their environment. But they will evolve at vastly different rates, with the one organized modularly quickly outstripping the one not so organized.

And the modularly organized system is able to do so because it has fewer dependencies and more degrees of freedom. This has some major benefits when it comes to product development, rapid experimentation, and evolution:

1. Fewer dependencies mean that failure in one part of the complex system doesn't translate to the breakdown of other parts.

Parts are loosely coupled and can buffer more volatility than tightly coupled parts, which fail together when something goes wrong.

For example, the Transmission Control Protocol (TCP) that runs the Internet breaks its messages into many small packets.

This is by design.

Consider what would happen if we sent a 10 MB file as a single large packet, with an average of one error per million bytes (1 MB).

We would average 10 errors in our transmission.

Consequently, we would have one chance in 22,026 that the file would reach the destination without an error.

In contrast, if we break this large file into 10,000 packets of 1,000 bytes each, there is only a 1-in-1,000 chance that an individual packet will become corrupted. We will have to resend an average of 10 of the 10,000 packets, an overhead of 0.1 percent.

If we didn’t break the message into smaller packets, we would have to send the entire 10 MB file about 22,026 times, on average, to get one error-free copy through: an overhead roughly 22 million times higher!

This lowers cost and improves throughput, because we can quickly forward messages with low computational overhead. Now, only one 1,000-byte packet will have to be resent per 1,000 packets.

That is vastly better than having to resend the entire 10 MB file and facing a 1-in-22,026 chance of getting it through without errors.
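If you want to check the arithmetic, here's a small Python sketch using the same numbers (an illustrative calculation, not an actual TCP implementation):

```python
import math

# The numbers from the example above: a 10 MB file, one error per million bytes.
FILE_BYTES = 10_000_000
ERROR_RATE = 1 / 1_000_000

# As one giant packet: we expect 10 errors, so the chance of a clean
# transmission is about e^-10 (treating errors as independent and rare).
expected_errors = FILE_BYTES * ERROR_RATE        # 10
p_clean = math.exp(-expected_errors)             # ~1 in 22,026

# As 10,000 packets of 1,000 bytes: each packet is corrupted with
# probability ~1/1,000, so we expect to resend about 10 small packets.
packets, packet_bytes = 10_000, 1_000
resend_bytes_small = packets * (packet_bytes * ERROR_RATE) * packet_bytes  # ~10,000 bytes (0.1%)

# Single-packet case: ~22,026 attempts on average to get one clean copy,
# i.e. ~22,025 wasted retransmissions of the full 10 MB file.
resend_bytes_single = (1 / p_clean - 1) * FILE_BYTES

print(f"overhead with small packets: {resend_bytes_small:,.0f} bytes")
print(f"single-packet overhead is ~{resend_bytes_single / resend_bytes_small:,.0f}x larger")
# -> roughly 22 million times larger
```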

Thinking about how smartly a lot of our infrastructure is designed just blows my mind.

2. Reducing batch sizes accelerates feedback.

And in product development, this rapid and timely feedback can have huge economic consequences.

Why? Because the consequences of failures usually increase exponentially when we delay feedback.

Each engineering decision acts as the foundation for many subsequent decisions, and the number of decisions that depend on that initial one grows geometrically over time.

Consequently, a single incorrect assumption can force us to change hundreds of later decisions.

Hence, when we delay feedback due to large batch sizes, rework becomes exponentially more expensive. With smaller batches and accelerated feedback, we can error-correct ourselves before our mistakes become too expensive to fix.
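Here's a toy Python model of that compounding effect. The numbers are made up purely for illustration: assume each decision spawns two dependent decisions per "generation" until feedback arrives, and see how much work one wrong assumption invalidates.

```python
# Toy model (illustrative numbers only): how many decisions get stacked on top of
# a wrong assumption before feedback arrives, if each decision spawns `branching`
# dependent decisions per generation.

def decisions_built_on_mistake(generations_before_feedback: int, branching: float = 2.0) -> float:
    return sum(branching ** g for g in range(generations_before_feedback))

for batch in (2, 5, 10, 20):
    print(f"feedback after {batch:>2} generations -> "
          f"~{decisions_built_on_mistake(batch):,.0f} decisions to rework")

# feedback after  2 generations -> ~3 decisions to rework
# feedback after  5 generations -> ~31 decisions to rework
# feedback after 10 generations -> ~1,023 decisions to rework
# feedback after 20 generations -> ~1,048,575 decisions to rework
```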

3. Smaller modules increase the rate of learning.

Breaking a process down into a larger number of modules, i.e., smaller batches, means you can have specialists running each module, and it is easier to become proficient at handling a smaller module.

The rate of learning increases, and so does the level of detail at which each module is thought about.

Also, bugs in the system are detected and solved faster, because they're now one bug out of 30 versus one bug out of 300.

The latter is exponentially harder to debug due to the complexity and interconnectedness of the system.

4. Reducing batch sizes reduces overall volatility in the system.

When you have large batches, you cause periodic overloads.

For example, a restaurant may be able to handle a continuous stream of customers arriving in groups of three or four people, but a bus full of 80 tourists will overload its processes.

This large batch progressively overloads the seating process, the ordering process, the kitchen, and the table-clearing process.

Reducing batch sizes can lower this kind of volatility in demand, and sometimes, completely eliminate waiting times.

For instance, if your whole 1,000-person organization at a single location goes to lunch at exactly the same time, people might complain about the queue in the cafeteria. And if you naively assume this queue is caused by insufficient capacity, you will try to increase the capacity of the canteen.

But if you were wiser, you would stagger lunch hours into multiple time slots and send a smaller batch of people in each slot. This alone will eliminate the capacity overload in the kitchen and significantly reduce queues.
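A tiny simulation makes the point. The numbers below are my own illustrative assumptions (a kitchen that can serve 250 people per slot): one batch of 1,000 overloads the kitchen for several slots, while four staggered batches of 250 create no queue at all.

```python
# Toy cafeteria model (illustrative numbers, not from the article): the kitchen
# serves up to `capacity` people per time slot; anyone not served waits.

def queue_over_time(arrivals_per_slot, capacity=250):
    waiting, history = 0, []
    for arriving in arrivals_per_slot:
        waiting += arriving
        waiting -= min(waiting, capacity)   # serve as many as capacity allows
        history.append(waiting)             # people still queuing after this slot
    return history

one_big_batch = [1000, 0, 0, 0]       # everyone shows up in the first slot
staggered     = [250, 250, 250, 250]  # the same 1,000 people, in four smaller batches

print(queue_over_time(one_big_batch))  # [750, 500, 250, 0] -- a long queue that drains slowly
print(queue_over_time(staggered))      # [0, 0, 0, 0]       -- nobody waits
```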

5. Large batches are demotivating.

When you have a single 3-month deadline, you have permission to slack for the first two months. But when you have multiple 5-day deadlines, you have no place to hide.

With short deadlines, you feel more urgency. With a single long one, even if you're behind schedule, there appears to be plenty of time left before the deadline. And over those 3 months, you will have a tonne of excuses: a lot of things will go wrong in the meanwhile that you can easily blame on other factors and people.

As a result, you will feel less responsible for the overall outcome.

Also, large batch sizes demotivate by delaying feedback, which has huge psychological costs. When the team is experimenting with a new idea, rapid feedback is enormously energizing. Rapid feedback quickly supplies the positive reinforcement of success, and fast positive reinforcement always increases motivation. It helps us subconsciously associate the delivery of our work product with the beginning of the next process step.

This gives us a sense of control.

Even when ideas fail, people prefer to get rapid feedback, because it allows them to move on from what isn't working, quickly.

Think about it. Wouldn't it be demotivating to invest a lot of effort proceeding in the wrong direction, and then have your manager tell you that you've done it all wrong?

This isn't just true in an organizational setting; it is true for personal projects as well. You can have ambitious goals, but make sure you're evaluating progress in smaller chunks. It will keep you accountable, and you will error-correct faster.

This idea is one I've also hinted at in this essay.

6. More modularity means easier upgradability.

When a part fails, the entire system doesn't have to be overhauled; only that part needs to be replaced, which lets the system grow in a very bottom-up, organic fashion.

Think about cells in the human body. Each cell is connected to and communicates with other cells but still maintains its own independence as an entity. Our bodies replace millions of cells every single day, yet this doesn't affect the functioning of the system.

Modularity appears to be an evolved property in biology — one that can be mimicked in the organization of human knowledge and organizational structures.

I hope that after reading this, some of your gears are turning.

There are still more advantages to breaking work down into smaller modules or batches, but for those, you will first need to understand some more control-theory jargon.

Don't fret, we will be going down this rabbit hole through the next few weeks and months.

All of these fundamental concepts are deeply connected to each other. And the intent is to make you better at operations and management by training your systems-thinking muscle.

You'll see.

This is also an easter egg for something good coming later this year ;)

P. S. – The source for some of the many concepts I've referenced here is this excellent book called The Principles of Product Development Flow by Donald G. Reinertsen.
