The excitement and disruption around AI continue to build. It feels like organisations far from the tech cutting edge are beginning to play with it and figure out how to incorporate it into their processes; we’re speed-running Rogers’s adoption curve.

The technological aspects are fascinating, but I’m also intrigued by the impact on organisations. In particular, I’ve been wondering what it will take to successfully incorporate a new technological change like AI into an existing organisation. Where will it succeed and where will it fail? That jogged my memory of a brilliant paper on 1950s coal mines. It might seem like a strange comparison, but there are always lessons to be found in the past.

Have a great week,
Rob

This week’s article

AI and coal mines

What organisations adopting AI can learn from 1950s coal mines

We’re clearly on the cusp of a technological change at least as significant as the advent of computers, as AI (or at least generative AI) becomes widely accessible and works its way into many organisations. But as people hurtle headlong into experimenting with it, which organisations will adapt successfully to it and which will fail?

While AI is novel, and its exact impacts are difficult to predict, it is in lots of ways a technological innovation like any other that’s gone before, and organisations will have to adapt in the same way with the same dynamics. There are lessons to be learned from every historical innovation, and for an example that’s about as far removed from AI as it gets, we can turn to the pioneering work of organisational psychologists Eric Trist and Ken Bamforth in coal mines in the 1950s.

As Richard Burton beautifully described, coal mining was always hard work, but it was once artful, thrilling, and exciting:

“He would look at the seam of coal, and as it were almost surgically make a mark on it. And he’d say to his boy… ‘give me a number two mandrill’, that’s a half-headed pick, then, having stared at this gorgeous display of black shining ribbon of coal, he would hit it with one enormous blow and, if he hit it right, something like twenty tons of coal would fall out from the coal face. That’s why… miners believe themselves to be the aristocrats of the working class. They felt superior to all other manual labourers. That coalface was a magical creature.”

As it grew in scale and mechanised, though, the industry came to define people’s roles in a “one man, one task” manner, in order to make labour easy to hire and manage. By the 1950s it was an utterly regimented industry. But the result was a catastrophic decline in productivity and morale: managers saw workers as interchangeable, and workers felt like cogs in a machine. The relationship between the two was already – thirty years before the miners’ strikes – at best suspicious, and at worst actively hostile.

Trist and Bamforth went to visit a mine in South Yorkshire that seemed to buck the trend: productivity was high and morale was good. This seemed to be mainly down to the introduction of a new technology that made it possible to extract coal from parts of the mine that had previously been unworkable. Alongside that new technology came completely different levels of productivity and completely different working conditions:

“The work organization of the new seam was, to us, a novel phenomenon consisting of a set of relatively autonomous groups interchanging roles and shifts and regulating their affairs with a minimum of supervision. Cooperation between task groups was everywhere in evidence; personal commitment was obvious, absenteeism low, accidents infrequent, productivity high. The contrast was large between the atmosphere and arrangements on these faces and those in the conventional areas of the pit, where the negative features characteristic of the industry were glaringly apparent.”

The best way to view what happened at the Haigh Moor seam was through the psychologist Harold Leavitt’s system model. Leavitt researched change within organisations, and what made it more or less likely to succeed. He identified four aspects of organisations:

Leavitt’s diamond
  • Structure – the hierarchies and departments of the organisation, but also how people interact with each other, along with the organisational culture and communication styles.

  • Task – what people in the organisation are trying to achieve.

  • People – the individuals who work in the organisation and what their skills and expertise are.

  • Technology – the tools that people have access to in order to perform their tasks.

Leavitt’s view was that these four parts of an organisation were inextricably linked, and linked in complex ways that defied simple understanding. They were all part of a systemic whole. Change would only work if it worked in all four parts of the organisation, even once unforeseen and unintended consequences were taken into account.

In the coal mine example, technology had shocked the mine out of its existing ways of working: the new technology allowed for mining of previously unviable seams, creating new tasks (technology + task). But the new technology required different ways of working, and so management and labour worked together to figure out new ways of organising the team (structure + people). The individuals involved had to learn new skills in order to use the technology (technology + people).

Everything changed: there was a new task, a new technology to achieve that task, new skills and techniques from the people involved, and a new organisational structure to best exploit the opportunity. Change was holistic, and successful.

AI will be no different. There will be organisations that view it as a purely technological change, and who focus on implementing it without considering its implications for the structure of the organisation, the people within it, and the tasks they have to perform. There will be others that take a broader view, thinking about technology and task but still failing to consider the entire picture.

If you want to do better than that, you could think about the implications for AI across the four areas Leavitt identified. How might it change the structure of your organisation, creating more need for certain roles and less for others? How might it change the skills and behaviours you need from people? How might it change the fundamental nature of the work you’re doing? What might be the second-order effects of those changes? How might you introduce it gradually, listening to feedback and learning as you go?

The organisations that succeed will be the ones that take this holistic approach, and successfully navigate the implications across all four areas. Like any process of changing a complex system, that will involve careful experiment, small-scale changes, and lots of collaboration and listening.

Click here to read the article »

This week’s two interesting links

The Internet Is Suddenly Full Of AI-Generated Hip-Hop

A couple of months ago a video did the rounds of David Guetta, who’d used AI to conjure up a realistic-sounding sample of Eminem. It was interesting, but also pretty meh: it sounded like Eminem, sure, but the lyrics were nonsensical and it all had a slightly uncanny feel about it. It felt like the jobs of rappers were safe for now.

Then, a couple of weeks ago, hip-hop duo Alltta released the song Savages, and everything changed. It illustrated how far things have come in just a couple of months, but also how incredible human-AI collaborations could be: it features lyrics by rapper Mr. J Medeiros, delivered in the unmistakeable flow of Jay-Z, backed by a genuinely good beat. It’s amazing and scary in equal measure.

Over at BuzzFeed News (RIP), Chris Stokel-Walker takes a tour through some of the recent developments in AI-generated hip-hop, and delves into the legal issues that are looming:

“While a consensus is forming that generative AI is potentially troublesome, no one really knows whether hobbyist creators are on shaky legal ground or not. pieawsome said he thinks of what he does as the equivalent of modding a game or producing fanfiction based on a popular book. ‘It’s our version of that,’ he said. ‘That may be a good thing. It may be a bad thing. I don’t know. But it’s kind of an inevitable thing that was going to happen.’”

In a way, this is how it should be

Brian Feldman on why the slow-motion shambles that is Twitter feels like a throwback to the old web:

“My current theory of Musk is that he’s a guy who did a lot of coding many, many years ago and it made him very rich and confident, and so nobody who still works at Twitter has the energy to correct him when he assumes, ‘If we go into the <img> tag and change twitterbird.png to doge.png, we’ll have ourselves an epic prank.’

“Being able to sense someone messing with a website in real-time, moving the menu items around and forgetting to close an HTML tag here and there, is a neat feeling. It feels scrappy. I don’t mean to excuse any of this and I feel kinda bad for people who still rely on Twitter. But as someone with no skin in the game, I am enjoying the process, if not the result. I honestly prefer the dynamism of a guy who keeps changing the layout of the world’s most expensive MySpace page to sites of comparable scale promising me a new, inconsequential feature.”
