I often struggle to explain what it means to be part of a high-functioning software team. Sure, there are mountains of literature, and an entire genre of LinkedIn thought leadership that professes all kinds of guidelines and heuristics about what makes teams work, but in my experience, it’s hard to internalize these ideas and follow someone else’s model if you’ve never seen what good looks like.
I’ve been very lucky to have worked directly with dozens, if not hundreds, of developers by this point in my career. I’ve been on some unhealthy teams: teams where people were fearful and held their cards close to their chest out of a perceived or real worry about their job security. I’ve also been on dysfunctional teams, where days or weeks of development time were wasted while the team whiplashed between unclear priorities, or where the cost of coordination had grown so high that simply no one wanted to pay it anymore, leaving the team as a collection of individuals rather than a unit. But I’ve also been lucky enough to spend time on some very high-performing teams. On those teams, I was excited to come to work every day, I wasn’t afraid of disagreeing publicly with people more senior than me, and I felt like my voice and my work had impact.
In this post, I’ll try to document the characteristics and habits of the highest-performing teams I’ve been on.
Psychological safety has been around as a concept for a while, so I’m not really going to spend much time explaining it. Read this first if you haven’t previously encountered the concept.
Software teams are staffed by real people who operate within invisible social and political structures, and who have been socialized from birth to be more assertive, more deferential, more outspoken, more polite, more argumentative, more placating, and so on. I say these obvious platitudes to make a point: psychological safety isn’t just about hiring some consultants to run employee trainings on what the concept means. Creating true psychological safety requires leaders and managers to take stock of all the invisible, socialized rules of engagement between people, and to understand how those rules affect one’s ability to meaningfully participate in team discussions and dynamics. In short: social privilege is a Big Thing. Otherwise, don’t be surprised when the little things that eat away at team cohesion start manifesting: microaggressions, stereotype threat, the encoding of survivorship bias into your beliefs about what makes an effective team member.
Software teams with high psychological safety, in my observation, exhibit some of the following behaviors:
When a team has a high degree of psychological safety, there are really cool process experiments you can try that, I believe, generate a self-reinforcing feedback loop, creating even higher levels of trust and safety. On my first high-performing team, during one of our retros, everyone felt that the twice-yearly performance review cycle, where our managers gathered feedback from our peers, wasn’t frequent or granular enough to promote career growth, especially when our team’s priorities were evolving so quickly. So, I proposed an experiment:
Feedback Week.
It’s a one-week process where everyone (including the team lead and PM) is randomly assigned to collect feedback for another person. It went so well that after my teammates moved to new teams, they brought the experiment with them. Eventually, other teams in the office started copying us! I wrote about it more extensively in this blog post. I even gave a talk about it at DevOpsDays Toronto in 2019.
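If you’re curious about the mechanics, the assignment logic itself is trivial. Here’s a toy sketch in Python (the names are hypothetical, and this isn’t the actual tooling we used); the only real constraint is that nobody gets assigned to collect feedback for themselves:

```python
import random

def assign_feedback_collectors(team):
    """Randomly assign each person to collect feedback for someone else.

    Re-shuffles until nobody is paired with themselves (a derangement);
    for any reasonably sized team this converges almost immediately.
    """
    assignments = team[:]
    while any(person == collector for person, collector in zip(team, assignments)):
        random.shuffle(assignments)
    # Map each collector to the person whose feedback they gather.
    return dict(zip(assignments, team))

if __name__ == "__main__":
    pairs = assign_feedback_collectors(["Ana", "Ben", "Chidi", "Dana", "Eli"])
    for collector, subject in pairs.items():
        print(f"{collector} collects feedback for {subject}")
```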
The ability to run something like Feedback Week is an indicator that your team is in a place of high psychological safety. If you’re lucky enough to be there, my advice is: don’t just stand still. This is an opportunity to boldly experiment with your processes and practices, and to try things like Feedback Week that can help you tap into hidden positive feedback loops. And if you do figure out something cool, tell me about it! Seriously, I love talking about this stuff.
As the complexity of a system increases, the accuracy of any single agent’s own model of that system decreases rapidly.
— Woods Theorem (https://snafucatchers.github.io)
I’ll just straight up admit that I will never have a comprehensive mental model of the GitHub monolith. It’s too big, has too many logical paths, and frankly, it won’t make me better at my job to sink an inordinate amount of time into learning every part of the code. Plus, it’ll just change tomorrow.
So, when I do need to gather enough context to implement the next feature or bug fix, I rely on the presence and accuracy of artifacts in the code left by those who worked on it before me.
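To make “artifacts” concrete, here’s a hypothetical sketch (not real GitHub code) of the kind of breadcrumb I mean: a docstring that records the *why*, not just the *what*, so the next person doesn’t need the whole system in their head:

```python
def normalize_login(login: str) -> str:
    """Lowercase and strip a login before lookup.

    Why: logins are matched case-insensitively at the API layer, but
    older records were written with mixed case, so skipping this step
    reintroduces duplicate-account bugs. In a real codebase, this is
    where I'd link the issue or incident that motivated the change.
    """
    return login.strip().lower()
```

Tests, commit messages, and linked issues all play the same role: they let me borrow the mental model of whoever touched this code last, instead of rebuilding it from scratch.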