by perching_aix 10 hours ago

This made me wonder: why aren't there usually teams whose job is to keep an eye on the coding patterns used across the various codebases? Similar to how you have an SOC team continuously monitoring traffic patterns, an Operations Support team watching health probes, KPIs, and logs, or a QA team writing tests against new code, maybe there would be value in tracking how coding patterns evolve over the lifetime of a codebase?

Like whenever I read posts like this, they're always fairly anecdotal. Sometimes there will even be posts about how large refactor x unlocked new capability y, but the rationale always reads somewhat retconned (or, again, anecdotal*). It seems to me that this kind of continuous meta-analysis of one's own codebases could have a lot of utility.

I'd imagine automated code smell checkers can only cover so much of this, at least.

* I hammer on about anecdotes, but I do recognize that sentiment matters. For example, when planning work, if something merely sounds like a lot of work, that alone will shape the plan, even if the judgment is incorrect (since the misjudgment may never come to light).

jadenPete 2 hours ago

I work on one of these teams! At my company (~300 engineers), we have tech debt teams for both frontend and backend. I’m on the backend team.

We take on the work that’s too large in scope for other teams to handle, and clearly documenting and enforcing best practices is one component of that. Part of it is maintaining a comprehensive linting suite; the other part is writing documentation and educating developers. We also maintain core libraries and APIs, so if we notice many teams doing the same thing in different ways, we’ll sit down and figure out what we can build to accommodate most of those use cases.
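
To give a rough idea of what the linting side can look like (sketch only, not our actual tooling; the helper name and the suggested replacement are made up): a small ast-based check that flags direct calls to a legacy helper so callers migrate to the shared library instead.

    # Sketch only -- hypothetical names, not our real linting suite.
    # Flags direct calls to a legacy helper so callers migrate to the
    # shared library's API instead.
    import ast
    import sys

    LEGACY_HELPER = "make_raw_http_call"   # hypothetical function being retired
    REPLACEMENT = "http_client.request()"  # hypothetical shared-library API

    def find_legacy_calls(source: str, filename: str) -> list[str]:
        """Return one warning per call to the legacy helper."""
        warnings = []
        for node in ast.walk(ast.parse(source, filename=filename)):
            if not isinstance(node, ast.Call):
                continue
            func = node.func
            # Handle bare calls and attribute calls like legacy.make_raw_http_call(...).
            name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", None)
            if name == LEGACY_HELPER:
                warnings.append(
                    f"{filename}:{node.lineno}: call {REPLACEMENT} instead of {LEGACY_HELPER}()"
                )
        return warnings

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            with open(path) as f:
                for warning in find_legacy_calls(f.read(), path):
                    print(warning)

Something in that shape is easy to wire into CI alongside the off-the-shelf linters, and the warning message itself points people at the documented replacement.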

vlovich123 10 hours ago

There are. All the big tech companies have them. It’s just difficult to accomplish when you have millions of lines of code.

perching_aix 9 hours ago

Is there an industry standard name for these teams that I somehow missed then?

tczMUFlmoNk 9 hours ago

You may wish to search for "readability at Google". Here is one article:

https://www.moderndescartes.com/essays/readability/

(I have not read this article closely, but it is about the right concept, so I provide it as a starting point since "readability" writ large can be an ambiguous term.)

mattarm 7 hours ago

See https://abseil.io/tips/ for a sense of the kind of guidance teams like this work to provide, at least at Google. I worked on the “C++ library team” at Google for a number of years.

These roles don’t really have standard titles in the industry, as far as I’m aware. At Google we were part of the larger language/library/toolchain infrastructure org.

Much of what we did was quasi-political … basically coaxing and convincing people to adopt best practices, after first deciding what those practices were. Half of the tips above were probably written by interested people from the engineering org at large and we provided the platform and helped them get it published.

Speaking to the original question: no, there were no teams just manually reading code and looking for mistakes. If buggy code could be detected in an automated way, then we’d do that and attempt to fix it everywhere. Otherwise we’d try to educate and get everyone to level up their code review skills.
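
As a toy illustration of the “detect it automatically, then fix it everywhere” idea (not Google’s actual tooling, which works at a very different scale): a tiny codemod that rewrites the deprecated unittest alias assertEquals to assertEqual across a source tree.

    # Toy illustration only; real fleet-wide fixes go through proper
    # refactoring tools and code review. Rewrites the deprecated
    # unittest alias assertEquals to assertEqual in place.
    import pathlib
    import re

    PATTERN = re.compile(r"\bassertEquals\b")

    def fix_file(path: pathlib.Path) -> bool:
        """Rewrite the deprecated alias in place; return True if the file changed."""
        original = path.read_text()
        fixed = PATTERN.sub("assertEqual", original)
        if fixed != original:
            path.write_text(fixed)
            return True
        return False

    if __name__ == "__main__":
        changed = [p for p in pathlib.Path(".").rglob("*_test.py") if fix_file(p)]
        print(f"rewrote {len(changed)} files")

Anything smarter than a textual rename needs an AST-based rewrite, but the shape of the workflow is the same: detect, rewrite, send the changes out for review.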

perching_aix 6 hours ago

This is a really cool insight, thank you!

> Half of the tips above were probably written by interested people from the engineering org at large and we provided the platform and helped them get it published.

Do you know how those engineers established their recommendations? Did they maybe perform case studies, or was it more of a "distillation of lived experience" type of deal?