I think the article has a point. There seem to be two reactions among senior engineers around me these days.
On one side, there are people who have become a bit more productive. They are certainly not "10x," but they definitely deliver more code. However, I do not observe a substantial difference in the end-to-end delivery of production-ready software. This might be on me and my lack of capacity to exploit the tools to their full extent. But, iterating over customer requirements, CI/CD, peer reviews, and business validation takes time (and time from the most experienced people, not from the AI).
On the other hand, sometimes I observe a genuine degradation of thinking among some senior engineers (there aren’t many juniors around, by the way). Meetings, requirements, documents, or technology choices seem to be copy/pasted directly from an LLM, without a grain of original thinking and often without insight.
The AI tools are great, though. They give you an answer to your question. But often the main difficulty is asking the right question, and knowing when the answer is wrong.
I wonder if the productivity boost that senior engineers actually need is to profit from the accumulated knowledge found in books. I know it is an old technology and it is not fashionable, but I believe it is mostly unexploited if you consider the whole population of engineers :D
> This might be on me and my lack of capacity to exploit the tools to their full extent. But, iterating over customer requirements, CI/CD, peer reviews, and business validation takes time (and time from the most experienced people, not from the AI).
Yeah, you're certainly not the only one. For me the implementation part has always been a breeze compared to all the "communication overhead" so to speak. And in any mature system it easily takes 90% of all time or more.
So far, the best reference for software engineering research appears to be Robert L. Glass's 2002 book, Facts and Fallacies of Software Engineering. I haven't found a better or more comprehensive reference.
The only others who compare are his contemporaries, Steve McConnell, Timothy Lister, Tom DeMarco, and Barry Boehm.
Unfortunately, they're all basically retired. It feels like this kind of interest in software development, at least the publishing, ended around the mid-2000s.
My guess is the shift to blogs from books, adoption of Agile (in whatever form), and a shift in industry focus to getting rich rather than getting good ended the efforts to come up with resources like Glass put together.
What do a crowded airport in Beijing, parenting, the risk-return frontier, and micromanaging bosses have in common? Everything—if you look at the world through the right lens.
In Through the Geek’s Lens, you’ll embark on a personal, humorous, and yet thought-provoking journey through some fundamental trade-offs and models from mathematics, psychology, economics, and engineering. Through the eyes of a textbook lover, Through the Geek’s Lens opens unexplored mental horizons and connects the dots between accumulated scientific knowledge and our private and professional lives.
The book introduces and explains complex subjects, their factual basis, and their limitations by contextualizing them within a specific time and place—a moment in the author's personal life. Each section offers a small dose of actionable insight. Among the many topics explored are the trade-off between freedom and security, the diminishing returns faced by particle accelerators and large machine-learning models, and the No Free Lunch Theorem, which explains why a self-confident explorer like the author can get lost in an unknown forest. It applies paraconsistent logic to interpret Eastern philosophies and uses unbiased statistical sampling to show how to see your loved ones. It discusses the Law of Small Numbers and why it’s just as important as the Law of Large Numbers, the determination of Service Level Agreements using basic probability inequalities, and the Shannon-Hartley Theorem to decide when to shout (or not) at your kids. And that’s just the beginning—there’s much more to discover when you look at your day-to-day through a geek’s lens.
The free sample is good reading and makes me want to read more. However, Amazon has only made the book available as a free Kindle download (to Kindle devices) or as a $$$ paperback. As the author, can you ask Amazon to make the book available as a downloadable PDF?
I think the problem is not specific to software: project cost and time estimation is difficult across many domains (construction, transportation, IT, defense, and even the organization of events).
The word "mess" seems to suggest that the uncertainty is easily fixed, but if the same failure pattern has recurred across domains for several millennia, that suggests something fundamentally challenging about the problem.
The situation worsens with factors such as project size, requirement changes, methodology (too much or too little of it), technology (especially new technologies), the nature of the delivering institution (public institutions performing worse than private ones), and organizational culture (e.g. waterfall being in many cases detrimental by making adaptation more difficult).
There are certainly bad ideas in the field of estimation, like assuming a Gaussian distribution, but the problem is far from trivial.
See for example:
* Defense: Bolten, Joseph G., et al. Sources of weapon system cost growth: Analysis of 35 major defense acquisition programs. Rand Corporation, 2008.
* Public works: Flyvbjerg, Bent, Mette Skamris Holm, and Soren Buhl. "Underestimating costs in public works projects: Error or lie?." Journal of the American planning association 68.3 (2002): 279-295.
* Transportation: Cantarelli, Chantal C., et al. "Cost overruns in large-scale transportation infrastructure projects: Explanations and their theoretical embeddedness." arXiv preprint arXiv:1307.2176 (2013).
* Olympic Games: Flyvbjerg, Bent, Alexander Budzier, and Daniel Lunn. "Regression to the tail: Why the Olympics blow up." Environment and Planning A: Economy and Space 53.2 (2021): 233-260.
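The point about Gaussian assumptions can be made concrete with a toy simulation. Everything below is illustrative and made up (the lognormal parameters are not fitted to any real overrun data): overruns are drawn from a heavy-tailed distribution, and we compare the empirical tail probability against what a Gaussian with the same mean and standard deviation would predict.

```python
import math
import random

random.seed(0)  # arbitrary seed for reproducibility

# Hypothetical: actual/estimated cost ratios drawn from a lognormal
# distribution. mu and sigma are illustrative, not fitted to real data.
mu, sigma = 0.0, 0.6
n = 200_000
overruns = [random.lognormvariate(mu, sigma) for _ in range(n)]

# Fit a Gaussian with the same mean and standard deviation.
mean = sum(overruns) / n
std = math.sqrt(sum((x - mean) ** 2 for x in overruns) / n)

# Probability of a 3x cost overrun under each model.
threshold = 3.0
p_heavy = sum(x > threshold for x in overruns) / n
z = (threshold - mean) / std
p_gaussian = 0.5 * math.erfc(z / math.sqrt(2))  # Gaussian tail P(X > t)

print(f"P(cost > 3x estimate), heavy-tailed: {p_heavy:.4f}")
print(f"P(cost > 3x estimate), Gaussian fit: {p_gaussian:.4f}")
```

Under these (made-up) parameters the Gaussian fit substantially understates the probability of an extreme overrun, which is precisely the kind of tail behavior the references above document.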
As a baseline on what is known and not known about managing software projects, I always recommend Robert L. Glass's Facts and Fallacies of Software Engineering.
More speculatively, I think that the following biology readings are inspiring:
- Gerald Jay Sussman, "Building Robust Systems: An Essay." In general, biology should be a source of inspiration for engineers.
I remember the first time I heard someone (N. Taleb) expressing a negative opinion about economists because they were not "getting" the concept of ergodicity (in addition to other issues not relevant for this discussion). Initially, I didn't get the idea either. At some later point in time, I read Kelly's and Gell-Mann's papers [1][2] with some effort, and although I was able to follow the arguments I didn't find anything surprising. That is, I didn't get the idea or its implications.
It was during the reading of O. Peters and A. Adamou "Ergodicity Economics" [3] that I better understood the idea.
Imagine a basic gamble that repeats indefinitely. In each iteration, with equal probability, the player either wins 60% or loses 40% of their current capital. For this gamble, the ensemble average (aka expectation) of the player's wealth one step ahead is simply 1/2 x 160% + 1/2 x 60% = 110%. A good gamble, right? However, the time average of the same step (i.e. the average growth rate of a single individual playing repeatedly) is sqrt(1.6 x 0.6) ≈ 98%. So the individual loses money over time. This was quite surprising to me, although obvious a posteriori given that the multiplicative stochastic process is not ergodic. In other words, this simple gamble shows that the expectation does not have the intuitive meaning we sometimes assign to it, especially for repetitive gambles.
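A quick simulation makes the divergence concrete — a minimal Python sketch (the round count and seed are arbitrary choices):

```python
import math
import random

random.seed(42)  # arbitrary seed for reproducibility

# Multiplicative gamble: each round, wealth is multiplied by 1.6 (+60%)
# or 0.6 (-40%) with equal probability.
FACTORS = (1.6, 0.6)

# Ensemble average of one round: the expected multiplication factor,
# 0.5 * 1.6 + 0.5 * 0.6 = 1.10.
ensemble_avg = sum(FACTORS) / len(FACTORS)

# Time average: the per-round growth factor a single player experiences
# over a long run. Accumulating log-wealth avoids floating-point
# underflow, since the wealth itself decays toward zero.
rounds = 100_000
log_wealth = 0.0
for _ in range(rounds):
    log_wealth += math.log(random.choice(FACTORS))
time_avg = math.exp(log_wealth / rounds)

print(f"ensemble average per round: {ensemble_avg:.3f}")  # 1.100
print(f"time average per round:     {time_avg:.3f}")      # ~0.98, i.e. sqrt(1.6 * 0.6)
```

The single player's per-round growth converges to sqrt(1.6 x 0.6) ≈ 0.98, below 1, even though the ensemble expectation per round is 1.10.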
In other words, the time and ensemble averages differ in general for non-ergodic stochastic processes, and in particular for multiplicative stochastic processes (note that for additive processes the expectation of the wealth increment can be used).
And here comes the important implication: given that several economic processes can be modeled, as a first approximation, as multiplicative random processes (e.g. stock markets, real investments, GDP growth), it is not rational to use the ensemble average (aka expectation of the wealth increment) to make such economic decisions.
There are several implications of this simple fact, including the optimality of the Kelly criterion; the optimal leverage being below 1 in all cases involving multiplicative processes; the incorrect measurement of inequality; and the known inadequacy of average income, as opposed to median income, as a measure of well-being, to name a few.
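For the same +60% / -40% gamble, the Kelly-optimal betting fraction can be found with a quick numerical sketch (the grid resolution is an arbitrary choice; nothing here is taken from the papers themselves):

```python
import math

# Expected log-growth per round when betting a fraction f of current
# wealth on the +60% / -40% gamble (equal probabilities); the
# remainder is held in cash.
def log_growth(f):
    return 0.5 * math.log(1 + 0.6 * f) + 0.5 * math.log(1 - 0.4 * f)

# Grid search over f in [0, 1]. The analytic Kelly optimum here is
# f* = (0.5*0.6 - 0.5*0.4) / (0.6 * 0.4) = 5/12 ~ 0.417.
best_f = max((i / 1000 for i in range(1001)), key=log_growth)

print(f"optimal fraction f*: {best_f:.3f}")                 # ~0.417
print(f"growth at f*:  {math.exp(log_growth(best_f)):.4f}")  # above 1: wealth grows
print(f"growth at f=1: {math.exp(log_growth(1.0)):.4f}")     # below 1: full stake loses
```

So the same gamble that ruins a full-stake player over time becomes profitable at partial leverage, and the optimal fraction follows from maximizing the time-average growth rate rather than the expectation.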
A possibly controversial corollary of the above is that the concept of utility is unnecessary and incorrect as a first approximation to micro-economic behaviour. Instead, an ergodic observable should be used. In the specific case of multiplicative stochastic processes, the increment of the logarithm of wealth is ergodic, and a rational decision maker should use it to optimize their wealth. This will require further debate within the scientific community, because it is not clear that an optimal decision rule is a good model of people's actual behavior. In any case, if expected utility is not optimal, it also does not make much sense as a model for Homo Economicus.
In any case, I really recommend reading instead of rushing to conclusions [3].
[1] Kelly, J. L. "A New Interpretation of Information Rate." Bell System Technical Journal 35 (1956): 917-926.
[2] Peters, O., and M. Gell-Mann. "Evaluating Gambles Using Dynamics." Chaos 26, 23103 (2016).
[3] Peters, O., and A. Adamou. "Ergodicity Economics." London Mathematical Laboratory (2018).
>the optimal leverage being below 1 in all cases involving multiplicative processes
This is wrong.
Instead I should have written: "there is an optimal leverage point, most likely close to 1. The optimal leverage point does not depend on the individual risk preferences of the investor".
It is not equivalent, but if someone has the time to read the list, I would recommend instead the reading of R. L. Glass "Facts and Fallacies of Software Engineering" [1].
First of all, I think that the idea and the effort behind the book are fantastic and a worthwhile undertaking.
The issue for me is that the connection between the individual topics and their relevance to Software Engineering is not clear or, at times, not present (I have no doubt that the author sees the relation as obvious).
I would have liked more elaboration on how each topic (a given section can contain several) relates to SE, and on the positive and negative implications for its practice.
In a similar line, the following two sources are worth reading:
* Facts and Fallacies of Software Engineering. I'm typically surprised when some IT person tells me that they do not even know of its existence.
* IEEE Voice of Evidence Articles. They are reviews of existing evidence on various topics.
The best advice I have seen about this topic is the classic book "The Unwritten Laws of Engineering" from W.J. King. I always recommend the book to newcomers.
Note that the latest reprint of the book is named "The Unwritten Laws of Business". I guess the publisher expects to attract more readers with the new title.