
> you add an argument to a method, you need to update usage of that method. You add a generic to a class, you need to update everywhere that class is used.

You're talking about two different things here, though. What if you add a generic parameter to a function? It'll often be inferred at existing call sites, but worst case, same as any other change to a function's signature.



The reason I'm relating them is that they're both changes to a class that have different blast radii and different levels of abstraction.

In the worst case, adding a nongeneric parameter to a function could have the same impact. I've never heard of that happening though. I'm trying to express how I've observed things working in practice, not the theoretical boundaries of what could happen here.

So let's say you add a concrete parameter to a method. You update the usages. Somewhere, the output gets stored in an existing class, so you add a new field of the correct type to that class. You're done.

Let's say you do the same with a generic. Now when you add that field to that class, it also has to be generic over that type. Now you need to update all the places where that class was used.
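To make that cascade concrete, here's a minimal Rust sketch (all the names here are hypothetical, invented for illustration):

```rust
// Before: the new field has a concrete type, so adding it only touches
// this struct and the sites that construct a `Job`.
struct Job {
    retries: u32,
}

// After: the new field is generic, so the struct itself must become
// generic over `T` -- and now every place that names this type in a
// signature, a field, or a binding needs a type parameter too.
struct GenericJob<T> {
    payload: T,
}

// This function previously could have taken a concrete `Job`; now it
// (and, transitively, its callers) must either become generic as well
// or commit to a concrete `T`.
fn payload_of<T>(job: GenericJob<T>) -> T {
    job.payload
}

fn main() {
    let plain = Job { retries: 3 };
    let generic = GenericJob { payload: "hello" };
    println!("{} {}", plain.retries, payload_of(generic));
}
```

If `GenericJob` is itself stored in another struct, that struct must grow a `T` parameter too, which is exactly the secondary and tertiary ripple being described.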

If your code is overly generic throughout, the likelihood of this having secondary or tertiary effects and having a runaway refactor becomes pretty darn high.

Being too abstract will always get you in trouble, and it'll probably look pretty similar. I'm just saying it's very easy to do with generics and harder to do with less abstract techniques.


> In the worst case, adding a nongeneric parameter to a function could have the same impact. I've never heard of that happening though.

Can you clarify this? I'm reading it as "I've never heard of anyone adding a parameter to a function" and that's so far from my experience that I'm either misreading or you work in a vastly different field than I do.

> Now when you add that field to that class, it also has to be generic over that type. Now you need to update all the places where that class was used.

Only if you need that to be generic too. If you change an int to a T, and you want to preserve the existing behavior for existing callers, they just call it as f<int>() instead of f(). Languages with good type inference will do that for you without changing the calling code as written.
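A small Rust sketch of that point (the function name is hypothetical): genericizing a parameter need not disturb existing callers, because the compiler infers the type argument.

```rust
use std::ops::Add;

// Before (hypothetically): fn double(x: i32) -> i32 { x + x }
// After genericizing, existing call sites compile unchanged,
// because `T` is inferred from the argument.
fn double<T: Add<Output = T> + Copy>(x: T) -> T {
    x + x
}

fn main() {
    // Existing caller: written exactly as before, `T` inferred as i32.
    let a = double(21);
    // A caller can also be explicit via the turbofish when needed.
    let b = double::<f64>(1.5);
    println!("{} {}", a, b);
}
```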


Apologies; I mean that if you came up with some kind of pathologically bad architecture, adding a parameter to a function could hit the same failure mode (the callers need to add a parameter, and their callers, and so on). But as you note, adding parameters to a function is routine, and I've never heard of this happening. I've definitely added parameters in a way that was tiresome and required me to go higher up the chain of callers than I would have liked, but not in a way that spun out of control.

I'm not sure what to say at this point, really. I think we're miscommunicating somehow. Would you agree that if we make our architecture too abstract, we'll end up with something brittle and difficult to maintain?


I do agree with that, and with the implication that one shouldn't add generics (or other abstraction) where they don't provide enough value for their costs.

But I'm confused because your example (adding a generic parameter to a function) seems to be an example of adding abstraction to code that did not previously have enough abstraction.


Yeah, for sure. If we need more abstraction, we need it; generics are just really, really abstract. I'm just saying generics should be a last resort. And if you find yourself with generics all over the place, you might take a step back and ask: did I make a bad architectural decision that will blow up in my face later? Can I do a medium-sized refactor now to save myself a massive refactor later?

Coming from Python, I had a bad habit of premature abstraction. In Python, it's easy to be very generic at very little cost (not necessarily using generics - they exist in Python, but they're not "real" since Python is gradually typed). I thought of keeping things generic as "designing for expansion". Then I encountered the problems I've been describing: small refactors would turn into giant ones, and it was entirely unsustainable.

When I asked for advice about this, what I got was pretty much, "Oh yeah, that'll happen. Just don't use generics if you can get away with it." Initially that felt like a non-answer to me, even a brush-off. But as I matured in Rust, I realized the advice was spot on, and that I had been abstracting prematurely.

I've seen techniques that use generics well and actually make coupling looser, and that's awesome; I don't mean to suggest that one should never use generics. I acknowledge I got into trouble by _misusing_ them. I'm just saying it's an unwieldy tool for special situations, and it will rapidly expend your complexity budget.

The original context I was responding to was something like: someone said generics are great until you get into generic hell, and someone else replied that generics seem fine to them. I just wanted to explain how one gets into generic hell.


I am now fairly sure we agree and I just didn't like your example.



