Hacker News | msvalkon's comments

I have worked in a team which maintained and improved a legacy piece of software generating a few hundred million euros of yearly revenue. Not technically a website, but a critical backend piece of a very popular one. The application was written in Java on top of PostgreSQL and leveraged stored procedures heavily. This was a _major pain_ for us.

While the separation of concerns was sometimes quite easy to understand between the business logic in the Java code and the business logic in the sprocs, there were times when it was an impossible, tangled mess. Debugging this thing was _hard_ and every deployment was fragile and hard to deal with.

There are of course many reasons as to why this particular piece of software evolved as it did. I can only say that if you plan to move any parts of the business logic to stored procedures, make sure that you have a good reason to do so and clear architectural patterns and rules to communicate and follow.


One can create the stored procedure and store it in an .sql file which can be version controlled like any piece of code. These can then be deployed in a number of ways.
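As a sketch of the idea (all file names, the function, and the deploy command below are illustrative, not from the original comment), the procedure can live in a `.sql` file in the repository, and deployment becomes a one-line `psql` invocation:

```shell
# Sketch: a stored procedure kept in a version-controlled .sql file.
# Names and paths are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email dev@example.com
git config user.name dev
mkdir -p db/procs

# CREATE OR REPLACE makes the file idempotent, i.e. safe to re-apply
# on every deploy.
cat > db/procs/calc_totals.sql <<'SQL'
CREATE OR REPLACE FUNCTION calc_totals(year int) RETURNS numeric AS $$
    SELECT 0::numeric;  -- placeholder body
$$ LANGUAGE sql;
SQL

git add db/procs/calc_totals.sql
git commit -qm "Add calc_totals stored procedure"

# Deploying is then one command against the target database, e.g.:
#   psql "$DATABASE_URL" -f db/procs/calc_totals.sql
```

From there the sproc history is reviewable and bisectable like any other code.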


Sounds like a terribly biased argument. Why would it kill for _thousands_ of generations?


It wouldn't. The OP has fallen victim to the "long half-life = bad" fallacy.

A long half-life means that the material is less radioactive. By definition.


While an unorthodox merge strategy was used, this is what happens when you hole up in a topic branch for a long time. I bet this would've been easier had they merged smaller commits or PRs to master constantly. If you're afraid of deploying unfinished features, don't make them functional until they're ready; tie them together once finished. Or did I miss something here?


If master is currently in production, a good way to do frequent deploys for code that's "not quite ready yet" is to use a feature toggle. This allows you to do partial feature deploys, but block any code paths from using it. Plus, once it's time to start directing traffic towards that code path, you can turn it back off if you find a bug you missed during testing.
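A minimal sketch of the toggle idea (the flag name and code paths are made up for illustration): the new path ships dark and is only exercised when the flag is flipped.

```shell
# Sketch of a feature toggle: the new code path is deployed but inert
# until the flag is turned on. Flag and path names are illustrative.
new_checkout_enabled() {
    [ "${FEATURE_NEW_CHECKOUT:-off}" = "on" ]
}

handle_request() {
    if new_checkout_enabled; then
        echo "new code path"
    else
        echo "old code path"
    fi
}

FEATURE_NEW_CHECKOUT=off
result_off=$(handle_request)

FEATURE_NEW_CHECKOUT=on
result_on=$(handle_request)
```

In a real system the flag usually comes from config or a flag service rather than an environment variable, but the shape is the same.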


This only works for isolated features. As soon as it's some kind of refactoring or a feature that affects a lot of existing code, it's not viable anymore.


If you really want to, you can often find ways to do continuous integration even in such cases.

For the most brute force example, you could copy the whole program and make a runtime switch for deciding which to run.


> you could copy the whole program and make a runtime switch for deciding which to run

Wouldn't that make merges even worse, not better?


Then you should be rebasing the branch often from master so that this is less painful when the flip happens. If there is an army of devs working on the same repo then it should be communicated out really obviously. If a dev is planning on submitting code that'll be merged after the massive branch gets into master then they can just rebase from the feature branch and any conflicts should be fixed in those branches...


I'm also very surprised - I thought it was standard to merge master (target upstream branch) into the topic branch frequently (daily seems reasonable). I know some people do not appreciate "merged <branch> into <other branch>" commits in their history, but that is a small price to pay IMO.


Or rebase. Don't hide conflict fixes in merge commits; keep your changes readable in the context of current master.
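The difference can be sketched end to end (the repo contents below are made up for illustration): rebasing replays the topic commits on top of current master, so the history stays linear with no merge commits.

```shell
# Sketch: keeping a topic branch current via rebase instead of merge.
# Requires git >= 2.28 for `init -b`; file names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master .
git config user.email dev@example.com
git config user.name dev

echo a > a.txt && git add a.txt && git commit -qm "initial"
git checkout -qb topic
echo t > topic.txt && git add topic.txt && git commit -qm "topic work"

# Meanwhile master moves on:
git checkout -q master
echo b > b.txt && git add b.txt && git commit -qm "master moves"

# Rebase replays "topic work" on top of current master: linear history,
# no "merged master into topic" commit.
git checkout -q topic
git rebase -q master
```

Any conflicts get resolved inside the replayed topic commits themselves, rather than being buried in a merge commit.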

Unless you're working on a topic branch together with another dev, but I find that rarely happens in practice.


Yes I just completed a large refactoring in a feature branch which affected lots of files and the only way to stay sane throughout the process was to constantly rebase my work on top of master (within my feature branch).

Once my refactoring was complete I squashed it into a single commit to prepare to merge (or rebase) into master. I don't really think it's useful to keep the history of how I implemented that refactoring. Squashing it into a single commit is far easier to revert than if I merged multiple commits into master.
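One way to sketch the squash-then-land step (branch and file names are illustrative): `git merge --squash` stages the branch's combined diff so it lands as a single commit, which a single `git revert` can later undo.

```shell
# Sketch: squashing a multi-commit feature branch into one commit on
# master. Requires git >= 2.28 for `init -b`; names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master .
git config user.email dev@example.com
git config user.name dev

echo base > file && git add file && git commit -qm "initial"

git checkout -qb refactor
echo step1 >> file && git commit -qam "refactor step 1"
echo step2 >> file && git commit -qam "refactor step 2"

git checkout -q master
# --squash stages the branch's whole diff without committing...
git merge --squash -q refactor
# ...so the refactoring lands as exactly one commit.
git commit -qm "Big refactoring (squashed)"
```

`git rebase -i` with `squash`/`fixup` achieves the same result while staying on the branch.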


Even so, if you can both adhere to "pull before you push, and use force-with-lease instead of force", the rebase workflow is possible too. However, it's more of an abuse of Git than a use of it.
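For illustration, here is a minimal end-to-end sketch of the `--force-with-lease` safety property (the repo and commit messages are made up): the push only succeeds if the remote branch is still where your remote-tracking ref says it is, so a teammate's unseen push would make it fail instead of being overwritten.

```shell
# Sketch: force-with-lease refuses to clobber commits you haven't seen.
# Requires git >= 2.28 for `init -b`; names are illustrative.
set -e
remote=$(mktemp -d)
git init -q --bare -b master "$remote"
work=$(mktemp -d)
git clone -q "$remote" "$work"
cd "$work"
git symbolic-ref HEAD refs/heads/master
git config user.email dev@example.com
git config user.name dev

echo a > a.txt && git add a.txt && git commit -qm "initial"
git push -q origin master

# Rewrite the last commit locally (as a rebase would)...
git commit -q --amend -m "initial (reworded)"
# ...then push safely: this succeeds only because origin/master is still
# exactly where we last saw it. Plain --force would skip that check.
git push -q --force-with-lease origin master
```

If someone else had pushed to master in between, the last push would be rejected and you could fetch and re-rebase instead of silently losing their work.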


I find this to be the optimal workflow.


Well, I routinely `git rebase upstream/master` to keep my git history clean. It has the same effect as a merge from master, but keeps everything tidy.


Whenever I do this I inevitably end up with merge conflicts in stuff I absolutely didn't touch. I don't know if it's because it's a monorepo with lots of people committing, or what, but it basically never works cleanly. It's very frustrating, particularly because merging master works without a hitch. So I just do that instead.


It will also make a huge mess if someone else has already checked out your code.


Is it really such a huge mess? They'll get a conflict if they try to pull, and then they simply have to rebase their changes onto your new rebased branch, right?


I don't put that much faith in everybody else to figure it out themselves.


It's a matter of picking a git workflow.

Everybody merges OR everybody rebases. The organization has to decide on one workflow.


It's a very small price to pay! I can't believe how many people hate this practice because of the merge commits. It's extremely useful and not doing it makes git usage that much worse.


While I don't use git a lot (I use p4 for work), I think this is often a lot of pain. The trick is not to get too far from main: continually touch up your working branch from main as you work, so that when the time comes the final merge is essentially already done (again, I'm not a heavy-duty git user, maybe this is hard).

I'm also not a big fan of the everyone-has-their-own-branch world view. I worked on a big project (a chip design project) where we essentially lost track of what we were building because of this. I'm much more keen on requiring people to stay close to main and eat their own dogfood.


In Finland, "for life" means 12 years.


> Frans Carbo, the prison guards' representative from the FNV union, says his members are "angry and a little bit depressed". Young people don't want to join the prison service he adds "because there is no future in it any more - you never know when your prison will be closed".

Well this truly sounds abhorrent.


Actually no, it sounds utopian - closing prisons because crime is low is a good thing.


I remember reading somewhere that Iceland had to close prisons due to lack of prisoners. They get a few convicts per year and have to send them to prisons abroad.


I'm certain he was being sarcastic.


Correct.


I'm a programmer because I enjoy it, my employer seems to value the results and I am relatively well compensated for my time.

We should really make an effort to try to accommodate the people in the programming profession who do not want to spend their free time coding on projects. Programming is very, very enjoyable (to me), but I have a family with which I want to spend as much time as possible. Do I want to write a web-scraper in rust in my free time? Sure it sounds like a nice exercise. Would I rather spend that time with my kids? Yes.


I don't think we should accommodate anybody with the handle 'xkcd-sucks', much less anybody so flippantly derisive of people who do code because they like to code.


They weren't being derisive of people who like to code; they were simply pushing back against the idea that you must do it as a lifestyle rather than just a profession. I love xkcd (and that username is clearly just trolling people who wrap up too much of their identity in the things they like), program for fun sometimes, and love to read computer science books for fun. But I completely agree with the sentiment that the common view that programming must be your lifestyle in order to thrive in the industry is unfortunate and problematic.


I don't really understand where the downvotes come from. I find the readability concerns legitimate, and would like to understand why compression-algorithm developers feel this style is OK. Is it just the math-heavy background? I can't think of any real benefits to this style.


If you don't understand the underlying mathematical algorithms it's using, no amount of explicit variable names is going to help you. If you do, the concise structure makes things straightforward. The code is not meant to be read alone and understood; the papers published along with it need to be understood first.


Exactly. Not all code can be understandable to a layman with zero effort.


I suspect the downvotes are because code readability is a complex, subtle topic that often gets reduced to flamewars by people who are sure they know ‘the’ right way to do things.

Code readability is relative to the reader, the programming language, and the conventions of a codebase. That's a lot of things to be relative to! Knowing that ought to put speed bumps on the way to dismissing code one isn't familiar with.

I remember having a reaction years ago on seeing some of P.J. Plauger's C++ standard library code. I think I burst out laughing and said I'd fire anyone who wrote code like that for me. Years of subsequent experience have brought multiple layers of understanding how wrong I was.


I find this hard to believe. You probably need to read the man page and perform the action over and over again.

At least personally: I need to do thing X, and to achieve that I know I need to use tool Y. I read the man page of Y and very satisfyingly do the thing X. A month from now I have to do something similar to X and cannot for the life of me remember what freakish incantations need to be performed in order to do the same thing, so I have to go through the man page again. This is where I'd like the tl;dr of it. I don't need Y often enough to warrant spending much time on learning the niche CLI it requires.


You could use this same argument to argue against testing software. If one of the people who "gets it" has written this piece of software then there should be no reason to test it because they _know_ it works. Anyone coming in for a job interview at a professional company with this attitude will not be taken seriously.

"They" are human and are prone to the exact same errors and mistakes as anyone else. Google is technologically successful because they try to base their decisions on analyzed data instead of a gut feeling.


And yet Android UI is mediocre and isn't a qualitative improvement on the previous status quo.

I think the point of GP is that listening to the 'someone who "gets it"' can speed up the development process. Indecision has costs, and you can run studies in parallel anyway. Sometimes the costs of having to backtrack every now and then are outweighed by the benefits of moving fast.

RE arguments against testing in code - testing is cool and all, but at some point you have to ask yourself whether you want to ship a product or a test suite.

BTW, the whole anecdote reminds me of a story from Microsoft about problems caused by a group of PhDs:

http://www.joelonsoftware.com/articles/TwoStories.html


"Getting it" can also mean having already tried something before, or having tried enough things that are similar that you can triangulate on what something would actually be like.

I'm all for testing and empirical studies, but it can be taken too far. Zynga focuses so much on measuring and testing, that it sucks all the fun and personality out of their game designs.


Empirical data tends to find local maxima. You need someone with a vision of how it all fits together and which guiding principles to embrace.

