But you still have to pick up the tickets at the machine. Additionally, my phone's internet connection is not recognized as "being in Japan", so I can't access the QR code needed for the ticket without wifi. You can work around it (save the QR code while you have wifi), but it all just seems so inefficient compared to all the countries where you can _book_ your tickets using a mobile app.
This sounds great, especially as it's linked to the IC card. Unfortunately, I couldn't find anything similar for JR West or JR Kyushu, which I will be using in the next few weeks. Hopefully they will implement the same system in the future.
They are then asked whether they agree or disagree with a (presumably hypothetical?) company's proposal to reduce employee welfare, such as replacing a meal with a shake. The two groups showed different preferences.
This makes me think about that old question of whether you thank LLMs or not. Thanking is treating LLMs more like humans, so if what this paper found holds, maybe that'd nudge our brains subtly toward dehumanizing other real humans!? That's so counterintuitive...
Do you understand how they chose the two groups? And why show one group one video, and the other group the other video? Shouldn't both groups be shown the same video, and then check whether the group division method had any impact on the results? E.g. if group one was dance lovers and group two was dance haters, you wouldn't get any data on the haters, since they were shown the parkour video instead of the dance video.
Also, interesting bit: "Participants in the high (vs. low) socio-emotional capability condition showed more negative treatment intentions toward employees"
Apparently you do not understand how they chose the two groups. Group identity was not based on a survey or any attribute of the participating individuals.
The low and high socio-emotional groups refer to whether a group was shown the low or high socio-emotional video. The pre-test, and the exclusion of participants for lack of attention or failure to follow instructions, were performed before group assignment, which was presumably random.
But that's beside the point of the paper. It is about how humans' perception of the "socio-emotional capabilities of autonomous agents" changes their behavior toward other humans. Whether people arrive at that perception because "LLMs hack our brain" or for some other reason is largely irrelevant.
I don't think the point of an SLO is that flakiness is out of control. The point of framing it as an SLO is the realization that neither extreme is good. Flakiness cannot be allowed to get out of control, so some effort must be spent to contain it; but eliminating it completely is unnecessarily perfectionist, and thus a waste of precious engineering bandwidth. The whole point is to avoid the "bureaucratic games", as you call them.
My theory is that the lack of an easy mechanism for measuring flakiness is what's stalling progress. If overall flakiness can be measured, and the top offending tests identified, then I think it becomes a no-brainer to spend effort curtailing them when flakiness gets too high, and otherwise to exclude flaky tests from, say, the PR merge gate.
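To make that concrete, here's a minimal sketch of what such a measurement could look like. The data model and names are hypothetical (not from any particular CI system): assume you've collected pass/fail records from retried builds of the same commit, so real regressions are ruled out and any test that both passed and failed is flaky.

```python
from collections import defaultdict

def flakiness_report(runs):
    """runs: iterable of (test_name, passed) tuples collected from
    retried builds of the same commit. A test that both passed and
    failed on identical code is flaky. Returns (name, fail_rate)
    pairs sorted worst-first."""
    passes, fails = defaultdict(int), defaultdict(int)
    for name, ok in runs:
        (passes if ok else fails)[name] += 1
    flaky = {
        name: fails[name] / (passes[name] + fails[name])
        for name in passes
        if fails[name] > 0  # passed AND failed => flaky
    }
    return sorted(flaky.items(), key=lambda kv: kv[1], reverse=True)

# Toy retry history: test_login is stable, the others flake.
history = [
    ("test_login", True), ("test_login", True),
    ("test_cache", True), ("test_cache", False), ("test_cache", True),
    ("test_net", False), ("test_net", True),
]
print(flakiness_report(history))
# worst offenders first: test_net flakes 50% of the time
```

With a report like this, an SLO check is one comparison: block merges only when the aggregate rate crosses the budget, and file tickets against the top few offenders rather than against every flaky test.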
This is indeed a religion, because in my experience people tend to feel strongly while holding very different positions. You can already see it in this thread.
I think quantifying and prioritizing is key, like you wrote. Respected engineering organizations like Google and GitHub have all arrived at the same place. Flakiness is often unevenly distributed, so find and tackle the worst offenders. Don't try to eliminate flakiness entirely, because that's not economically viable.
I'm trying to put my money where my mouth is... we'll see how it goes.
I forgot which book it was (maybe "The Three-Body Problem"?), but there was a science fiction story in which a Chinese king makes his soldiers act as logic gates and his army becomes a computer. I was like, wow, I'd never thought about that, but it totally makes sense!!
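The trick works because a single universal gate is enough to build everything else. A toy illustration (my own sketch, nothing to do with the book's actual scene): each "soldier" computes NAND, and composing soldiers yields NOT, AND, OR, XOR, and eventually arithmetic.

```python
def nand(a, b):
    """One 'soldier': lowers the flag (0) only if both inputs are raised."""
    return 0 if (a and b) else 1

# Every other gate can be drilled from NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

def xor(a, b):
    # Classic 4-NAND construction of XOR.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a, b):
    """Add two bits: returns (sum bit, carry bit)."""
    return xor(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} -> sum={half_adder(a, b)[0]}, carry={half_adder(a, b)[1]}")
```

Chain half-adders into full adders and you have binary arithmetic, which is essentially what the fictional army was doing with flags.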
Hi, I'm Kohsuke, the creator of Jenkins. I'm sorry to hear that you had a bad experience.
Would you be willing to let me interview you so that I can learn where it failed your expectations? I'm honestly trying to learn where we can do things better, and often what's obvious to one person is completely incomprehensible to another. So I think this is a great opportunity for me to learn a fresh perspective.
Jenkins isn't a terrible experience, and I've used it personally and professionally (and would do so again where it's a good fit), but for my current project it missed a few of the requirements. In trying to rationalize the whole NIH thing, I talked to some friends and peers about their CI experiences. I got pretty consistent responses on Jenkins.
My relevant requirements:
1.) The software needs to be self-hostable and run on the BSDs. For the most part this narrows down the options to buildbot and the Java based CI options (Jenkins and GoCd). Travis could probably be run on FreeBSD, but the open source bits are essentially abandoned (e.g. some repos are missing) with no documentation. Nearly everything else these days is strongly tied to Linux via docker. Some free hosted services offer a FreeBSD target, but I'm looking to test on DF/Free/Net/OpenBSD.
2.) The software needs to scale down. The GoCd folks suggested that the agent would need around 500 MB of RAM. I haven't profiled Jenkins, but I can't imagine its agent being that much lighter weight. Certainly the Jenkins server process is glacially slow. By contrast, my prototype in Rust is showing memory usage of under 5 MB for each process (agent + server). I expect that to grow a little, but not by an order of magnitude.
3.) The software needs to handle multi-arch builds. Travis does this extremely well. Buildbot and GoCd, kinda. Jenkins does not handle this use case (e.g. pipelines + matrix builds are not supported). I really like the way Travis basically handles these as sub jobs.
My experience:
A.) The Jenkins documentation is terrible, if it exists at all. I've heard that this has improved in the year or so since I last looked at Jenkins (but that hasn't been my experience). I mentioned this to one of the CloudBees guys at the DevOps Days conf I went to last year and got an ack that this is a known issue (although CloudBees has driven a ton of Jenkins documentation and improvement). At MegaCorp we paid a fortune to CloudBees, which helped a ton but didn't really help end users. I cannot overstate just how much of a detriment the documentation is.
At the opposite end of the spectrum, Rust (except for the async stuff) and Postgres are just a dream come true. If it's any consolation, the GoCd documentation is pretty atrocious as well. Almost none of it is up to date with the current UI.
B.) The Jenkins community tends to cargo-cult Jenkins-Groovy snippets like crazy, potentially as a result of #A. A good community improves the documentation and fills in when there are gaps in it.
C.) Bootstrapping Jenkins is not something easily done in an automated way. The CLI is not stable, and I had tons of trouble trying to get plugins and dependencies sorted without having to drop into the GUI. For homelab stuff, I've automated the bootstrapping of nearly everything with Ansible, except for Jenkins.
I don't think these are new or unknown issues; in talking to friends and peers, I've found that the typical response regarding Jenkins is along the lines of: Jenkins works well enough that we're not motivated to switch, but A & C are our main pain points.
Thanks for taking the time to put this together. Yes, much of it isn't new, but it's always good to hear how these dots are connected in other people's views to form a theme.
Thanks for your thoughts. I took your main question to be "why bother?"
I think a part of it is that I fundamentally believe in an extensible system. The world of software development is so diverse, and we have smart people everywhere. So I always felt that the best thing a geek like me can do to other geeks is to give them a shoulder to build on. I don't think that's a solved problem, and to me, that'll always be an important value of the Jenkins project, more so than any code.
I think a part of it is the responsibility to users. Jenkins is very widely used software, and it's an incredibly important part of the software development process for many. I appreciate that kind of trust, and I want to deliver better software for them. I think people in the community share the same passion.
As CTO of CloudBees, serving our users and customers and broadening the adoption base are obviously important business goals. So the interests are aligned there as well.
And finally, I think this kind of "reinvention of the brand" happens all the time. Windows got reinvented from 95 to NT, Firefox got reinvented a few times. There are many other examples less famous but closer to my part of the universe, like Maven 2, GlassFish 3, ...
Exactly our thought! The whole Jenkins Pipeline is built around the notion that your job definition should be version controlled (and you don't necessarily have to lose the GUI; see our "Blue Ocean" pipeline editor that now comes out of the box).