sinaa's comments | Hacker News

I really love the idea of "View source" for base prompts.

If we simply treat prompts as frontend/client-side code (one could even argue it's harder to recover the original code from a JS bundle than to extract a prompt via prompt injection), then function calling (the backend API) could be where folks add additional value and, if reasonable, charge for it.

As long as you can audit the function calls and see what's sent and received, just as you can with a browser, I think it becomes closer to a familiar and well-tested model.


The core value of code reviews is knowledge transfer and increased harmony across the codebase.

You certainly don't want to work in a codebase riddled with different ways of doing the same thing. Code reviews help make patterns more structured and more widely adopted. They also help everyone, at all skill levels, learn something from their peers by reading their code and sharing thoughts.

Reviews also make more people on the team familiar with the codebase, so that more people than just the original author are capable of changing and improving the relevant parts.

The potential benefit of finding defects is only a nice-to-have!


Great work!

Do you simply do a BFS to find the shortest paths? If so, are you doing any tricks to avoid the path explosion problem?


Thanks! I'm glad you asked. I actually do what I call a bi-directional breadth first search[1]. The gist of it is that instead of just doing a BFS from the source node until I reach the target node, I do a reverse BFS from the target node as well and wait until the two searches overlap. That helps with the exploding path problem, although that still becomes an issue for longer paths (>= 5 degrees generally). I also pre-compute all the incoming and outgoing links for each page when I create the database[2] so I don't need to do that upon every search, which resulted in a huge performance boost.

[1] https://github.com/jwngr/sdow/blob/a2699dc95d884ec64a4641630... [2] https://github.com/jwngr/sdow/blob/a2699dc95d884ec64a4641630...
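For anyone curious, the bi-directional idea can be sketched roughly like this. This is a minimal, hypothetical Python sketch, not the actual sdow implementation; `forward` and `reverse` are assumed adjacency maps of outgoing and incoming links:

```python
from collections import deque
from math import inf

def bidirectional_bfs(forward, reverse, source, target):
    """Shortest path length in hops via BFS from both ends, or None.

    forward[u] -> iterable of outgoing neighbours of u
    reverse[u] -> iterable of incoming neighbours of u
    """
    if source == target:
        return 0
    dist_s, dist_t = {source: 0}, {target: 0}   # double as visited sets
    q_s, q_t = deque([source]), deque([target])
    radius_s = radius_t = 0
    best = inf
    while q_s and q_t:
        # expand the smaller frontier one full level to stay balanced
        if len(q_s) <= len(q_t):
            q, dist, other, graph = q_s, dist_s, dist_t, forward
        else:
            q, dist, other, graph = q_t, dist_t, dist_s, reverse
        for _ in range(len(q)):
            u = q.popleft()
            for v in graph.get(u, ()):
                if v in other:  # the two searches touched
                    best = min(best, dist[u] + 1 + other[v])
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        if dist is dist_s:
            radius_s += 1
        else:
            radius_t += 1
        # any still-undiscovered path must be longer than radius_s + radius_t
        if best <= radius_s + radius_t:
            return best
    return best if best < inf else None
```

Note the termination check: naively returning on the very first meeting point can overshoot by one hop, so the sketch keeps expanding until the best candidate can't be beaten by anything outside the two frontiers.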


When I was first playing with this, I fully expected you to be using an expensive, performant solution like Neo4j. When I read that you were using SQLite, I didn't believe it at first and thought it was a typo until I looked over the source.

That's a well-thought-out performance enhancement, and it's very impressive that the app runs so blazingly fast on SQLite.


My thoughts exactly. It's impressive that, despite relying on BFS, the tool manages to be so fast on a tiny Google Cloud node, even at depths >= 5!


SQLite in particular is often overlooked. A marvelous library that should be part of every personal toolkit.


Neo4j is not necessarily more performant, especially for a particular use case like this.


The downside of the bidirectional search seems to be that things like place names are linked via short paths through boring articles like "Census designated place" or "City".


The bi-directional nature of the search does not change the end result. It is simply a performance improvement.


Still, it can be annoying. Maybe there could be an option to exclude articles with certain text/link ratios?


It is bidirectional BFS: https://github.com/jwngr/sdow/blob/master/sdow/breadth_first...

A* can't be used, given that the remaining path cost (the expected distance to the target, i.e. the heuristic) is unknown.

Any ideas on how such an algorithm could be used without precomputing the entire graph?


You make a very good point on this.

As you say, a big win for text is that you can quickly skim it, ignore the parts you don't deem relevant, and essentially use your working/short-term memory only for the information in the context you're most interested in. Therefore, even if the device responds to you in voice with an entirely human-like experience, that's still inferior (at least in terms of speed) to the text-based experience.

Even when voice is used as an input-only mechanism, it still carries a lot of redundant/useless information and verbosity.

Compare "OK google, what is the price of bitcoin today?" vs googling "bitcoin price".


Really happy for GitLab and wish you guys all the best (and luck).

We've been using GitLab for all our company's development; however, one major issue pushing us to switch over to GitHub is that GitLab goes down almost every other day (or sometimes every day) due to deployments. Although the site often remains available during these windows, CI jobs don't execute and triggers don't fire, which is still extremely disruptive. I really hope that you prioritise disruption-free deployments.


Thanks for your comment. Self hosted GitLab has been very reliable but GitLab.com's availability has been far below par.

It is our nr. 1 focus to improve this. Deployments should not cause disruptions. Over the last few months we solved the speed of GitLab.com. Availability is next.


Firstly, I'm really happy to see Gitlab growing and desperately want to see it succeed. We need an open-source alternative, and the organisational transparency within the company is also wonderful. However...

> we solved the speed of GitLab.com

I'm currently browsing a repo tree and each page is taking 5+ seconds to load (I am literally browsing HN while waiting for pages to open). This is a common experience. Please please don't get complacent. Not yet.


I'm the Gitaly Lead at GitLab. Gitaly is a GitLab project to move all the git operations out of the GitLab Ruby monolith into a Git RPC service.

https://gitlab.com/gitlab-org/gitaly/

We're hoping to complete this by the end of the year. Once it's done, we'll be able to end our reliance on NFS, which should greatly improve performance and uptime on GitLab.com and other large GitLab instances. In fact, we're already seeing some big performance payoffs as we bring services online.

> Please please don't get complacent. Not yet.

I can confirm that this is not the case. We're focused and working really hard to improve performance and we're also working hard to improve our metrics, so that we can target optimizations where we can gain greatest benefit.

It's also worth pointing out that routinely experiencing 5+ second render times when browsing a repo homepage is outside our 99th percentile latencies for that route. I'd be interested in digging into it further. Would you mind creating an issue at https://gitlab.com/gitlab-org/gitaly/issues/new (mark it as confidential if you wish) and pinging me: `@andrewn`?


The 5+ second delay is a little above the average (I'm rarely frustrated enough to actually go off and get distracted by HN while I wait), but notable delays on loading repo trees are definitely common, if not usually quite so high.


Repo tree rendering is a known issue that we're currently looking into: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/14680

We hope to make improvements in this area soon. Hopefully you'll notice them!


Thanks, I wanted to indicate that our primary focus has moved from speed to availability. But you're very right that there is still much to improve. We're not happy until every page is fast.

Specifically about the repo tree page load:

1. I'll share this comment with our VP of Eng

2. We're working on Gitaly to make git calls faster.

3. We're working on a multifile editor to make browsing faster.

BTW, here is a graph of how some of the other pages have improved: https://www.dropbox.com/s/8ztha1av8t0fcau/Screenshot%202017-...


Thanks. Good feedback to hear.


This is both clever and useful!

Is it possible to present a resized version of the route underneath each target? (Or perhaps show it on hover.)

I understand that this would mean having overlapping map snippets of different sizes (with different centres), but some visual representation of the route to take could be nice.

Currently, having to click each target to see its path reduces usability (going back and forth between the suggestions is tedious).


Just installed Nightly thinking "how can it be any different from the Developer Edition?" ... Wow, I couldn't be more wrong!

It is indeed a night and day experience! More approachable UI, and much faster performance... Big well-done to everyone working hard to keep Firefox competitive :-)


As an aside, whenever I click on a Slashdot article about Mozilla and Firefox...ALMOST ALL the comments bash Mozilla and Firefox to pieces...and mostly from anonymous posters.

On HN we seem to get reasoned arguments for or against Mozilla and Firefox, but not seemingly mindless hate.

Now, I know that HN is a higher-class forum these days and Slashdot is no longer in its heyday, but I have to say it is refreshing.


Thanks for the links.

What scares me is that over 1/3 of incidents were marked as "Investigation complete; no suspect identified".

And only 18% of attacks (156/833) resulted in a "Charged/Summoned" outcome.


Compared to 4 ns on Windows, many calls to the method can create significantly different performance profiles across the platforms.


App server availability is now 98.3% over the past month, which seems pretty bad!
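For scale, a quick back-of-the-envelope calculation (assuming a 30-day month): 98.3% availability works out to roughly 12 hours of downtime per month.

```python
# downtime implied by a given monthly availability, assuming a 30-day month
hours_in_month = 30 * 24  # 720

for availability in (0.983, 0.99, 0.999):
    downtime_h = (1 - availability) * hours_in_month
    print(f"{availability:.1%} uptime -> {downtime_h:.1f} hours down per month")
```

By contrast, the classic "three nines" (99.9%) allows well under an hour per month.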


Good ol' one 9 uptime.


"One nine up-time" rolls right off the tongue! Must be a PR play!

