I was very impressed by Scuba's ability to ingest and search data... and very unimpressed by the interface. I found it terribly unintuitive, and it often didn't give me what I wanted without running multiple queries.
Phabricator was pretty sweet... and the internal version of Mercurial was a dream!
This isn't a great argument... sure, Splunk "saved the day" because it caught that really bad security thing... but what other tools, at half the cost, could have also "saved the day"? If most of them could have, then it's not really worth the price... right?
Which one... there are like 3 (or more)? The ones I worked with were the single biggest cause of SEVs. I wrote some tooling around trying to make it better, but it honestly needed to be completely replaced.
>Workplace is a fantastic replacement of Wiki that surfaces relevant & interesting content to you.
Yea? I found it horrible to have to read through post-after-post-after-post to try and keep up-to-date with what's going on. Not my way of ingesting information.
>Their chat tools are way better than Google Chat / Microsoft Teams (not sure how it stacks up against Slack since I'm not a huge user of them).
100% disagree... telling my dog to bark at the neighbor because I want something from them is better than Workplace chat. I was constantly switching between the internal beta and production, trying to get the features I wanted alongside something stable. Go ahead and send me a link to that message in a chat... oh yea, you can't :-|
This is exactly why I built log-store. It can easily handle 60k logs/sec, but more important, I think, is the query interface: commands that help you extract value from your logs, including custom commands written in Python.
Free through '23 is my motto... Just a solo founder looking for feedback.
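For the curious, a custom command could look something like this — purely an illustrative sketch, since log-store's actual plugin API isn't shown here; the function shape and record format are made up for the example:

```python
# Hypothetical sketch of a custom query command: a generator that
# takes parsed log records and enriches each one. The record shape
# is illustrative, not log-store's real API.
def extract_status_class(logs):
    """Tag each log record with its HTTP status class (2xx/4xx/5xx)."""
    for record in logs:
        status = record.get("status")
        if status is not None:
            record["status_class"] = f"{int(status) // 100}xx"
        yield record

# Example: feed it a few parsed log records.
records = [{"status": 200}, {"status": 404}, {"status": 503}]
print([r["status_class"] for r in extract_status_class(records)])
# → ['2xx', '4xx', '5xx']
```

The generator style means a command can stream over large result sets without holding everything in memory.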
I came across this a few months ago and have been following it pretty closely. Running it locally in a Docker container has been painless. The UI is definitely iterating quickly, but the time-to-first-log was impressive! Happy to keep using it.
Disclaimer: I am friends with the founder of log-store.
I have been beta testing it for a while for small-scale (~50 million non-nested JSON objects) log aggregation, and it's working beautifully for this use case.
It's a no-nonsense solution that is seamless to integrate and operate. On the ops side, it's painless to set up, maintain, and push logs to. On the user side, it's extremely fast and straightforward. End users are not fumbling their way through a monster UI like Kibana; access to the information they need is straightforward and uncluttered.
I can't speak to its suitability in a 1TB-of-logs-per-day situation, but as a small-scale, straightforward log aggregation tool, I can't recommend it enough.
log-store [1] is pretty neat. Thanks for making it. It's super powerful and easy to use. There's a learning curve with the query language, but it's super cool once you figure it out.
https://log-store.com - Going for a low-cost alternative to Splunk. Everyone seems to love Splunk, except for the price. My goal is to replicate 75% or so of the most-used features, but at a dramatically reduced price (currently free). I tried asking on here what those features are, but it flopped: https://news.ycombinator.com/item?id=34412176
Disclaimer: I am friends with the author of log-store and have been occasionally helping test the software.
I've been using log-store at work for some time. It has an awesome 'admin' UX. The sysadmin/ops overhead associated with getting something useful out of log aggregation and analysis is so much lower than with tools like Splunk and Elasticsearch.
If you're interested in getting some kind of log analysis/aggregation spun up quickly and don't need all the complexity associated with things like ES and splunk, definitely give it a test drive.
I've been working on one: log-store.com. It has a built-in DB that is schema-less, but it's really the frontend I've been focusing on, as I think Kibana's interface is poor.
That's _exactly_ what I'm trying to build, and you don't have to pay anything. I was frustrated with what I thought was a poor interface for Kibana, and having to deal with schema for Elasticsearch, so I built log-store. I appreciate any feedback!
I initially thought you were the author of Zinc, but based on your comment history you're trying to build a logging SaaS replacement or something, which, based solely upon its documentation, is ALSO not ES API compatible, so I don't know how to interpret your reply.
I guess I misread your comment... my apologies. What I'm building is more a replacement for Splunk: on-prem log analytics. As you point out, it is not compatible w/Elasticsearch in any way.
> What I'm building is more a replacement for Splunk
Wow, you really are swinging for the fences, then. I wish you all the best, because running Splunk is terrible and ripe for replacement, but its search feature set and UX are massive, so I hope you are laser-focused on who the target audience for your product is.
Thanks! Would love to know more about your experience running Splunk... Can I reach out to you?
Looking for some small niche... Industrial, automotive, even garbage truck monitoring :-) I'm not looking to build a massive public company or anything either... More a lifestyle business. We'll see.
Sure, just submit an "Ask HN: What is your experience running Splunk?" and point me to it. I'm sure you'll get all kinds of horror stories aside from mine.
The interface (GUI) is OK (and honestly looks better than mine), but the usability isn't great. The query syntax is tough to grok, IMO. Using the sample dataset, it took me and a friend who knew Kibana about 5 minutes to figure out how to make a pie chart of the various OS types. Maybe we're just dense and don't know what we're doing, but with log-store you just click the pie chart icon next to the field, and it updates the query and renders the chart. I think that's easier to use.
I hypothesize it's not useful because you're using grep. If you use a tool that can show you multiple lines all tied together by a request ID, it becomes much more helpful.
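To make the point concrete, here's a minimal sketch in plain Python (nothing log-store-specific; the log line format is an assumption) of the difference: instead of grepping isolated lines, group every line that shares a request ID so you see the whole request at once:

```python
from collections import defaultdict

# Assumed log format: "<timestamp> <request_id> <message>"
lines = [
    "2023-01-01T00:00:00Z req-1 GET /cart",
    "2023-01-01T00:00:01Z req-2 GET /checkout",
    "2023-01-01T00:00:02Z req-1 ERROR inventory lookup failed",
    "2023-01-01T00:00:03Z req-1 500 returned",
]

# Group messages by request ID rather than scanning line-by-line.
by_request = defaultdict(list)
for line in lines:
    ts, req_id, message = line.split(" ", 2)
    by_request[req_id].append(message)

# Every line for req-1 together, not scattered across grep output:
print(by_request["req-1"])
# → ['GET /cart', 'ERROR inventory lookup failed', '500 returned']
```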
I am quite often looking for patterns across thousands of requests. As an example, one thing I inherited didn't even log how long each request took to serve. Sure, you can find out how long a single request took by comparing its first and last log entries, but that's just not useful 99% of the time.
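That first-to-last-entry duration is actually easy to compute in aggregate once lines are grouped by request ID — a rough Python sketch, with the timestamp format and log layout assumed for illustration:

```python
from collections import defaultdict
from datetime import datetime

# Assumed format: "<iso_timestamp> <request_id> <message>"
lines = [
    "2023-01-01T00:00:00 req-1 start",
    "2023-01-01T00:00:05 req-2 start",
    "2023-01-01T00:00:02 req-1 done",
    "2023-01-01T00:00:06 req-2 done",
]

# Collect every timestamp seen per request ID.
times = defaultdict(list)
for line in lines:
    ts, req_id, _ = line.split(" ", 2)
    times[req_id].append(datetime.fromisoformat(ts))

# Duration per request = last timestamp minus first.
durations = {
    req_id: (max(stamps) - min(stamps)).total_seconds()
    for req_id, stamps in times.items()
}
print(durations)
# → {'req-1': 2.0, 'req-2': 1.0}
```

From there, computing medians or percentiles over thousands of requests is one more pass over `durations` — the kind of pattern-across-requests question grep can't answer.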