Hacker News | tsdbase's comments

Can you be more specific? Is it car engine telemetry?


Aircraft telemetry.


You really need to know the position of an aircraft to a thousandth of a second?


I said nothing about position. But a lot of things measured have to do with how the structure of the aircraft responds to turbulence, rough air, and aeroacoustic vibration (aka flutter). So there might be modes where structural components have harmonics that are pretty high (several hundred to over 1000 Hz). Therefore you must use a transducer that has a frequency response that can cover that range, and sample the output of the transducer at least twice that rate (at an absolute theoretical minimum, but rule of thumb is 5x oversampling).
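The rule of thumb above can be sketched numerically. A minimal example, assuming a hypothetical 1000 Hz flutter mode (the function name and values are illustrative, not from any real flight-test setup):

```python
# Minimum and practical sample rates for capturing a structural mode,
# per the Nyquist criterion and the 5x-oversampling rule of thumb.

def required_sample_rates(max_signal_hz, oversampling=5):
    """Return (nyquist_minimum, practical) sample rates in Hz."""
    nyquist_minimum = 2 * max_signal_hz       # absolute theoretical floor
    practical = oversampling * max_signal_hz  # common engineering margin
    return nyquist_minimum, practical

# A structural mode at 1000 Hz:
nyquist, practical = required_sample_rates(1000)
print(nyquist, practical)  # 2000 5000
```

So a transducer channel covering a 1000 Hz mode would be sampled at no less than 2 kHz, and more realistically at 5 kHz.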


The highest sample rate I remember seeing was 20 kHz, for the pressure sensors used in turbofan inlet distortion testing.


That isn't out of line with the ways I have seen that sort of thing measured.


He may well be measuring something else, but your question got me wondering how far a passenger plane moves in a thousandth of a second.

So: some quick googling suggests that the "economical cruising speed" of an Airbus A320 is 840 km/h [1]. A quick back-of-envelope calculation gives ~233 m/s, so about 0.23 m per 0.001 s. Given some uncertainty in a single measurement, I'd imagine that's not an unreasonable level of precision when, e.g., your Airbus is landing at an airport.
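The back-of-envelope arithmetic, spelled out:

```python
# How far an A320 at economical cruise moves per millisecond.
cruise_kmh = 840.0
cruise_ms = cruise_kmh * 1000 / 3600  # km/h -> m/s, ~233 m/s
per_millisecond = cruise_ms * 0.001   # ~0.23 m every 0.001 s
print(round(cruise_ms, 1), round(per_millisecond, 3))  # 233.3 0.233
```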

[1]: http://www.airliners.net/aircraft-data/stats.main?id=23


I guess I should mention that GPS position is a perfectly normal thing to include in a telemetry stream.


+ INS position, of course; GPS position is not reliable enough (absent GBAS) to land a big plane with.


I've hit the same problem and I would like to move back to a SQL data store. However, none of the nice dashboards/visualizations support Postgres or any other SQL database (for now)...

My question (to everyone): what do you use as replacement for kibana or grafana?


If you don't need to keep data forever, but only several weeks or months, and you only need numeric time series data and not raw event logs, Prometheus (http://prometheus.io/) is your friend. Since it's optimized towards purely numeric time series (with arbitrary labeled dimensions), it currently uses an order of magnitude less disk space than InfluxDB for this use case, and I've also heard a few reports of people's CPU+IO usage dropping drastically when they switched from InfluxDB to Prometheus for their metrics.

As dashboards for Prometheus, you can currently use PromDash (http://prometheus.io/docs/visualization/promdash/), Console HTML templates (http://prometheus.io/docs/visualization/consoles/), or Grafana (http://prometheus.io/docs/visualization/grafana/).

Durable long-term storage is still outstanding, although experimental replication into OpenTSDB and InfluxDB exists.


I've just implemented a custom backend for graphite-api, which seems to be working OK, although I don't have crazy requirements. https://github.com/brutasse/graphite-api is a cleaned-up fork of graphite (and much easier to install). I'm using Grafana as the front-end; my data is in a PostgreSQL database, and graphite-api links them together.
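This is not the poster's actual implementation, but a minimal sketch of the kind of table and time-range query such a backend boils down to. All names are hypothetical, and sqlite3 stands in for PostgreSQL to keep the example self-contained:

```python
import sqlite3

# Hypothetical metrics table a graphite-api backend could read from.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE metrics (
        path      TEXT NOT NULL,     -- dotted graphite path, e.g. 'web.host1.cpu'
        timestamp INTEGER NOT NULL,  -- unix epoch seconds
        value     REAL NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO metrics VALUES (?, ?, ?)",
    [("web.host1.cpu", 1000, 0.4), ("web.host1.cpu", 1060, 0.7)],
)

# The fetch that a Grafana time-range request ultimately translates to:
rows = conn.execute(
    "SELECT timestamp, value FROM metrics "
    "WHERE path = ? AND timestamp BETWEEN ? AND ? ORDER BY timestamp",
    ("web.host1.cpu", 0, 2000),
).fetchall()
print(rows)  # [(1000, 0.4), (1060, 0.7)]
```

An index on (path, timestamp) would be the obvious next step for a real deployment.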


Hello, I find myself having the same need. Would you agree to share your implementation or point me to it? Thank you!


If I decided to move to using a SQL data store (I use graphite now), I would re-implement the graphite API as a listener process, or write a graphite backend. The biggest strength of Graphite is how simple it is to upload and query metrics, and I wouldn't want to lose that even if the backend were to change.
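For reference, the simplicity being praised is largely Graphite's plaintext protocol: one "path value timestamp" line per metric, sent over TCP (port 2003 by default). A minimal sketch of a sender a listener process would have to accept (host/port and metric names here are illustrative):

```python
import socket
import time

def graphite_line(path, value, timestamp=None):
    """Format one metric in Graphite's plaintext protocol: 'path value timestamp\\n'."""
    if timestamp is None:
        timestamp = int(time.time())
    return f"{path} {value} {timestamp}\n"

def send_metric(path, value, host="localhost", port=2003):
    """Push a single metric to a Graphite-compatible plaintext listener."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(graphite_line(path, value).encode("ascii"))

print(graphite_line("servers.web1.load", 0.42, 1234567890))
# servers.web1.load 0.42 1234567890
```

Any backend that accepts lines in this shape keeps existing collectors working unchanged.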


Shameless plug here - The author of the article does build time-series databases for a living, and more specifically the Datadog monitoring platform - which will gladly collect your millions of metrics, graph them and alert on them, along with all the events you care to keep :-)

We've been through a number of data stores ourselves, starting with Postgres back in 2010 - then on to Redis + Cassandra before we built our own. But that's a story for another post...

http://datadog.com


Did you consider a hybrid solution? You could store the most recent data in a time-series database for visualization purposes and dump the rest into a traditional SQL data store. Other than that, IIRC Grafana had plans for PostgreSQL support, but it's not there yet.

