Our organization looks similar: models are almost all written in R, but the business operates in C#. We just use simple HTTP APIs to bridge the two. Specifically, we do the following:
1. Use R packages to bundle up the models with a consistent interface.
2. Create thin Plumber APIs that wrap these packages/models.
3. Build Docker images from these APIs.
4. Deploy API containers to a Docker Swarm (but you could use any orchestration).
5. Stick Nginx in front of them to get pretty, human-readable routes.
6. Call the models via HTTP from C#.
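To make step 2 concrete, here's roughly what one of our thin Plumber wrappers looks like. The package and function names are made up for illustration (`ourmodels` stands in for one of the model packages from step 1):

```r
# plumber.R -- a thin HTTP API over a packaged model
# (ourmodels::predict_churn is a hypothetical example, not a real package)
library(plumber)

#* Health check endpoint for the orchestrator
#* @get /healthz
function() {
  list(status = "ok")
}

#* Score a single observation posted as JSON
#* @post /predict
function(req) {
  input <- jsonlite::fromJSON(req$postBody)
  list(prediction = ourmodels::predict_churn(input))
}
```

Serving it is one line -- `plumber::pr_run(plumber::pr("plumber.R"), port = 8000)` -- so the Docker image in step 3 basically just installs the model package and runs that.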
This stack works pretty well for us. Response times are generally fast and throughput is acceptable. And the speed at which we can get models into production is massively better than when we used to semi-manually translate model code to C#...
Probably the biggest issue was getting everyone on board with a standard process and API design (it took a few iterations), and putting in place all the automation, process, and culture needed to help data teams write robust, production-ready software.
Most of these tidyverse vs. data.table arguments end up sounding a bit irrelevant to me. They tend to focus on syntax, which is largely down to preference, or performance, which is really only important once in a while. And they seem oddly focused on the need to choose one or the other, when it seems obvious that all are good choices with various tradeoffs.
On the other hand, it's nice to see some criticism of RStudio and Hadley, even if it does stray into the conspiratorial. They've done a lot for the R community, but they do have some obvious blind spots.
It's not that RStudio has nefarious motives that cause harm, it's that they have neutral and sometimes benevolent motives with unforeseen consequences.
I'm in a similar boat, using base R, data.table, or tidyverse as needed. But over time I've found that I respect, more and more, the good decisions that have been made in base R.
Absolutely. I usually respond to the "R-is-not-a-real-language" line with something like "R is syntactic sugar on a Lisp where the atoms are APL arrays". It's plenty interesting from a computer science perspective, if you bother to look.
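To illustrate the Lisp-like core: every R expression is just data you can take apart and rebuild.

```r
# An unevaluated R expression is a call object -- essentially a Lisp list
e <- quote(x + y)
as.list(e)               # the operator `+` followed by its arguments x and y

e[[1]] <- as.name("*")   # rewrite the call: swap + for *
eval(e, list(x = 2, y = 3))   # evaluates x * y -> 6
```

That code-is-data property is what makes things like non-standard evaluation in the tidyverse possible at all.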
For instance, why does R use <- instead of = for assignment? Because the initial versions of S predate C -- developed down the hall at Bell Labs -- the language that popularized = for assignment.
This is attractive in theory, but it's difficult to do in practice. Usually the places with the worst comparative odds are also quite aggressive about banning sharp accounts (including any proxy accounts you'd set up at this rival sportsbook). You've also got to take the vig into account, which is likely high. So in most cases it's probably not worth the cost, especially when there are plenty of other ways to use sharp information to make money.
It's not worth it. Even if they don't ban you outright, the stakes/winnings are very limited for that kind of thing to make sense.
And then there are odds data services like Betradar which provide every subscribing bookmaker with enough near-live data to instantly improve their prices so as to make any arbitrage very hard.
(source: worked at several bookmakers)
In practice these outfits have accounts at large Asian sportsbooks (which don't bother to ban sharps) or at Pinnacle (which has traditionally welcomed them). Even setting aside the risk of being banned, successful syndicates are unlikely to bet at major Western sportsbooks because the vig (margins) are too high.