
The reason I kept the minimum OpenGL 3.3 requirement is that it is supported on the biggest range of platforms. OpenGL 4.x features require newer-generation cards. Of course, with time this is becoming less and less relevant.

In some parts of the code, I did make use of OpenGL 4.x features like glMultiDrawIndirect, but these are put behind a runtime check for driver support, with a slower fallback path for OpenGL 3.3.
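The check-then-fallback pattern can be sketched with a function pointer resolved once at startup. This is a minimal illustration of the technique, not the engine's actual code; the capability query and draw functions are hypothetical stand-ins (a real implementation would query the GL version or extension string and issue actual GL calls):

     #include <stdbool.h>
     #include <stdio.h>

     /* Hypothetical driver query; a real engine would check the
      * GL version / extension string after context creation. */
     static bool driver_supports_multi_draw_indirect(void)
     {
         return false; /* pretend we're on a GL 3.3-only driver */
     }

     /* Fast path: one API call submits many draws (GL 4.3+). */
     static void draw_batch_indirect(int ndraws)
     {
         printf("glMultiDrawIndirect: %d draws in one call\n", ndraws);
     }

     /* Fallback: loop over individual draw calls (GL 3.3). */
     static void draw_batch_loop(int ndraws)
     {
         for(int i = 0; i < ndraws; i++) {
             /* one glDrawElementsInstanced(...) per batch entry */
         }
         printf("fallback: %d separate draw calls\n", ndraws);
     }

     /* Resolved once at startup, then called everywhere. */
     static void (*draw_batch)(int ndraws);

     int main(void)
     {
         draw_batch = driver_supports_multi_draw_indirect()
                    ? draw_batch_indirect
                    : draw_batch_loop;
         draw_batch(64);
         return 0;
     }

The rest of the rendering code only ever calls draw_batch and never needs to know which path was picked.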


Why didn't you directly use SDL for rendering instead of OpenGL? Was it for performance reasons? OpenGL is deprecated on Mac, and SDL uses Vulkan on Windows and Linux, which I guess is better, performance-wise, than OpenGL.


SDL gives you an API for 2D rendering, mostly limited to drawing boxes, lines, and bitmaps. OpenGL lets you program the graphics card for any use case, most commonly 3D graphics with room to add complex effects and post-processing. The SDL API is just nowhere near flexible enough for what I want to achieve.

Plus, Vulkan is not really "faster" than OpenGL. It just gives you a different API for programming the same graphics hardware, which in the hands of the right person can be used for writing code which is "faster".


That's a very reasonable approach and what I would typically do at work.

I guess my surprise was mainly because it's the kind of extra bother I personally like to get away from in my personal projects. Then again, I typically don't write things meant for adoption by a wide audience. Kudos for making the necessary effort.


All the fundamental behaviours (movement, combat, resource harvesting) are implemented in C.

The engine uses Python as a scripting/config language. You can use it to hook into a lot of events pushed from the engine core (unit got selected, unit started movement, unit started harvesting, etc.) and customize or change the unit behaviours.
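The publish/subscribe shape of such an event system can be sketched in plain C. This is an illustrative sketch only; the event names, the handler signature, and the fixed-size handler table are all my own simplifications, not the engine's actual API:

     #include <assert.h>
     #include <stdio.h>

     /* Hypothetical event IDs; the real engine has many more
      * (selection, movement, harvesting, ...). */
     enum eventtype {
         EVENT_UNIT_SELECTED,
         EVENT_MOVE_BEGIN,
         EVENT_COUNT
     };

     typedef void (*handler_t)(void *arg);

     #define MAX_HANDLERS 8
     static handler_t s_handlers[EVENT_COUNT][MAX_HANDLERS];
     static int       s_nhandlers[EVENT_COUNT];

     /* A script (or any engine system) subscribes to an event... */
     static void event_register(enum eventtype e, handler_t h)
     {
         assert(s_nhandlers[e] < MAX_HANDLERS);
         s_handlers[e][s_nhandlers[e]++] = h;
     }

     /* ...and the engine core pushes the event to all subscribers. */
     static void event_publish(enum eventtype e, void *arg)
     {
         for(int i = 0; i < s_nhandlers[e]; i++)
             s_handlers[e][i](arg);
     }

     static void on_selected(void *arg)
     {
         printf("unit %d selected\n", *(int*)arg);
     }

     int main(void)
     {
         event_register(EVENT_UNIT_SELECTED, on_selected);
         int uid = 7;
         event_publish(EVENT_UNIT_SELECTED, &uid);
         return 0;
     }

In the engine, the handlers registered this way would be Python callables invoked through the CPython API rather than plain C function pointers.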


Why Python though (and not lua or something)? It is rather annoying to use as embedded language.


My initial idea was that I wanted a more "sexy" language for scripting. Python is a lot more popular and (arguably) more enjoyable to use than Lua. There's a lot of cool stuff like list comprehensions. Plus I had a selfish reason of just wanting to learn more Python and fool around with the CPython code.

Over the duration of the project, I really did learn to appreciate why Lua is embedded into games and Python isn't. Lua is really small and you have full control over everything. And you're eventually going to need that control when you implement features like reflection, pausing the game, etc. CPython is this big shared library that does its own thing and that you have a lot less control over. The parts where it just doesn't expose enough through its API to do what you want are a real huge pain. I ended up writing a bunch of code to serialize all the internal data structures and this was a massive chore. Also, you have a lot less control over CPython's performance and memory allocations.

I didn't really appreciate these things when I started the project, hence I went with Python. But since I ended up doing the work anyway, I guess you can still enjoy the benefits of it.


Interesting; thx for the answer!


Over the long development cycle of the project, I've accumulated a nice little library of data structures, allocators, and utilities (mostly in src/lib). Between those and the low-level engine systems such as the task scheduler and event system which have a generic API for any other system to make use of, I believe I have good foundations in place to develop new engine systems relatively easily. Of course, this required the initial investment of laying these foundations.

Most error handling just consists of checking the return value of a call and propagating it up to the caller if necessary. Sometimes I also set an errno-like variable with an error message to check it in the top-level code. It's a bit wordy but obvious and sufficiently good for all my use cases.
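The pattern described here can be sketched as follows. The function names and the error-buffer mechanics are illustrative assumptions, not the engine's actual code:

     #include <stdbool.h>
     #include <stdio.h>
     #include <string.h>

     /* errno-like slot, checked by top-level code after a failure. */
     static char s_errbuff[256];

     static void set_error(const char *msg)
     {
         strncpy(s_errbuff, msg, sizeof(s_errbuff) - 1);
     }

     static bool load_texture(const char *path)
     {
         (void)path;
         /* pretend the file is missing */
         set_error("could not open texture file");
         return false;
     }

     static bool load_assets(void)
     {
         if(!load_texture("tank.png"))
             return false; /* propagate up; message already set */
         return true;
     }

     int main(void)
     {
         if(!load_assets())
             printf("init failed: %s\n", s_errbuff);
         return 0;
     }

Intermediate callers only forward the boolean; the human-readable message is set once at the failure site and read once at the top.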

I don't think C limits the size of the project. It's all about good organization and coming up with the right higher-level protocols/conventions. This, IMHO, is what allows or prevents the code size from scaling gracefully.


My experience in C as well. I have been questioning my own sanity over the years, because I get such different impressions of the language from people on this very site, so it's a relief to get your perspective.


Coming from C++, the main thing I miss in C is generic data structures. Having to resort to macros to implement generic vectors (https://github.com/eduard-permyakov/permafrost-engine/blob/m...) is cumbersome to say the least. It is also hard to beat the performance of the STL data structures when implementing something seemingly straightforward like a vector type.
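For readers unfamiliar with the macro approach being referenced: a single macro invocation stamps out a typed struct and its functions per element type. This is a condensed sketch of the general technique, not the linked engine header:

     #include <stdio.h>
     #include <stdlib.h>

     /* One invocation generates a complete typed vector. */
     #define VEC_TYPE(name, type)                                       \
         typedef struct vec_##name {                                    \
             type  *array;                                              \
             size_t size, capacity;                                     \
         } vec_##name##_t;                                              \
                                                                        \
         static int vec_##name##_push(vec_##name##_t *v, type in)       \
         {                                                              \
             if(v->size == v->capacity) {                               \
                 size_t newcap = v->capacity ? v->capacity * 2 : 8;     \
                 type *tmp = realloc(v->array, newcap * sizeof(type));  \
                 if(!tmp)                                               \
                     return 0;                                          \
                 v->array = tmp;                                        \
                 v->capacity = newcap;                                  \
             }                                                          \
             v->array[v->size++] = in;                                  \
             return 1;                                                  \
         }

     VEC_TYPE(int, int)   /* expands to vec_int_t and vec_int_push() */

     int main(void)
     {
         vec_int_t v = {0};
         for(int i = 0; i < 100; i++)
             vec_int_push(&v, i);
         printf("%zu %d\n", v.size, v.array[42]);
         free(v.array);
         return 0;
     }

Everything works, but every operation needs its own backslash-continued macro body, which is exactly the cumbersome part compared to a C++ template.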


> It is also hard to beat the performance of the STL

If you want to re-create the STL, maybe. But you can make custom data structures tailored to your task at hand instead.

For example, instead of a std::map or std::unordered_map that allocates and initializes each node separately, you could preallocate some of them in a big chunk of memory, hand them out via a bump allocator scheme, and later free them all at once. Instead of a std::sort algorithm, you could use a bucket sort if it's possible in your situation, to improve your asymptotics from O(n log n) to O(n). Etc, etc.
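The preallocate-and-bump idea from the first example can be shown concretely. The node type and function names here are hypothetical; the point is the allocation pattern (one big malloc, pointer bumps per node, one free for everything):

     #include <assert.h>
     #include <stddef.h>
     #include <stdio.h>
     #include <stdlib.h>

     /* Stand-in for a map/tree node that std::map would
      * allocate and initialize one-by-one. */
     struct node {
         int key, value;
         struct node *next;
     };

     struct bump {
         unsigned char *base, *cur, *end;
     };

     static int bump_init(struct bump *b, size_t size)
     {
         b->base = b->cur = malloc(size);
         if(!b->base)
             return 0;
         b->end = b->base + size;
         return 1;
     }

     static void *bump_alloc(struct bump *b, size_t size)
     {
         if((size_t)(b->end - b->cur) < size)
             return NULL;
         void *ret = b->cur;
         b->cur += size;
         return ret;
     }

     int main(void)
     {
         struct bump b;
         assert(bump_init(&b, 4096 * sizeof(struct node)));

         /* Hand out 4096 nodes with zero per-node malloc calls. */
         for(int i = 0; i < 4096; i++) {
             struct node *n = bump_alloc(&b, sizeof(*n));
             assert(n);
             n->key = i;
         }

         free(b.base); /* release every node at once */
         printf("ok\n");
         return 0;
     }

Since every allocation here has the same size and alignment, the bump pointer stays correctly aligned without extra rounding logic.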


There's plenty of papers you can find online where people beat the STL: http://www.cs.cmu.edu/~dga/papers/cuckoo-eurosys14.pdf It's usually because there's some use case or underlying assumption they can make that the STL can't, because the STL is designed to be the best possible one-size-fits-all-approach, which is even further constrained by things like ABI history. So it's always a discouraging sign to hear a software engineer talking about such things as though they're holy, since that'd be like hiring a tailor who uses spandex.


Are there any features you would like in C that would make things easier? I'm writing a C compiler and adding low-hanging stuff such as removing the need for forward declarations and supporting multiple return values. One thing I'm flip-flopping on is function overloading, for example. I'd appreciate your opinion.


I guess the "shtick" of C is that it has a small and obvious feature set. The readability and style of programming follows from that. As you start adding more and more language features and constructs, you start getting all the other languages that were derived from C and you no longer have C.

During the development of the project, I had a thought that it would be nice to have a RAII/defer mechanism to get rid of repetitive code for freeing resources at the end of a function. But I'm not sure if that's really necessary since you can just put the 'free' calls at the end of the function and insert some labels between them in a kind of 'stack'. This perhaps is more in the spirit of the language - a bit more wordy, but having less voodoo done by the compiler.
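The label-stack idea can be made concrete with the classic goto-cleanup pattern: each failure jumps to the label that frees everything acquired so far, in reverse order. A minimal sketch (the resources and function name are illustrative):

     #include <stdio.h>
     #include <stdlib.h>

     static int init_subsystem(void)
     {
         char *a = malloc(64);
         if(!a)
             goto fail_a;

         char *b = malloc(64);
         if(!b)
             goto fail_b;

         FILE *f = tmpfile();
         if(!f)
             goto fail_f;

         /* ...use a, b, f... */

         fclose(f);
         free(b);
         free(a);
         return 1;

     /* The labels form a 'stack': falling through frees in
      * reverse order of acquisition. */
     fail_f:
         free(b);
     fail_b:
         free(a);
     fail_a:
         return 0;
     }

     int main(void)
     {
         printf("%s\n", init_subsystem() ? "ok" : "failed");
         return 0;
     }

It is wordier than defer or RAII, but every cleanup is visible in the source exactly where it happens.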


Some people like to make a defer macro like this (note the two-level CONCAT, needed so that __LINE__ actually expands into the variable name):

     #define CONCAT_(a, b) a##b
     #define CONCAT(a, b) CONCAT_(a, b)
     #define itervar CONCAT(iter_, __LINE__)
     #define FOR_DEFER(pre, post) \
         for (int itervar = ((pre), 0); !itervar++; (post))

     Foo *foos;
     FOR_DEFER(foos = alloc(Foo, 1024), free(foos))
     {
         // This "loop" only runs once.
     }
Those can be easily stacked:

     FOR_DEFER(foos = alloc(Foo, 1024), free(foos))
     FOR_DEFER(bars = alloc(Bar, 1024), free(bars))
     {
          ///
     }
Here is a memory arena macro (the nested for is needed because the arena declaration can't live inside FOR_DEFER's comma expression):

     #define FOR_MEMORY_ARENA(name)                        \
         for (int itervar = 0; !itervar; )                 \
         for (Arena name = make_arena(); !itervar++;       \
              free_arena(&name))

     FOR_MEMORY_ARENA(arena)
     {
         Foo *foos = arena_alloc(&arena, Foo, 1024);
         Bar *bars = arena_alloc(&arena, Bar, 1024);
         // all allocations automatically freed at the end
     }


GCC and Clang have a cleanup attribute you can use for this. systemd uses it.

https://fdiv.net/2015/10/08/emulating-defer-c-clang-or-gccbl...
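A minimal example of the attribute (a GCC/Clang extension, not standard C; the AUTOFREE name is my own):

     #include <stdio.h>
     #include <stdlib.h>

     /* The cleanup function receives a pointer to the variable. */
     static void free_ptr(void *p)
     {
         free(*(void **)p);
         printf("freed\n");
     }

     #define AUTOFREE __attribute__((cleanup(free_ptr)))

     int main(void)
     {
         AUTOFREE char *buff = malloc(128);
         if(!buff)
             return 1;
         printf("using buffer\n");
         return 0; /* free_ptr(&buff) runs here automatically */
     }

Unlike the for-loop defer macros, the cleanup function also runs on early return from the scope.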


I guess you have to be careful not to return inside of these :)


Interesting, I think defer as a statement might not be too much magic but it would be too complicated for my mostly single-pass approach.

Thank you for your response.


Before substantially modifying the language, make sure your parser can still load vanilla C header files with no new features.

This way if you accidentally create a new language, programmers will still be able to load existing C header files without having to manually write and maintain FFI bindings.


Adding multiple return values creates a new language


The more relevant issue is whether or not it preserves backwards compatibility. It is possible for their compiler to continue to consume existing ANSI C and C99 header files and source code, without requiring the programmer to manually write FFI bindings and wrappers to call regular C functions as is the case in many other high-level languages, and also without adding as many features as C++.


Not OP, but based on his comment about checking return values of stuff: it would be nice to have lightweight exceptions. They could improve readability and debuggability, with fewer if statements and generally easier propagation of errors up the call stack. My 2 cents.


One question I have (and please don't take this as criticism or judgement, it's purely curiosity): why Python 2 and not Python 3? Was it because of when you started working on this, or were there some architectural/design issues that prevented use of Python 3?


Eh, I don't have a great justification why Python 2 should be used over Python 3. I made this choice like 3 years ago when I didn't know too much about Python. That's it. Since I wrote some code against the C API of the interpreter and made a whole bunch of scripts already, it's a massive chore to migrate. Classic story, I know. If I were to start a similar project today, I would attempt to use Python 3 first.

That being said, I did come across some discussions (ex: https://stackoverflow.com/questions/34724057/embed-python3-w...) suggesting it is not possible to strip the standard library from Python 3. I think the use case of embedding strictly the interpreter without any "batteries" is not popular and thus has not been that well-maintained. I've not tested this in practice, however.


Cool. Thanks for the response. It's definitely an interesting project. As a long-time RTS fan, I'll be following this project closely. I hope to have some time soon to come back and give it a more thorough read.


Shifting attention to keep every lib updated is a distraction from getting things done.


I was curious about this too and looked around a bit. I see he has to do some inspection of the Python interpreter, so maybe that, coupled with demos already written in python2, is enough inertia to stay back on version 2.

