I’m curious if you’ve looked at the prior attempts at memory-safe variants of C, or at compiler-assisted safety for legacy C? They are genuinely safe and perform better than Fil-C does. More importantly, you might find some of their ideas useful in your own work.
Here are a few I remember:
CCured https://people.eecs.berkeley.edu/~necula/Papers/ccured_topla...
Softbound + CETS https://people.cs.rutgers.edu/~sn349/softbound/
Clay Systems Language https://www.eg.bucknell.edu/~lwittie/research.html
Cyclone Language (Rust drew on it) https://en.m.wikipedia.org/wiki/Cyclone_(programming_languag...
Fail-Safe C https://staff.aist.go.jp/y.oiwa/FailSafeC/index-en.html
CheckedC https://github.com/microsoft/checkedc
Also, one can combine subsets of C with FOSS static analyzers that can handle those subsets, compose modules only in ways the tools can verify, and then apply combinatorial and fuzz testing to the interface composition.
I know you’re doing the project for fun while exploring specific ways to achieve your goals. So, these are just some links and concepts that might help on your journey. Lots of folks don’t know about prior work in this area. So, I keep passing it on.
It can run curl!!! And OpenSSL! This seems like it could be a big deal.
>Fil-C is currently about 200x slower than legacy C according to my tests
Commit messages are the shit.
It’s interesting to see a very pragmatic approach towards a “better C”. There are so many “better C” languages gaining popular interest, like Zig, Hare, Odin, and Jai, but none of them (I don’t consider Rust a better C) try to tackle memory safety, even when starting from a clean slate. Then there’s this thing, which is still mostly normal C, so it’s very easy to apply to existing code, and it does solve memory safety head on.
It’s not clear from the writeup, but it seems like most checks happen at run time rather than at compile time. How much feedback does the compiler give the user about mistakes?