
The rust language is fast
The compiler is slow
The compiler is written in rust
Therefore, rust is slow


@matrix There is something to be said for compiler speed. Overly complicated languages need overly complicated compilers. Overly complicated languages can seldom be truly optimized. Similarly, if the compiler devs can't optimize their own compiler, what makes you think they can optimize the output of the compiler? It's not universal, but it is a strong trend.

A "sufficiently advanced compiler" will never exist.

@gentoobro
Sure, but I'm not sure I would call Rust more complicated than other languages (with the exception of the borrow checker). C++ can have a lot of fuckery when you use things like virtual methods.

@matrix Complexity is not always obvious. Generics tend to be complicated. C++ compilers are slow mainly because of templates.

@gentoobro Yeah, when programming it's hard to tell precisely how complex everything is going to be

Plenty of languages have generics but compile faster than C++

@matrix The devil's in the details. Generics tend to be complicated for compilers because they can cause a combinatorial explosion of concrete implementations. Some languages and compilers handle this reasonably. Others are C++.

My understanding of the problem is something like this: PCH aside, C++ has to recompile the entire template system for every translation unit (source file). The templates have to be completely in the headers due to some petty retarded detail of how they work. Then the linker has to throw away all the duplicate template function implementations. C++ is a language with no zen.

@gentoobro Oh, if that's why then no wonder. I would guess that it's probably some technical debt from being a C superset

@matrix As I understand it, there's some petty detail in the spec that forces all the template functions to have to be in headers like static functions. Some tiny oversight like how in C two identical anonymous structs are considered different types but they're the same if you typedef them.

@gentoobro @matrix A template is just a set of instructions to the compiler for how to construct types and functions.

Any piece of code that is going to work with a type needs to know at a minimum how large the type is and how it is laid out in memory, therefore any function which constructs a templated type needs to see the template definition.

Likewise any template function needs to be instantiated once for every set of template arguments it is called with or else the function won't exist.

Usually the best way to ensure this happens is to put the definition in the header with the declaration and accept there will be duplicate instantiations which the linker has to resolve but at least everything happens automatically.

If you do want to manually instantiate templates then you can use "extern template" to prevent automatic instantiation. Then you need to explicitly instantiate your templates for every combination of types they are called with. Compiles will be faster but if you miss one then you'll get missing symbols.

@mikerotch @matrix Thanks for the explanation of the details.

The question now becomes why they don't cache all the necessary information, LTO-style, and automatically instantiate the required argument combinations at link time.

@gentoobro @matrix That's called a "precompiled header" and all major compilers support them.

How easy those are to enable in a specific project depends a lot on the build system the project uses, and how much benefit one can derive from them depends on how good the developers are at laying out their code in a sensible way.

@mikerotch @matrix PCH, in my personal experience, tends to be a glitchy mess. Maybe that's just build systems and not PCH itself.

But I'm not talking about PCH. I'm talking about being able to put template function definitions in a normal source file, a separate translation unit, and then having the linker figure out what concrete versions to instantiate and then compile. This is roughly how Link Time Optimization works. The compiler keeps some form of intermediate code around for every TU and then runs a final compile and optimization pass over the whole project, inlining functions from other TU's as it sees fit. I see no reason that a C++ compiler couldn't automatically do this with templates, though there may indeed be a legitimate one.

@gentoobro @matrix A PCH is an intermediate representation though. It doesn't perform the exact same sequence of steps you describe but the outcome is equivalent as far as I can tell.

You can move some work from compile time to link time and vice versa, but ultimately if you have many types used in your code then every user of that type will need to know the size and memory layout if for no other reason than for the compiler to know how much stack space to allocate.

PCHs are often glitchy for the same reason that a lot of software is often glitchy: competence and manpower are generally decreasing as demographics are deteriorating. Fewer people know the right way to do things and of those who do don't always have the time.

In my projects I set them up to build with or without a PCH as a command line flag and have CI jobs that test compilation in both modes to make sure my headers are clean and correct and don't depend on the PCH, thus turning it into an optimization rather than an essential component.
@mikerotch @matrix

> competence and manpower are generally decreasing as demographics are deteriorating.

It's getting pretty bad...

Users of a template should be able to figure out its memory layout with just the class definitions and function declarations, no different than typical use of classes. The template function definitions, which are largely what bloats the headers, aren't truly needed until final linking of the executable or library. The compiler should be able to keep a list of which instances of which templates are needed in each TU, then have the linker compile the necessary versions using cached intermediate code, like LTO does. And all this completely without build system interaction, since it's the linker provoking the final instantiation and compilation. Function inlining would be no different than with non-template functions.

@gentoobro @matrix I also don't use LTO because I've found "jumbo" or "unity" builds to be a superior alternative.

It takes some discipline in terms of restricting your usage of internal linkage, but once you have everything cleaned up so that every file is unity-safe, you can just compile your entire project as a single TU and make LTO completely redundant.

It requires a lot more ram and you lose all parallelism so it's only worthwhile for release builds and can't be done if you build on a potato, but it gives you at least the same benefits as LTO without any of the complexity. It also lets you optimize shared libraries that would otherwise be excluded from LTO.
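A unity build is nothing more exotic than one TU that textually includes every source file; a comment-only sketch with hypothetical file names:

```cpp
// unity.cpp -- the only file handed to the compiler in release mode:
//
//   #include "renderer.cpp"
//   #include "physics.cpp"
//   #include "audio.cpp"
//
// Build:  g++ -O2 -c unity.cpp
//
// The compiler sees the whole program as one TU, so cross-file inlining
// falls out for free with no LTO machinery. The cost: file-local
// (static / anonymous-namespace) names from every .cpp now share one
// scope and must not collide, plus higher RAM use and no per-file
// parallelism.
```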

@mikerotch @matrix LTO is optional in my build system since it takes several seconds on every build.

I make extensive use of static functions, often with the same names in many TU's, so a unity build isn't directly feasible for me even for releases. I could probably automate the inclusion of some macros between each source file though, now that I think about it. I'll have to get my C parser working completely before that.

All of my static libraries are compiled as source along with the project for this exact reason. I've read that at least on GCC you can make .a's that keep the LTO information, but it hasn't risen up my priority list enough to test.

@gentoobro @matrix > I make extensive use of static functions, often with the same names in many TU's, so a unity build isn't directly feasible for me even for releases.

Namespaces: you either have them built into the language or_you_have_to_reinvent_them_yourself_poorly.

I lost a bunch of hours once because I needed to statically link libsodium and also statically link argon2 into the same executable and got ODR violations, because libsodium had previously copied argon2 into their source tree and made modifications to it without changing the symbol names.
@mikerotch @matrix

> or_you_have_to_reinvent_them_yourself_poorly

Kind of. One::of::the::things::I::hate::about::C++::is::the::constant::use::of::"::". Sure, you can use "using namespace foo;", but then you're just back where you started. To me, unnecessary visual bloat is a dire sin in source code.

@gentoobro @matrix @mikerotch
:<><:><:><<:><:><:<<::::::<><:<><><:><:<>:::::::::<><><><>:<>:<>><:<><<:><:<><:><:<><:<>
@gentoobro @matrix "using namespace" should be rare, ideally never used at all.

It's much better to pull in only the specific symbols, "using std::swap" being one of the most canonical examples.

I normally don't have very long qualifications on my names despite making very extensive and deep usage of namespaces because most of the time you are using types in your own namespace or a parent namespace and neither of those require a qualification.
@gentoobro @matrix Actually there is one exception: I do use "using namespace std::literals" because that's the one and only way to get access to those symbols.

@mikerotch @matrix Now you get into boilerplate. You essentially have to re-declare your types and functions all over the place. There are tradeoffs to everything. C++ takes one direction and I take the polar opposite.

Having my build system slap "#define appendCmds riggedMeshManager_c__appendCmds" and "#undef appendCmds" around each source file with static functions during a unity build is easy and, more importantly, completely transparent to the code. I'm already undermining malloc et al. with globally included macros. (Another reason for compiling in the static libs by source.)

Game Liberty Mastodon