PCHs left a sour taste in my mouth after I worked on a project that very liberally added commonly included headers to one huge PCH. In practice, it meant each TU was indirectly including many more headers than it needed, and it was harder for humans (or IDEs) to reason about the real dependency chain. Builds were faster, but they also started using much more memory.
You can also end up needing to rebuild the world when you touch a header that is part of the PCH, even if that header isn't really needed by all the targets.
Modules and header units were supposed to solve these problems a lot more cleanly, but they are still not well supported.
This blog post has a few inaccuracies, so the situation isn't as bad as it seems. (It is still annoying and requires thought, though.)
I'm just going to ignore the part where Clang apparently sabotages itself by requiring `-include-pch`. You really shouldn't be using Clang in production because it has all sorts of hard limitations so I am not at all surprised at hitting another one; even if this particular one gets fixed, you'll still run into several others, whether you realize it or not (since usually it silently does the wrong thing). Your `./configure` should already be detecting "is this an actually-working GCC" anyway, so you can use that as the condition before you enable the PCH logic at all.
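For what it's worth, a minimal sketch of such a configure-time check (this is not from the post; the predefined-macro test is just the standard trick, since Clang defines `__GNUC__` too):

```sh
# Sketch: enable the PCH rules only for a real GCC, not Clang
# pretending to be one. Both define __GNUC__; only Clang defines
# __clang__. -dM -E dumps the compiler's predefined macros.
if $CXX -dM -E -x c++ /dev/null | grep -q '__GNUC__' &&
   ! $CXX -dM -E -x c++ /dev/null | grep -q '__clang__'; then
  enable_pch=yes
else
  enable_pch=no
fi
```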
The "can only use one PCH per compilation" limitation also exists in GCC and has been well documented there since at least when I started using it (maybe 2010?), but as noted it is not a major limitation. Assuming you're practicing sufficient rigor in your build process, you basically have three options: only one PCH for the whole project; one PCH per directory-ish (assuming each directory is a semi-independent part of the project; this is probably sanest); or try to precompile all headers (this has performance implications whenever you edit a header).
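To make the per-directory option concrete, a rough GCC sketch (the paths and umbrella header names are made up for illustration):

```sh
# Precompile one umbrella header per subsystem directory.
# The PCH must be built with the same flags as the TUs that use it.
g++ -std=c++17 -O2 -x c++-header src/net/all.hpp -o src/net/all.hpp.gch
g++ -std=c++17 -O2 -x c++-header src/db/all.hpp  -o src/db/all.hpp.gch

# A TU in src/net/ that starts with `#include "all.hpp"` now picks up
# all.hpp.gch automatically: GCC checks for a .gch before the header.
g++ -std=c++17 -O2 -c src/net/socket.cpp -o build/net/socket.o
```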
The "build vs src" purity problem has a simple solution - just use `-I` to specify an "overlay" include directory that lives in the build directory, and have your PCH-making rule put its output there in the first place. That's it.
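A sketch of that overlay arrangement with GCC (directory names are hypothetical):

```sh
# Build the .gch into an overlay directory inside the build tree,
# leaving the source tree untouched.
mkdir -p build/include
g++ -std=c++17 -O2 -x c++-header -Isrc/include src/include/common.hpp \
    -o build/include/common.hpp.gch

# List the overlay first: when resolving #include <common.hpp>, GCC
# looks for common.hpp.gch in each -I directory before looking for the
# header itself, so the precompiled copy in build/include wins.
g++ -std=c++17 -O2 -Ibuild/include -Isrc/include -c src/main.cpp \
    -o build/main.o
```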
> You really shouldn't be using Clang in production because it has all sorts of hard limitations
Wait, do you mean you shouldn't be using Clang PCHs in production, or shouldn't be using Clang in production at all?
At all. Clang has a lot of footguns, and filing bugs about "regression compared to GCC" does not actually get them fixed.
Remember, the whole point of Clang is so that people can make their proprietary compiler forks. Everything else, including quality, is secondary.
Wow, I didn't know Squid was still around. Not the kind of software that gets much publicity.
I hate cmake, but this is something cmake does well in my experience. I had to write a Godot 4 plugin, and Godot has many, many header files. I made a project header that #included all the Godot headers, and a single target_precompile_headers directive in CMakeLists was enough to get it working on Mac and Linux (and I think on Windows, but I didn't need to run it there).
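Roughly what that looks like, as far as I remember (the target and header names here are illustrative, not the actual plugin's):

```cmake
# project_pch.hpp is the umbrella header that #includes the Godot headers.
add_library(my_godot_plugin SHARED src/plugin.cpp)
target_precompile_headers(my_godot_plugin PRIVATE project_pch.hpp)
```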
Do C++ modules solve this problem?
C++ modules solve exactly one of the problems - the "one PCH limit" one - at the cost of introducing several more. Certainly they are not more compiler-independent!
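To illustrate the compiler-dependence point: even the build commands for a module interface diverge between compilers (the flags below are the ones I'm aware of; details shift between releases):

```sh
# GCC: experimental flag; drops a .gcm into a gcm.cache/ directory
# as a side effect of compiling the interface unit.
g++ -std=c++20 -fmodules-ts -c hello.cpp

# Clang: separate precompile step producing a .pcm in its own format,
# which then has to be fed back into later compilations.
clang++ -std=c++20 --precompile hello.cppm -o hello.pcm
```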