If I understand right, AF is basically what lets a texture keep its detail when viewed at an angle? Since that's like 90% of games, I'm surprised it isn't given more attention. You put all this work into 8K textures, only to ruin how they look for most of the game?
It was something I really noticed in Far Cry 5 on Xbox One X. I'd look down and see the most realistic-looking textures. I'd look farther away, and it all got pretty unimpressive.
AF is a technique for sampling a texture more accurately than a bilinear filter at oblique viewing angles. It isn't simply on or off — it's set to a level like 2x, 4x, 8x, or 16x, which caps how many extra samples get taken. The performance cost comes from those extra samples.
Let's imagine an 8K texture map. It's usually not just the diffuse color map that's 8K — the normal maps, specular maps, mask maps, and any other textures applied to the same surface often are too. So how does it get projected onto the triangle under every circumstance? It uses MIP mapping to filter out unwanted aliasing caused by the camera's distance from the surface. That means every single texture map has to have a full chain of lower resolutions stored in memory:
8k
4k
2k
1k
512
256
128
64
32
16
8
4
2
1
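To make that chain concrete, here's a rough Python sketch (the 4-bytes-per-texel figure assumes an uncompressed RGBA8 texture — real engines use block compression, so treat the numbers as illustrative). It enumerates the levels and shows the whole chain only costs about a third more memory than the base level:

```python
# Sketch: enumerate the MIP chain for an 8K (8192x8192) texture and
# estimate how much extra memory the lower levels add.
# Assumes 4 bytes per texel (uncompressed RGBA8) for illustration.

def mip_chain(size):
    """Yield each MIP level's resolution, halving down to 1x1."""
    while size >= 1:
        yield size
        size //= 2

levels = list(mip_chain(8192))
base_bytes = 8192 * 8192 * 4
total_bytes = sum(s * s * 4 for s in levels)

print(levels)                    # [8192, 4096, ..., 2, 1]
print(total_bytes / base_bytes)  # ~1.333: the chain costs ~1/3 extra
```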
Now how do we filter each texture map to mitigate high-frequency noise? We use a filter kernel, which essentially means taking the texture map and applying a smoothing algorithm to it. Standard filtering uses a square kernel — 2x2 or 4x4. For every texel in the kernel, you have to index the texture map, which costs bandwidth at the texture stage of the pipeline. But square kernels aren't accurate at oblique angles, because the pixel's footprint on the texture gets stretched out rather than staying square. That's where "anisotropic" filtering comes into play: instead of a 4x4, we can do an 8x2 or a 2x8 depending on the angle. So a 16x anisotropic filter means sampling that texture map up to 16 times to get a good approximation of removing the noise.
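The idea of stretching the kernel to match the angle can be sketched like this — a deliberately simplified model I've made up for illustration (real hardware estimates the footprint from screen-space texture-coordinate derivatives and is considerably more elaborate):

```python
import math

# Simplified model of picking an anisotropic sample count: compare the
# texel footprint along the two screen axes, then take up to max_aniso
# taps along the longer axis. Function name and inputs are illustrative.

def aniso_samples(footprint_x, footprint_y, max_aniso=16):
    longer = max(footprint_x, footprint_y)
    shorter = min(footprint_x, footprint_y)
    ratio = longer / max(shorter, 1e-8)      # degree of anisotropy
    return min(max_aniso, max(1, math.ceil(ratio)))

print(aniso_samples(1.0, 1.0))   # head-on: 1 tap is enough
print(aniso_samples(8.0, 1.0))   # oblique: 8 taps along the long axis
print(aniso_samples(64.0, 1.0))  # extreme angle: clamped to 16
```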
Now after all of that, imagine indexing that texture map 16x per pixel — and since the filter blends between adjacent MIP levels from the chain I described above, each tap touches more than one level. THEN imagine it for every texture map on a single surface, which could include multiple textures for each of the shading components like diffuse, specular, normal maps, etc.
It eats up bandwidth rather quickly. And since AMD's boards run ray-tracing intersection tests through the texture units (TMUs), ray tracing eats into that same bandwidth — taking away cycles that could have gone to the more expensive 16x anisotropic filtering. Cutting down the number of samples gives back several cycles in the texture pipeline.
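Back-of-the-envelope, the fetch counts multiply out like this (assuming each anisotropic tap is a trilinear lookup of 8 texels across two MIP levels; the 4-maps-per-surface figure is just an example, and the helper name is mine):

```python
# Rough per-pixel texel fetch count: anisotropic taps, times the number
# of texture maps bound to the surface, times texels touched per tap.

def texel_fetches(aniso_taps, maps_per_surface, texels_per_tap=8):
    return aniso_taps * maps_per_surface * texels_per_tap

# e.g. 16x AF over diffuse + normal + specular + mask maps:
print(texel_fetches(16, 4))  # 512 texel reads for one shaded pixel
```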
Hope this helps