Interesting... the IPv6 setting on the dslreports test requires a non-encrypted connection, so I switched it away from HTTPS. Running the test, I could see that IPv4 had a lot less bufferbloat (A+) and was slightly faster than IPv6 (which could be routing); IPv6 scored a B. The difference was plainly visible: the bufferbloat numbers were spinning madly.
As an experiment, I changed the SQM interface away from eth0 (roughly as sketched after this list). In IPv6 speedtest mode:
- 6rd-wan6: SQM effectively disabled, full speed as if the script were not active
- sit0: SQM effectively disabled, full speed as if the script were not active
- eth1: no good, very slow
- br-lan: no good, very slow
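For reference, this is roughly how I switched the interface between runs; a minimal sketch assuming the SQM config section happens to be named 'eth0' (check yours with `uci show sqm`, the section name is whatever your config uses):

```
# list the SQM sections to find the right name
uci show sqm

# point SQM at a different interface, e.g. the 6rd tunnel
uci set sqm.eth0.interface='6rd-wan6'   # hypothetical section name
uci commit sqm
/etc/init.d/sqm restart
```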
I then disabled the wan6 interface without rebooting, and the IPv6 speedtest still ran. After a reboot it was enabled again; I didn't want to delete my settings, so I left it alone.
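To spell out the no-reboot part, these are the standard OpenWrt commands I mean (assuming the IPv6/6rd interface section is named wan6, as it usually is):

```
ifdown wan6     # take the interface down at runtime, config untouched
ifstatus wan6   # should now report "up": false
# a reboot (or ifup wan6) brings it back, which is why it was enabled again
```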
I believe the speed discrepancy between IPv4 and IPv6-6rd is due to the MTU: 1480 vs 1500. 6rd wraps every IPv6 packet in an IPv4 header, which costs 20 bytes, hence 1500 - 20 = 1480.
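The back-of-the-envelope numbers, assuming the usual 20-byte IPv4, 40-byte IPv6 and 20-byte TCP headers on a 1500-byte Ethernet MTU:

```
tunnel MTU:            1500 - 20        = 1480
IPv4 TCP payload:      1500 - 20 - 20   = 1460 bytes/packet
IPv6-in-6rd payload:   1480 - 40 - 20   = 1420 bytes/packet
1420 / 1460 ≈ 0.97, so the MTU alone costs roughly 3% of goodput
```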
But what is certain from what I could see is that bufferbloat is worse over the 6rd encapsulation. I saw the upload latency meter spike red at 220 ms, even though the final overall score was still OK (A or B vs. A+). With IPv4 the meter didn't bounce around; it stayed relatively level.
It's possible some of this is due to the 6rd tunnel itself. A pity I don't currently have native dual stack from the ISP to compare against properly.
So I believe that SQM is working, just not as well as it does with IPv4, since the bufferbloat is higher.
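For what it's worth, this is how I'd confirm SQM is actually shaping during a test (standard tc commands; ifb4eth0 is the ingress device sqm-scripts creates for eth0):

```
tc -s qdisc show dev eth0        # egress shaper; look for cake/fq_codel stats
tc -s qdisc show dev ifb4eth0    # ingress shaper on the ifb device
# rising "backlog" and "dropped" counters under load mean SQM is doing work
```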
I decided to chuck the IPv6-6rd setup for now, since I don't connect to any IPv6-only sites.
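In case anyone wants to do the same without losing their settings, disabling the section keeps the config on disk (a sketch assuming the section is named wan6):

```
uci set network.wan6.disabled='1'   # survives reboots, settings are kept
uci commit network
ifdown wan6
```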