There are text descriptions in there, but note that your example inverts a transform, so I wonder whether the description simply got flipped.
If you look at the contents, it has a 1D LUT for the gamma transform and a 3x3 matrix for the primaries. The matrix identifies as 16f → 16f, the LUT indicates 10i → 16f, which in that direction is correct. If you invert it, the question is whether it would actually truncate data to 10i, because it thinks it's a camera file, or whether that's just a labeling problem.
Also, does the transform actually prescribe the result bit depth? Note that in the CTL in the timeline, the dropdown is grayed out but set to 32f. So it could well be that you're on a 32f timeline, and regardless of the transform details, the actual operation happens at 32f.
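To illustrate why a 32f pipeline would make the "10i" tag harmless, here's a minimal sketch (not SynColor code; the gamma value, LUT size and matrix are invented for illustration): a 1D LUT sampled on a 10-bit grid plus a 3x3 matrix, applied forward and inverted entirely in float. The "10i" only describes how many sample positions the LUT has; as long as the math stays in float, inverting it never rounds anything to 10-bit codes.

import numpy as np

GAMMA = 2.4                                              # hypothetical camera gamma
lut_in = np.linspace(0.0, 1.0, 1024, dtype=np.float32)   # "10i" = 1024 sample positions
lut_out = (lut_in ** (1.0 / GAMMA)).astype(np.float32)   # shaper curve, stored as float values
M = np.array([[0.90, 0.10, 0.00],                        # hypothetical primaries matrix, 16f -> 16f
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]], dtype=np.float32)

def forward(rgb):
    # camera -> working: 1D LUT per channel, then the matrix, all in float
    shaped = np.interp(rgb, lut_in, lut_out).astype(np.float32)
    return shaped @ M.T

def inverse(rgb):
    # working -> camera: inverse matrix, then the LUT inverted by swapping
    # its axes in the interpolation; nothing here rounds to 10-bit codes
    unmatrixed = (rgb @ np.linalg.inv(M).T).astype(np.float32)
    return np.interp(unmatrixed, lut_out, lut_in).astype(np.float32)

rgb = np.array([0.18, 0.50, 0.75], dtype=np.float32)
print(rgb, inverse(forward(rgb)))                        # round-trips to float precision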
If you do the same transform in the format section of the clip, rather than adding a ColorMgmt TL-FX, that dropdown stays active, so you can select your processing bit depth.
Or replicate it in Batch and turn on the display of bit depths; that way you can see what the transforms do. With that in mind, check how it translates in the timeline.
I would compare the results and see if there’s a noticeable quality difference. I suspect not. That seems more like a UI problem than a data problem. But I could be too optimistic here.
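If you want an objective check rather than eyeballing it, a quick numpy comparison of the two renders works. This is just a generic sketch with synthetic stand-in frames (load your actual exports however you like); the point is that a real truncation to 10 bits shows up immediately as a capped count of distinct code values and a max error around 1/2046.

import numpy as np

# Stand-in frames: 'full' plays the float render, 'truncated' is the same frame
# forced through 10-bit codes, i.e. what you'd see if the 10i label were really
# being honoured somewhere in the pipeline.
rng = np.random.default_rng(0)
full = rng.random((1080, 1920, 3), dtype=np.float32)
truncated = (np.round(full * 1023.0) / 1023.0).astype(np.float32)

def report(name, a, b):
    diff = np.abs(a - b)
    print(name, "max diff:", float(diff.max()), "distinct values in b:", np.unique(b).size)

report("float vs 10-bit", full, truncated)   # max diff ~0.0005, about 1024 distinct values
report("float vs float ", full, full)        # max diff 0.0, millions of distinct values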
That is one of many weird and obscure things in SynColor colour management: unclear nomenclature, unintelligible info, and the certainty of never knowing 100% of what you are doing.