You already got a lot of good advice here.
Some additional considerations:
Color management, when done correctly, is an exact science.
When you cook a dinner, you can wing the ingredients and with some minimal experience, it will come out alright, as it’s mostly about combining flavors. If on the other hand you bake a cake or bread, you must measure all ingredients precisely as you’re undertaking a chemical reaction which relies on precise ratios. If you wing it, it will likely look wrong or even be inedible.
Color grading is cooking. Color management is baking.
Color management is the process of correctly translating an image from one encoding to another, either with no loss of information in how the image will be perceived, or with calculated degradations due to limitations of the medium.
Where this relates to your description - Log To Lin is not necessarily the same thing in all cases. There is only one linear, but there are many possible log curves. It has to be the right one. In fact, in many cases a Log curve has a precise inverse that was applied earlier in the pipeline (usually in the camera).
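To make that concrete, here's a toy sketch (not a real camera curve - actual curves like LogC, S-Log3 or Cineon use different constants, which is exactly why picking the wrong Log-to-Lin fails):

```python
import math

# Toy log encoding: maps linear light to a 0..1 "log" signal,
# with mid-grey (0.18) landing at 0.5. Constants are illustrative.
def lin_to_log(x, mid_grey=0.18, stops=8.0):
    return 0.5 + math.log2(x / mid_grey) / stops

def log_to_lin(y, mid_grey=0.18, stops=8.0):
    # exact mathematical inverse of lin_to_log
    return mid_grey * 2.0 ** ((y - 0.5) * stops)

x = 0.5
# The right inverse round-trips losslessly...
round_trip = log_to_lin(lin_to_log(x))          # back to 0.5

# ...but an inverse with the wrong constants shifts every value
wrong = log_to_lin(lin_to_log(x, stops=8.0), stops=10.0)
```

The `wrong` result looks plausible at a glance but every pixel is off, which is typically how a mismatched Log-to-Lin shows up: the image is "close but not right."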
That link about scene referred vs. display referred was a good starting point. But it’s also a very complex topic, and people still disagree about some of the aspects of it.
One way to think about it is that in the past most material we worked with in an edit was encoded the same way (mostly Rec709), so it was easy to comp and mix and match without side effects. And our displays were mostly Rec709, so once we had a result it looked right without further considerations.
With today’s wide range of advanced cameras, many different display standards and distribution channels, our pipelines are a lot more complex, and almost invariably we end up with materials that don’t match (i.e. aren’t encoded the same way) when we receive them. So we have to take extra steps so that our comps take that into account and apply the appropriate conversions, and the resulting frame is made up of a single color encoding in which each pixel follows the same formula to display it. Think of it as each input having to make its contribution toward a common answer. And this needs to happen before the comp operators that combine the pixels.
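As a sketch of that principle (function names are mine, and the Rec709 converter below uses an sRGB-style transfer function as a simplified stand-in for a real display-to-linear transform):

```python
# Every input gets converted to ONE working encoding first;
# only then do the comp operators combine pixels.

def to_working(value, source_space):
    """Normalize a pixel value from its source encoding to a linear
    working space. Hypothetical converters, for illustration only."""
    converters = {
        # sRGB-style decode as a simplified stand-in for display material
        "rec709": lambda v: v / 12.92 if v <= 0.04045
                  else ((v + 0.055) / 1.055) ** 2.4,
        # already linear, nothing to do
        "linear": lambda v: v,
    }
    return converters[source_space](value)

def comp_over(fg, bg, alpha):
    # the comp operator assumes both inputs share one encoding -
    # mixing encodings here is where things go wrong
    return fg * alpha + bg * (1.0 - alpha)

fg = to_working(0.5, "rec709")    # display-referred plate, normalized
bg = to_working(0.25, "linear")   # CG render, already linear
result = comp_over(fg, bg, alpha=0.6)
```

The key point is the ordering: `to_working` runs on each input before `comp_over` ever sees them, which is the code-shaped version of "conversions happen before the comp operators."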
Scene referred doesn’t have to be ACES, though this is the most popular. Scene referred only means that you work the scene (or your comp) in a specific color space that everything you receive is normalized to while you work on it, and then at the end you will translate it once (or multiple times) to the display this will go to.
It’s best to have the scene referred color space be equal to or bigger than the best of the image components to avoid losing quality in your master. The idea is that you work on a future proof master, and then shed quality only at the end of the pipeline due to the constraints of specific displays. In the future you can adapt that same master to newer, more capable displays without having to redo anything.
It’s worth watching a few tutorials and reading materials on color management. But it takes time to digest and get the hang of it. Randy already linked the Flame Academy classes. If you have fxphd, Charles Poynton’s color theory class is great, and there are other Flame specific courses that cover the topic as well.
In the meantime, if it doesn’t look right, something has likely been mis-translated. Chase each image component’s journey and see where things may have gotten off the rails.
And understand how the tools work. Flame color management is definitely quirky to say the least.
If you get banding, it most likely is a case where an image with too little detail is getting stretched color-wise. 8bit images or 8bit sequence settings are commonly the issue.
Technically, banding happens when color values that were originally just 1 value apart in the camera file / GFX render get stretched to being 3 or more values apart, causing visible jumps in color/exposure. Banding can happen with any material, but it is more likely with 8bit processing since the values are coarser to begin with, so there is less room for error.
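A quick sketch of that mechanism with made-up but representative numbers:

```python
# How banding arises: two neighbouring 8-bit code values get pushed
# apart by a contrast/exposure stretch, leaving a visible step.

def quantize8(x):
    """Snap a 0..1 value to the nearest 8-bit code (0..255)."""
    return round(x * 255)

# two adjacent 8-bit codes, just 1 value apart in the source
a = 50 / 255
b = 51 / 255

gain = 3.0  # a strong grade or color-space stretch
qa = quantize8(min(a * gain, 1.0))   # -> code 150
qb = quantize8(min(b * gain, 1.0))   # -> code 153
gap = qb - qa  # now 3 codes apart: a visible step in a smooth gradient
```

With 10-, 12- or 16-bit source values the same 3x stretch still leaves the steps well below one 8-bit code, which is why higher bit depth material survives the same grade without banding.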
That said, there is a case to be made that the 8bit GFX may be better off being composited as is, rather than put through scene referred transforms that may stress it beyond what it has leeway for.
That applies for example in Flame sequences where you may work in ACES on tracks 1…3, then have a color mgmt layer on track 4, and then place Rec709 / display referred GFX above it. Of course that will limit you to Rec709 deliverables, but it may be the best answer in the given circumstance, also considering the nature of the job and its half-life.
Over time, as you get a better handle on color management, it becomes easier to make these tradeoffs.