Since we’re now in the area of educated guesses…
The article above (which could be related, but we’re not certain) references the lock file, rather than the segment file.
It’s common for applications that work with multi-file data sets to create a 0-byte file (often named something like xyz.lck) as a semaphore, so that multiple processes don’t access the same files simultaneously.
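For illustration, here’s a minimal sketch of that lock-file pattern. The path and file name are hypothetical, not what Flame actually uses; the point is just that exclusive creation of a tiny file is the whole “semaphore”:

```python
import os

LOCK_PATH = "/mnt/archive/myarchive.lck"  # hypothetical lock file name

def acquire_lock(path):
    """Try to create the 0-byte lock file exclusively.

    O_CREAT | O_EXCL fails if the file already exists, so only one
    process can hold the lock at a time.
    """
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True           # we own the lock now
    except FileExistsError:
        return False          # someone else has the archive open
    except PermissionError:
        # This is the failure mode suspected here: the process isn't
        # allowed to create the file on the share at all.
        raise

def release_lock(path):
    os.remove(path)
```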
You would certainly do that for a multi-segment archive, since you want to make sure only one Flame has the archive open at a time. That’s probably what happens when it prompts you whether to open it read-only or read-write.
It’s conceivable that Flame skips creating that lock file when there is only a single segment, since a single segment reduces the chance of conflict. That could explain why it works on single-segment archives but not on multi-segment ones.
As for why the lock file runs into permission problems… NFS has an elaborate way of mapping user names/IDs between the client and the server, so each user on Linux can actually access the NAS with their own specific user ID.
SMB/CIFS has no such thing. When you mount an SMB filesystem on Linux, you specify a single Linux user ID and group as part of the mount command, and everything on the share is mapped to the account you authenticated with when mounting the filesystem. It’s much more barebones.
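You can see this for yourself with a quick check. On a CIFS mount, every file typically appears owned by whatever uid=/gid= was given in the mount options, regardless of who actually created it. The path below is hypothetical; point it at any file on your SMB share:

```python
import os

# Hypothetical path on the SMB mount - adjust to your setup.
SMB_FILE = "/mnt/smb_nas/archives/segment.seg"

st = os.stat(SMB_FILE)
print("file owner uid:", st.st_uid)   # on CIFS: the uid= from the mount options
print("file owner gid:", st.st_gid)   # on CIFS: the gid= from the mount options
print("my uid:        ", os.getuid())

# If the process writing to the share runs under a different uid than the
# one baked into the mount options, permission checks can fail.
```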
Now, it could be that the Flame archive process doesn’t run under your user ID but under some other system user ID on Linux (I haven’t checked, but it’s conceivable). That could mess up the permission mapping, which would explain why the Flame process gets an error when it tries to create the lock file.
Anyway - a plausible educated scenario… but only real debugging will tell the truth. You’d have to look at the process list while a longer archive is running to see which user the archive process runs as. It could also depend on how you mounted your SMB shares.
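If you want to script that check instead of eyeballing the process list, here’s a rough sketch that walks /proc on Linux and reports the real UID of any matching process. The process name is a guess; check what the archiving process is actually called on your box (e.g. with ps aux while an archive is running):

```python
import os

# Hypothetical process name substring - verify on your system.
TARGET = "flame"

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            name = f.read().strip()
        if TARGET not in name.lower():
            continue
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("Uid:"):
                    real_uid = int(line.split()[1])
                    print(f"pid {pid} ({name}) runs as uid {real_uid}")
    except (FileNotFoundError, PermissionError):
        continue  # process exited or isn't readable by us
```

If the reported UID doesn’t match the uid= you used when mounting the SMB share, that would fit the theory above.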