FLAME-ON-MAC: Remote client screening setup?

Performance-wise, Tailscale runs circles around something like Fortinet.

I’ve also never gotten NDI to work on anything other than a local network or VPN, which is why I started using UltraGrid (UG) between source and destination over a tailnet.

Fucking Swiss army knife :switzerland:

Tailscale is WireGuard with a proprietary control plane.

Word.

Not an NDI expert, but I believe its mechanism relies on discovery via local-subnet broadcast (mDNS) to enable its publisher/subscriber architecture, which doesn’t translate across a router. NDI 5 introduced a Bridge tool that was supposed to handle routing between non-local networks; not sure where that landed. They announced it, but major components were missing for a while.

I don’t think having a VPN automagically makes broadcast discovery work, because the VPN itself often creates another subnet; it may also be very VPN-dependent. I know Jeff Sousa over in our Colorist Discord uses a combination of NDI and some other tools to stream his broadcast monitor to his home office. I can ask how he configured it. It’s a few components though, not totally out of the box.
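For anyone curious what that discovery step actually looks like, here’s a rough sketch using the third-party Python zeroconf package to browse for NDI sources. The `_ndi._tcp.local.` service type is my assumption of what NDI advertises over mDNS, so treat this as illustrative, not a reference implementation:

```python
# Illustrative sketch: NDI sources announce themselves via mDNS on the local
# subnet, which is why discovery breaks once a router (or a routed VPN) is in
# the way. Requires "pip install zeroconf"; the service type below is an
# assumption, verify it against your own network.
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf


class NDIListener(ServiceListener):
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        addrs = info.parsed_addresses() if info else "?"
        print(f"found NDI source: {name} -> {addrs}")

    def remove_service(self, zc, type_, name):
        print(f"lost NDI source: {name}")

    def update_service(self, zc, type_, name):
        pass


if __name__ == "__main__":
    zc = Zeroconf()
    ServiceBrowser(zc, "_ndi._tcp.local.", NDIListener())
    input("browsing for NDI sources on the local subnet; press Enter to stop\n")
    zc.close()
```

Run it on the same subnet as the source and entries should appear; run it across a routed link and they typically won’t, which is the whole problem.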

Unfortunately, almost all of the NDI helper tools are Windows only.

Right, there’s no platform parity and no Linux solution.

For the most part, SRT would actually be a better tool for that use case. You’d need a local tool that captures the NDI stream from Flame and republishes it as an SRT stream (not sure if OBS does that yet; it didn’t used to without hacks). Then on the other end you can use the SRT mini-server (yet another Windows tool) to republish as NDI or just drive a BMD interface directly.

But by then you’ve configured so many finicky components that you might as well sign up for a 3rd-party solution.

UltraGrid here for the win. Better control over quality and latency.

SRT is codec-agnostic: packet in → SRT → packet out. It doesn’t matter what the codec is, but it’s only a transport mechanism, nothing else. Easy to do with srt-live-transmit. It’s what I use now with UltraGrid.
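A minimal sketch of what that relay can look like, not my actual config: srt-live-transmit just takes a source URI and a destination URI, so you can wrap both ends from a tiny script. Hostnames, ports, and the 120 ms latency budget below are placeholders.

```python
# Rough sketch: relay a local UDP feed (e.g. what UltraGrid or an encoder emits)
# over SRT using srt-live-transmit. Hosts, ports, and latency are placeholders.
import subprocess

SENDER_CMD = [
    "srt-live-transmit",
    "udp://:5000",                                  # listen for the local UDP feed
    "srt://receiver.example.com:9000?latency=120",  # push it out as an SRT caller
]

RECEIVER_CMD = [
    "srt-live-transmit",
    "srt://:9000?mode=listener",  # accept the incoming SRT connection
    "udp://127.0.0.1:5000",       # hand the packets back to the local decoder
]

if __name__ == "__main__":
    # Run the sender side here; RECEIVER_CMD is what the far-end machine runs.
    subprocess.run(SENDER_CMD, check=True)
```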

We use a custom UltraGrid/SRT implementation. We do 2K RGB 12-bit 4:4:4 all day long with about 12 frames of latency @ 24 fps glass-to-glass here in town. The further away you go, the more latency is added.
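Rough math on those numbers, for anyone who prefers milliseconds:

```python
# 12 frames of glass-to-glass latency at 24 fps works out to about half a second.
frames, fps = 12, 24
latency_ms = frames / fps * 1000
print(f"{frames} frames @ {fps} fps ≈ {latency_ms:.0f} ms glass-to-glass")  # ≈ 500 ms
```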

I’d be interested to hear what your stream resolution, bit depth, and latency are with your solution. What packet-loss avoidance mechanism are you implementing?

I’m lower down the need chain than you are. For on-prem client sessions (where this stuff is used the most) we’re doing 4:2:2 HD at 8-10 frames of latency with Reed-Solomon. At this point 90 percent of the sessions have been Santa Monica to Santa Monica, so the signal isn’t going very far.
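In case Reed-Solomon FEC is new to anyone: the idea is that parity data lets the receiver repair damaged data without a retransmit, which is what keeps latency bounded. UltraGrid applies this at the packet level; the sketch below is just a toy byte-level illustration using the third-party reedsolo package, to show the mechanics rather than the real wire format.

```python
# Toy Reed-Solomon illustration ("pip install reedsolo"): 10 parity bytes let the
# decoder correct up to 5 corrupted bytes without asking the sender for anything.
from reedsolo import RSCodec

rsc = RSCodec(10)                       # 10 parity bytes -> corrects up to 5 byte errors
codeword = rsc.encode(b"frame payload goes here")

damaged = bytearray(codeword)
damaged[0] ^= 0xFF                      # simulate corruption in transit
damaged[7] ^= 0xFF

decoded = rsc.decode(bytes(damaged))
# Newer reedsolo versions return a tuple whose first element is the payload.
payload = decoded[0] if isinstance(decoded, tuple) else decoded
assert payload == b"frame payload goes here"
print("recovered despite corruption")
```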

My main point to @allklier was about avoiding OBS…

Using H.264?

ProRes

You stream ProRes over the internet?

I do :joy:

We’re on 5 Gb fibre on both sides at the main location where we do our on-prem sessions.

For some of the other locations where we’ve had to do sessions (in house at agencies and whatnot) I’ve had to use H.265, since we didn’t have the bandwidth to be opulent, but the quality of the VTB encoder is a fucking letdown.

Argument for NVIDIA, I know…

What’s the data rate?

120… plus whatever overhead from FEC. We can kick it up to 300 with no real issues.
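For scale, and assuming that 120 means megabits per second (units weren’t stated), here’s the back-of-envelope math against raw 2K RGB 12-bit 4:4:4 at 24 fps, using 2048x1080 as the assumed 2K raster:

```python
# Back-of-envelope comparison: uncompressed 2K RGB 12-bit 4:4:4 @ 24 fps vs. a
# 120 Mb/s stream. The 2048x1080 raster and Mb/s interpretation are assumptions.
width, height, bits_per_channel, channels, fps = 2048, 1080, 12, 3, 24

raw_mbps = width * height * bits_per_channel * channels * fps / 1e6
print(f"uncompressed: ~{raw_mbps:.0f} Mb/s")                 # ~1911 Mb/s
print(f"compression at 120 Mb/s: ~{raw_mbps / 120:.0f}:1")   # ~16:1
```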

What do you mean by the vtb encoder?

Apple’s Video Toolbox hardware-accelerated encoding for H.265.

Oh… I use libx265, and we project the output on a massive screen at Sony Studios without any complaints.
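Not from this thread, but for anyone who wants to compare the two encoders themselves: ffmpeg exposes both as `hevc_videotoolbox` (Apple hardware, macOS builds only) and `libx265` (software). A rough sketch with placeholder file names and bitrate:

```python
# Rough comparison sketch: encode the same source with Video Toolbox and libx265
# at the same target bitrate, then eyeball the results. Input name and the 20 Mb/s
# target are placeholders, not anyone's actual settings.
import subprocess

SOURCE = "session_prores.mov"  # hypothetical input file

for label, codec in [("vtb", "hevc_videotoolbox"), ("x265", "libx265")]:
    subprocess.run(
        ["ffmpeg", "-y", "-i", SOURCE,
         "-c:v", codec, "-b:v", "20M",
         f"out_{label}.mp4"],
        check=True,
    )
```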

So rad. Yeah, we’re just throwing it up on a handful of OLED client TVs and the X2400 broadcast monitors.