Anyone using Louper?

Hey folks,
Curious if anyone here in Logik land is using Louper for remote reviews?

I am helping out the Louper team with some marketing and partnerships. Not looking to debate pros/cons vs other platforms… looking for happy users who might be interested in being featured in a case study for co-marketing. We have lots of video editors and Resolve users on the site, but would like to feature some Flame talent. :metal:

Feel free to email direct at ross@louper.io

Cheers!
-Ross


I use Louper with a few specific clients. The agency is fully remote so they have people all over the country. We get everyone together in my Louper room for Flame finishing, effects, etc.

No complaints… works well for us.


Thanks Bryan. I’ll ping you.

Ditto

Use it frequently and love it. Resolve and Flame here.

What's the end-to-end latency nowadays? They say sub-second, so something like 20 frames?

Happy to give you a demo next chance you get

thanks but I just need a number :joy: haha


Depends on your bandwidth & your clients. :thinking:


In what way does that play a role? There should be a rough general number for what's possible end-to-end latency-wise. Is it more like 10 frames or more like 20 frames? Huge difference.

If I look at the stack of latencies in a service "like" Louper (I run my own, not dissimilar platform internally):

Starting with the software's framebuffer at 0 ms, that's when the frame gets created.

  1. First thing is the latency between the framebuffer and the encoder. This could be software or hardware based: say an AJA card has a certain delay from the framebuffer to what comes out of the SDI port into the encoder hardware; it could also be NDI to OBS, or whatever else. This has a certain latency that Louper has no control over (NDI would probably be faster than SDI out).

  2. Then you have the encoder capture buffer: again, SDI in on a Blackmagic Mini Recorder or whatever has a buffer as well.

  3. Now encoder latency: how fast is the encoder spitting out frames, and what's the buffer here? This is usually highly tunable.

  4. Sending the packets over the internet: network latency and protocol overhead (SRT vs RTMPS, etc.).

  5. Receiving these packets: usually another buffer is added here on the receiving side to account for jitter, packet loss, etc.

  6. Decoding latency, which also depends on GOP size, etc.

  7. Putting the decoded images into a framebuffer and spitting them out the other side.

Now, this is basically for p2p transfer; there are more steps involved if you start re-encoding the signal on a server in between. Is that what Louper is doing? I guess so.
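The stages above can be sketched as a simple latency budget. Every per-stage number below is a made-up placeholder (the post gives no actual figures), just to show how the milliseconds stack up and convert to frames:

```python
# Rough latency budget for a Louper-style p2p pipeline.
# All per-stage values are illustrative placeholders, not measurements.
FPS = 24  # assumed playback frame rate

stages_ms = {
    "framebuffer -> encoder (SDI/NDI hop)": 40,
    "encoder capture buffer": 20,
    "encoder (tunable buffer/lookahead)": 60,
    "network transit + protocol overhead": 80,
    "receive/jitter buffer": 120,
    "decode (GOP-dependent)": 40,
    "decoded frame -> display framebuffer": 20,
}

total_ms = sum(stages_ms.values())
total_frames = total_ms * FPS / 1000

for stage, ms in stages_ms.items():
    print(f"{stage:40s} {ms:4d} ms")
print(f"{'TOTAL':40s} {total_ms:4d} ms  (~{total_frames:.1f} frames @ {FPS} fps)")
```

With these invented numbers the stack lands around 380 ms (~9 frames at 24 fps); the point is that the receive/jitter buffer and encoder tuning dominate, which is why "it depends" is the honest answer.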

So yeah, I just want some rough number to judge whether Louper is doing anything super special to cut down latency.

I honestly think that "sub 1 s" of latency is nothing special; I have been doing that at super high quality for years. I can push color-accurate 4:2:2 10-bit All-Intra HDR to a web browser in about 0.5 s end-to-end using NDI -> SRT -> Browser with a simple signaling server in between. That's about as good as browser delivery gets, in my experience, without resorting to the awful-quality WebRTC stuff.

With SDI and capture cards in the mix, I get a realistic 23-ish frames end-to-end.

However, UltraGrid somehow manages to cut all this down so much that I don't understand how it's even possible. You get something like 1–3 frames end-to-end; it basically just depends on encoding/decoding and network latency. It's completely bonkers.
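For sanity-checking the figures being thrown around (0.5 s, 23 frames, "sub-second so like 20 frames"), a tiny conversion helper, assuming a given frame rate:

```python
def ms_to_frames(ms: float, fps: float = 24.0) -> float:
    """Convert a latency in milliseconds to frames at the given frame rate."""
    return ms * fps / 1000.0

def frames_to_ms(frames: float, fps: float = 24.0) -> float:
    """Convert a latency in frames to milliseconds at the given frame rate."""
    return frames * 1000.0 / fps

print(ms_to_frames(500))  # 0.5 s end-to-end at 24 fps -> 12.0 frames
print(frames_to_ms(23))   # 23 frames at 24 fps -> ~958 ms
```

So "sub-second" at 24 fps is anywhere up to ~24 frames, which is why a single headline number hides a lot.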


Correct - each solution has a lower base latency threshold, which is what can be achieved under optimal circumstances (local office setup with little other traffic). This is based on the architecture and the tools.

And then from there you stack all the variables that can make it worse.

One big variable is of course Internet latency. Are you connecting with someone across town on the same ISP, or are you going from NYC to LA? Or worse yet NYC to Sydney?

What should be published and what should be used to compare solutions is that base latency threshold inherent in the solution. The rest is interesting real life data, but much less meaningful in comparing capabilities.

A separate set of variables applies to tools that connect a group of viewers. At that point, you are more likely going through a cloud relay. The cloud relay adds latency, but it also prevents clogging your outbound link (which is less likely to be data-center grade) with multiple separate sends. And if the relay is closer to the viewers rather than the sender, the experienced latency will be favorable.

And with the cloud relays, it depends which data center regions the service supports. Some will be broader than others. You get what you pay for.
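The fan-out trade-off described above is easy to put in numbers. A quick sketch with assumed values (a 10 Mb/s stream and five viewers; both figures are hypothetical):

```python
def sender_upstream_mbps(stream_mbps: float, viewers: int, via_relay: bool) -> float:
    """Upstream bandwidth needed at the sender: one copy per viewer when
    sending directly peer-to-peer, a single copy when a cloud relay
    handles the fan-out to the group."""
    return stream_mbps * (1 if via_relay else viewers)

print(sender_upstream_mbps(10.0, 5, via_relay=False))  # direct: 50 Mb/s up
print(sender_upstream_mbps(10.0, 5, via_relay=True))   # relay:  10 Mb/s up
```

The relay costs an extra hop of latency but keeps the sender's upload flat regardless of audience size, which is the trade being described.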


well said