I’ve just started freelancing with a company that uses Teradici PCoIP. While I was living in LA, I never had any issues with latency. But now that I’ve moved back to South America, the connection has become painfully slow, even with a 2 Gbps fiber line. Unfortunately, as we know, latency isn’t about bandwidth but distance and routing.
Cutting to the chase: has anyone else dealt with this kind of situation? The lag is making it really tough to work effectively. I’m considering renting a virtual machine in Miami to act as a midlayer between my West Coast client and me.
Would love to hear if anyone’s tried something similar or found a better workaround.
I’ve also tested this by spinning up a VM in the region with one of the cloud providers (I like Vultr or DigitalOcean) and running my own ping test. Since the VM only needs to stay alive for a few minutes, it ends up costing a few pennies at most.
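For example, something like this from your home connection against the freshly created VM (the address below is a placeholder for whatever IP the provider hands you):
# average RTT and loss from home to the throwaway VM
ping -c 20 <vm-ip>
# per-hop loss and jitter, if mtr is installed
mtr --report --report-cycles 50 <vm-ip>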
Anyway, sometimes yes: you can use a cloud VM as a “signal relay” and hope it has better routing than a random consumer connection, but it’s unlikely to be the deal maker.
So really, no, there is not much you can do to improve upon the physics of distance and the speed of light through glass. The one wildcard is Starlink: a connection might bounce the signal across the globe via the inter-satellite laser links, which carry it significantly faster than light through glass. But it doesn’t always work that way, and they often use terrestrial ground stations as backhaul.
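For a rough sense of the floor: light in fiber moves at roughly two-thirds of c, about 200,000 km/s, and the great-circle distance between Los Angeles and São Paulo is on the order of 10,000 km (both figures are approximations), so:
# back-of-the-envelope minimum round trip, ignoring routing detours and equipment delay
echo "scale=1; 2 * 10000 * 1000 / 200000" | bc
# ~100 ms in a perfectly straight line, so real routes landing around 150-180 ms is not shocking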
Using CloudPing.info:
sa-east-1 (São Paulo) 182 ms
I had a conversation about this with someone a while back, trying to explain that physics won’t let anything be free of latency as distance increases. They just kept telling me that it didn’t matter how far away they were, it wouldn’t increase latency, as if Teradici had somehow cracked quantum physics but somehow not been awarded the Nobel Prize.
Mayyyybe look at renting a server/space in a datacenter as close as possible to the Zayo backbone in São Paulo? If you can get onto that backbone in as few hops as possible, it might help on the way up to Los Angeles.
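One quick way to see whether your existing path already touches that backbone (the target is a placeholder for the client’s endpoint; Zayo routers usually reverse-resolve to zayo.com hostnames):
# eyeball the hops between you and the client side
traceroute <client-host>
# hops with zayo.com in the name mean you're already riding their network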
Teradici/Anyware… it’s baffling. I’m in NY. I have worked on machines in London with a constant 75ms of latency that don’t feel too bad, and then I’ve had 10ms to Manhattan that feels slower and goes blocky frequently. I can’t understand what’s going on. Maybe the Manhattan connection is more variable?
I have heard of one person using Starlink successfully, but I find everyone has a different tolerance. I like to work quickly.
EDIT: I should add that the Manhattan one is inside a VPN, which apparently adds to the mess of variables.
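If you want to test that theory, comparing jitter and loss on the two paths says more than a plain ping; something like this, with the hostnames as placeholders (and the Manhattan one run both inside and outside the VPN if you can):
# 100-cycle reports; compare the Loss%, Wrst and StDev columns, not just Avg
mtr --report --report-wide --report-cycles 100 <london-host>
mtr --report --report-wide --report-cycles 100 <manhattan-host>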
Thank you, Kieran. My latency averages around 180ms. I’m still able to play in real time, but the cursor lag is quite noticeable on my end. Unfortunately, Starlink isn’t available in my region yet, though I’ve already joined the waiting list. I’m now looking into Zayo; a quick Google search shows they’ve built a low-latency backbone between Brazil and the U.S., which sounds promising. I appreciate the info! I just need to figure out how to use it.
Still can’t get around the physics of it, though. I just don’t think you will be able to get below ~180ms terrestrially, even on the most optimized route.
Have you experimented with other remote options to check whether your cursor lag persists outside of the Teradici/HP Anyware ecosystem? My remote sessions with Mexico City and Rio, from Los Angeles, are hit or miss depending on the facility.
I just spun up two dedicated-CPU VMs on Vultr, one in São Paulo and one in Los Angeles. In theory they have optimized routes between their own data centers.
BR to LA:
[root@Brazil ~]# ping -c 10 207.246.102.155
PING 207.246.102.155 (207.246.102.155) 56(84) bytes of data.
64 bytes from 207.246.102.155: icmp_seq=1 ttl=49 time=178 ms
64 bytes from 207.246.102.155: icmp_seq=2 ttl=49 time=178 ms
64 bytes from 207.246.102.155: icmp_seq=3 ttl=49 time=177 ms
64 bytes from 207.246.102.155: icmp_seq=4 ttl=49 time=177 ms
64 bytes from 207.246.102.155: icmp_seq=5 ttl=49 time=178 ms
64 bytes from 207.246.102.155: icmp_seq=6 ttl=49 time=178 ms
64 bytes from 207.246.102.155: icmp_seq=7 ttl=49 time=178 ms
64 bytes from 207.246.102.155: icmp_seq=8 ttl=49 time=178 ms
64 bytes from 207.246.102.155: icmp_seq=9 ttl=49 time=178 ms
64 bytes from 207.246.102.155: icmp_seq=10 ttl=49 time=178 ms
--- 207.246.102.155 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9014ms
rtt min/avg/max/mdev = 177.481/177.526/177.585/0.030 ms
LA to BR:
[root@la ~]# ping -c 10 216.238.110.230
PING 216.238.110.230 (216.238.110.230) 56(84) bytes of data.
64 bytes from 216.238.110.230: icmp_seq=1 ttl=50 time=178 ms
64 bytes from 216.238.110.230: icmp_seq=2 ttl=50 time=177 ms
64 bytes from 216.238.110.230: icmp_seq=3 ttl=50 time=178 ms
64 bytes from 216.238.110.230: icmp_seq=4 ttl=50 time=178 ms
64 bytes from 216.238.110.230: icmp_seq=5 ttl=50 time=178 ms
64 bytes from 216.238.110.230: icmp_seq=6 ttl=50 time=178 ms
64 bytes from 216.238.110.230: icmp_seq=7 ttl=50 time=177 ms
64 bytes from 216.238.110.230: icmp_seq=8 ttl=50 time=177 ms
64 bytes from 216.238.110.230: icmp_seq=9 ttl=50 time=178 ms
64 bytes from 216.238.110.230: icmp_seq=10 ttl=50 time=177 ms
--- 216.238.110.230 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9014ms
rtt min/avg/max/mdev = 177.465/177.508/177.573/0.028 ms
There is nothing you can do to improve upon this. It’s physics.
The old RGS had a terrible network protocol and almost zero tolerance for packet loss; we’d get red-screen disconnects all the time. Teradici is vastly superior in both image quality and stability, although we find the frame rate isn’t as smooth as RGS.
I tested DCV, and a year or two ago it felt comparable to Teradici, but something changed (could be us, could be them) and now it is unusable.
I’ve simulated 180ms of latency on my Teradici client at home. The Wacom tablet definitely becomes unusably messed up, but the mouse was more or less usable, the keyboard was fine, and playback was OK. Of course, connections with that much latency often also have some level of packet loss and jitter, which I did not test. It makes sense that the Wacom is messed up when using Forwarded mode. I believe they recently added a special option for using a Wacom on high-latency connections (see the attached screenshots below).
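If anyone wants to try a similar experiment, tc/netem on a Linux box is one way to fake the link (eth0 here is just an example interface, not necessarily how I set mine up):
# add a flat 180 ms of delay to everything leaving eth0
sudo tc qdisc add dev eth0 root netem delay 180ms
# optionally pile on jitter and loss too, e.g. "delay 180ms 20ms loss 1%"
# put the interface back to normal when you're done
sudo tc qdisc del dev eth0 root netem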
Improve graphics tablet responsiveness on high latency network
Directive: pcoip.allow_lossy_hid_reports
Options: 0 (off), 1 (on)
Default: Off
This setting takes effect when you start the next session.
When this setting is enabled and connected with a compatible client, performance and responsiveness on the locally terminated Wacom/Xencelabs graphics tablet will be improved on networks with high latency or occasional packet drops, though there may be a slight reduction in accuracy.
This setting is not applicable to USB bridged graphics tablets.
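On a Linux agent host this directive normally goes in /etc/pcoip-agent/pcoip-agent.conf (Windows agents are configured through the GPO admin template instead); roughly:
# enable lossy HID reports for the locally terminated tablet; takes effect on the next session
echo 'pcoip.allow_lossy_hid_reports = 1' | sudo tee -a /etc/pcoip-agent/pcoip-agent.conf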
Anecdotal, but we have people remoting into the office here in Germany from Australia (Melbourne) and South Africa (Cape Town) all the time, and they have no complaints at all.
Which is nuts, and I don’t quite understand how 250ms of latency could EVER work. Anyhow, apparently packet loss is much more of an issue than latency (but they do go hand in hand: the more hops the packets go through, the higher the chance of drops).
Not using VPNs helps a TON. Most VPNs take all the beautiful UDP packets and stuff them into a TCP tunnel, and that sucks. There is also a lot of other “WAN optimization” these tools do, and with a VPN you’re breaking it (that’s why they have connection servers and brokers and such).
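A quick way to check whether your PCoIP stream is still leaving as UDP or being swallowed into a TCP tunnel (tun0 is a guess at the VPN interface name; PCoIP session traffic uses port 4172):
# watch the session traffic on the tunnel interface; if you only see TCP here, the VPN is re-wrapping it
sudo tcpdump -ni tun0 'port 4172'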
Oh, and we happily use Parsec, as it gives much better performance (for our use case); Parsec is the reason we use Mac Flames.
I do find it WILD when I meet artists who gave up their Wacoms during the pandemic and just went with the mouse. I’ve met a handful of these folks. True pixel soldiers.