For those using Mac hardware… I haven’t looked in a while but curious if anyone knows if there’s anything external that’s faster than the internal storage?
Right now, I’m getting 6GB/s off the hard-wired storage inside my Studio.
Thanks!
Not on Mac Studio, because TB becomes a bottleneck. On a Mac Pro you can put a card with storage. But that’s not external.
TB4 is 40Gbps, which translates to somewhere around 4GB/s as the real-world max. Same with anything network-attached, unless you manage to bond multiple channels.
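Back-of-the-envelope on that number (the ~20% protocol/encoding overhead here is an assumption; real figures vary by controller and enclosure):

```python
# Rough Thunderbolt 4 throughput estimate; the overhead factor is an assumption
link_gbps = 40                                       # TB4 link rate, gigabits per second
overhead = 0.20                                      # assumed protocol/controller overhead
usable_gb_per_s = link_gbps * (1 - overhead) / 8     # bits -> bytes
print(f"~{usable_gb_per_s:.1f} GB/s usable")         # ~4.0 GB/s
```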
You want super fast - get a Linux machine with multiple Gen5 NVMe drives right on the board (not my system, but someone’s from our Colorist Discord):
What are you trying to do that you need even more speed?
We can play back 6K all day long over a 25GbE network.
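As a rough sanity check on that 6K claim (the frame size, bit depth, and frame rate below are assumptions, not measurements):

```python
# Can 25GbE sustain uncompressed-ish 6K playback? All parameters are assumptions.
width, height = 6144, 3240          # assumed "6K" frame size
bytes_per_pixel = 4                 # assumed 10-bit RGB packed, DPX-style
fps = 24
gbps_needed = width * height * bytes_per_pixel * fps * 8 / 1e9
print(f"~{gbps_needed:.1f} Gb/s needed vs 25 Gb/s of link")   # ~15.3 Gb/s
```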
Playback definitely isn’t an issue. I’m looking to increase my storage size, which is impossible after purchase. If I go with an external drive, I take a pretty big speed hit. I’d love something with at least similar speed, or better.
I was hoping that after a while of not paying attention to the hardware market, someone had discovered a workaround or unlocked a way around those confines.
I do understand the advantage of going with a Linux box with a lot more flexibility in configurations that would let me take advantage of a bunch of cool things (MLTW comes to mind). But I also really like having a very small, quiet, efficient little computer on my desk. Maybe next round I’ll go back.
@Josh_Laurence - it’s not possible
@Josh_Laurence - a Mac Pro might permit a 100Gb or greater ethernet card.
But that would require a switch.
And media.
And SFPs.
And storage that could keep up.
I get it, but basically what I’m saying is there’s a threshold beyond which additional speed provides NO benefit. So even a Thunderbolt array at 40Gb/s is well beyond that.
Sorry to add to the questions instead of resolving any…
I have always wondered… aside from playback… would a batch setup render faster on faster storage? I assume it would… Or are the CPU & GPU then the bottleneck… will they ever produce enough reads & writes to saturate even 6GB/s, never mind 20GB/s (that’s wild)?
@knhn - faster everything leads to faster finish.
Faster read means faster to GPU means faster composite means faster export which requires faster write performance.
Flame can do it.
Less compression means less CPU interference.
So then additional speed always provides a benefit?
@knhn - always.
Project server provided a very unusual and welcome opportunity in that you can run a headless Flame, so at least two PCIe slots become available in the qualified workstations.
You can do a lot with two PCIe slots.
To a degree, yes. But are you speeding up what only accounts for 5% of the bottleneck, or what accounts for 80% of it? (quick illustration below)
In a world of unlimited power, money, space, and cooling - sure, max everything out. Meanwhile, I think there’s a human tendency to hedge bets instead of making solid, data-driven decisions.
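To put rough numbers on that (the 5%/80% shares and the 2x local speedup are just illustrative assumptions):

```python
# Amdahl-style illustration: speeding up a small slice of the job barely helps
def overall_speedup(fraction, local_speedup):
    """Whole-job speedup when only `fraction` of the time gets `local_speedup` faster."""
    return 1 / ((1 - fraction) + fraction / local_speedup)

print(f"{overall_speedup(0.05, 2):.2f}x")   # ~1.03x - doubling the 5% slice
print(f"{overall_speedup(0.80, 2):.2f}x")   # ~1.67x - doubling the 80% slice
```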
If you’re on a Mac you can install iStat Menus, which sits in the status bar with performance monitoring and has various detailed charts and drop-downs. That would give you some additional insight, on top of Flame’s resource monitor.
@allklier - it was oracle that turned me on to this stuff - they had a single pane of glass (ipad) that showed you every statistic and scrap of information about every component in the system.
nothing is left to chance in such systems - i know that you know the type of thing - the serial-number-of-every-transistor level of detail, the logistics chain that was used to source the transistor, blah blah.
then you turn the thing on and it just starts producing metrics, nothing to download or configure or write by yourself.
and then you throw stuff at the machine and watch it do its thing - it’s mind-boggling.
it’s expensive as fuck but you won’t spend any time at all writing, configuring, blah blah.
just put it in place, connect it to power, and monitor absolutely everything from your yacht.
Yes, monitoring systems like that are numerous.
At Amazon we had very detailed monitoring dashboards for all the services you owned and the shared infrastructure. Drag-and-drop dashboard configuration, alarm thresholds tied to your pager. You knew something was headed off the rails before it hit the ground. And it was all stored in databases where you could reference past data for analysis.
There’s the Munin open-source framework, and industry-standard SNMP.
But back at the Flame ranch, all you need is some trend lines on load, traffic, and memory.
But you need Grafana to make it pretty…
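If anyone wants to roll their own trend lines before wiring up Grafana, here’s a minimal sketch (assumes the psutil package is installed; not a real Munin/Grafana/Prometheus setup):

```python
# Minimal trend-line logger for load, network traffic, and memory (psutil assumed installed)
import time
import psutil

prev_net = psutil.net_io_counters()
while True:
    time.sleep(5)
    load1, _, _ = psutil.getloadavg()            # 1-minute load average
    mem_pct = psutil.virtual_memory().percent    # memory in use, percent
    net = psutil.net_io_counters()
    rx_mb_s = (net.bytes_recv - prev_net.bytes_recv) / 5 / 1e6
    print(f"load {load1:.1f}  mem {mem_pct:.0f}%  net in {rx_mb_s:.1f} MB/s")
    prev_net = net
```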
What would actually be more interesting is a Flame-internal profiling function, similar to what exists for code.
If it could look at your batch group and tell you about the slowest node, any wasteful processing (bit depth that gets discarded, frames that get processed repeatedly, etc.)…
I’m assuming we’re doing more harm by not knowing where the waste is and making up for it with bigger hardware.
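For reference, this is the kind of per-function breakdown a code profiler gives you, which is roughly the model being suggested here - the “nodes” below are made-up stand-ins, not Flame API calls:

```python
# What a code profiler's output looks like; blur_node/resize_node are hypothetical stand-ins
import cProfile
import pstats
import time

def blur_node():
    time.sleep(0.05)    # pretend this is the slow node

def resize_node():
    time.sleep(0.01)

def process_frame():
    blur_node()
    resize_node()

cProfile.run("for _ in range(20): process_frame()", "frame.prof")
pstats.Stats("frame.prof").sort_stats("cumulative").print_stats(5)
```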
Quite reasonable since time is money. Spending too much time on optimizing is overhead. But with good tools, there’s a sweet spot.
Cineon, or whatever the software Cinesite made was called, had that.
I have an OWC SSD chassis hooked up to my M3 MBP via TB. The enclosure was like $279, and I populated it with four 2TB NVMe sticks. While I know it’s slower than the internal MBP storage, I can honestly say that I’ve never felt a performance hit. I think I’m echoing something @allklier said above about bottlenecks and price. For a fraction of the cost of additional internal storage (which you can’t add to an MBP or Mac Studio anyway, but humor me here), how much of a real-world hit am I taking? 5%? On a 2-minute render that’d be an additional 6 seconds.
@andymilkis - that’s right - you have to choose the technology to solve the particular problem.
Flame is the core technology - the problems and situations are variegated and manifold.