Well sure, but how are their animation channels viewed?
wow. so impressive. these are relatively simple shots with close to no occlusion, but it's still amazing how the lighting and motion are matched with the tie and latte.
camera reconstruction is even more impressive. wouldn't want to be a moco operator in the near future… maybe not even a compositor lol
I really, really hope that Flame will soon introduce some creative imaging innovations, which is what Flame was built on.
There have been some welcome interface and tech improvements lately, but that's not what is going to drive users to a creative tool.
i don't want to get crazy here; there could be a $100K GPU farm behind the curtain of this demo, or maybe they've spent days training a model to replace that guy's tie, but there is a seismic shift coming in the way we work. Hopefully ADSK will be able to leverage some of this tech, but frankly Adobe prob has way more resources than Foundry and Autodesk combined. The future seems to be lots of decentralized tools rather than a one-stop-shop application…
Agreed.
This has been mentioned before, but without a creative designer behind Flame, it's just iterative improvements and not a lot of image creativity.
I have now learned that Steve McNeill has retired from ADSK/Flame.
Is there anyone creatively driving the development of Flame at this point? I would be very glad to hear of something.
And who from the ADSK dev team do I tag at this point? We used to have relationships with many devs and it showed in the product. Now I can name maybe two?
Damn that's cool.
Out of curiosity, who are you referring to?
Hey Fred. Philippe and Francis, just to name a couple.
I'm sorry if I come across as disrespectful in any way.
I'm asking the same question, or questions along the same lines, that quite a few of us have been asking in recent years.
Very cool… but I'll wait until it becomes available for us to play with… Seen quite a few Adobe demos over the years that looked amazing but ended up… nowhere? (deblur?)
Regardless, this will be a thing in the near future. Cool stuff. Been using Photoshop's generative tool to create cleanplates in a jiffy…
agreed. Adobe seems to be focused like a laser on the consumer/prosumer market primarily: lots of stuff under the hood, but they don't want to expose too much to the user base, whereas Autodesk and Foundry seem to sit squarely in the high-end pro VFX market…
Well Photoshop generative fill is pretty nifty… I've been using it for matte painting bits and pieces and it's impressive.
Project Rez-Up looks pretty cool.
Damn their shareholders are gonna love this!
I only managed to watch the generative fill stuff. To me it just looked like a still being generated by AI and tracked in using motion vectors, all in one button press. Looks slick. But I can't ever imagine any advert containing such easy shots.
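For the curious, here's a minimal sketch of what that "generate a still, track it in with motion vectors" pipeline might boil down to. Everything in it is an assumption on my part: the file names are hypothetical, and OpenCV's Farneback flow stands in for whatever (presumably much better) vectors the demo computes internally.

```python
# Hedged sketch: backward-warp an AI-generated patch through a shot
# using optical flow. File names and frame range are hypothetical.
import cv2
import numpy as np

patch = cv2.imread("generated_patch.png")   # generated still, aligned to frame 1
first = cv2.imread("plate.0001.png")
first_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

h, w = patch.shape[:2]
grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))

for frame in range(2, 25):                  # hypothetical 24-frame shot
    cur = cv2.imread(f"plate.{frame:04d}.png")
    cur_gray = cv2.cvtColor(cur, cv2.COLOR_BGR2GRAY)
    # Flow from the current frame back to frame 1: for each pixel now,
    # where it sat in the first frame. Fine for short shots; a real tool
    # would accumulate frame-to-frame vectors instead.
    flow = cv2.calcOpticalFlowFarneback(cur_gray, first_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Backward warp: sample the patch at its frame-1 position.
    warped = cv2.remap(patch, grid_x + flow[..., 0], grid_y + flow[..., 1],
                       cv2.INTER_LINEAR)
    cv2.imwrite(f"patch_tracked.{frame:04d}.png", warped)
```

The one-button version presumably also handles occlusion, relighting and edge blending, which is exactly where the simple shots in the demo stop being representative.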
It wouldn't be the first time that demos like that were cherry-picked (no ding on that at all; it's supposed to be an aspirational demo after all).
While these tools are getting better all the time, the general problem is that they work quite well under ideal conditions, but falter quickly with unforeseen circumstances. Once people start using them, expectations in terms of budget and schedule change because it's supposed to be "easy". Until it isn't.
As a consumer, when the tool fails, you can bail and do something else. As a pro, you've still got to deliver the shot. We've all been on shots where our go-to method suddenly didn't work, and we had to rummage in the proverbial toolbox for a plan B.
I think these are fascinating and inspiring new options. And I'm delighted to see them. But I also keep a ginormous grain of salt at hand. And don't let any producers watch these videos. You will only suffer if they do.
PS: I've been digging a bit into ML tech on the code side to understand it better. Not that this is for everyone. But it does give you a better sense of what might work and what might not, or where it will hit the guardrails hard.
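If anyone wants a taste of what "digging in on the code side" can look like, here's a minimal sketch (my example, not a specific tool's internals): run torchvision's pretrained DeepLabV3 on a frame and look at how confident the model actually is per pixel. The input file and the 0.7 threshold are hypothetical; the point is simply that you can see where a model wobbles before it bites you on a shot.

```python
# Hedged sketch: per-pixel confidence of a pretrained segmentation model.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("frame.png").convert("RGB")  # hypothetical frame
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))["out"][0]  # (classes, H, W)

# Softmax over classes, then the winning class's probability per pixel.
conf = logits.softmax(dim=0).max(dim=0).values
print(f"mean per-pixel confidence: {conf.mean().item():.2f}")
print(f"pixels under 0.7 confidence: {(conf < 0.7).float().mean().item():.1%}")
```

Run that on a clean turntable plate and then on something with motion blur, smoke or a weird costume, and the low-confidence percentage tells the whole "ideal conditions vs. unforeseen circumstances" story from above.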
I've come to think of this as "the myth of 80%", because salespeople or evangelists or just folks who are excited about the new toy always say the same thing: "sure, it's only 80% there, but look how quickly it did it!"
And sure, that's usually impressive, but unless there are manual controls or some way to get it from 80 to 100, it's probably not that interesting from a production standpoint. And at this point, after many, many disappointments, if you say 80% to me in the context of a tech pitch, I will assume you don't know what you're talking about.
Interestingly, the Foundry, DNeg, and University of Bath recently concluded their attempt to build an ML roto tool with a bit of a shrug:
"The principal learning is that rotoscoping is very hard and that people are going to be involved, certainly for the foreseeable future. There's a lot of considerations in the rotoscoping process that need to be taken into account before starting out on a project like SmartROTO."
I think about this all the time with the various "look out, AI's going to change the world; it's not quite there yet, but it will be in x years" commentary. I mean, given the seemingly murky understanding developers have of how these models actually work, who's to say the curve of progress on that front won't slow, or that it will never be able to get out of the uncanny valley that artists have been battling for generations? I'll tell you who definitely won't say that, or even entertain the idea: the folks getting massive investments in their companies right now. Really hard to tell the hype from the reality at any given moment.