At some point (years ago) we lost the detailed tracebacks, including the line numbers tracing back to a specific function …
Now it's pretty much like, hey, there's a problem somewhere with a string … go figure.
Does someone have a workaround? A way to get Flame to spit these back out?
I find myself adding so many temporary check prints to try to chase down where things break …
The DL_DEBUG_PYTHON_HOOKS env variable tends to output 300 lines without expanding on the actual error.
Any help appreciated!!! This drives me (and others) nuts!
@Stefan - I fucking hate Python
lol!
Yeaaaah!!! C++ or nothiiiiiiing! (charging with trumpets)
Reminds me of this extremely useful tool:
https://www.facebook.com/773932839/videos/10157127791587840/
Joking aside, does it not bother anybody to not see the full tracebacks when writing Python tools or trying to debug them?
Or is there a way to see these tracebacks?
For example:
Shell/VS Code:
Traceback (most recent call last):
  File "/disks/nas0/CGI/R_n_D/work.stefan/flame_python/git/DLPythonHooks/2024.2/testing_bits/test_traceback_01.py", line 28, in <module>
    main(["list", "of", "things"])
  File "/disks/nas0/CGI/R_n_D/work.stefan/flame_python/git/DLPythonHooks/2024.2/testing_bits/test_traceback_01.py", line 8, in main
    divide_numbers(number_a, number_b)
  File "/disks/nas0/CGI/R_n_D/work.stefan/flame_python/git/DLPythonHooks/2024.2/testing_bits/test_traceback_01.py", line 11, in divide_numbers
    return a / b
TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
Flame:
[error] 446648320 InternalHints.C:2017 10/02/24:18:08:18.699 catch errors test execute callback [<function main at 0x7f4111562b00>((<flame.PyBatch object at 0x7f4111f617e0>,),)] failed because of an error of type "unsupported operand type(s) for /: 'NoneType' and 'int'"
(and note that there’s no flame.PyBatch reference in the script)
Script:
def main(selection):
    number_a = None
    number_b = 10
    divide_numbers(number_a, number_b)


def divide_numbers(a, b):
    return a / b


def get_main_menu_custom_ui_actions():
    return [
        {
            "name": "a52 - DEV",
            "actions": [
                {
                    "name": "catch errors test",
                    "execute": main,
                },
            ],
        }
    ]


if __name__ == "__main__":
    main(["list", "of", "things"])
If this were a several-hundred-line script, good luck finding where the error is; pretty tedious.
@fredwarren (hello!), am I missing something?
@Stefan - yep: debugging hundreds of lines is horrible.
If it's any consolation, it's the same in Nuke and Resolve: horrible and horrible.
I would be OK with a regular, complete traceback; it would tell me what line the problem is on.
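For what it's worth, a minimal sketch of one way to surface the full traceback yourself (assuming print output from a hook lands in the shell Flame was launched from): wrap the hook body in a try/except, print traceback.format_exc(), and re-raise so Flame still reports the failure. The names here match the test script above.

```python
import traceback

def divide_numbers(a, b):
    return a / b

def main(selection):
    try:
        number_a = None
        number_b = 10
        divide_numbers(number_a, number_b)
    except Exception:
        # Print the full traceback (file, line, function) that the Flame
        # log line drops, then re-raise so Flame still flags the error.
        print(traceback.format_exc())
        raise
```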
I was writing scripts that generate .jsx files and openclip for After Effects, and there's this really lovely feature: you can plug Adobe tools directly into VS Code and debug in the app while working in VS Code.
I wondered why it's not possible to do this in Flame.
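I haven't tried this in Flame, but if Flame's embedded Python can import debugpy (installed somewhere the hooks can see it), then in principle a hook could open a listener that VS Code attaches to with a "Python: Remote Attach" configuration. A sketch under that assumption; the port is an arbitrary choice:

```python
# Hypothetical: start a debugpy listener from inside a Flame hook.
# Assumes debugpy is importable by Flame's Python; untested in Flame.
import debugpy

debugpy.listen(("localhost", 5678))  # VS Code attaches to this port
# debugpy.wait_for_client()  # optionally block until the debugger attaches
```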
There’s a lovely flame.pyi file from @DannyYoon that was made for 2021.1.
@DannyYoon, hello, have you ever published an update?
Thanks a bunch for making and sharing the first version to start with!
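For anyone who hasn't used one: a .pyi is a stub file declaring a module's names and signatures so editors and type checkers get autocompletion. A hypothetical excerpt (these names and signatures are illustrative, not copied from the real flame module):

```python
# flame.pyi (hypothetical excerpt; names and signatures are illustrative)
class PyObject: ...

class PyBatch(PyObject):
    name: str
    def create_node(self, node_type: str) -> PyObject: ...

batch: PyBatch  # the current batch group
```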
I don't know how to do it, but maybe it's a good file to feed to a custom ChatGPT model?
Wouldn't it be nice to have an open-source (participative) repo with custom ChatGPT-like models?
I mean, something trained or refined with as much Flame info as possible.
Is there such a thing already?
So far, when testing, it invented its own Flame Python API … it presumed some very nice things (that don't exist), but it wasn't useful.
ChatGPT behaves like a generic generative AI application when working in Flame's domain, making stuff up even for simple expressions.
But back to the point: There’s a usefulness problem with traceback messages!
@Sinan - yeah, and the question is: can it be different? How much useful material is publicly available to build a custom model? Like this flame.pyi, the docs … and maybe ADSK could provide some docs in different formats? Or some of us could compile (combine/copy-paste …) something?
On the ADSK part: say, if the API were available as one long page/doc that could be fed to a learning model, as opposed to the manual as it's presented online …
I don't really know what I'm talking about when it comes to helping ChatGPT help us; just thinking out loud, I guess.
Anyway, Traceback messages!!!
@knhn is that why you have your own debug area in our slate maker script?
(I might be confused and mixing things up, it’s late)
Hi Stefan,
Personally, I catch my exceptions and drop them into a personal file in the Autodesk log:
import traceback

def fl_rdo_find_segmt(selection):
    try:
        fl_rdo_segmt_in_ntw(selection)
    except Exception as e:
        print(str(e))
        # Append the full traceback to a personal log file
        with open("/opt/Autodesk/log/_fl_errors.log", "a") as f:
            traceback.print_exc(file=f)
I don't think you guys are giving ChatGPT enough credit. It doesn't know much (anything) about Flame, but if you're just trying to figure out which line of code is causing you problems, it should be helpful. It's more than capable of discerning the issue in the example posted above.
Also, if you have even a partial .pyi, you can get it to be far more context-aware by feeding it the file at the beginning of each session.
@Stefan No, no debug mode in mine. Must be someone else. Ya, I'm like you, I just do sleuthing with print statements. It's usually not too bad? You've probably seen the most complex of my scripts.
I dunno about all this AI stuff. I just do it myself. Helps me learn.
@lambertz Your solution looks interesting … so you would always wrap your topmost function or class with that? The function or class that would have been called by execute in get_main_menu_custom_ui_actions directly, right?
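If so, a small decorator would keep that wrapping to one line per hook — a sketch building on the workaround above (the log path and the log_errors name are just illustrative):

```python
import functools
import traceback

LOG_FILE = "/opt/Autodesk/log/_fl_errors.log"  # illustrative path

def log_errors(func):
    """Wrap an execute callback so any exception's full traceback
    is appended to LOG_FILE before being re-raised to Flame."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            with open(LOG_FILE, "a") as f:
                traceback.print_exc(file=f)
            raise  # let Flame still report its (short) error
    return wrapper

@log_errors
def main(selection):
    ...  # hook body as usual
```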