
Segmentation fault while using CUDA Execution Provider #155

Open
vaghawan opened this issue Aug 27, 2024 · 3 comments


@vaghawan

Hi,

I'm trying to use an insightface model inside pipeless. I have a process.py where I write everything for the inference. While this works fine using the CPU, when I switch to CUDA execution I get a segmentation fault; in fact, the stages do not load at all, so we're halted at the very first step.

The ONNX model loading is done in init.py as
model = insightface.model_zoo.get_model('det_500m.onnx', providers=['CPUExecutionProvider'])
This works fine with 'CPUExecutionProvider', but when the provider is changed to 'CUDAExecutionProvider' it produces a segmentation fault.
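A common cause of crashes with the CUDA provider is an onnxruntime build that does not actually expose GPU support (CPU-only package installed, or a CUDA/cuDNN version mismatch). A minimal diagnostic sketch, assuming onnxruntime is installed; the provider-filtering logic is illustrative, not part of insightface:

```python
# Check which execution providers this onnxruntime build actually exposes
# before handing them to insightface. 'CUDAExecutionProvider' only appears
# when the GPU build of onnxruntime and matching CUDA/cuDNN libraries are
# installed and loadable.
try:
    import onnxruntime as ort
    available = ort.get_available_providers()
except ImportError:
    # onnxruntime not installed in this environment
    available = ['CPUExecutionProvider']

# Prefer CUDA when present, but always keep the CPU fallback so the
# session can still be created on machines without a usable GPU.
preferred = ['CUDAExecutionProvider', 'CPUExecutionProvider']
providers = [p for p in preferred if p in available] or ['CPUExecutionProvider']
print(providers)

# The resulting list can then be passed through unchanged, e.g.:
# model = insightface.model_zoo.get_model('det_500m.onnx', providers=providers)
```

If 'CUDAExecutionProvider' is missing from the printed list, the segfault is likely an environment problem rather than a pipeless one.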

Any help figuring this out would be much appreciated. We've run the same script outside of pipeless and it works just fine.

Thanks
V

@miguelaeh
Collaborator

Hi @vaghawan,
Try downloading the model and using a process.json to load it instead of using an init file.
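For context, a process.json delegates model loading and inference to pipeless's own inference runtime instead of Python code in init.py. A hypothetical sketch only; the field names, values, and path below are assumptions, so check the pipeless inference-runtime documentation for the exact schema:

```json
{
  "runtime": "onnx",
  "model_uri": "file:///path/to/det_500m.onnx",
  "inference_params": {
    "execution_provider": "cuda"
  }
}
```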

@vaghawan
Author

That means I can't have a process.py, right? Insightface has multiple models that work on different tasks.

@miguelaeh
Collaborator

You can concatenate as many stages as you want, using one model per stage.
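Concretely, that could mean one stage directory per insightface model, with the stages chained in order when the stream is configured. A hypothetical layout; the stage and file names are illustrative:

```
stages/
├── face-detection/
│   └── process.json       # loads det_500m.onnx via the inference runtime
└── face-recognition/
    ├── process.json       # loads the recognition model
    └── post-process.py    # optional Python hook on the inference output
```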
