I am converting my trained model with tools/deploy.py to ONNX so I'm able to run it with onnxruntime. Can I include the pipeline pre-processing in the model? I'm looking to resize and pad my input image as part of the network so it matches the model's fixed input.
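Roughly what I have in mind is something like the sketch below (just an illustration of the idea, not something tools/deploy.py produces; the wrapper class, the 800x800 target size, and the resnet18 stand-in are placeholders for my actual model and config):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class PreprocessWrapper(nn.Module):
    """Resize (keeping aspect ratio) and zero-pad the input to a fixed size,
    then call the wrapped model, so both steps end up inside the ONNX graph."""

    def __init__(self, model, target_h=800, target_w=800):
        super().__init__()
        self.model = model
        self.target_h = target_h
        self.target_w = target_w

    def forward(self, x):
        # x: (1, 3, H, W) float tensor in whatever range the model expects
        _, _, h, w = x.shape
        scale = min(self.target_h / h, self.target_w / w)
        new_h, new_w = int(h * scale), int(w * scale)
        x = F.interpolate(x, size=(new_h, new_w), mode="bilinear", align_corners=False)
        # Pad on the right/bottom up to the fixed input size
        x = F.pad(x, (0, self.target_w - new_w, 0, self.target_h - new_h), value=0.0)
        return self.model(x)


# Stand-in for the actual trained model; substitute your own module here.
trained_model = torchvision.models.resnet18().eval()
wrapped = PreprocessWrapper(trained_model, target_h=800, target_w=800).eval()

# Tracing freezes the shape arithmetic above to this example size; handling
# fully dynamic input sizes would need torch.jit.script or ONNX-level Resize ops.
dummy = torch.randn(1, 3, 480, 640)
torch.onnx.export(wrapped, dummy, "model_with_preprocess.onnx",
                  input_names=["image"], output_names=["output"],
                  opset_version=11)
```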

Replies: 1 comment

@foshyjoshy Hi, how are you going with this? I'm in a similar situation with a different model. I'm wondering whether the preprocessing pipeline now has to be done manually before running inference on images, or whether the inference SDK handles it. For example, if my model config center-crops images to 224x224, is that something I now have to do to the images myself before inference?
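To make the question concrete, this is roughly what I think the manual route would look like with onnxruntime (a sketch only; the 256/224 sizes, the normalization values, and the file names are assumptions from a typical classification config, not taken from my actual model):

```python
import cv2
import numpy as np
import onnxruntime as ort


def preprocess(path, resize_short=256, crop=224):
    img = cv2.imread(path)                       # BGR, HWC, uint8
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    h, w = img.shape[:2]
    # Resize so the shorter side matches resize_short
    scale = resize_short / min(h, w)
    img = cv2.resize(img, (int(round(w * scale)), int(round(h * scale))))
    # Center crop to crop x crop
    h, w = img.shape[:2]
    top, left = (h - crop) // 2, (w - crop) // 2
    img = img[top:top + crop, left:left + crop]
    # Normalize with ImageNet-style mean/std (placeholder values)
    img = img.astype(np.float32)
    mean = np.array([123.675, 116.28, 103.53], dtype=np.float32)
    std = np.array([58.395, 57.12, 57.375], dtype=np.float32)
    img = (img - mean) / std
    return img.transpose(2, 0, 1)[None]          # NCHW, shape (1, 3, 224, 224)


sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
outputs = sess.run(None, {input_name: preprocess("test.jpg")})
print(outputs[0].shape)
```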
