How to achieve inference speed?
#5 · opened by Light111
I am trying to run inference for the same model on an M1, where it takes about 15 seconds, and on an NVIDIA T4, where it takes about 6-7 seconds. To reach the mentioned 15 FPS, is there any need for model optimisations such as pruning, or will it achieve that speed without any optimisation?
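
For reference, a minimal sketch of what magnitude pruning could look like, assuming the model is a PyTorch `torch.nn.Module` (the layer types and pruning amount here are illustrative, not taken from this Space):

```python
# Sketch: unstructured L1 magnitude pruning with PyTorch's built-in utilities.
# This zeroes weights but does not by itself shrink compute; speedups usually
# need sparse-aware kernels or follow-up export/compilation.
import torch
import torch.nn.utils.prune as prune

def prune_conv_and_linear(model: torch.nn.Module, amount: float = 0.3) -> torch.nn.Module:
    """Zero out the smallest `amount` fraction of weights in conv/linear layers."""
    for module in model.modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # make the pruning permanent
    return model
```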