Streaming support / batch inference?

#2
by brainofchild

For production use, both of these matter. I was curious how the model handles output streaming and batch inference.

Both are possible but haven't been added to the codebase yet. We will definitely ship streaming, as there is already a PR out for it.
