{"guide": {"name": "gradio-lite-and-transformers-js", "category": "gradio-clients-and-lite", "pretty_category": "Gradio Clients And Lite", "guide_index": 6, "absolute_index": 53, "pretty_name": "Gradio Lite And Transformers Js", "content": "# Building Serverless Machine Learning Apps with Gradio-Lite and Transformers.js\n\n\n\nGradio and [Transformers](https://huggingface.co/docs/transformers/index) are a powerful combination for building machine learning apps with a web interface. Both libraries have serverless versions that can run entirely in the browser: [Gradio-Lite](./gradio-lite) and [Transformers.js](https://huggingface.co/docs/transformers.js/index).\nIn this document, we will show how to create a serverless machine learning application using Gradio-Lite and Transformers.js.\nYou just write Python code within a static HTML file and host it without setting up a server-side Python runtime.\n\n\n## Libraries Used\n\n### Gradio-Lite\n\nGradio-Lite is the serverless version of Gradio, allowing you to build serverless web UI applications by embedding Python code within HTML. For a detailed introduction to Gradio-Lite itself, please read [this Guide](./gradio-lite).\n\n### Transformers.js and Transformers.js.py\n\nTransformers.js is the JavaScript version of the Transformers library that allows you to run machine learning models entirely in the browser.\nSince Transformers.js is a JavaScript library, it cannot be directly used from the Python code of Gradio-Lite applications. To address this, we use a wrapper library called [Transformers.js.py](https://github.com/whitphx/transformers.js.py).\nThe name Transformers.js.py may sound unusual, but it represents the necessary technology stack for using Transformers.js from Python code within a browser environment. 
The regular Transformers library is not compatible with browser environments.\n\n## Sample Code\n\nHere's an example of how to use Gradio-Lite and Transformers.js together.\nPlease create an HTML file and paste the following code:\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\" crossorigin src=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js\"></script>\n\t\t<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css\" />\n\t</head>\n\t<body>\n\t\t<gradio-lite>\nimport gradio as gr\nfrom transformers_js_py import pipeline\n\npipe = await pipeline('sentiment-analysis')\n\ndemo = gr.Interface.from_pipeline(pipe)\n\ndemo.launch()\n\n\t\t\t<gradio-requirements>\ntransformers-js-py\n\t\t\t</gradio-requirements>\n\t\t</gradio-lite>\n\t</body>\n</html>\n```\n\nHere is a running example of the code above (after the app has loaded, you can disconnect your Internet connection and the app will still work since it's running entirely in your browser):\n\n<gradio-lite shared-worker>\nimport gradio as gr\nfrom transformers_js_py import pipeline\n<!-- --->\npipe = await pipeline('sentiment-analysis')\n<!-- --->\ndemo = gr.Interface.from_pipeline(pipe)\n<!-- --->\ndemo.launch()\n<gradio-requirements>\ntransformers-js-py\n</gradio-requirements>\n</gradio-lite>\n\nAnd you can open your HTML file in a browser to see the Gradio app running!\n\nThe Python code inside the `<gradio-lite>` tag is the Gradio application code. For more details on this part, please refer to [this article](./gradio-lite).\nThe `<gradio-requirements>` tag is used to specify packages to be installed in addition to Gradio-Lite and its dependencies. 
In this case, we are using Transformers.js.py (`transformers-js-py`), so it is specified here.\n\nLet's break down the code:\n\n`pipe = await pipeline('sentiment-analysis')` creates a Transformers.js pipeline.\nIn this example, we create a sentiment analysis pipeline.\nFor more information on the available pipeline types and usage, please refer to the [Transformers.js documentation](https://huggingface.co/docs/transformers.js/index).\n\n`demo = gr.Interface.from_pipeline(pipe)` creates a Gradio app instance. By passing the Transformers.js.py pipeline to `gr.Interface.from_pipeline()`, we can create an interface that utilizes that pipeline with predefined input and output components.\n\nFinally, `demo.launch()` launches the created app.\n\n## Customizing the Model or Pipeline\n\nYou can modify the line `pipe = await pipeline('sentiment-analysis')` in the sample above to try different models or tasks.\n\nFor example, if you change it to `pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment')`, you can test the same sentiment analysis task but with a different model. The second argument of the `pipeline` function specifies the model name.\nIf it is not specified, as in the first example, the default model is used. 
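\n\nAs a rough sketch of what such a pipeline does when called directly (the input string and the exact label below are illustrative, not from this guide), you can also invoke it yourself inside the app's Python code and await the result:\n\n```python\n# This runs inside a <gradio-lite> app (Pyodide), not in a regular Python runtime.\npipe = await pipeline('sentiment-analysis')\nresult = await pipe('Gradio-Lite is great!')\n# `result` is a list of dicts with `label` and `score` keys,\n# e.g. something like [{'label': 'POSITIVE', 'score': 0.99}]\n```\n\nNote that both creating the pipeline and calling it must be awaited, since Transformers.js.py wraps asynchronous JavaScript APIs.\n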
For more details on these specs, refer to the [Transformers.js documentation](https://huggingface.co/docs/transformers.js/index).\n\n<gradio-lite shared-worker>\nimport gradio as gr\nfrom transformers_js_py import pipeline\n<!-- --->\npipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment')\n<!-- --->\ndemo = gr.Interface.from_pipeline(pipe)\n<!-- --->\ndemo.launch()\n<gradio-requirements>\ntransformers-js-py\n</gradio-requirements>\n</gradio-lite>\n\nAs another example, changing it to `pipe = await pipeline('image-classification')` creates a pipeline for image classification instead of sentiment analysis.\nIn this case, the interface created with `demo = gr.Interface.from_pipeline(pipe)` will have a UI for uploading an image and displaying the classification result. The `gr.Interface.from_pipeline` function automatically creates an appropriate UI based on the type of pipeline.\n\n<gradio-lite shared-worker>\nimport gradio as gr\nfrom transformers_js_py import pipeline\n<!-- --->\npipe = await pipeline('image-classification')\n<!-- --->\ndemo = gr.Interface.from_pipeline(pipe)\n<!-- --->\ndemo.launch()\n<gradio-requirements>\ntransformers-js-py\n</gradio-requirements>\n</gradio-lite>\n\n<br>\n\n**Note**: If you use an audio pipeline, such as `automatic-speech-recognition`, you will need to put `transformers-js-py[audio]` in your `<gradio-requirements>` as there are additional requirements needed to process audio files.\n\n## Customizing the UI\n\nInstead of using `gr.Interface.from_pipeline()`, you can define the user interface using Gradio's regular API.\nHere's an example where the Python code inside the `<gradio-lite>` tag has been modified from the previous sample:\n\n```html\n<html>\n\t<head>\n\t\t<script type=\"module\" crossorigin src=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js\"></script>\n\t\t<link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css\" 
/>\n\t</head>\n\t<body>\n\t\t<gradio-lite>\nimport gradio as gr\nfrom transformers_js_py import pipeline\n\npipe = await pipeline('sentiment-analysis')\n\nasync def fn(text):\n\tresult = await pipe(text)\n\treturn result\n\ndemo = gr.Interface(\n\tfn=fn,\n\tinputs=gr.Textbox(),\n\toutputs=gr.JSON(),\n)\n\ndemo.launch()\n\n\t\t\t<gradio-requirements>\ntransformers-js-py\n\t\t\t</gradio-requirements>\n\t\t</gradio-lite>\n\t</body>\n</html>\n```\n\nIn this example, we modified the code to construct the Gradio user interface manually so that we could output the result as JSON.\n\n<gradio-lite shared-worker>\nimport gradio as gr\nfrom transformers_js_py import pipeline\n<!-- --->\npipe = await pipeline('sentiment-analysis')\n<!-- --->\nasync def fn(text):\n\tresult = await pipe(text)\n\treturn result\n<!-- --->\ndemo = gr.Interface(\n\tfn=fn,\n\tinputs=gr.Textbox(),\n\toutputs=gr.JSON(),\n)\n<!-- --->\ndemo.launch()\n<gradio-requirements>\ntransformers-js-py\n</gradio-requirements>\n</gradio-lite>\n\n## Conclusion\n\nBy combining Gradio-Lite and Transformers.js (and Transformers.js.py), you can create serverless machine learning applications that run entirely in the browser.\n\nGradio-Lite provides a convenient method to create an interface for a given Transformers.js pipeline, `gr.Interface.from_pipeline()`.\nThis method automatically constructs the interface based on the pipeline's task type.\n\nAlternatively, you can define the interface manually using Gradio's regular API, as shown in the second example.\n\nBy using these libraries, you can build and deploy machine learning applications without the need for server-side Python setup or external dependencies.\n", "tags": ["SERVERLESS", "BROWSER", "PYODIDE", "TRANSFORMERS"], "spaces": [], "url": "/guides/gradio-lite-and-transformers-js/", "contributor": null}} |