{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "5a7a0ca2",
   "metadata": {},
   "source": [
    "# ASR Example - Basic Transcription Interface\n",
    "\n",
    "Automatic Speech Recognition (ASR) with aisuite's unified API, supporting the OpenAI, Deepgram, and Google providers.\n",
    "\n",
    "This example demonstrates basic transcription using OpenAI-format kwargs that are auto-mapped to each provider."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d72f8c18",
   "metadata": {},
   "outputs": [],
   "source": [
    "import aisuite as ai\n",
    "from aisuite.framework.message import TranscriptionResult\n",
    "from dotenv import load_dotenv, find_dotenv\n",
    "import os\n",
    "\n",
    "load_dotenv(find_dotenv())\n",
    "\n",
    "\n",
    "# Set up client with provider configurations\n",
    "client = ai.Client({\n",
    "    \"openai\": {\"api_key\": os.getenv(\"OPENAI_API_KEY\")},\n",
    "    \"deepgram\": {\"api_key\": os.getenv(\"DEEPGRAM_API_KEY\")},\n",
    "    \"google\": {\n",
    "        \"project_id\": os.getenv(\"GOOGLE_PROJECT_ID\"),\n",
    "        \"region\": os.getenv(\"GOOGLE_REGION\"),\n",
    "        \"application_credentials\": os.getenv(\"GOOGLE_APPLICATION_CREDENTIALS\"),\n",
    "    },\n",
    "})\n",
    "\n",
    "audio_file = \"../aiplayground/speech.mp3\"  # Replace with your audio file path"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8ed7f8de",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Basic transcription using kwargs (OpenAI format)\n",
    "print(\"=== Basic Transcription ===\")\n",
    "\n",
    "try:\n",
    "    result = client.audio.transcriptions.create(\n",
    "        model=\"openai:whisper-1\",\n",
    "        file=audio_file,\n",
    "        language=\"en\"\n",
    "    )\n",
    "    if isinstance(result, TranscriptionResult):\n",
    "        print(f\"OpenAI: {result.text}\")\n",
    "    else:\n",
    "        print(\"OpenAI: Received a streaming result (not expected for a basic call)\")\n",
    "except Exception as e:\n",
    "    print(f\"OpenAI error: {e}\")\n",
    "\n",
    "print(\"--------------------------------\")\n",
    "\n",
    "try:\n",
    "    # Same kwargs work with other providers (auto-mapped)\n",
    "    result = client.audio.transcriptions.create(\n",
    "        model=\"deepgram:nova-2\",\n",
    "        file=audio_file,\n",
    "        language=\"en\",\n",
    "        punctuate=True\n",
    "    )\n",
    "    if isinstance(result, TranscriptionResult):\n",
    "        print(f\"Deepgram: {result.text}\")\n",
    "    else:\n",
    "        print(\"Deepgram: Received a streaming result (not expected for a basic call)\")\n",
    "except Exception as e:\n",
    "    print(f\"Deepgram error: {e}\")"
   ]
  },
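  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9f4b2e11",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Google transcription: a sketch, assuming the Google provider accepts the\n",
    "# same OpenAI-format kwargs as the calls above. The model name\n",
    "# \"google:chirp\" is illustrative; substitute a model your Google Cloud\n",
    "# project actually has access to.\n",
    "print(\"--------------------------------\")\n",
    "\n",
    "try:\n",
    "    result = client.audio.transcriptions.create(\n",
    "        model=\"google:chirp\",\n",
    "        file=audio_file,\n",
    "        language=\"en\"\n",
    "    )\n",
    "    if isinstance(result, TranscriptionResult):\n",
    "        print(f\"Google: {result.text}\")\n",
    "    else:\n",
    "        print(\"Google: Received a streaming result (not expected for a basic call)\")\n",
    "except Exception as e:\n",
    "    print(f\"Google error: {e}\")"
   ]
  },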
  {
   "cell_type": "markdown",
   "id": "32ed3f0f",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "This notebook demonstrates the unified ASR interface with:\n",
    "\n",
    "1. **Basic transcription**: Using OpenAI-format kwargs that work across providers\n",
    "2. **Provider compatibility**: Same interface works with OpenAI, Deepgram, and Google\n",
    "\n",
    "### Environment Setup Required:\n",
    "\n",
    "- **OpenAI**: `OPENAI_API_KEY`\n",
    "- **Deepgram**: `DEEPGRAM_API_KEY`  \n",
    "- **Google**: `GOOGLE_PROJECT_ID`, `GOOGLE_REGION`, `GOOGLE_APPLICATION_CREDENTIALS`\n",
    "\n",
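    "A minimal `.env` file for this notebook might look like the following (values are placeholders, not real credentials):\n",
    "\n",
    "```\n",
    "OPENAI_API_KEY=sk-...\n",
    "DEEPGRAM_API_KEY=...\n",
    "GOOGLE_PROJECT_ID=my-project\n",
    "GOOGLE_REGION=us-central1\n",
    "GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json\n",
    "```\n",
    "\n",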
    "### Key Benefits:\n",
    "\n",
    "- Write once, run on any provider\n",
    "- Consistent error handling and response format\n",
    "- Easy provider switching for testing and optimization"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
