{ "queries": { "243aa579-a9ef-45d4-9b31-a5028a4fc982": "What are the key features and capabilities of Meta's Llama 2 and Code Llama in the field of language model development?", "7b62af16-2525-4ec8-be02-0a24292997bb": "How does Llama 2-Chat compare to proprietary, closed-source chat models in terms of safety and utility metrics?", "f31f38df-d8c4-404e-a7e1-4560c6a01e9b": "What is the significance of Meta's transparent delineation of fine-tuning methodologies for LLMs and its impact on community-driven advancements in AI development?", "9f483f37-bc72-4155-b560-97ac6e67e31d": "How does Code Llama outperform other publicly available LLMs (except GPT-4) in code-related tasks?", "e017ef5c-ea94-4eb3-8aa8-7dbd435a5a2a": "What are the limitations of Llama 2, Llama 2-Chat, and Code Llama compared to GPT-4 in the development of language models?", "5d74b1f2-742e-4831-82ac-c4e9a3ef670f": "What are the main differences between traditional language models like OpenAI's ChatGPT and multimodal models like GPT-4 in terms of their ability to process different types of data?", "9f5d663d-1ee5-4a6f-902c-da3331abe1c7": "What is the primary focus of multimodal AI models like GPT-4 and how do they differ from LLMs and Llama variants?", "4e9d8820-1ef6-4f15-a8f9-2e382770def0": "How do multimodal AI models contribute to a more comprehensive understanding of our world and what potential applications can they have?", "5fe168b8-1c13-4e50-8252-5defb5eaff49": "What is the significance of the shift from LLM connections to Vector Databases (Vector DB) in the AI landscape?", "c0300fcf-46d5-4d18-a533-6348053e5a18": "How do Vector DBs differ from traditional databases in terms of data storage and retrieval mechanisms?", "cea8b601-d18c-44fa-8ee2-2839b35563ab": "Can you explain the advantages of using tools like Weaviate and Milvus in handling massive datasets and performing similarity searches?", "5305a67e-e2be-48e0-85d9-b496cc54275e": "What is the primary focus of LLM connections like the LlamaIndex?", "bf95258d-987a-4455-9b48-14b2bcd60289": "How do Vector DBs differ from traditional databases in terms of data storage?", "ae6f37fe-8a44-44e7-afec-fc989f72825c": "Name two tools that are designed to handle massive datasets using Vector DBs.", "fa7ccac1-22ab-4d8c-a826-c14e1c1de935": "What does the rise of Vector DBs represent in the field of AI?", "858fbc09-86ee-471a-b5de-85b7336646d9": "How are LLM agents like AutoGPT and AgentGPT used in tasks ranging from content generation to data analysis?", "c1fe6048-1afd-413f-b66f-1e435a15c0fb": "What is the concept of LLM as an OS and how does it differ from LLM agents?", "ae388eb1-a321-4944-aa42-8bad2efd20bc": "Describe the potential benefits of using LLM as an OS in terms of device and application responsiveness.", "fab93cba-489c-4498-b172-a727be03c4ae": "What does the move towards LLM as an OS signify in the field of AI?", "2033b91f-132b-4048-bac1-b6fa17b4086b": "How does the transition from LLM agents to using LLMs as an OS represent a paradigm shift in AI utilization?", "cfa2de30-1199-491b-a4da-72f1c5b0b430": "What is the main goal of combining LLMs and Vector DBs in the AI-driven future?", "c34da090-a3ad-4016-95e0-8869a89467d9": "How does the concept of LLM as an OS revolutionize our digital interactions?", "1bf4fd7d-3405-459c-b423-74a7ffa68aa2": "What are the primary methods of fine-tuning LLMs?", "cc0e6ff8-f27d-4808-9053-d9202de32bf7": "How do LLM techniques like In-context learning, Few-shot learning, and Zero-shot learning enhance the model's adaptability?", 
"1b11a1e8-2254-4a05-8a41-73318b88a579": "What is the difference between fine-tuning LLMs and using plugins?", "635d0716-5b83-4684-b4bc-df7e7f0dc863": "How does the shift from fine-tuning to plugins represent a move from static optimization to dynamic adaptability in LLMs?", "50d051a8-bb38-49a5-ba53-2796c608ccca": "How are LLMs evolving from fine-tuning to plugins, and what benefits does this transition offer in terms of AI applications?", "2e43d520-1b5b-4a4a-92bc-924a17a3185a": "What are some key shifts in the AI domain that have been observed with the emergence of LLMs, and how do LLMs contribute to these shifts?", "d3ed15a1-3ec6-47a4-814a-24bdf82d8a08": "Can you explain the concept of LLMs as platforms and how they can be integrated with various tools to enhance their capabilities?", "da5a76f3-98a2-4d45-b5cd-2757594f0fba": "What are some pioneering efforts in the LLM revolution, and how are OpenAI's GPT-4 and Meta's LLaMA2 leading the way?", "81ab10f4-7441-40ba-9b06-3363ce49e8cb": "How have LLMs expanded from text-based models to include multimodal models, and what advantages does this expansion bring to the field of AI?", "ad3a7531-38de-4e08-a6c7-22f9cab8bf7b": "What is the significance of the transition from LLM connections to Vector Databases in terms of efficient high-dimensional storage?", "d5e84653-31e8-496e-8923-8bdb4fd0ea12": "How are LLM agents transforming into LLMs as Operating Systems, and what are the goals of this transformation in terms of device and application development?", "829f2429-4553-4ab5-b9d2-eecbcb4b6718": "How do dynamic plugins replace traditional fine-tuning processes in LLMs, and what impact does this have on the adaptability and integration of LLMs with other tools?", "7636d48b-9d7f-44b1-ad64-265910a33d21": "Can you provide an overview of the research papers and resources mentioned in the context information, and how they contribute to the understanding of LLMs and their future?", "6d541245-a5a3-4774-a958-f35d95ea0e4c": "How do GPT-4 and LLaMA2 contribute to creating an AI future that is more integrated, responsive, and attuned to human interactions?", "1695952a-a7fe-4b29-8026-9e231ef9dabd": "How can the data for this Q&A bot application be obtained for different stocks?", "e6b94f32-80b5-4fff-8a33-4ca4f1767260": "What types of questions can be answered using the stock market analyst recommendations data?", "cb142f7f-43a3-4db9-87a3-b094bd7cf813": "Can you provide an example of a question that can be answered using the analyst recommendations data?", "27652216-84df-47a4-aa16-c2a629b608be": "How can the information for each stock be conveniently updated and maintained in this application?", "ada00d31-b42a-451c-8199-742a8f749528": "What are some key aspects of stock analysis covered by the analyst recommendations data?", "dc0cc2d8-95e5-46c9-a04f-281b4b2cf377": "How can one access recent analyst reports for a specific stock like Microsoft (MSFT)?", "370edbff-3e64-4fdc-91ea-f3b618d76bb4": "What is the significance of the consensus rating for a stock like Microsoft (MSFT)?", "8d263218-833a-425f-985f-20d8171721f6": "Has the consensus rating for Microsoft (MSFT) undergone any recent changes?", "aad10d48-5471-4650-9592-330033df2332": "When was the most recent update made to the consensus rating for Microsoft (MSFT)?", "ef7528a9-cd3f-4cd5-ada3-cc1d1ddf46b3": "What is the recommended action (buy, sell, or hold) for the stock of Microsoft (MSFT) according to the current recommendation?", "7478f0a0-60fe-4e70-a19a-83500ea963fb": "How does the web application work and what 
is its purpose?", "fadfe6f6-8fd2-4bf6-9e01-1ed2a5c9d39d": "What is LangChain and how does it contribute to the development of the application?", "de63ef8f-c982-4c04-9b07-f6589beecf0c": "Which Python library is used to build the web application and what are its key features?", "e5609b94-374c-4046-af05-67551084fb35": "How does the application gather insights from multiple documents to provide accurate responses?", "3bad1bda-b270-41fa-9cb8-8d54116169d6": "Can the application compare stocks and provide insights for making investment decisions?", "a79e9da2-3c2f-4f77-bd2c-49119635dc17": "What is the purpose of LangChain in building sophisticated applications driven by large language models (LLMs)?", "a084df16-10ef-4732-bcf6-39d151a1db33": "How does LangChain facilitate the integration of various components in application development?", "27ca7f7d-0b1c-42e7-a339-3419aeae695d": "What is the role of Streamlit in the LangChain framework?", "222b57fa-ff23-4c13-ad98-5d7a3406eb8a": "How does Streamlit simplify the process of developing web applications?", "d69a415b-da07-4f18-95fd-e6f78e6b0bed": "What are the different data ingestion methods supported by LangChain?", "8d96d6b6-bd8d-46f3-9649-26ed22838fdb": "How can LangChain be used to load data from different formats such as text, images, PDFs, and Word documents?", "5c79c2a7-4daa-4abc-b940-9a98b4f15b78": "What is the advantage of using the alternative approach for loading documents in LangChain?", "f27b7dd1-9bc7-4d47-b1fb-9f68e4c14368": "How can individual documents be specified for ingestion using LangChain?", "f2198fcc-68a2-4624-bc68-0a8317c4d15c": "How can all documents within a specific folder be loaded and processed using LangChain?", "3b6c58c6-d9b1-4654-a335-78433567b156": "What is the purpose of the vector database in LangChain's data ingestion process?", "88ee5364-50c1-4b33-87b8-0536b1213857": "How can you specify the exact documents to load for ingestion?", "59b8e4f0-3700-4f9f-96ff-d3c8e8148380": "What is the alternative approach for loading pertinent documents from a designated folder?", "4a07ca52-5a6e-42ba-a32c-d5d1dd583050": "How can the centralized list of file locations help in seamless data retrieval and analysis?", "d0f0aa8d-24ea-46da-9365-98416340c3d9": "Why is it more cost-effective to only send document extracts that reference a specific topic to the large language model?", "52983063-0c02-4649-8f9d-e18c772b69ad": "What is the purpose of the VectorstoreIndexCreator class in the LangChain framework?", "37a9982d-e911-4f4d-ada8-7012195f5c9f": "How does the similarity search in the vector store help in finding relevant document chunks for a given question?", "7fc08e64-4956-429f-b739-27abeff5c560": "What is the role of Streamlit in setting up the web application for the document processing system?", "a735c483-cdd5-43ba-9487-8100a4f3e7c7": "How does the application process user input and generate a corresponding response?", "c3d252fe-16f5-4324-bf9b-eb8b51e9dfe2": "Why is it important for the questions to be diverse in nature across the document for the upcoming quiz/examination?", "6ce3662b-8fe6-41c0-b7d8-e28c6196c753": "How does the vector database store the documents in a numeric format (embeddings)?", "0e44d863-1cd0-4fda-afcf-251a224d14c7": "How does the application process user input and generate a response?", "b7721e66-f104-41aa-8b73-65aca2546180": "What is the purpose of calling index.query() with specified parameters?", "3b0a6880-9de2-48ea-a1d7-adf3df8abb6d": "How does the LLM analyze the question and generate a response?", 
"f822adb7-a33f-4173-9e9f-9b87c4456c59": "What is the range of questions that can be asked in the chat system?", "372be9f6-ba2b-4a18-9745-5e213fae2b1a": "How does the Q&A bot using OpenAI and LangChain unlock knowledge within private document repositories?", "992b5f16-1939-45af-b144-fbb6b21be149": "How can the web-based Q&A bot empower users from various industries?", "723ce5ae-bbdb-4129-b5f4-62004a6f3edb": "What are some examples of questions that can be asked about stock analysis using the chat system?", "200cbe5d-818d-49a7-a27c-103b80cf44a4": "How does the Q&A bot accommodate comparative queries across multiple stocks?", "de7847a0-202e-4965-ad7d-15a652737d46": "What is the potential benefit of using the Q&A bot for accessing and analyzing critical information?", "6a517f18-34b0-4530-94c4-c1f598ed9886": "How does the Q&A bot enhance the process of stock analysis?", "82108654-4c9b-42fe-995e-e3fe16ed49d2": "How does the Q&A bot using OpenAI and LangChain extend beyond individual stock inquiries?", "8bf2e25d-518e-467d-99ea-84bf48746cda": "What traditional retrieval techniques are commonly used by E-commerce platforms for product search?", "bd844d10-9ed8-41cf-9f20-8458e63fa085": "How do sparse methods like TFIDF and BM25 affect the relevance of search results?", "02a18800-d21d-430a-865d-9136bd7b31ad": "Give an example of a query that may result in \"No exact matches found\" on Ebay.", "ba25f6e3-924f-4c6d-b32f-789dfd007750": "How can LLMs be leveraged to improve search relevance in E-commerce platforms?", "c0b842ef-3b6d-4948-8c4a-23ff451ca20e": "Explain the process of using LLM embedding to address semantic complexity in product retrieval.", "664ecee0-85b6-4358-8fad-54a91946d762": "What are some potential challenges of sending the enhanced query directly to a keyword-based search engine?", "c6eab763-e0fd-4752-a35f-3cfc224235da": "What are the advantages of using Hugging Face and LangChain for implementing the LLM-based solution?", "7db6d66c-4bd2-49aa-a58e-7f236a938523": "How does comparing the similarity between query embedding and product embeddings help in generating search results?", "1ceb4fc3-5528-498c-8c72-f6e380b5592f": "What are some alternative techniques that can be used to implement the idea of enhancing queries using LLMs?", "2d69aedd-98e0-421f-b9d7-b963a54357c6": "What is the significance of underestimating certain use cases in driving opportunities and advancements?", "5c0eaeb7-e4f0-480a-a4e5-14ba8af5b57e": "What is the purpose of using LLM embedding in the given solution?", "c6365c48-cbb7-42ff-b13f-16d729d77db0": "How is the product retrieval done in the solution?", "e663bb1a-f846-46a2-86f0-15276012daf3": "What techniques are used to implement the idea of LLM embedding?", "cad68d61-b813-439f-ac9a-7bb70d919332": "What is the role of Hugging Face and LangChain in the implementation?", "7d2e657c-6cc0-4bdd-b43d-eb76fe1ccef5": "How are the product embeddings created and stored in the solution?", "76279a31-7688-4151-8550-f3f990a5bb02": "What dataset is used as a mockup for the E-commerce product inventory?", "a6ae346e-322e-45c3-a3a9-bb8345fb347b": "How are the top-10 products retrieved in the solution?", "eaf81e0f-6099-4dfd-93cf-094f5c425fc6": "What is the purpose of showcasing the effectiveness of the approach?", "4c43b674-a3ee-4b62-b34d-797d3a4a349c": "How does Llama 2 generate the enhanced query for a given raw query?", "d7d14c74-ebac-4d86-a422-63df5407576d": "What is the significance of using the sentence transformer model in the product retrieval process?", 
"8b23a9b3-42cd-41cd-ba44-b40b26b9c8bf": "How does the LLM enhancement approach improve the product search of E-commerce platforms?", "87c22e42-d91c-48ab-88f5-f01b2cd15daa": "Compare the results of the LLM enhancement with the original Ebay search results in terms of product-level granularity.", "6483b47c-5b51-42e5-ab80-17584162971f": "What are the potential future explorations mentioned for enhancing the product search using LLMs?", "495023b1-1e00-46fb-bc0a-911902eca927": "Explain the concept of similarity match in the embedding space and its significance in product retrieval.", "6aecc26b-9a6c-4540-8808-5997de3f86f8": "Discuss the limitations of comparing the results from the product inventory mockup with the real-world Ebay search engine.", "51b01eae-6086-4bdc-bad4-554c1ebe9117": "How do LLMs contribute to understanding searches better in E-commerce platforms?", "cef39516-a21b-485e-b108-40124f2f7723": "Describe the role of prompt engineering in generating queries for LLM-based product search.", "1335bf22-d2d7-405f-b553-58ebcaa36096": "What is the overall effectiveness of retrieval in the embedding space in terms of relevance and diversity?", "5999d29e-673e-406a-b1d8-98761fc61268": "How can product embeddings with enriched attributes enhance the LLM-based product search?", "b58398f7-67fe-46b3-8324-1efffdfd7ec9": "How can online latency optimization be beneficial for LLM query enhancement in E-commerce platforms?", "82b03482-d056-4c54-8995-4e31c04d35dc": "How are popular language models like GPT3/3.4/4 and LLAMA2 primarily trained?", "745ccacf-076d-418e-92c5-7c53af25c9ca": "What are some examples of massive datasets used to train these models?", "3f977785-1bca-46a9-83e1-2ddde707a6c5": "What are the limitations of models trained on internet data when it comes to domain-specific information?", "40ddf905-7767-4927-929a-27f7e824a019": "Can you provide an example of a use case where fine-tuning with custom data would be beneficial?", "7f490f61-29f2-4977-957f-235a379c59d9": "What is the purpose of closed-book QA and how does it relate to the model's knowledge base?", "73faec18-4634-4c68-a3c7-a91b0553111f": "Why is it important for a model to have domain \"understanding\" and higher-level reasoning abilities?", "ef30349a-ef3b-4d61-a200-c7ee331419a1": "What challenges are faced when trying to make a model 'understand' data instead of simply parroting it out?", "fdc63905-4d33-429f-8305-317bbe36af47": "How do the results achieved by the authors of the 2021 paper differ from the current state of models and training?", "c6c95df2-503a-4309-934b-f25dfd5d134f": "Are there any tutorials or examples available online that demonstrate the concept of using language models as knowledge bases for closed-book QA?", "4217518e-39c9-44e1-b2b9-66d5deb5e736": "What is the significance of training models on custom issues and how can it be applied to reasoning on similar issues and possible solutions?", "ca67decb-ef89-4f62-b38e-1f920a467a5d": "How do the authors of the paper suggest preventing a language model from parroting out answers without understanding the data?", "b4d020a8-ebd3-492d-a2eb-8bd17dcdbe49": "What is the significance of the \"Recite\" step in the model's process, as mentioned by the authors?", "12ffa782-903e-4f43-98e4-da856b795a35": "In the context of training large language models, what is the importance of model size according to the information provided?", "52364c26-dcf1-4fb8-81a0-08112e840c5c": "Can smaller language models, such as GPT2 and Bloom, be trained on a laptop GPU with limited GPU RAM? 
Explain.", "95b77bf8-16fa-4eab-ba2d-7355c3ada900": "How do the authors describe the feasibility of using models like GPT3/4, LLAMA2, and similar models for closed-book QA tasks?", "0a6d9bc9-e816-4cc8-9e5c-6d4bdd36ac4c": "What are some examples of smaller language models that can be trained on a laptop GPU with 8 to 15 GB GPU RAM?", "945f8a0b-2e6c-44de-ace2-8817ac49abfc": "What are the two parts involved in reducing the size of large language models for fine-tuning?", "fb510a04-ac4e-4baa-95dc-5911baadb5cf": "Why is it difficult and expensive to fine-tune a large language model with the available cloud-based solutions?", "f24f5fac-f385-4505-b51f-847c015c8487": "How does causal training differ from masked LM-based training in language models?", "6476cf5c-9a3a-4387-be00-b9ae9a5204d6": "What is the purpose of quantisation and parameter efficient tuning in the context of fine-tuning large language models?", "589aa372-a0c7-4427-a219-287ce0130ffa": "Can a relatively small model like the BigScience Bloom 3 Billion model be trained with A100 on ColabPro with 40GB GPU RAM? Why or why not?", "65576667-0111-4e63-b733-264756cbd0a3": "What is the significance of distributing training across multiple GPUs when training large language models?", "6377553e-7c3b-42a0-9608-8c16544b428e": "How does DeepSpeed library contribute to the distributed training of language models?", "8e828a96-ce28-43a4-a038-4349d7e2e1e9": "What is the main challenge in overfitting a small test dataset on decoder and encoder-decoder models?", "edf530de-605d-4d53-a50f-9176f44974a8": "How does the understanding observed in ChatGPT relate to training on smaller language models with less than one billion parameters?", "194fb2ae-7416-40b6-a0ab-2b475c6c3f2a": "How does the BigScience Bloom 3 Billion model compare to the A100 on ColabPro with 40GB GPU RAM in terms of training capability?", "15bb043e-2be2-4747-9c46-28d3dba5741d": "What are the two parts involved in reducing the size of models for fine-tuning?", "0354aa3e-5107-487e-a75d-7645f024b6a7": "Can a laptop with a recent GPU run the 7 billion Lamma2 pre-trained model?", "17f92e8b-6195-4658-80e7-02820779dcde": "What is the benefit of compressing knowledge in an NLU model running on a local laptop?", "b45c4675-4af9-4115-abcf-acb115947505": "How could models like the 7 billion Lamma2 model be utilized in small servers or cars?", "684de7b9-38bd-4728-9141-995cc393e4a4": "What is the purpose of unsupervised training and fine-tuning with QLoRa?", "82b8e771-ac90-4882-8b30-6bd28e562f58": "How does QLoRA affect the training process of a large 7B Lamma2 model?", "eb0ecc73-3ab5-4f19-b692-607b77dca2df": "What is the concept of instruction tuning in QLoRa fine-tuning?", "b27c58db-b096-450b-89c4-fbec2538b865": "How does QLoRa contribute to the training of the LM model?", "79d11385-90a3-47c2-8a83-decaeda5f910": "What is the concept of instruction tuning in fine-tuning language models?", "ba74b7f4-306c-45dd-afa3-59cf545d8813": "Why is finding or creating a good quality data set important for training language models?", "75eb288b-35b1-4fd1-8010-2697e4123b05": "How does the Llama2 model differ from previous models like GPT 3/4?", "ba7a92cb-172f-4e01-bd01-ed6e721b3b50": "What is the purpose of using quantization in running the Llama2 model on a large data set?", "5f99854b-b456-423c-9a3b-cb42adc2dd16": "How does the Self-instruct concept help in converting the available training data set to a QA format?", "7612678c-2b10-4ea0-ad20-175672320b2c": "Can you explain the concept of NLP tasks being described via 
natural language instructions?", "3d354e56-1a6b-4f06-bba5-9b419901cade": "What is the significance of the 137B parameters in the pre-trained language model used for instruction tuning?", "53cca198-2cd5-4ed2-aee3-43006b221e17": "How does the QLORA paper relate to the training data set and the Guanaco model?", "93281044-aec3-4b9a-87d2-e7a25a7e53ab": "Why was using older NLP methods like NER not effective in converting the training data set to a QA format?", "f45dc043-52ea-4aa3-9aec-867ce1c50d8b": "How was the previous best-performing model used for creating a QA dataset via ChatGPT or its API?", "411e998b-0ae2-4926-915a-5ab79bd375e0": "What is the purpose of using the Self-instruct concept in Llama2?", "aeb8aea1-e968-4ae0-b8da-915959d6ffe5": "How does the 7 billion model of Llama2 make it feasible to run on a large dataset?", "a7e8b7ad-05b5-49ad-b389-aeabc5e6f021": "What approach was taken to reduce hallucination in the generated dataset?", "7f7024bc-56e3-443d-8f9f-0a5d63b18b96": "What improvements need to be experimented with in order to create a proper QA dataset?", "c6d6e385-4783-4ba7-b83f-88b492a73160": "How was training with new data conducted in order to test the model's effectiveness?", "1253bfe1-ce02-4534-b9fd-8b8f7d68c8dc": "What problems still remain in terms of hallucinations in the model's output?", "5252f766-a345-4ce8-a366-92bfa97a8804": "How can training with new data improve the performance of the model?", "6faa562b-ae7b-421c-b5d3-b25e97c49afc": "What are some of the challenges faced in fine-tuning prompts for effective output?", "d19c6d03-433e-415a-83a4-1d73147c6fa5": "Why is it important to fine-tune models on custom domain data?", "58a0571f-04b1-4204-9792-3de3a9971ade": "How can LLMs help address information retrieval use cases based on Vector space embeddings?", "5383aa4d-7764-4541-a95a-d0fa40defe42": "Why is it necessary to experiment with different methods to create a proper QA dataset?", "0dcb9371-a21e-42a3-adef-284196543732": "What are some potential applications of fine-tuned models in higher-level use cases?", "ebcd0860-e5e0-44f2-9967-2a2471609f3b": "How can virtual assistants benefit online portals, banks, e-commerce, and customer support?", "52f15db9-fdd2-4ce1-bd1e-20e6b4470ec6": "Why is it difficult for the model to answer questions related to domain-specific data it has not seen before?", "b262a6e2-0e71-4d1a-8b58-1545b5d8e50e": "What are some examples of massive datasets used to train popular models like GPT3/3.5/4 and LLAMA2?", "e0d6a930-3c72-45e2-8f31-eddccd9177af": "How can the model learn details and relations of a domain through Instruct tuning?", "a275aa45-490d-4ac5-ade9-2d74329403c2": "How can fine-tuning with custom data help address higher-level reasoning tasks in language models?", "ba6a49e9-ab6b-40cb-b33f-f6d5c53685c0": "What is the main limitation of using generative pre-trained language models as knowledge bases for closed-book QA?", "39248aff-56f4-435f-977e-f8cab4925067": "How can the \"Recite\" step help prevent language models from parroting out answers without understanding the data?", "cdc9f971-be96-4e91-af4e-33f1727a39fe": "What are some potential use cases for closed-book QA that rely on domain-specific data?", "324bae9a-0e13-4a32-aeb2-5facf8df73be": "How do online portals, banks, e-commerce platforms, and customer support benefit from adding virtual assistants powered by language models?", "ed37f5f8-0445-44bd-aa31-a32fe16232ea": "Why is it important for language models to have domain-specific data in order to answer questions accurately?", 
"7b9cdc1a-1aa7-4277-a0af-817d525079e5": "What are some challenges in making language models \"understand\" data and not just repeat it?", "d9c419e9-d183-40f8-a28f-5d5f35cd7dc5": "How do LLMs based on vector space embeddings address information retrieval use cases?", "9dad5d74-a662-439a-add2-48218cf16154": "What are some potential advantages of using GPT3/4, LLAMA2, and similar models for closed-book QA?", "b44fecbf-9b16-42c5-983c-0576ed4d40bb": "How do the size and number of models and training released impact the performance of language models in closed-book QA?", "b35f823d-bd13-492b-8bbf-e347fe5223a2": "How does the intermediate step called 'Recite' help prevent the model from parroting out answers without understanding the data?", "4bdf0a4e-8374-477b-9136-918d837f5f42": "What is the importance of model size when training large language models?", "ca9997c5-c45c-44ea-91a5-2acce2d16a4c": "Can smaller language models with less than 1 billion parameters be trained on a laptop GPU?", "addbfe80-1777-4470-a94c-18cd2d2fbf5c": "What types of models did the author try to overfit a small test dataset on?", "41b92d2b-097e-417d-a8bc-c20692bf7d10": "What is the purpose of causal training in language models?", "50b79b4c-f4f7-42d3-bf8c-d3988eee4bfd": "How does the understanding seen in ChatGPT relate to training on smaller models?", "bfcb7682-e7b1-4bf6-ba68-f07b18ec8a8f": "What are some challenges in training a large language model?", "794d14a7-ace6-4883-88ee-a191f2cf0ee8": "What are the two approaches mentioned for training the models in the context information?", "46724109-b187-497f-9afa-3218fe1a7266": "How does the size reduction of models enable fine-tuning on commodity GPUs?", "29b71cea-f96c-4587-a30f-7501101e105f": "What is the significance of having Tensor Cores in a laptop GPU for running the Lamma2 pre-trained model?", "075cdb61-c673-4b6e-abf7-b466a1aa0fdc": "Why is it difficult and expensive to fine-tune large models using cloud-based solutions?", "85f1b2ef-e467-4aa1-b07a-4b4d8aafe2c7": "Can a laptop with a recent GPU effectively run the 7 billion Lamma2 pre-trained model? 
Explain why or why not.", "cfe759fe-15d4-40b8-b5e3-909c2f884dce": "How does the use of Quantisation and Parameter Efficient Tuning allow a laptop with a recent GPU to run the 7 billion Llama2 pre-trained model?", "6494dac2-a279-4637-b2e8-2bf78b39e1cc": "What is the purpose of fine-tuning a large 7B Llama2 model using QLoRa and small training data?", "75fb6e7e-c7e0-4d9f-9d6c-7bd960bc36cf": "How does the concept of Instruction Tuning, as introduced in the FLAN paper, contribute to NLP tasks?", "4b1c7d83-5461-406a-b228-fcdfe96a4757": "What is the advantage of using QLoRa in training language models?", "83c6035b-9246-4393-92d6-7ce28d85244a": "Can you explain the difference in output between training the model with just the text as is and instruct fine-tuning with QLoRa?", "f3afacc0-cf22-443b-a5a1-1715a02c0166": "How does instruction tuning contribute to the training of fine-tuned language models?", "88dc5024-c0d5-45a2-92a5-80fe804e3215": "What is the intuition behind using natural language instructions to describe NLP tasks?", "d54df81f-f141-41c9-bcef-4a5a63ff3615": "Why is finding or creating a good quality data set for training a challenging problem?", "c80a1f4a-ab53-42f8-b86f-8618d4bb5ab6": "How does the Llama2 model differ from previous models like GPT 3/4 in terms of performance and cost?", "c042df38-a83d-4889-8491-cc9d8103f138": "What is the purpose of converting the available training data set to a QA format for closed book QA?", "67e45811-15fc-45f0-ad47-855af1b33b41": "How does the use of quantization in the QLoRA-based fine-tuning make it feasible to run the model on a large data set?", "65a7c1e1-db25-4d91-9fc5-d7e8c34ef225": "Can you explain the concept of self-instruct and how it can be used in the context of training language models?", "48ce41d3-9aae-4805-929a-4c5eeb5322e0": "What are some of the challenges faced when using older NLP methods like NER to create a QA dataset?", "eac35e01-9043-4fd0-a057-857cb557d5f5": "How does the 4-bit quantized Llama2 7B model perform in generating output for the given prompt?", "2872a536-b393-4ea8-bbfe-ddaaf37464ef": "Can you describe the process of sliding window and minimal parsing used in generating the QA dataset?", "f12483d7-66c0-404f-900f-9b5916f685b7": "What is the purpose of running the model in 4-bit mode via Quantisation?", "10687fa0-766c-4c01-9a74-3016d6b0d7e2": "How can the generated QA dataset be used in the fine-tuning process?", "c7bccab9-9f2d-459b-a6fb-e681d999e951": "What was the purpose of adding the specific tag \"Source:8989REF\" in the generated dataset?", "187f62c4-d5ea-4d9f-a356-dd9a68a2bd7c": "What improvements need to be experimented with in order to transform training data related to Professor Thiersch's method into a proper QA dataset?", "052a879f-8079-4288-90e7-6a6fa2c0df6a": "How was training with new data conducted in order to test the model more effectively?", "21688709-9c73-418a-938f-8a0103619292": "What are some of the problems with hallucinations that were observed in the model's output?", "59721eb2-fd20-45b5-a6e6-96e274293d51": "How does the Llama2 13B 4-bit fine-tuned model compare to the 7B model in terms of output quality?", "02fc24ae-6d5d-4c84-8e6e-1996802fc717": "What is one observation about the model's output that makes it difficult to fine-tune prompts effectively?", 
"3659d53d-c5e4-49c0-9c03-69018fd74ab7": "How does the model learn in Instruct tuning?", "0fca48a3-4d64-4b46-8a4f-557051c71ba8": "What are some of the problems that still exist in the model's output?", "b3d8e3eb-be87-4e38-ac1b-a4c63326dbce": "How does the Llama2 13B 4-bit fine-tuned model compare to the 7B model?", "09229886-907d-4fb6-a154-24c1530dfba7": "What are some observations regarding the output of the model?", "adefa977-b7c5-44f6-a08f-0b2147b3d05e": "Why is it difficult to fine-tune prompts for the most effective output?", "a13a455b-7f02-4a43-9c2d-d067ab38ef12": "What is the need for updating higher level use-cases with the fine-tuned models?", "d74d7582-39d3-483a-bb5f-31f0c24c1e19": "What are the characteristics of Meta's new Llama-2 models?", "7b979fb2-cded-4bee-82c4-839678313754": "How does the GQA technique speed up inference on the 70B model?", "eb240ff4-e3e8-4bca-b6b5-9f323889e58e": "What techniques are used in the standard transformer architecture for the Llama-2 models?", "4ed18d32-1144-48a1-a07c-71b3f42b347c": "How many tokens were used for pre-training the Llama-2 models?", "953bf848-6987-4f3c-a838-62266c4d07ff": "What is the dataset used for tuning the Llama-2 models?", "915ad456-3234-41c8-a776-65a81b9346fe": "How is the prompt created for instruction fine-tuning?", "91c0baad-e6eb-439f-8627-048998523db7": "What environment was used for training the Llama-2 models?", "3615cca0-2bbc-4669-a2dd-c830121574c1": "Why was an A100 instance chosen for running the whole dataset and epochs during training?", "37041a7a-4bb2-4a11-bae6-45b9ecd98017": "What is the purpose of the Adam optimizer in the training process?", "b992db86-6502-4748-b1fe-0ebb411ad046": "How does the Llama-2 model facilitate its use and expansion?", "1a135bdd-7ce9-49a9-8b33-c4fa2d3501b6": "What is the purpose of using the Google Colab environment for fine-tuning the model?", "f98d9839-0c56-4bb4-8284-99019fc9b67f": "Why is an A100 instance preferred over a T4 instance for running the whole dataset and epochs?", "6776fc85-0e03-4fcb-b1f4-bda339c21a6a": "How does the PEFT technique help in reducing RAM and storage requirements during fine-tuning?", "ec4760be-308e-4f80-ae5a-a6f46c61e01b": "What are the advantages of using PEFT in terms of model reusability and portability?", "4d6d050a-582f-47ce-87df-09b2ea5993b0": "How does PEFT ensure that the knowledge acquired during the pre-training phase is preserved?", "e59ad353-124b-4f91-9bc4-cb035c417221": "What are the main steps involved in training a large language model?", "16be711f-8564-41a9-8afc-bd8e29309565": "How do PEFT techniques allow fine-tuning of large language models on a single GPU?", "5d772c63-777f-4a90-80d6-dbc783f988eb": "What are some of the widely used PEFT techniques?", "291dd070-87ca-4dc8-b1a1-6c2974aa6490": "How can the model files be shared with other users after the fine-tuning process?", "ca27a1a0-4fa1-4577-a61d-b08cf80b0dd8": "What are the benefits of using an interactive notebook for running the training compared to an unattended Python script?", "33030676-7448-49e0-9670-5d1c301e5c70": "What is the main advantage of using the PEFT technique in language models?", "cfc4c8ad-30f1-47e7-8baa-7b7aa777f408": "How does the LoRa technique differ from the traditional approach of adding new layers in adapter-tuning?", "9344374b-45cb-4732-a5a5-e390bd39475d": "What is the purpose of freezing the weights of the pre-trained model in the LoRa technique?", 
"05801a0b-d3cd-4ae0-a733-fb5800844028": "How does the LoRa technique address the problem of increased latency in the inference phase?", "c8582643-20e7-4a0e-ac02-ead6ced37152": "Where can the model mentioned in the context information be downloaded from?", "53e2850b-e588-43bc-b3dd-a8eed68e7ce3": "What is the process of merging the pretrained model and the adapters in the final model?", "f1d0a075-463e-4115-b44f-b41c9c9a7c19": "How can the downloaded model from the Hugging Face Hub be used to generate accurate results?", "f132d59f-c86c-4618-b705-d48ca686adf0": "Who are the authors mentioned in the context information that have contributed to the understanding of the techniques discussed?", "8281cd57-69b2-4a89-a57a-9d2f4d89f007": "What is the purpose of merging the pretrained model and the adapters in the final model?", "9f2a6f68-e1ed-4fc9-8843-91888bd0629a": "Where can the model be downloaded from the Hugging Face Hub?", "b220bb91-0c8b-48cc-9cfa-0802d60f91b3": "Who are the authors of the articles mentioned in the context information?", "a91d677a-671b-488e-a5fe-0b406c3a4eb7": "What is the significance of the Llama-2 paper?", "a52e8cc1-defd-47a2-91a0-f300cb47784f": "What is the difference between fine-tuning a GPT and LoRA?", "d381e506-d8d6-4251-ad1a-648d08f3f400": "What is QLoRa and how does it contribute to efficient finetuning?", "8d1b8fdb-de07-4520-bda2-125b701d0b98": "According to the context information, who provided an inspiring code for Llama 2 model fine-tuning?", "77a47343-e614-4009-8f78-b9cd7dd31bf1": "How can one fine-tune their own Llama 2 model in a Colab Notebook?", "6d40a95d-b8f0-4b42-a698-ba05301a2e99": "What is the purpose of the Github Repository mentioned in the context information?", "05870a10-e822-45a1-8aa4-b482dd6c2f6f": "Can you provide the link to the original dataset in the Huggingface hub?", "5b3334bb-3a99-40fe-a06a-30ce94da5c69": "What is the traditional Moore's Law and how is it being replaced by new performance-based laws in the field of computing?", "08db1e69-1228-4051-96fa-9add10343e36": "How have advancements in GPU performance and supercomputer systems accelerated the progress of AI, particularly in the development of LLMs?", "ec1b5986-683a-42eb-aae5-bad40916eb82": "Despite the increasing computational power, why does the training of LLMs still take a significant amount of time?", "1154bdc9-dd50-41fc-bab1-0844b6602da2": "What is Zettascale Computing and how is it expected to impact the field of AI research and development?", "2bd20d66-1b51-4dbf-a549-fc58bfc41468": "How have LLMs-based Generative AI models, such as ChatGPT and GPT-4, revolutionized the field of NLP and enabled previously unattainable applications?", "2df44f3d-352d-44b7-859b-8c576e1dca9e": "What are some common challenges faced by LLMs in terms of scalability, training efficiency, and the need for high-quality outputs?", "bfb97b11-22bf-4dc5-b1e7-67d4b947372e": "What are some common challenges faced by large language models (LLMs) such as GPT-3?", "d5128435-d710-429a-a4ce-c367503a016c": "How do foundation models aim to address the limitations of LLMs?", "ec781e9f-4a6c-40f0-9c7b-dcce45820a9b": "What is the concept of emergence in the context of foundation models?", "beb99bb4-ff83-490f-aa00-f2bc77886b45": "What concerns are raised by the homogenization of foundation models?", "3a53ab1b-3167-48ad-8260-939ce05718b2": "How did the success of GAI and ChatGPT contribute to the emergence of foundation models?", "61901d8c-de48-45c5-9b05-291d9abc67e7": "What is the purpose of pre-trained foundation models 
in AI development?", "bf7080f2-1703-4630-a538-ad70d568629d": "How do foundation models offer leverage across a wide range of use cases?", "23030dad-1f89-4ff6-a837-3fd087575c4c": "What is the potential impact of foundation models on the digital world?", "e374158c-fd76-4cdd-9f65-edb5978d62fe": "How did the training process of GPT-3 differ from previous LLMs?", "8e521a68-6ecd-4812-bd81-0853b2c1f88a": "What are some examples of state-of-the-art LLMs mentioned in the context?", "41df3732-63c2-4877-9a16-0f24edd9975e": "How do foundation models contribute to the homogenization of machine learning systems, and what concerns does this raise regarding resilience and reliability?", "87bf2c71-ee72-4dde-a38c-d8ac9fb5d822": "In what ways do foundation models compare to other milestones in digital evolution, such as the invention of electricity, the advent of the internet, and the rise of cloud computing?", "89cf2cdc-a512-4ded-bb6d-9389a20d531c": "What are the key characteristics of foundation models that make them significant in shaping the future of AI? Provide examples of these characteristics.", "ac2afabd-3d45-4355-91ec-360e281f7edc": "Explain the concept of transfer learning and fine-tuning in the context of foundation models. How do these techniques reduce development time and resources?", "e88fda3f-d819-4524-8d76-9686953923a0": "How does the scalability of foundation models enable them to handle vast amounts of data and accommodate the increasing demands of the AI landscape? Provide examples of tasks that can be tackled by scalable foundation models.", "53dc2583-781d-4596-b0f1-171324d80625": "Discuss the versatility of foundation models and how they can be applied across multiple domains and industries. Provide specific examples of domains where foundation models are commonly used.", "efae5e98-b9c5-48a8-acad-da9165463c3b": "What is self-supervised learning, and how do foundation models utilize this technique? Explain how self-supervised learning improves the performance of foundation models on various tasks and reduces the need for labeled data.", "c9fe874f-d002-418e-8c4f-accf85db8830": "Describe the robustness of foundation models and how they demonstrate resilience in the face of noisy, incomplete, or adversarial data. 
Explain the significance of this robustness in maintaining high levels of performance.", "451c0363-c1af-44fd-a7ea-9e802204081d": "How do foundation models utilize self-supervised learning techniques to improve their performance on various tasks?", "b2325882-dbe0-41a4-92e8-24c6f93732e9": "What is the significance of the robustness of foundation models in maintaining high levels of performance and accuracy?", "54ff999b-f69b-4abb-a686-50c55ecfc966": "How does the interoperability of foundation models facilitate collaboration between different AI models and components?", "71ab9f66-a755-4636-bb87-c18984f556ca": "What is the hallmark characteristic of foundation models that enables them to perform well on unseen data and novel tasks?", "499ad8a4-bb01-4364-a0b1-dedd76c3e23f": "In what domains do foundation models excel in language-related applications, and how do they enhance communication between humans and machines?", "3fa455b9-1f20-47af-a4be-59833f6f0c5f": "How are foundation models transforming the analysis and interpretation of visual data in the realm of computer vision?", "288db2a1-9560-4015-9448-adfe995de237": "What techniques are incorporated into foundation models to empower robots to learn from their environment in the field of robotics?", "5bfd2998-bd37-44ea-af30-a8a30b51f6be": "How are foundation models transforming the field of computer vision?", "a24b54f4-6c29-469f-8fc8-fd0471d4f27c": "What techniques are used in incorporating self-supervised learning and reinforcement learning in robotics?", "5007a78b-4e0a-415f-ada0-93453978abe2": "How do foundation models enhance reasoning and search capabilities?", "beaac0a0-dcf6-4f3e-8f79-b1f7866e655a": "In what ways do foundation models facilitate more natural and intuitive communication between humans and machines?", "f541009a-2fb4-4a29-a0fb-8c9582500499": "What is the philosophy of understanding at the core of foundation models?", "ca0de3e2-136f-4ae1-8338-07e73d37537b": "How does AI engineering contribute to the development and deployment of large-scale foundation models?", "e2efd559-bcea-48bc-8c84-b8ad7aff2636": "What is the role of distributed training in scaling out large-scale models?", "2925134e-805e-4acf-a734-b7605879a0b8": "How does AI engineering help in managing data for large-scale models?", "c6fdd337-7ce6-4e0b-8599-31ed8c7bd7d8": "How does AI engineering combine software engineering principles with AI techniques to design and build intelligent systems?", "2151bfb2-4fae-4d91-a835-93da09744fa1": "What role does AI engineering play in the development and deployment of large-scale foundation models?", "a8d4afe8-8959-4905-956d-825502f76652": "How do AI engineers utilize distributed computing to accelerate the training process and improve the performance of large-scale models?", "ef17388d-1879-453a-89fd-c128249f624b": "What are the key responsibilities of AI engineers in terms of data management for training and fine-tuning foundation models?", "eddb397e-ddec-4d72-9be5-b3cd9eff791b": "How do AI engineers optimize the use of computational resources, such as GPUs and TPUs, to ensure efficient and cost-effective training and deployment of large-scale models?", "47fe7a00-1b80-4902-8440-56a1100702d9": "What techniques do AI engineers employ to compress and prune large-scale models, making them more accessible and deployable across different platforms?", "94144c6b-f32b-4b7a-86cc-2debae62a754": "Why is monitoring and maintenance important for AI engineers in ensuring the ongoing success of large-scale models?", 
"001c262a-04b6-4f9a-b211-2e7bb5cfd37e": "How does AI engineering contribute to the robustness, efficiency, and adaptability of foundation models?", "bc69d2e7-e9a7-40d7-b1db-f5054dc1a579": "In what ways do foundation models represent a critical milestone in the advancement of AI, and how do they drive innovation across various industries?", "9b87bf9d-85b3-4493-ad63-aa1fbc1586e6": "What is the importance of fostering responsible and ethical AI development when utilizing foundation models to address pressing challenges?", "8353781b-ae6e-4263-9ba7-2bfbcb7716f9": "How are foundation models contributing to innovation across various industries?", "7bccbe23-199e-4703-9013-8ce8183fd76c": "What is the importance of fostering responsible and ethical AI development?", "4407d9b7-1865-442b-9180-e6302c48c942": "How can foundation models accelerate AI research and development?", "5d42b302-53d0-44ca-b83a-60aa6f60cfb7": "What are some potential applications of foundation models in language and vision, robotics, and reasoning?", "839c023b-fe8a-4eb4-8ee8-72e4548fde63": "How can language models teach themselves to use tools, according to the Toolformer paper?", "56e24dbe-cf03-4591-9d67-1084bc91b722": "What is the significance of LLaMA as an open and efficient foundation language model?", "a530257e-e50d-431a-a0ea-5a2ac0c28bc4": "How does Google USM scale automatic speech recognition beyond 100 languages?", "3f3359d0-7472-4bdf-a0b6-a227e729cfe5": "What are the benefits of using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, a large-scale generative language model?", "0aee13b0-822a-4b7a-827a-3cf8300e1538": "What are some reflections on foundation models according to the Stanford HAI article?", "8f7359ed-3b74-4af7-a8b4-409c5bd298b0": "What are the opportunities and risks associated with foundation models, as discussed in the provided research paper?", "9747304f-f32e-4f24-84ff-fada76d17d60": "What is GPTQ and how does it contribute to language model compression?", "cd97928a-4ec2-4c85-8d55-a0ae89c8a270": "How does GPTQ achieve remarkable precision while compressing language models to just 2, 3, or 4 bits per parameter?", "5f0a0e9b-85e7-4745-aace-ed1732be1523": "What are the models showcased in the paper that GPTQ can quantize in just a few GPU hours?", "8c74df67-2a1f-4d7e-ae3d-ff1d9a153043": "How does the execution harness developed by the researchers enable efficient operation of compressed models for generative tasks?", "ac312983-7ede-41ef-928c-4c1a7515b3c0": "What is the significance of GPTQ's ability to quantize language models with hundreds of billions of parameters to the 34 bits/component range?", "b9a5cfcd-c1e7-41a2-881a-d223989baa4b": "What are the limitations of GPTQ in terms of hardware support for mixed-precision operands and activation quantization?", "8231fe8b-4eb1-4b27-a0b1-f77aeea730b7": "How does GPTQ pave the way for more efficient and accessible applications of colossal language models?", "ebffd4c6-db5e-4397-9579-c7f14f605a22": "What are the potential research possibilities in the field of model compression highlighted by GPTQ?", "38d678ec-02b5-4c23-a51d-dc098dc7d930": "What are the limitations of GPTQ in terms of hardware support for mixed-precision operands on mainstream architectures?", "284d44f5-ffce-4f9f-af2f-5adfb0841855": "How does GPTQ contribute to the field of machine learning and language modeling?", "86a86d5c-b68a-42bb-83ff-cf2bbe5d2135": "What is the recommended approach for using GPTQ, as suggested by HuggingFace and the mentioned article?", 
"67a4c3a2-ad70-48df-bcd4-6d6d7fbb1351": "How does AutoGPTQ distinguish itself from other quantization efforts, such as GPTQ-for-LLaMa, Exllama, and llama.cpp?", "3e2be401-57f0-4ebc-b7d6-92ad2955e39c": "What optimization options are included in the integration of AutoGPTQ with the Hugging Face Transformers API?", "9bcdbb29-24a9-4e32-a636-689fc71e2b42": "What is the main difference between Exllama and AutoGPTQ in terms of transformer architectures?", "ae73c44f-6801-424c-b482-abdab766bade": "How has the Hugging Face team enhanced accessibility to GPTQ?", "f4cb0f3a-61eb-48b0-a792-b82302190b06": "What optimization options are included in the integration of the Transformers API for Low-Level Model (LLM) quantization?", "3292ae44-c14a-4d0c-afa1-5433ce835489": "What advanced quantization options are offered by the Auto-GPTQ library?", "a5c55093-e349-42a6-93b5-561b244bd978": "How does the Auto-GPTQ library ensure versatility and adaptability in transformer model quantization?", "b4a5fb12-b393-4722-9b65-b99f327399b8": "What is the purpose of loading the fine-tuned model Llama 2 7B 4-bit Python coder in a Colab session?", "a7ea1303-3182-42d8-9c96-a11d2ca03718": "How is the performance of the model evaluated during inference?", "56a1cf05-8f15-4d20-a97a-401312e9198e": "Why is it recommended to execute GPTQ quantization in an A100 GPU in Colab?", "0952ee8b-cf01-499e-87d8-a0deeda1395f": "How long does it take to quantize the model using GPTQ?", "aabdd808-a68d-4eca-a1f8-9e5e6941a646": "What libraries need to be installed for GPTQ quantization according to the huggingface tutorial?", "62afd36c-6276-419a-be02-a863c8054827": "What is the purpose of quantizing a model using GPTQ?", "e78bcdbc-4813-40f4-a4b0-ce82e36b26dd": "How long does it take to quantize the model using auto-gptq?", "6db424ee-f80e-46de-af77-0de5eaac67b6": "What is the recommended group size for quantization?", "df9bf7ab-3963-4723-98f5-14cb7e18e767": "Why is it necessary to have a GPU to quantize a model?", "003c7810-37eb-4897-ab0d-deff4ebef2e6": "What is the role of the Optimum library in the quantization process?", "f9a8577a-e18f-43d7-8da3-1d9a48e0a3d8": "Can you explain the process of calibrating the quantized weights of the model?", "bdfe288c-389e-478f-8b6a-0079a341fd46": "What are the default datasets that can be used for quantization?", "2deb8212-48a9-4c72-8b1d-2ac16aaed171": "How much GPU VRAM is consumed during the quantization process?", "43be575d-1f6b-4494-9834-5743019cb8c5": "What is the significance of setting the device_map parameter to \"auto\"?", "592aedba-e9ac-45cc-a67f-e544c762cfa7": "What is the purpose of the tokenizer during quantization?", "2c3d793a-04ec-465a-992a-f3ccdc280698": "What is the purpose of moving modules back and forth between the CPU and GPU during quantization?", "ca2228d3-36e8-4c94-b8b0-a60088fa78bd": "What is the recommended group size for quantization?", "de0dbe1e-8a76-4c3d-8b94-eb08bee8868a": "What is the potential trade-off when setting the desc_act parameter to False during quantization?", "9c629a45-19e9-4d4b-82e0-676c71a6a9bc": "How much reduction in model size was achieved after quantizing the fine-tuned Llama 2 7B model?", "b6abba07-0495-4017-a5f0-89d3c2806ec0": "Which libraries need to be loaded in order to use the GPTQ model from Hugging Face Hub?", "df4a2d89-20c6-45d6-b9ca-5cd91fc0f0b0": "How much GPU memory does the model occupy?", "0652be9a-9da8-488d-b63a-98064d571336": "What was the purpose of repeating the performance evaluation on a T4 GPU?", "bd1824d7-6a1f-4ae6-bed9-d65758f26d22": "What is 
the significance of quantizing large language models when deploying them?", "60d240a2-1ba9-4c1c-87f8-79fcbd546f5f": "What is the purpose of uploading the quantized model to the Hugging Face Hub?", "7f93f6da-e3d9-406a-a475-f41338823b7a": "How can the quantization process maximize GPU usage while using CPU offload?", "a31ec0ff-e208-4174-bc05-08c9df3dc975": "What are the recommended libraries that need to be loaded, including the tokenizer and the model, for using the GPTQ model?", "4adda408-364a-4785-8a3c-dcf42b9006e0": "How much memory does the GPTQ model occupy on a T4 GPU?", "4fc2c2f0-36f4-407f-93d5-2ec635ec24c3": "Compare the inference time of the base model and the quantized model on a T4 GPU. How much faster is the quantized model?", "125121a9-4e12-4723-bc6c-8d5cdceb4649": "What is the title of the ICLR 2023 paper that introduces the GPTQ model?", "b9c1d069-ef7f-4b41-9437-e18165ec5e47": "Who authored the article \"GPTQ or bitsandbytes: Which Quantization Method to Use for LLMs - Examples with Llama 2\"?", "62e58574-e5d5-4564-b7e6-38923053c2b1": "What is the name of the original fine-tuned model in Huggingface for Python code generation?", "beeca607-0de7-4dbc-a130-5c2f503f96ed": "Which Hugging Face blog article discusses making LLMs lighter with AutoGPTQ and transformers?", "d4b4c178-3523-4d32-ac9c-7b4ce4a61b61": "What is the purpose of the GPTQConfig in the Hugging Face official documentation?", "97d81d59-fa63-468f-9486-7bcdeb5cb9bb": "Who authored the article \"4-bit Quantization with GPTQ\"?", "eba4cdeb-dd25-4ff0-a0af-1906a3b27ffa": "What is the main advantage of using the GPTQ model compared to other models?", "78186c11-d875-4315-8ac0-d6137b3c1604": "What is the purpose of LLaMA, Meta's new AI tool, according to the official release?", "75c28f2b-32fc-4613-b150-ef9a48e59737": "How does LLaMA differ from ChatGPT in terms of its intended users and functionality?", "ff86ae5f-c134-4950-8c64-6beebfc47c67": "Why were Meta's previous LLMs, Blender Bot 3 and Galactica, shut down and their development halted?", "c8bd6b2f-83e7-424b-9f4d-f727a981e75b": "How does Meta's effort to \"democratize\" access to LLaMA address the issues of toxicity and bias in Generative AI?", "5ec34627-fe89-4306-a291-41c5f6a4ada1": "What is the significance of the downloadable torrent of the LLaMA system being posted on 4chan?", "a7687172-1660-42c6-86bf-ceee43f812fb": "Is LLaMA currently being used in any of Meta's products? 
If not, what are Meta's plans for its availability?", "c3fed643-389e-437a-adde-d215903c4894": "Can researchers use LLaMA in their own products?", "36c85b25-fe24-42bf-a7f0-faed90766ece": "How did the LLaMA system become accessible to the community, and what platform was it leaked on?", "4102b74a-a3ec-4c78-b299-4ca6862e1a4d": "What is the purpose of Meta making LLaMA available to researchers before they can use it in their own products?", "bd9489fb-9a9f-410e-b6da-092b6d4c78a9": "What are the potential benefits of open-sourcing AI models like LLaMA for the AI community?", "1c31fd29-fb1c-45d0-8822-c9dbed591db6": "What are the potential consequences of the leak of LLaMA to the public?", "4639867c-7186-4562-a0b6-222f27c3b028": "How does LLaMA differ from ChatGPT, and why is it not suitable for the average internet user?", "d2938311-953b-4b13-aa44-d73cb9000d14": "Has Meta publicly acknowledged the leak of LLaMA, and have they provided any comments on it?", "19456a47-9c65-441e-ade5-92258438a23c": "What are the positive implications of unrestricted access to LLaMA for researchers?", "589cd5ae-11ab-48e3-bd31-45a368e74073": "What are the concerns associated with the misuse of the leaked LLaMA model?", "e48842bb-9e09-45f8-a4c7-ddfe3c95155b": "How could understanding the inner workings of large language models like LLaMA contribute to improvements in robustness, bias, and the toxic nature of LLMs?", "9a314b77-c2ad-4785-9001-6704ca2e9e75": "Why is it important for Meta to address the leak and handle the release of their tools in a responsible manner?", "c7539f18-7ca4-4472-bd67-1547ef2184ea": "How could unrestricted access to the Llama language model potentially benefit researchers in improving its robustness, bias, and toxic nature?", "925a4956-1722-4902-8731-3c24fa644f3e": "What is GPT4All and who developed it?", "5da04889-afb4-4605-b604-29d921182b9a": "How was GPT4All trained and what can it do?", "d90677f7-435b-40f9-845e-ba3aba39aec7": "Where can GPT4All be accessed by the public?", "528b8ac4-c95b-47a4-aeff-a43daf751076": "What is the license for using LLaMA and GPT4All?", "8f03977a-079e-47d5-859e-43e2686fa941": "What is Nomic working on in relation to GPT4All?", "f7e35567-daea-422f-be90-ff25b4096474": "How does GPT4All differ from ChatGPT in terms of language comprehension?", "e3e5d5bc-69c1-4bb1-a2cb-0325538a20ac": "How can GPT4All be run locally on an M1 CPU Mac?", "d7a1dc4e-794d-4a15-b55b-98708e38d1d9": "What is the purpose of the nomic client and how can it be installed?", "aa0a445f-9701-4608-8629-746387cd799f": "What is the goal of GPT4All and how was it fine-tuned?", "7fad798d-b345-4647-adb6-626044c9cacb": "What is the significance of GPT4All in the AI landscape?", "de54119d-e04e-434f-8308-681dfbf7f970": "What is the main feature that GPT4All has, which Bard lacks?", "3f760ce0-6852-4e38-b18d-8b4c51916596": "How can you interact with GPT4All programmatically?", "d0e7510b-9aa0-4ae5-8f39-250d1e47bb04": "How was the GPT4All model fine-tuned and what dataset was used?", "366e105a-cb20-4e2c-8bbe-0ad25a10d165": "How long did it take to develop GPT4All and what were the associated expenses?", "630266d9-ce8f-467f-92d8-710b00d7c64a": "How does the perplexity of GPT4All compare to the alpaca-lora model in the self-instruct evaluation?", "27af08f3-0778-4a2b-a2f6-0b05815be4ae": "What is the TL;DR summary of the document?", "9a63a52b-e577-4d90-b5df-fc599f3f7baa": "What is the purpose of Meta LLaMA in the context of accelerating LLM open-source community?", "19adb054-76c6-4782-9c02-16b95229d3dc": "What is the 
difference between Stanford's Alpaca and GPT4All in terms of their execution capabilities?", "3466c3d1-2f58-4fd9-891f-b36a00b78680": "How is generative AI evolving according to the document?", "7accb6b8-bdc0-47db-a50e-380b0285d17c": "Who is credited for reviewing the article?", "c79289c6-9353-48d8-b851-3981aa15f1b7": "What are the three different variants of Code Llama based on their parameter sizes?", "1fa91681-f03d-42b7-af28-ec803ee9ddb8": "What is the purpose of the fill-in-the-middle (FIM) competence in the 7B and 13B models of Code Llama?", "714cbcce-c26f-4e69-8a78-e0002cdd78be": "How does the 34B model of Code Llama differ from the smaller 7B and 13B versions in terms of serving and latency?", "131b9e27-6076-4f98-82d4-265c5fd73ffc": "What are the two nuanced adaptations of Code Llama mentioned in the context information?", "d2460fc5-b818-4401-bcde-b0848ce4a474": "What is the specialized derivation of Code Llama called and what programming language is it focused on?", "35331e63-f0f6-423b-ba7b-8bcb69ae5533": "How does the Code Llama - Instruct version enhance the model's capacity to understand human expectations in prompts?", "eff40c0a-c7d5-4626-ae5c-18d0750145b5": "What is the size of the dataset used for training Code Llama and what is its composition?", "1072ab1c-7fc5-4bb7-b52c-75856acc8cf7": "Why is it advised to opt for Code Llama - Instruct versions for endeavors involving code generation?", "5f650314-c5a3-4670-a2e9-75f87283bc07": "How does the training of Code Llama involve dataset curation and duplication prevention?", "0e949a55-b37a-4101-9bd8-2e87c19af966": "What is the initial phase of the dataset size for training Code Llama?", "f2d97dd9-505e-47e1-bcc1-7512cd441049": "What is the purpose of using Code Llama - Instruct versions for endeavors involving code generation?", "473e07b4-002b-4001-89d5-637fdc0c149a": "How does the Code Llama training dataset ensure a near-duplicate-free landscape?", "8f766a48-11d9-4b65-9717-030ec49d0527": "What are some pragmatic applications of Code Infilling within the realm of programming?", "08243684-d92c-4299-934e-c9ec21107f79": "Can you explain the concept of causal masking and its role in the training process of infilling models?", "8cdb5518-f9bd-4e39-b7e4-0261c0a79e78": "What are the challenges associated with handling extensive sequences in transformer-based language models?", "1efdbfdc-1a36-43f2-83c4-c556023adcef": "How does long context fine-tuning (LCFT) address the challenges of extrapolation and attention passes in transformer-based language models?", "3e857e60-cb77-446c-8467-e6989e5b6a46": "What are the pivotal challenges faced in handling extensive sequences in transformer-based language models?", "77bebcd7-ffae-4989-8947-a93bfca47a9d": "How does long context fine-tuning (LCFT) empower models with extended-range capabilities?", "5b791597-944e-4d9a-a98b-d7762fe0b979": "What is the purpose of instruction fine-tuning in Code Llama - Instruct models?", "15c71053-4157-40a6-b94e-d1924f9119b3": "How does Meta AI address the resource-intensive nature of acquiring data for self-instruction in coding tasks?", "dcd38c37-c2b5-4140-824e-144922117305": "Which coding benchmarks did Meta AI use to evaluate Code Llama's performance?", "3448bcc6-6e95-4cf4-8c3e-46b4c9902a69": "How did Code Llama perform in comparison to open-source Large Language Models (LLMs) and its predecessor, Llama 2?", "d0546d50-16c8-4d36-ac5a-4170208e301b": "What are the two coding benchmarks used to evaluate Code Llama's performance?", "5df6a4e8-3b87-440c-88ab-7c3ca4242494": 
"What were the scores achieved by Code Llama 34B on the HumanEval and MBPP benchmarks?", "58755a31-bd68-4d01-8543-0e688c8338a1": "How does Code Llama's performance compare to other state-of-the-art solutions in the field?", "0926908e-23e3-4475-a18b-0f607ecc612c": "What is the significance of Code Llama's performance in reaffirming the value of open-source foundation models?", "8d2a6c64-0da1-4f83-8cb3-edf1f28d4858": "What is the main difference between Llama 1 and Llama 2 in terms of commercial use?", "1deaa36c-6346-459b-8d9b-7de5bf053ef1": "Which cloud platforms are Llama 2 models available on for fine-tuning and adoption?", "cb0a27f8-14f6-4f45-b4fc-8abbed8803b9": "What are the restrictions on using Llama 2 for companies with a large number of active daily users?", "94312564-2389-487f-9105-480f1e37c8e9": "How does the context window of Llama 2 compare to its predecessor, Llama 1?", "b5331a2c-8f81-4cc2-a8e7-e399543243e3": "What is the significance of the 34 billion model size in the Llama 2 lineup?", "56e2c925-aad3-4e2a-8b8c-6ad8219f9385": "How was the pretrained variant of Llama 2 trained, and what is its context window size?", "e6b3248d-59b8-41ba-8b2d-284f4ae8d31f": "What is the training time required for the 70 billion model of Llama 2?", "0974f8a5-5950-45fa-b688-29bffb1d7b1b": "How does Llama 2's safety performance compare to ChatGPT in AI safety benchmarks?", "8093d04f-2c5c-47d9-876f-946dde659659": "What challenges arise when optimizing a language model like Llama 2 in terms of safety and helpfulness?", "6c14f6c3-9cac-41c8-8070-d15366356138": "What potential limitations can arise if safety is prioritized to an extreme extent in a language model like Llama 2?", "92267346-14f0-411b-b3a4-26f75a59f171": "How does the model strike a balance between helpfulness and safety when optimizing its responses?", "133d8aaf-75ad-4309-adc8-939a7c58d6f2": "What challenges does the model face in finding the right equilibrium between providing useful information and ensuring safety?", "1e815e07-81c2-4776-8d9b-a67515f730ef": "How does Meta employ reward models to optimize the model's responses?", "4ce67274-507c-4048-95d3-1643825715f3": "Why was there a delay in releasing the 34B parameter model?", "a1da755f-0429-4172-90ad-5cbcdb9928ff": "In what categories does Llama 2 outperform its competitors in the open-source language model arena?", "18f5994e-1a4c-4b79-92cb-59074a4176c8": "How does Llama 2's performance compare to Chat GPT 3.5, despite being a smaller model?", "504f89b3-0a29-4da6-a161-7cd706837a3c": "What are the challenges faced by Llama 2 in coding and math problems?", "2d4fa91c-99f2-4a34-a9cf-e771666e8655": "How does Chat GPT 4 perform compared to Llama 2 in coding and math problem tasks?", "a686d02d-fbe9-46aa-b260-90927dc8132f": "What potential does Llama 2 have in the market, considering its efficiency and ability to compete with larger models?", "e405de4a-00d7-407e-bc95-c9f00ebb7975": "How do open-source AI technologies, like Llama 2, continue to advance?", "a9a9daf2-c76e-49cc-9796-3acd2c98c52b": "How does Llama 2's performance compare to larger models like Chat GPT 4 in handling complex language tasks?", "f45f3ae8-768d-4d24-87c3-35e1ebe98687": "What is the significance of Ghost Attention in enhancing conversational continuity in Llama 2?", "4b136829-dbdf-4d27-a383-72a1ec65b7ca": "How does Llama 2's temporal capability contribute to delivering more contextually accurate responses?", "9e642ceb-2763-4987-8f1b-897f31ec0dc6": "What is the impact of Meta's open-sourcing of Llama 2 on developers and 
researchers?", "a5262825-dd80-458d-87a4-dd943a473bc3": "Can MosaicML's next MPT iteration surpass Llama 2's performance?", "ec5d7d6c-02c8-45ef-81f3-0263c960001c": "How does Llama 2's temporal awareness enhance the user experience?", "33782dcc-bcfe-4811-ba9d-6e69d913692f": "What potential impact does Meta's open-sourcing of Llama 2 have on the AI industry?", "9bff3e07-a994-4e3c-b4d8-0267a3057f17": "Can MosaicML's next MPT iteration surpass Llama 2's performance?", "e48a6a3f-94f1-4951-a1c1-0904d49f6a42": "Should developers and researchers compete with Llama 2 or collaborate with the open-source community to improve existing models?", "da012f33-6b38-41c8-9ffd-5989c1d74a6b": "Why did Microsoft choose to host Llama 2 on Azure despite its investment in ChatGPT?", "42d59926-830d-47a3-a10c-b68d5aadd7ed": "How does the launch of Llama 2 contribute to the democratization and proliferation of AI?", "b7825e35-2248-4dd4-8864-4496d9784f76": "In what areas can Llama 2's language modeling capabilities be transformative?", "43dbcfd7-6a5d-4eb0-b2a1-224ef169f0f1": "What limitations currently exist for Llama 2 in terms of math and coding?", "3773eea9-ac6f-4235-97df-7c2bc609b024": "How does Meta's open-source approach signal a shift towards a more collaborative and creative AI future?", "b62d6ad5-ea0c-40d7-89e2-02b8f8009dcd": "What are the potential implications of Meta's bold democratization play in reshaping preferences and partnerships in the AI space?", "7634741a-bc14-48d2-b8eb-5e3d185872e3": "What is generative AI and how does it relate to machine learning?", "2ab9ed95-623c-423e-a420-71ec8c922460": "How does generative AI differ from traditional NLP models?", "96d50690-2055-4c0f-a1c6-f7ab62e23e73": "What are some examples of generative AI applications?", "085cf51d-346e-49f7-9209-0c91def31b1b": "What are diffusion models and how are they used in generative AI?", "37b89b20-6366-4deb-b926-145927fbd40e": "Explain the advantages of transformer-based language models in generative AI.", "6fcc2ade-04c7-4079-a420-35af5dd0097c": "Who developed the GPT family of transformer-based language models?", "c6b60a4d-f275-46d8-871f-98cb1b5d55a2": "How does generative AI handle long-range dependencies between words in a sentence?", "b0c5155b-f104-4aad-aa68-8e683d276bc2": "What are some potential use cases for generative AI in the future?", "f3a1ffc3-c234-43ea-849e-dffd22c132bf": "Discuss the history and evolution of generative AI.", "98210cc8-73b2-4d05-8d87-1bd18fbb9503": "How is generative AI trained on large datasets to generate new content?", "5869f6a6-45fe-4a1e-bcb5-eb0210d134cb": "What is the primary advantage of transformer-based LLMs over traditional NLP models?", "39040e8d-bba3-4016-b522-39b69c4826f2": "Who developed the GPT family of transformer-based LLMs?", "ee384c2c-a021-4f04-b015-769ba7f321fb": "What are some tasks that transformer-based LLMs are more suitable for?", "d5e96307-5843-466d-940a-b2301fa1c25d": "When was the Eliza chatbot developed and by whom?", "94d7a0d9-e2be-44ac-8225-b0af91664576": "What were some shortcomings of early generative AI implementations?", "28973571-3231-4df0-890f-b77aca020732": "What are the three critical components that contributed to the recent success of AI models?", "a5c5de91-d9e6-434b-89a3-4cf7fcdc8b9b": "How are GPUs different from CPUs in terms of processing?", "6d82dc62-484c-420b-9b85-54b9ad078340": "What breakthrough allowed GPUs to be used for Neural Networks?", "ae5dda95-3566-4e84-b1d5-edc5c2ed2fec": "When did the modern AI revolution begin and what was the key 
breakthrough in deep learning?", "47954fe6-60c8-461b-ac15-53482439d18c": "How have GPUs contributed to the progress in machine learning?", "85f51464-f90a-468c-accf-82950b2e87ac": "How did GPUs contribute to the advancement of deep neural networks and machine learning?", "d92943c0-bc17-4998-9b87-c05d0727af9b": "What was the significance of AlexNet in the development of computer vision algorithms?", "f23c71e2-8abc-411b-ae5c-bee9f4707b3d": "Why were CNNs not practical for computer vision tasks before the introduction of ImageNet?", "1d41a2e7-666f-493e-a681-ecc0a54770e1": "How did the use of GPUs and the ImageNet dataset drive progress in computer vision?", "a5bb38b7-4091-4f45-bda3-7448463746a9": "What were the limitations of recurrent neural networks (RNNs) and long short-term memory (LSTM) models in natural language processing (NLP)?", "efbe8ee9-ea68-4563-ab4f-fcb67774ffb6": "How did the \"Transformer\" model revolutionize the approach to translation problems in NLP?", "6be49186-946b-4ed2-a342-763ba0ae69ba": "What was the key breakthrough in the \"Attention Is All You Need\" paper by Google?", "3f7b59db-6eab-4d12-a504-b2cf290c76e4": "How did the architectural flaws of previous NLP models hinder their ability to capture the complexity of larger bodies of text?", "595b1e2e-face-45eb-b7bd-531d555a789f": "What role did the development of the \"Transformer\" model play in bridging the gap in NLP for coherent conversations with humans?", "2b17b99b-31d9-4d5d-80d5-7ebe849f5e1a": "How did the \"Attention Is All You Need\" paper contribute to the advancement of NLP?", "acb676fb-e076-48a1-9d45-38d888f87768": "How did the development of the \"Transformer\" model revolutionize the approach to translation problems?", "7d7700c3-d15e-454b-8228-887841bf593a": "What were the limitations of recurrent neural networks (RNNs) and long short-term memory (LSTM) models in processing time-based data?", "9795a659-f502-4904-96de-7f47a0ff2a44": "What is the significance of the \"attention\" mechanism in the Transformer model?", "0b501dc3-bf2c-4a8b-9925-4030b703ee5d": "How have Transformers been found to be state-of-the-art models in natural language processing (NLP) tasks?", "a0724f11-6e22-4b5f-965b-f5790e963a1d": "What breakthrough finding allowed the training of models like BERT and GPT-2 on unstructured data?", "b7b7386d-c26b-496c-93ab-40898a8f9d0a": "What are the challenges faced by researchers in acquiring the right training data for language models?", "2362adaf-8156-4b09-91a3-69e1008b72a9": "How did the advancements of BERT and GPT models harness the immense amount of unstructured text data available on the internet?", "ba4a6a3c-2c0d-4ebf-94f5-63be5da31dcc": "What are the key features and capabilities of GPT-2 and GPT-3 models developed by OpenAI?", "95ebdd63-ed74-4f6f-b87a-7ecd9af63fa1": "How did the architectural flaws of RNNs and LSTMs limit their ability to process longer sentences and paragraphs?", "78b5054e-0920-4026-828f-fdc1ab212be2": "How did the development of BERT and GPT models enable \"zero shot\" performance at completing new tasks?", "ae77025b-4e91-475b-a0e0-77242408a2c5": "How did the advancements of BERT and GPT contribute to the utilization of unstructured text data for teaching computers to work with human language?", "f1e5d696-206c-4dcb-b80b-65265e1f47be": "What is the significance of fine-tuning in large transformer models like GPT-2 and GPT-3?", "9c1dc96c-392c-439c-943b-3b9cf2cedddd": "How does instruction tuning improve the interaction and capabilities of language models like GPT and 
ChatGPT?", "470b146c-045f-4dca-84f2-5a8068090106": "What is reinforcement learning with human feedback (RLHF) and how does OpenAI utilize this technique in instruction tuning?", "5bf0b560-c262-49fe-abca-836428bc55f7": "How does instruction tuning help align language models with human values and prevent the generation of undesired or dangerous content?", "680a5913-d7db-43db-9c2f-c3937c86d94b": "How does instruction tuning improve the accuracy and capabilities of language models?", "7cf9e061-f143-4e59-b4c3-e6d1dbf2a67b": "What is the specific technique used by OpenAI for instruction tuning?", "c1a4d9f7-6f4b-4780-a1a0-e1ec6ca2fdf1": "Name two players in the AI hardware manufacturing industry mentioned in the document.", "2e7b17e1-9758-4030-ac2c-12a904eb7778": "Which platforms provide access to LLM models via API?", "041e4db3-cd81-456f-bb26-a3d7f3af94af": "What are some examples of breakthroughs happening in the field of Generative AI?", "73c8b04c-715e-474d-a9b2-ee375203cf6e": "How does reinforcement learning with human feedback help align language models to human values?", "3d5591d3-b25d-4027-82b0-eca2f1f7ecd6": "What is the purpose of ChatGPT and how has it impacted the adoption of Generative AI products?", "6fbbd452-b179-4b29-b3d9-56fc6e25bd12": "Discuss the potential benefits and drawbacks of open source models in terms of controlling outputs and democratizing access to technology.", "52a08f8a-0098-41fd-99f9-e91031e26963": "How do multi-modal models differ from traditional models in their ability to understand both text and image?", "4b871b31-43a3-40cf-9054-1333645bd1eb": "What are some examples of new model architectures mentioned in the context information?", "b34e2821-c76a-44ef-a17c-e52a0e8c531b": "How do Agent Models differ from other models in their ability to set tasks and interact with other tools?", "7fdcdb19-7340-45d0-8589-b8ee0b8a02ae": "Who are the leading players in the field of AI research and deployment, specifically in relation to GPT models?", "bd3f6076-7449-44d9-9d14-9306af40199a": "What is the significance of OpenAI transitioning into a for-profit company in 2019?", "a5c4a0cf-8103-47b4-af5d-9bf7f9e2ef66": "How did OpenAI's partnership with Microsoft impact the development of their ChatGPT and Microsoft's search AI?", "2ea1873e-7b39-4852-9947-3f10aacceb70": "What are the key features and capabilities of GPT-2 and GPT-3 models?", "a4c47dbf-366a-4cb1-9d99-0311ddb0a1cd": "How were GPT-2 and GPT-3 models trained and what makes them unique compared to previous models?", "3108de15-43e3-46ee-9a61-e3afa333d8bf": "What are the potential benefits and risks associated with open source models in terms of control over outputs and democratizing access to technology?", "be2545cb-5064-42dc-958d-a1c0201c523e": "How has OpenAI's funding and investment strategy contributed to their research and development efforts?", "46256a75-fef8-4d0f-baa3-755ad3e2a28c": "What is the significance of Microsoft's partnership with OpenAI in the development of ChatGPT and Microsoft's Office productivity apps?", "a27950ed-384a-46ec-a33c-3db3150783aa": "How does GPT-2 differ from previous text generation models in terms of generating human-like text?", "57263c9d-7694-437b-8000-f9e0dfc45364": "What are the key features and capabilities of GPT-3, and how does it differ from GPT-2?", "28c0010a-a5d4-4efc-90e5-d580f847ce26": "What improvements does ChatGPT (GPT 3.5) offer compared to OpenAI's earlier text generation models?", "0e8f5d08-c465-45b4-ab7f-c4e34a1f5e7a": "How does GPT-4 differ from GPT-3.5 in terms of its 
capabilities and token count?", "d0ba6b7b-fdc5-4286-85b3-299bac3dffc3": "What is the purpose and function of Google's Pathways Language Model (PaLM)?", "ca271d63-1afc-43d1-8cc6-62d158cbc263": "How has PaLM been utilized in various Google projects, such as PaLM-Flan and PaLM-E?", "c9010b88-1885-4df7-8c31-053103816b27": "Can you explain the process of pre-training PaLM and the sources it draws from for self-supervised learning?", "45f924f7-5bcc-4f7d-a01e-db111dd42e4d": "How does OpenAI's GPT-4 compare to Google's PaLM in terms of multimodal capabilities?", "b2b05d15-f0e5-4f1a-934d-0b75fe383b51": "What are the reported improvements in GPT-4's factual response rate and handling of disallowed content compared to previous models?", "65b4e0af-6beb-47f5-a688-f363baf2bfce": "What is the name of Google's research and development arm that was unveiled at Google I/O 2018?", "9cb25ceb-ab15-4d18-8621-2ce2d21a5fce": "Which model has Google recently rolled out in its Bard chatbot?", "65d9d10d-ce06-4ace-90e8-7e1fb6ea1f5f": "What sources were included in the text corpus used for pre-training Google's Pathways Language Model (PaLM)?", "87216062-1098-4e29-b6ef-6711aac9682c": "How many NLP tasks did PaLM excel in out of 29 in the few-shot performance?", "d1681027-fc9f-40b0-951e-bbaf27a7cc1e": "How many parameters does PaLM scale up to compared to GPT-3?", "d0691ab8-3e44-4efa-9e00-201cb04da401": "How many TPU v4 chips were used in each pod for training PaLM?", "c9b46b6b-51ac-4329-b770-621963705da8": "What is the purpose of DeepMind's Chinchilla model?", "31508774-8e6a-4c45-b8d9-01bdc02c94eb": "When did DeepMind become a wholly owned subsidiary of Alphabet Inc.?", "f4eca483-270c-4e07-a259-e14c67108e86": "What does DeepMind's neural network try to replicate?", "0afcceeb-d08a-46a4-955a-bec1db823bdc": "In which year was DeepMind acquired by Google?", "1277f80c-1161-4529-8768-02f6310854db": "What is the purpose of Google's Pathways system in relation to TPUs?", "c9942a3d-c065-46f1-823c-a2d3a0efb490": "When was DeepMind founded and when did it become a subsidiary of Alphabet Inc.?", "e17d0629-cf85-4c91-b550-f4a843b1ba1f": "Which game did DeepMind's AlphaGo program defeat a human professional player in?", "aed59082-a01b-4a61-8e49-c7f0f09cc78f": "What is the significance of DeepMind's program AlphaFold in the field of protein folding?", "f3179b86-d958-448d-afe2-8d1dc121b702": "What is the key breakthrough mentioned in the Chinchilla paper regarding language models?", "d3d1ff9e-681e-4d09-a520-efbd0aca785e": "How does the parameter size of Chinchilla AI compare to GPT-3?", "7c8aa3be-934e-4842-9175-7a1d3ddcd01b": "What is the average accuracy rate of Chinchilla AI on the Measuring Massive Multitask Language Understanding (MMLU) task?", "d904489f-7f99-4f9b-b778-660e1e6543b1": "How does Chinchilla AI outperform other large language model platforms like Gopher?", "0d60dd22-e460-42d4-9a4c-ab265f4bafa0": "What is the advantage of training models with more training data in terms of inference costs?", "da8dbf74-a52e-4902-807f-4b046105b45b": "How does the parameter size of Chinchilla AI compare to GPT-3, and what was the training token count for Chinchilla AI?", "7ebdee83-e2e4-4f89-b025-9bae04322f95": "What is the average accuracy rate of Chinchilla AI on the Measuring Massive Multitask Language Understanding (MMLU) task?", "1eeca274-3cf5-4c01-ade4-66e0abc3fb6d": "Which large language model platforms does Chinchilla AI outperform on a range of downstream evaluation tasks?", "d4e78b30-78b1-471b-a428-512ea31b31d6": "What is the 
purpose of Nvidia's CUDA API, and how does it enable the creation of parallel programs?", "576ed9ac-572d-4b79-a76f-0f1d4bdcfb5e": "How many parameters and training tokens does the Megatron-Turing model by Nvidia consist of?", "99f8f7b0-0830-4a3b-84b3-0efc6c399c06": "What is the Early Access program offered by Nvidia for its MT-NLG model?", "17497e26-3a2a-481b-94d5-5a423f3dc8c0": "What is the DGX Cloud platform by Nvidia used for?", "f09dd32e-a859-45c5-98e6-a8b535d79228": "What is the main objective of Meta AI (formerly Facebook AI Research)?", "3f06139b-8325-41d8-a613-f7b5b0372b75": "What is the purpose of PyText, the open-source framework released by Meta AI in 2018?", "eb5f0613-ead0-4f96-ab26-cb931b42b132": "What is the purpose of BlenderBot 3, the chatbot developed by Meta AI?", "fc224d5a-4c01-42de-b141-331bdba781a4": "How does Galactica, the large language model developed by Meta AI, assist scientists in their tasks?", "0d4ca200-7e16-48c8-bab3-1eccd0d08a7e": "What is the purpose of Models Meta AI (formerly known as FAIR)?", "8535dcbc-84fc-4902-80ef-3fba21ebc4fc": "When was PyText, an open-source modeling framework focused on NLP systems, released by Models Meta AI?", "202d877f-735e-41d7-b26c-59f9eb75d3b2": "What is the main goal of BlenderBot 3, the chatbot developed by Models Meta AI?", "3cb05b28-07b3-487b-bfb7-ed9fda029cd8": "How does Galactica, the large language model developed by Meta, assist scientists in their tasks?", "aeab3182-2e98-4ee2-9e99-2c5496b05131": "What is the non-commercial license for LLaMA (Large Language Model Meta AI) designed to prevent?", "1aae2f40-1bf9-4a0e-af74-5b482995f0e7": "Who is granted access to LLaMA on a selective case-by-case basis?", "51c38c29-c374-49a0-9dee-d61725418c8a": "How does LLaMA-65B compare to DeepMind's Chinchilla and Google's PaLM?", "f504d403-54d3-45f4-bd67-96f7980f2f2d": "What sources were used to train the LLaMA models?", "00aaaf2f-94ea-4550-89e3-1f7d0a612697": "What are some of the issues associated with large scale language models like LLaMA?", "ed118a91-f123-4fd5-a59d-2bd6a8d1fa25": "When was EleutherAI, a non-profit AI research lab, founded and who are its founders?", "60eb145b-6054-4ac2-ba76-8c84b9bedf52": "What is the purpose of EleutherAI as a non-profit AI research lab?", "b509fda6-8f12-4205-9ae6-d1f47380c87f": "How did EleutherAI contribute to large-scale natural language processing research?", "4a21b5f5-155e-4e05-9d83-d8ad208a9291": "What is the significance of the Pile dataset curated by EleutherAI?", "95039793-be2a-4a0f-b288-2a0ff2dd3359": "How did EleutherAI's GPT-J-6B model compare to other open-source GPT-3 like models?", "7f395a44-71f3-47a8-beca-dc31b3be48d3": "What is the role of CLIP and VQGAN in the image generation model developed by EleutherAI?", "e1051a5d-137c-440c-b909-3541dca24bfb": "How did EleutherAI collaborate with the Korean NLP company TUNiB in training language models?", "5efde3ea-d12c-4db1-ad4d-d4690126d1eb": "What funding sources did EleutherAI utilize for their research and computing resources?", "035be70e-e5be-43ed-b4db-1ed486856ff4": "What was the significance of EleutherAI's GPT-NeoX-20b model in the field of language models?", "211ff822-9a0e-4b7a-ab3d-5495a5a468c5": "When was EleutherAI formally incorporated as a non-profit research institute?", "de01d6b0-abfe-4dd8-aef0-bd263512ade6": "How does EleutherAI prioritize interpretability and alignment in their research on large models?", "5ea40dcf-1e3f-416a-91bd-3a0fb1302d8e": "What is the significance of EleutherAI's collaboration with the Korean NLP 
company TUNiB in the development of language models in other languages?", "0d2ec195-ba0f-42dd-80bb-e75f8f85fff8": "How did EleutherAI obtain computing resources for training their language models?", "cc1742e9-fa0a-47e7-b9f5-3223c601a1c7": "What is the size and accuracy of EleutherAI's GPT-NeoX-20B model?", "822200f4-5fbd-437e-8ce8-a2845a129e1a": "Which dataset was used to train the GPT-NeoX-20B model and what are the categories of data included in it?", "346d4f82-a98e-41a7-a095-e275eeefaa3a": "How does GPT-NeoX-20B differ from other GPT models in terms of positional embeddings?", "12b59791-9a76-4b48-84ed-ba0df6225087": "When was EleutherAI formally incorporated and as what type of organization?", "c27215a2-f2c4-4832-bbc2-676d5e33e8e4": "Who are the founders of Cohere and what is the specialization of the company?", "42420951-90e2-429f-a5db-d3eba40b4fc2": "What year was Cohere founded and where is it based?", "09e16361-f8d7-4131-896a-ad3afff20993": "How does Cohere contribute to the field of natural language processing?", "bbf6eac5-1e90-4e8f-9394-82b998e618b6": "Can you provide an overview of the diverse nature of the questions that can be asked based on the given context information?", "8c5c3769-8cf3-486a-a822-f3e2ca278d25": "What is the purpose of GPT-NeoX-20B and what are its key features?", "b90cc661-0f3b-483e-a714-21bb0057a1fc": "Who are the founders of Cohere and what is the company's specialization?", "be627025-cf38-4080-b0d8-1d564c405026": "What are the two types of large language models provided by Cohere and how do they differ?", "f431befb-9632-4f92-9da6-de670d58bb6a": "How does Cohere train AI systems on large-scale data and what is the benefit of this approach?", "d1ab81d3-d723-4d2b-8763-298a312717a1": "How does Cohere differentiate itself from other model providers like OpenAI and Anthropic?", "e9ec5dac-672b-4938-bbc3-2839043c641f": "What are the partnerships announced by Cohere with Google Cloud and Amazon AWS, and how do they contribute to the company's development and deployment of its products?", "1b589f6e-8fe0-4acc-80bb-e4df9a054a52": "What is the significance of Cohere's Hyperwrite tool and how does it assist in generating articles?", "672a6f8b-2253-48df-a0cb-794669879513": "How much funding has Cohere raised to date and what is the expected valuation of the company?", "fdb9e1a3-1292-456b-aa3a-4882345b503e": "What is the upcoming offering from Cohere that aims to aid enterprise users in generating text and fine-tuning the model?", "db3884ba-204f-4a62-bf67-221fc556fdc3": "What is the purpose of Cloud's partnership with Cohere and how does it benefit their product development and deployment?", "f346ba96-54da-4d3c-afdf-63eb4a80c7b9": "How does Cohere's language AI, accessed through Sagemaker by Amazon, contribute to the generation of articles using Hyperwrite?", "050513ad-837c-4c91-863c-bdff7d16d700": "How does Cohere's Xlarge model differ from OpenAI's GPT-3 DaVinci model in terms of the number of parameters?", "f787cad4-a870-4d09-bcb9-ce96caf11728": "What are the key factors that Cohere emphasizes on for its users, and how have they incorporated these factors into their product design?", "19272547-4336-4dee-af14-2e30ce3b408a": "What is the focus of Anthropic AI as an AI startup and public benefit corporation, and what areas of research do they specialize in?", "77335e18-57eb-4ecd-ab96-e592ec26cc1d": "How has Google's investment in Anthropic impacted the partnership between the two companies and what stake does Google hold in Anthropic?", 
"0d1c9643-4a65-477c-8da1-9b02287e97c0": "What is the purpose of Anthropic's conversational large language model AI chatbot named Claude, and how does it utilize constitutional AI to align with human intentions?", "d788e897-be37-4a85-b94f-ee1549485c1d": "What is the significance of AnthropicLM v4-s3 and how does it contribute to Anthropic's AI systems and language models?", "ad6231a5-8168-447c-9b4a-783205e350aa": "What is the purpose of Anthropic's conversational AI chatbot named Claude?", "16de0b1c-45a9-472f-add7-8e4a30bfb8ef": "How does Anthropic's Claude differ from other chatbots in terms of its capabilities?", "86322b87-e3b0-4a63-83c6-33e0f905a6a3": "What are the two versions of Claude offered by Anthropic and how do they differ?", "02b4d0f6-8140-4425-b502-0ae9206f4d20": "According to Anthropic, what measures have been taken to ensure Claude's outputs are not harmful or toxic?", "d69bead5-7ec8-48d1-bea3-b2422eff2dd8": "What are the limitations of Claude compared to ChatGPT in terms of its abilities?", "b8dc7237-7cde-495e-b5f9-75e5114aa02a": "How has Google been involved with Anthropic and what stake does it hold in the company?", "378e7e38-efc2-40b2-80f8-94ab2d6b756e": "How has Anthropic trained Claude to avoid producing sexist, racist, or otherwise toxic outputs?", "e20c99cc-7a0e-4626-a15e-972de9433472": "What concerns have been raised about Claude's safety and potential for intrusion?", "5fc63179-5e0f-4f74-a0fe-282456216b11": "How has Anthropic incorporated the principles of beneficence, non-maleficence, and autonomy into its AI systems?", "65d83987-5509-463e-91e2-d4b9319ff84f": "What is the size of Anthropic's investment and partnership with Google, and how does it contribute to the company's goals?", "68c65b4e-6e57-4adc-b2ac-c2735ebd9cbc": "How does Claude, the AI system developed by Anthropic, differ from present chatbots in terms of harmful outputs?", "a47cf168-58f4-41a6-b0db-22fcee176538": "What are some of the safety features built into Claude to prevent unethical activities?", "647afdba-c7ba-4835-8992-74cf4ac74dbb": "How does Claude's proficiency in math and programming compare to that of ChatGPT?", "93ba7d74-cc70-4b8c-b334-2a740202338d": "What are some concerns regarding Claude's performance, particularly in terms of hallucinations and providing dubious instructions?", "061154cd-f61a-46bf-a5b5-33ecd9de1224": "When was the embargo on media coverage of Claude lifted, and what steps were taken to make it accessible to users?", "5cf54d2f-cb91-46be-8b67-33ea03344b07": "Which online tutoring solution is powered by Anthropic's AI system, Claude?", "5264ed0c-555e-437a-8a3e-1c990a0c8b75": "Name some of the platforms that Claude has integrated with, apart from the Poe app by Quora.", "795e6730-68a3-4430-bc92-b7c0c2f8742b": "Who are the founders of AI21 Labs, and when was the company established?", "578f031b-2582-4b47-9ee9-27163336659e": "What is the significance of AI21 Labs' Wordtune app, and when was it launched?", "bbdbc7c5-bc4e-4f7c-8000-1bfaef5976e7": "What is the purpose of AI21 Labs' Jurassic-1 model, and what tasks can it perform?", "e5b1c4c8-647e-434f-8707-3404022dd820": "What are the two sizes of the Jurassic-1 model developed by AI21 Labs?", "3138ab5f-eefa-4d2e-944e-384d5a78b9d2": "How does the unique 250,000 token vocabulary of the Jurassic-1 model improve computational efficiency and reduce latency?", "4d579d52-a6d1-4a16-8cf4-83b9707ed5e8": "Which companies have notably used the Jurassic-1 model by AI21 Labs and for what purposes?", "7fd4ce59-6a29-4db2-ae2f-85067b634fb6": 
"What are the key features and capabilities of the Jurassic-2 model compared to the Jurassic-1 model?", "c3e7d428-9dc9-4294-b8fe-c869a3f6aa46": "How does the Jurassic-2 model support users in building virtual assistants and chatbots, as well as in text simplification, content moderation, and creative writing?", "9fc91922-a8cb-430d-b9ca-6fdb04c5cd36": "What are the advantages of Jurassic-2 over ChatGPT in terms of knowledge and database updates?", "53000e9d-9dc4-420a-8b30-a1f85f2a34f0": "How many APIs are built for businesses in the Jurassic-2 model, and what are they specifically tailored for?", "8d71a839-4e71-49ad-ae19-9c6f0dc433e6": "What are the three sizes of the Jurassic-2 model and what are their corresponding instruction-tuned versions?", "26696beb-e52a-4355-88a1-dc5eddd471a5": "How does Jurassic-2 assist users in various tasks such as text simplification, content moderation, and creative writing?", "8dcc95df-2d57-4f53-aedc-095d8a2ba4c9": "What are the five APIs built for businesses that come with Jurassic-2 and what specific generative AI features do they offer?", "d1871f51-da6e-4e43-8c07-7b363c9c44cc": "How does Jurassic-2's performance on Stanford's Holistic Evaluation of Language Models (HELM) compare to other models, and what is its win rate?", "67ad9fe2-2f25-4677-ac1d-4bf65fd68607": "Until when is Jurassic-2 available for free, and what is the significance of the May 1st, 2023 date?", "ffde2142-0cfe-44bc-8f5a-11455db2ec0f": "What is the name of Baidu's AI language model and what are its capabilities?", "6ef5711e-e246-4067-bb50-bb9719b70587": "How does ERNIE enhance language representations and what masking strategies does it implement?", "6321c716-b2ee-4d3c-b7e0-50462736e922": "What are the differences between ERNIE 2.0 and ERNIE 3.0 in terms of their pre-training frameworks and capabilities?", "98be5170-3e52-4dc0-bb4f-1c746cc9b53f": "How does ERNIE 3.0 stand out from models like GPT-3 in terms of zero-shot and few-shot learning capabilities?", "b5f9087c-62ff-46ce-a428-e3dab99370d0": "How can ERNIE 3.0 be easily tailored for natural language understanding and generation tasks with zero-shot learning?", "4c4f898b-0550-4551-907f-6ed2d9961a68": "What are the key features of Baidu's ERNIE 3.0 language model and how does it differ from other models like GPT-3?", "3219b63b-2ada-4777-bdc7-0efd520ff735": "How does Baidu's ERNIE Bot aim to revolutionize its search engine and improve operational efficiency in various industries?", "b3c4fb2b-ec7b-450e-a828-12d37ecb01b8": "What are the main characteristics and capabilities of Nvidia's H100 Tensor Core and how is it optimized for AI and high-performance computing models?", "8a8eeaa5-94d2-4776-94d1-1c7f43022cdb": "Explain the role of Google's Tensor Processing Units (TPUs) in accelerating NLP workloads and their integration with TensorFlow.", "8f8cae8d-0fd0-4975-a631-7dfe363df63a": "How does Baidu's ERNIE 2.0 differ from ERNIE 3.0 in terms of its pre-training framework and collaborative pretraining capabilities?", "ca3f726e-455b-4503-9326-498a185503f2": "Describe the significance of ERNIE 3.0 Titan, including its parameter size and training data, in the field of natural language processing.", "a79a885b-58a0-4d26-abfe-713dcf00bf0b": "What are the potential applications of ERNIE Bot in various industries, such as finance, energy, media, and public affairs?", "239d8b04-7356-447b-931c-524d414704e6": "How does ERNIE 3.0 showcase task-agnostic zero-shot and few-shot learning capabilities, and how can it be tailored for natural language 
understanding and generation tasks?", "45b0ed8c-0bc9-409e-a7bf-f0379a23ab79": "Explain the availability and access restrictions of ERNIE Bot, including its limited access to invited users and the expected availability to enterprise clients through Baidu AI Cloud.", "e6964209-fa22-4a39-880f-ffd95cdd48cf": "Discuss the significance of Google Cloud Platform's TPU v4 in accelerating NLP workloads and its integration with the TensorFlow framework.", "0c297862-7a1e-44fb-b6e5-f0bd81716a00": "What are the key features and optimizations of Nvidia's H100 Tensor Core and Google's Tensor Processing Units (TPUs) for AI and machine learning workloads?", "43487332-1d1d-4412-a963-243b0b29c71c": "How does Microsoft Azure contribute to the field of machine learning and deep learning with its GPU instances and partnership with OpenAI?", "81a9f085-8262-47e0-aeb3-aa69f1e51cf0": "What are some of the advanced models developed on computing and cloud systems, such as BERT, RoBERTa, Bloom, Megatron, and the GPT family? How do these models contribute to NLP tasks?", "dd0ac7fd-5758-4303-a187-aeee2e44b2ac": "How does the increasing availability of specialized hardware for NLP tasks impact cloud computing programs? How does it enable companies to train and run previously impossible models?", "da9a9dbd-39c6-42c9-a537-772e61f5fbc5": "What is the significance of open-source LLMs efforts in the context of the provided information?", "eca3ce12-d0d3-4a98-be43-fa0652e0e598": "What is the difference between RoBERTa and BERT in terms of training procedure and dataset size?", "e8e3ada0-e32c-45fb-920a-eb97a86be967": "How does the availability of specialized hardware for NLP tasks impact cloud computing programs?", "1143bcbb-80c7-4df9-bb17-186bd88e4097": "What are the potential benefits of open-source language models in comparison to using an API?", "9d0a0ec5-d649-45cc-bfa0-6955b6785bf5": "How have Eleuther's \"The Pile\" and LAION's LAION-5B dataset contributed to progress in text and image modeling?", "68f73351-d85e-460b-9322-c679e53b3da1": "What is the strategic partnership between Hugging Face and Amazon Web Services (AWS) and how does it increase the availability of open-source data sets and models?", "2ae28963-37f7-4c74-b82e-800c9563a433": "How has generative AI been applied to different modalities such as image, text, and code generation?", "42e52a1c-ca40-4cf9-8ae9-e4fa35d4fd1a": "Can you explain the significance of MidJourney's Discord and ChatGPT in terms of their user base and popularity?", "e4289919-2ecb-4877-abd1-55b62c6d8385": "In what ways have software development use cases seen a significant rise in relation to generative AI?", "059c9bfb-27e3-4f6f-90b8-a33f9a5a1a65": "What is the purpose of DALL-E in generating digital images from natural language descriptions?", "b9a17eec-2ebf-426d-8095-bd2c4c3504e1": "How does DALL-E differ from other image generation models in terms of its architecture and capabilities?", "0480086a-0153-4b2e-8cbf-511d3547b1b7": "What potential challenges or limitations may arise from using public datasets as training data for DALL-E?", "04e1acbf-8404-4181-a1a5-ab4c683b1711": "How does Midjourney's image generation platform work, and what is its unique feature compared to other similar programs?", "736f8826-fe05-4a66-b4e8-7d916b2c8645": "When was the beta version 5 of Midjourney released, and what command can users type to generate images using the Discord bot?", "7cdd0423-f0bb-479e-970a-2e128d8c8474": "How does Midjourney allow users to select the image they want to upscale from the generated 
options?", "4bbb2226-30f4-4cff-aa89-7d7f92d14184": "In what ways can DALL-E 2 produce higher-resolution images and perform zero-shot visual reasoning?", "ebabafaf-c7b3-493a-8ba8-37d9ff4b81c1": "What are some possible applications of generative AI in software development, as mentioned in the context?", "ce5b309d-b03a-4916-8bdc-48342b22c79b": "How has GitHub Copilot been utilized by developers, and what is its current user base as of September?", "f57b7cec-86db-4747-949e-50bf5f67e2c0": "What are some popular modalities for which generative AI models have been developed, apart from image generation?", "52974393-9046-4cab-a6b1-d9fed62db27a": "What is the purpose of the Midjourney AI program developed by Midjourney, Inc.?", "27c02c88-9884-4466-8a50-b8f4e8dd7b8b": "How can users generate images using the Midjourney AI program?", "aea5f85a-1522-4ebe-9944-a80570b013f4": "What is the difference between DreamStudio and Stable Diffusion, even though they are applications of the same technology?", "2b0169d3-8565-4a50-8cc6-a107ff21d0f5": "What is the key feature of DreamStudio that sets it apart from other image generation applications?", "bfd36cc6-f615-4123-8f19-f5fe0a50a39f": "How does DALL-E use public datasets as training data, and what impact can it have on its results?", "78e6ace7-f4a6-4d2c-bf19-6ad3c5428614": "What tasks can Stable Diffusion perform using its latent diffusion model?", "53dc5192-c1fa-462e-b016-5b8af208d355": "What is the minimum VRAM requirement for Stable Diffusion, and how does it make the model independent of cloud services?", "13792f38-a238-42c1-b4b6-3f4d74c5d9af": "What is the purpose of the Whisper AudioGen AudioLM system developed by OpenAI?", "7e940d4d-d815-4b10-92bf-3aef5d27971e": "How has Whisper AudioGen been trained, and what are its capabilities?", "f19cd821-7ee1-44b8-924c-f577ecc9aaf9": "What programming languages and frameworks were used to develop Whisper AudioGen?", "5e3527a0-5bb0-4fe8-a356-d2565c5ebe6c": "What are the key features of DreamStudio and how does it support negative prompting?", "6737fb3a-c5c1-438b-827b-fa8dd3d507e8": "How has Whisper, the automatic speech recognition system developed by OpenAI, been trained and what tasks does it support?", "dace7eef-9d55-4b6b-814f-488374b1b32e": "What is the purpose of Google's AudioLM model and what are its capabilities in generating audio?", "30382fd2-4c9e-4cad-8d6d-94cdc8bb3a84": "How does AudioGen AI by Meta convert text prompts into audio files and what is its similarity to image-generating AI like DALL-E?", "504d73a0-5e70-41d0-b09f-f3db18184846": "How does AudioLM improve fidelity in music and audio events?", "6da3c0bc-e5d1-4114-9a71-262610e9237a": "What is the success rate achieved by AudioLM from human raters?", "b981df39-67a1-4d86-8578-70b1446f1882": "How accurate is the audio classifier trained to detect synthetic speech generated by AudioLM?", "952188a9-78bb-4f3a-a1fd-5648ba8e0b89": "What is the purpose of Meta's AudioGen AI?", "6986ab0a-147e-4c29-b180-f105bd6c3f7d": "How is AudioGen AI similar to image-generating AI like DALL-E?", "b9062f64-45f4-4fea-a659-3653ed200ea1": "What is the quality rating of the audio output generated by AudioGen AI?", "03ead958-0652-44c6-9fdd-d565d27ee683": "What is the limitation of AudioGen AI in terms of sequencing sounds through time?", "fbdc2b91-c100-43f6-a70e-46e20ffd6be9": "What is the unique feature of Neeva, the AI-powered search engine?", "321f8a79-e392-44d5-8bd3-f1a9f371f5f4": "How does Neeva provide ad-free and private searches?", 
"167bd3c3-bf06-4fca-bfe0-63df3ee2e942": "What are the limitations of the free version of Neeva and what is the pricing of the premium plan?", "127ab66b-5e19-4f2d-97d9-85eaaaa42241": "How does You.com group web results into website categories?", "b43a9820-c555-4c4e-9d9c-9e171ffc3815": "What are the features offered by You.com, including YouWrite and YouChat?", "c6a86a86-1dd9-4dde-83f4-8c0848ea7de6": "How does You.com prioritize privacy and personalization in its search engine?", "5ebbd052-af2f-4ea5-9571-5eb5af5d0e3e": "Does You.com collect users' personal information?", "3272eecb-9614-49fa-a4aa-7dacee576014": "How do the search results on You.com allow users to create content directly?", "f8ced64a-251d-466f-bbd2-867d62a83de2": "How does Neeva differentiate itself from other search engines in terms of privacy and personalization?", "d7850b04-3a99-4902-9e4c-59e8fc08d0c4": "What are the features offered by You.com, a California-based search engine?", "78b6f259-c504-4701-a787-83b2e77aad55": "How does GitHub Copilot assist developers in programming tasks?", "4f2e24da-9554-4092-bcb0-9d55451ada3e": "What programming languages can OpenAI Codex generate working code in?", "76704cb5-2e55-41de-be52-f6e97b69aacf": "What are the advantages and limitations of Jasper.AI in text generation?", "5d177782-b609-48f4-ac51-4204fd3abbe7": "What is the purpose of Jasper.AI in text generation?", "0850066e-8366-4e08-a8f4-cede380860b2": "What are some drawbacks of using Jasper.AI for text generation?", "d2623872-e0ce-446d-a2db-a2e3f7b45f7e": "What are the challenges in developing generative AI models?", "fa961796-5530-4dcf-85ab-5a9339ac994f": "How can smaller datasets enhance the performance of large language models in domain-specific tasks?", "dd34d499-d92a-4183-8e4f-fc8188f4bd80": "Why is compute cost optimization important for generative models?", "aaea47fe-2cae-4f99-9445-67f783947df3": "What are the pressing concerns regarding safety and security in the development of generative AI?", "b865dfb6-51a0-4aaf-8275-aa812f83583d": "Why are open-source alternatives necessary for increasing access to next-generation language models?", "bfb08106-c36a-4289-8567-0e182af73126": "How are artificial neural networks designed to mimic the structure and learning process of our brain?", "1c10ac76-c6af-4a1e-85be-4618f6d76da3": "What is the main feature of deep learning and why is it important in the field of AI?", "932b9fe4-2493-4c40-8ec7-f785d18a3fc7": "How do neural networks differ from conventional machine learning algorithms like linear regression?", "327261e9-b333-4c02-8441-3c2229b0efa7": "Can you provide examples of consumer tech that have been developed using neural networks?", "bb0cea7f-5ff2-4200-9cc8-9c724f452130": "According to Andrej Karpathy, how can neural networks be a new and better way of writing software?", "c20273ed-4cb8-4f8a-8687-235fadd606ae": "How do neural networks learn to adjust their behavior based on examples from training data?", "95377f9b-c1ca-49a1-ac20-76f8a8525f9b": "What is the role of weights and biases in a neural network, and how do they contribute to its behavior?", "801f65fc-e57c-46cf-bd0c-b6a7666e1667": "How does a neural network learn the logic of a program instead of hand coding it?", "347743e8-32f2-4b0e-bdbc-2264c24853f4": "Why are machine learning models trained on neural networks often referred to as 'black boxes'?", "0400b1e4-4546-4afd-b3d8-5b75fcda46e0": "What are emergent behaviors in large neural networks and how are they discovered?", "f171a883-843e-4223-92ca-3b32276ef0df": "Give an 
example of an emergent behavior in a system that is not a large neural network.", "7bf43357-0e9d-4576-85fc-3c6c13431d25": "What is the first step in creating something like ChatGPT?", "f0ba5ee6-19c1-4ff8-94c0-f992e584f722": "How do emergent behaviors arise in complex systems, and what is an example of such behavior in biological ecosystems?", "12bae8b2-de6c-4475-9cd6-d2ec91f0ad4d": "What is the purpose of pretraining a base model in the creation of ChatGPT?", "bc0a56dd-1f50-4ea8-8a1a-44b12093324f": "How are words represented in numerical form before being fed into a neural network during pretraining?", "1e367c83-c086-4796-b55e-da28c9aded4d": "Explain the process of predicting the next word in a sequence during pretraining, and how the model's accuracy is evaluated.", "9972f2e6-63c2-401f-865c-8fc38b8645a8": "Provide an example sentence from the training dataset and explain how it can be used in the pretraining process.", "1b62431a-55fe-48af-91ab-a59caf3fb039": "What is the purpose of using tokens and embeddings in a language model?", "60f264f3-76d7-4f21-b5de-d67ddd25c963": "How does the model learn the dependencies between different words in a sentence?", "986d0501-2f32-40f2-828b-3b55b478c88f": "Explain the process of training the model on a corpus of text data.", "9d01837c-1e31-4890-8819-51137e39d9ee": "What is the significance of evaluating the model's predictions on unseen data?", "6968e1e5-0401-4fa1-917c-7557806ec8df": "Can you provide an example of how the model generalizes its language understanding to unseen data?", "9dfd7af2-9a1b-4c29-9500-9f5aa939af80": "In the given sentence \"She needs to head to the ___\", what kind of word is expected to follow the phrase \"to head to the\" based on the training dataset?", "f9a12d88-6f73-46b0-8236-c0cc65390dfb": "How does the model make an educated prediction for the missing word in the unseen sentence?", "df634ce8-bcfe-4357-9f0c-d26be7c6851d": "Why is it important for the questions in the quiz to be diverse in nature across the document?", "dc9ef659-063f-436b-b8bd-7639c42a1680": "What is the purpose of using context information in language models?", "51a92dec-e4ab-4c49-969f-c59a7978a35d": "How does a language model make predictions based on similar contexts it has encountered during training?", "38e625d8-5662-4058-bc66-b73cafd5f2a9": "Explain the concept of loss function in language models and its role in training the model.", "05df988d-eb24-41c1-b756-d147a7900a06": "What are the steps involved in minimizing the loss value in a language model?", "bd002364-77f0-4b76-8dcc-36831b83d8e3": "As a teacher, how would you explain the process of backpropagation and gradient descent optimization in training a language model?", "42da00c2-d1b1-421f-ada6-ef1405791143": "What is the goal of the model in predicting the next word after the sequence \"She needs to head to the ___\"?", "37196b5a-886f-49d8-92d7-597d128a66ff": "How is the difference between the predicted and expected next word calculated?", "d8a3c287-4a59-43cd-9e99-8c53034d5429": "What is the purpose of the loss function in the context of the model?", "50b4f66c-9148-4132-b5a1-58a27dcb6bc4": "Explain the concept of backpropagation and its role in minimizing the loss value.", "4a7e9f3d-4c2e-4e8f-adfd-aff363efad62": "How does the neural network process word sequences and determine the probability distribution over the vocabulary?", "d0f7cc10-fa5d-4f09-8277-4c4cf7ffa0d5": "Can you explain the role of numerical representations and embeddings in the model's prediction ability?", 
"870163ef-6455-44be-80b0-b3976f82b221": "What is the significance of the probability assigned to different token ids by an untrained model?", "5b9c1dab-cc25-4dc4-b81e-191cb26e87b4": "How does the neural network handle text inputs and what is its primary mode of operation?", "bf38d745-ea32-4fea-9f9e-bdf5fa9e2d3b": "Describe the process of propagating the error (loss value) backward through the neural network.", "72119568-aed1-459d-8fcf-431f15b8f004": "Can you explain the analogy of model parameters (weights/biases) being adjustable knobs and their effects on prediction ability?", "a9b0d4a4-ce61-4f19-9111-aaf99562aff3": "What is the purpose of backpropagation in a neural network?", "0656d0f5-c162-406a-80a1-6bea6f49cd2d": "How are the derivatives of the model parameters calculated during backpropagation?", "4557dea8-8489-481c-ba14-85cdf643a754": "How can you visualize the output of backpropagation?", "01eaeec8-c7b1-4e3f-9198-71f2f85dbfdb": "What does the gradient vector represent in the context of backpropagation?", "7ba033da-6e04-41fc-9b42-c02fcd2f7dc1": "Explain the concept of gradient descent optimization and its role in reaching the minimum loss value.", "744bb3a7-2255-41f0-bb7f-c81d2e294eb2": "What iterative process is involved in adjusting the weights and biases of a neural network during gradient descent optimization?", "7747cd2c-cfa6-4837-80a8-672dffe2f7f6": "How does the learning rate affect the adjustment of weights and biases in a neural network?", "aeb7e28f-d58b-499e-b6b1-9f7f55771b28": "What is the ultimate goal of using backpropagation and gradient descent optimization in a neural network?", "6f8d257c-5d8b-484a-bcc9-cb520eaf6851": "How does the magnitude of a vector relate to the steepness or rate of increase?", "f1263d30-15a5-43dc-a861-e761303696f0": "Explain the concept of gradient descent optimization and its role in reaching the minimum loss value.", "e5e1134b-cc1c-473b-842f-de5d5e7b8708": "What is the purpose of adjusting the weights and biases of a neural network during the optimization process?", "0e1f3750-ca0a-4f82-a730-8e5287a590d2": "How does the learning rate affect the efficiency of gradient descent optimization?", "fa12424d-d5a5-4e50-b122-8aa102372d5a": "Describe the process of reaching convergence in neural network learning.", "3671cd0e-a471-4329-b495-f796e7b08ffd": "What is the significance of reducing the loss value in language models and their fluency or coherence?", "d2c94032-929d-4cfd-9ae2-f6ba03dbc647": "How do the millions or billions of mathematical operations in a neural network contribute to the development of a language model?", "d6b135e2-d58b-465d-b279-2755788e0754": "Explain the self-attention mechanism used in the Transformer architecture.", "e1ea5a36-852d-4621-83e6-7a94ffbda55e": "What is the main breakthrough of the Transformer architecture in natural language processing research?", "15f31f5e-dced-4d73-add8-44f1fd133344": "Discuss the concept of probability distribution and its role in generating the next token in a sequence.", "71d43304-19c7-47a0-ab0f-95295b1d5b8a": "What is the main breakthrough in natural language processing research that led to the development of ChatGPT?", "4ee5e640-8ee3-4c0d-9fa6-38d98cd1361e": "How does the Transformer architecture differ from previous neural network architectures used in NLP?", "1987c57e-ec71-43c1-8379-ef9e89456a7c": "What is the significance of the paper 'Attention is All You Need' in the field of deep learning?", "939964b7-d72f-444c-8f5d-06a5025eb93a": "How does the Transformer handle the issue of 
long-term dependencies in sequential data processing?", "f016839e-a079-43ae-b1b1-ae394bdc9694": "Can you explain the concept of self-attention mechanism used in the Transformer architecture?", "1560b9c2-30fa-4bb3-869a-16d53907050a": "How has the Transformer architecture impacted the field of vision tasks and handling other types of data?", "f6c5e552-ca88-4453-9577-e185ecf0c55d": "Why is the order of words important in preserving the context and meaning of a sequence in NLP?", "72ddc631-9733-4c60-a062-aaa7682d0783": "What are some of the state-of-the-art language models that have been built on or inspired by the ideas from the Transformer architecture?", "265b0a7b-1b85-4bf4-acce-d0779669c03a": "How does the Transformer encode the position information of each word in a sequence before processing it in the network's inner layers?", "f6d671b1-de4f-4582-90a6-0f45f032e023": "What is the significance of the '...is all you need' memetic trend that originated from the paper on the Transformer architecture?", "38ed311e-6eea-4741-852b-2ec3e7ebed9f": "How does the transformer encode the position information of each word in a sequence?", "66670fc6-1624-404d-a8f6-0f066c638b22": "What is the vanishing gradient problem and how does it affect RNNs?", "902f6470-9913-40fc-b3fc-5aaba937c2c0": "How does the transformer solve the issue of long-term dependencies in neural networks?", "1cc9050d-1ebc-4a8d-9563-8272014405e7": "What is the purpose of the self-attention mechanism in the transformer?", "f74db401-9112-409d-adf7-818901c32de3": "How does the transformer preserve the context of any sequence, regardless of its length or the relative distance of the attended word?", "07887b0e-14dd-4a22-b639-c2158daf194e": "What role do matrix multiplications play in encoding the degree of attention between words in the transformer?", "0ec711db-2f9c-482a-82f0-803a48c8c9bd": "How are the model weights adjusted during each training in the transformer?", "e6173e5b-1fed-4ead-a359-7a80019c0a50": "How does the self-attention mechanism in transformers encode information about the degree of attention between words in a sequence?", "a8fd0e10-bdf5-4b1b-9dac-032128906026": "What is the purpose of positional encoding in transformers and how does it contribute to preserving the context of a sequence?", "a311d23f-6115-4537-bfd5-ff6d6b4f17b8": "How are the learned traits of word attention stored in the model weights and encoded in word embeddings in transformers?", "cbdaf0e5-f7f8-438b-9a20-ab02086e58f9": "Explain the role of the feed-forward neural network in the transformer architecture and its relationship with the self-attention layer.", "b7eceb09-32b6-47eb-bdc8-91b05082ce9f": "According to the LLM scaling laws, what is more important for training better models in transformers: increasing the number of parameters or the size of the training data? Explain the reasoning behind this.", "0c8a2183-24e8-4015-9b8a-acf557365274": "Why is parallelization essential in dealing with the computational requirements of transformers? 
How does it differ from handling RNNs in terms of parallelization?", "26ff775d-8839-40db-b812-3d0165d83042": "How does the transformer architecture enable the computation of relationships between words in a parallelized manner?", "4171cae8-919f-4a63-b27d-a3e32164d5d3": "How does increasing the number of parameters compare to increasing the size of the training data in terms of training better models, according to OpenAI?", "2b99d28c-1a48-47b8-9cae-9852aa6ed616": "Why is parallelization difficult to achieve with RNNs but not with transformers?", "b219710d-a6e2-4327-aead-58b0fe7d55d7": "How do GPUs or video cards benefit the training of transformers in deep learning tasks?", "e3899405-8a4d-4c9b-941b-4a92ded5f0a8": "Which GPU manufacturer has experienced significant growth in stock price and market cap due to the increasing popularity of AI and larger models?", "f06ac511-50bc-41a6-b92f-63ed9c51005c": "Can you provide examples of recently released text-generation foundation models?", "73dbf6e7-9358-40b6-afea-7e197d566cec": "What are some resources that provide more in-depth explanations of the transformer model for those interested in learning more?", "a4b703b5-2b95-4fd9-b25e-893c0c5663a6": "What is the difference between base models or foundation models and fine-tuning 'chat' models like ChatGPT?", "05e4d0db-73db-4316-b4e9-e76bcdfb2fd1": "How can transfer learning be applied to base models in natural language processing?", "86fc1032-a4f4-4b37-be17-64601bca525b": "What are some examples of recently released text-generation foundation models?", "55136798-d2d7-418e-8dad-19f35d865b8a": "How does pretraining in natural language processing utilize self-supervised learning?", "5cbe99f6-5ae3-41c8-9541-7faf3f67e4e2": "What is the purpose of supervised fine-tuning in the training process of language models?", "8286f854-5813-4930-abdd-cc890291f54b": "How can base models be trained to answer questions by 'tricking' them into thinking they are completing a sequence?", "c64fb14a-528f-4075-a9d4-ceb7cd7bfc5b": "What are some downstream tasks that can be performed using the general language understanding of base models?", "5bf69c05-d72a-4243-b785-1db4ed7082a2": "How does the structure of sentences and relationships between words contribute to the extensive knowledge of base models?", "f921e1e8-52d6-4575-8239-664162d87628": "Can you explain the concept of transfer learning in the context of natural language processing?", "9ca8fda2-d349-4b0b-a1ef-6a4b71f672a0": "What are some examples of conversational assistants that can be built using base models?", "1abc88ee-7e97-47ac-b599-70255b133cc2": "How does the training process of base models differ from the training process of fine-tuned models in natural language processing?", "d5251308-e729-4f81-9112-97bdc83f5cb6": "How does the model learn to complete prompts in a question-and-answer format during the supervised fine-tuning stage?", "13ab7603-e4ff-473c-8a76-92316241816e": "What is the difference between pretraining and fine-tuning in the context of language model training?", "7cb61154-df67-4100-9534-6835492f3de1": "Why is the training data preparation in the fine-tuning stages considered labor intensive?", "62fc62ea-da8a-4430-946a-f6288c994a36": "How does transfer learning benefit the fine-tuning process in language model training?", "8e401b08-ae44-4c17-97e9-030a05f835e0": "According to Karpathy, what percentage of compute power and training time is utilized during the pretraining phase?", "26da5364-5ab2-47d6-8821-07762f5c0c8d": "What is the purpose of 
human-curated question/answer pairs in the fine-tuning stage of language model training?", "c04850fe-6e60-402c-85c7-3a9690b3162d": "How does the model adapt to a specific language task during the fine-tuning stage, even with limited task-specific training data?", "74944eee-b12b-4e9f-9afd-5b9e76c2de7c": "What is the role of target/ground truth words in determining the loss value during training?", "8aa86b02-6fd9-43fe-9cff-495f414eb5bb": "Can you explain the concept of self-supervised learning in the context of language model training?", "2d225f0f-8f52-4f53-8a77-5129ac73c185": "How does the model generate labels or target completions as it goes through the corpus during pretraining?", "c98aa852-6ad0-4d03-8a28-bd280e359abe": "What is the main benefit of transfer learning and pretraining in the context of language models?", "34fe0f65-33d0-498b-8233-2c5e077cd2ac": "How is fine-tuning different from pretraining in training language models?", "880bf74b-f8a9-4133-b27a-7cd0dd1029f9": "What are some of the attributes that human labelers use to score the model's completions during the fine-tuning stage?", "25dd3f61-3894-45e6-9d26-c861fad6c921": "How are the 'assistant' or 'chat' models like ChatGPT created?", "a923e376-e21c-49d5-8fa1-46e8bc3ce48b": "Can the GPT-4 base model be accessed via an API?", "3e5ac65d-34a4-4b28-9d23-b9186d1df3ed": "What are the fine-tuning steps involved in training commercial and open-source fine-tuned models?", "e1d76ee3-e307-4f43-ba85-53d18eaf9344": "What will be discussed in Part 2 of the document?", "0eebd526-b4ad-458e-9d26-1641a4340346": "As a teacher/professor, how would you define token in the context of this document?", "5c08b37a-6bc3-41fe-8fa7-00549c0fa8c0": "How are sentence or document-level embeddings generated from word embeddings in RAG systems?", "89cf7fdf-2493-4ae6-8bc5-226a01b305ca": "Why is it important for the questions in the quiz/examination to be diverse in nature across the document?", "910c428e-5d1d-44ca-96ba-dca1eb5080b9": "What is the significance of releasing the model as an API and why is it unlikely to be released by OpenAI?", "a5f8afdc-a0d6-48eb-addc-38d6f13776f9": "How are fine-tuned models trained and what are the steps involved?", "862cddc9-5d17-49fd-aa9e-4636b413875f": "In Part 2, what will be discussed about embeddings and their relevance in the context of RAG systems?", "574b710e-8c98-46b9-a7e7-865e97ba0ff9": "Can you explain the concept of tokens and their various types in a model vocabulary?", "e2ef37a0-254f-407f-831f-4c4ba6843043": "What are some resources/references mentioned in the context information that can provide further information on neural networks and transformer models?", "201e187c-8548-415c-b703-d12c58c744fc": "How does the dataset used by WizardCoder differ from other models in terms of instruction complexity?", "69f94354-6a65-464f-b7cc-dcd2c5f4a9fd": "What is the role of Evol-Instruct in enhancing the performance of LLMs like WizardCoder?", "711bb0fe-6650-4175-9555-961263e5b9f1": "Explain the concept of instruction evolution and how it is used to improve the difficulty level and diversity of the dataset.", "742ba2c4-d64a-4f71-a748-954e6130c15b": "What are the two types of instruction evolving techniques used by the Instruction Evolver in LLMs?", "4afa4c47-bd0b-4754-aef0-0effc7f9dfc8": "How does In-Depth Evolving contribute to creating a more complex and diverse dataset for WizardCoder?", "ce782026-e90b-4bc1-abbf-74b73688dadb": "How does the Instruction Evolver enhance instructions through in-depth evolving?", 
"891d4c09-3c4e-4aaf-a8f9-d6e5dbba53b5": "What are the five types of prompts used in in-depth evolving?", "d2d226a4-88a7-4e62-b576-873f626b6a24": "How does in-breadth evolving address the limitation of open-domain instruction finetune datasets?", "f69b28f9-b274-470d-9768-94cd736c3d6f": "What are the three aims of in-breadth evolving?", "6b1f7afc-6f12-4bbd-b073-b396fdb61c12": "How does the response generation process work in the context of evolved instructions?", "dbb94892-54e2-4c7e-bbc4-bc6d80970610": "What does it indicate when the generated response contains the word \"sorry\" and is relatively short in length?", "c430488f-47c9-4115-b400-d8a4ba5485f3": "How does Evolving In-breadth Evolving aim to enhance topic coverage, skill coverage, and overall dataset diversity?", "3f351db5-654a-4f0a-b20e-c1b4df0e990d": "What is the purpose of the LLM in generating responses for the evolved instructions?", "7f16aeb5-5343-4d36-a90e-ca533170464c": "How can the \"sorry\" and short length of a generated response indicate that the LLM struggles to respond to the evolved instruction?", "0101ad8a-4afe-4ad0-aa63-06a2418609ac": "What is the process of finetuning the LLM on the evolved instructions and how does it ensure an even distribution of instructions of varying difficulty levels?", "98fe3cee-cd13-4b3a-963d-b9c159f02b0d": "How does Wizardlm validate Evol-Instruct and what is the resulting model called?", "a06b2bfc-98d9-4a37-ad2a-f33746878c2e": "What are some use cases for WizardCoder and what code-related tasks can it be used for?", "b1826a0a-ea61-4e15-89f1-c8797d6ec651": "Can you provide an example of an input prompt that can be used with WizardCoder for code generation?", "c3e049f4-e7c6-4d4d-8ed7-5983f61ceebe": "How does the fine-tuning process improve the LLM's ability to generate coherent and fluent text in response to various inputs?", "b12b2ce2-7512-47b1-8610-080eb022c124": "What are the best use cases for WizardCoder in code-related tasks?", "a3e0a869-c884-46a9-a930-28a68700c77a": "Can you provide an example of a code generation prompt that can be used with WizardCoder?", "cf3fc1b3-4de0-45a1-97a5-939fff8640e0": "How can WizardCoder be used for code completion? Provide an example.", "d347cf20-2a84-4839-8587-8b9941a13bbb": "Explain how WizardCoder can generate a summary of a long code snippet. Provide an example input.", "f8a7c459-17b2-4f98-ab1f-7d36cd92e387": "In what specific coding tasks has the 34B model demonstrated exceptional performance according to the evaluation findings?", "8b18d12b-8c27-426f-9994-3b54e4f22de2": "How does WizardCoder-Python-34B compare to other open-source and closed LLMs in terms of performance on code generation benchmarks?", "293a1876-cbfd-4c4e-a15f-d8b805f3e490": "What position does WizardCoder-Python-34B-V1.0 attain in the HumanEval Benchmarks? 
How does it compare to GPT4?", "0965308b-9224-4925-9313-9069247e1610": "What are the benchmarks used to evaluate the performance of WizardCoder-Python-34B on code generation tasks?", "6a5f46a9-0c9c-44dc-bec8-d026f528bbed": "How does WizardCoder-Python-34B compare to other open-source and closed LLMs in terms of performance on code generation benchmarks?", "01625788-8e04-426e-b434-7438dfd1b3e1": "Which model attains the second position in the HumanEval Benchmarks, surpassing GPT4, ChatGPT-3.5, and Claude2?", "65022071-d30a-4c78-8efa-6cc722c181b6": "How does the pass@1 score of the WizardCoder-15B-v1.0 model on the HumanEval Benchmarks compare to other SOTA open-source Code LLMs?", "9ce0385a-c4b2-4962-9552-1904378f9ba5": "How does WizardCoder compare to other open-source Code LLMs with instructions fine-tuning in terms of performance?", "e8e90dd0-ef8a-440e-b5ba-bd2e139985cf": "What factors contribute to the success of WizardCoder in achieving outstanding performance on code-related tasks and benchmarks?", "1394007b-ad81-4d00-8659-b06f0eda8acc": "What is the unique dataset used by WizardCoder and how does it enhance instruction complexity?", "0073b091-7985-4a51-9dc4-7debc0cc95cd": "Where can I find more information about WizardCoder, including YouTube videos and research papers?" }, "corpus": { "4ab5bd897f01474fc9b0049f95e31edae3ccd9e74d0f0acd3932b50a74d608b6": "LLM Variants and Meta's Open Source Before shedding light on four major trends, I'd like to share Meta's latest releases, Llama 2 and Code Llama. Meta's Llama 2 represents a sophisticated evolution in LLMs. This suite spans models pretrained and fine-tuned across a parameter spectrum of 7 billion to 70 billion. A specialized derivative, Llama 2-Chat, has been engineered explicitly for dialogue-centric applications. Benchmarking revealed Llama 2's superior performance over most extant open-source chat models. Human-centric evaluations, focusing on safety and utility metrics, positioned Llama 2-Chat as a potential contender against proprietary, closed-source counterparts. The development trajectory of Llama 2 emphasized rigorous fine-tuning methodologies. Meta's transparent delineation of these processes aims to catalyze community-driven advancements in LLMs, underscoring a commitment to collaborative and responsible AI development. Code Llama is built on top of Llama 2 and is available in three models: Code Llama, the foundational code model; Code Llama - Python, specialized for Python; and Code Llama - Instruct, which is fine-tuned for understanding natural language instructions. Based on its benchmark testing, Code Llama outperformed state-of-the-art publicly available LLMs (except GPT-4) on code tasks. Llama 2, Llama 2-Chat, and Code Llama are key steps in LLM development but still have a way to go compared to GPT-4. Meta's open access and commitment to improving these models promise transparent and faster LLM progress in the future. Please refer to the LLM and Llama variants below: From LLMs to Multimodal LLMs, like OpenAI's ChatGPT (GPT-3.5), primarily focus on understanding and generating human language. They've been instrumental in tasks like text generation, translation, and even creative writing. However, their scope is limited to text. Enter multimodal models like GPT-4. These are a new breed of AI models that can understand and generate not just text, but also images, sounds, and potentially other types of data. 
The term \"multimodal\" refers to their ability to process multiple modes or", "e470fa0d001e50b3ec3088022462a94ea7c87dd80106411b7d120f90b379e977": "the LLM and Llama variants below: From LLMs to Multimodal LLMs, like OpenAI's ChatGPT (GPT-3.5), primarily focus on understanding and generating human language. They've been instrumental in tasks like text generation, translation, and even creative writing. However, their scope is limited to text. Enter multimodal models like GPT-4. These are a new breed of AI models that can understand and generate not just text, but also images, sounds, and potentially other types of data. The term \"multimodal\" refers to their ability to process multiple modes or types of data simultaneously. This is a game-changer. Imagine an AI that can not only read a description of a dress but also visualize it or even design it! Multimodal AI models are moving us towards more holistic AI systems. These systems can potentially understand our world in a more comprehensive manner, bridging the gap between different forms of data and providing richer, more integrated solutions. As we stand on the cusp of this new era, it's exciting to envision the myriad of applications and innovations that Multimodal models will bring to the table. The future of AI looks more integrated and versatile than ever before. From Connections to Vector DB The AI landscape is witnessing a fascinating transition: from Language Model (LLM) connections or integrations, e.g., LangChain and LlamaIndex, to the rise of Vector Databases (Vector DB) such as Weaviate, Milvus, Pinecone, Chroma, and Vespa.ai. But what's driving this shift, and why does it matter? LLM connections, like the LlamaIndex, primarily focus on linking and understanding vast amounts of external data. They've been pivotal in creating semantic connections, enabling more intuitive search experiences, and enhancing data accessibility. However, as the volume and variety of data grow, the need for more advanced storage and retrieval mechanisms becomes evident. This is where Vector DBs come into play. Unlike traditional databases that store data in rows and columns, Vector DBs store data in high-dimensional space, allowing for more efficient and accurate similarity searches. Tools like Weaviate and Milvus are designed to handle massive datasets, making them ideal for tasks like image", "4b3a13a10f7ea2464249fb6aa64e9f403f8151daf24133dbcffbfa0e01fa0d74": "LLM connections, like the LlamaIndex, primarily focus on linking and understanding vast amounts of external data. They've been pivotal in creating semantic connections, enabling more intuitive search experiences, and enhancing data accessibility. However, as the volume and variety of data grow, the need for more advanced storage and retrieval mechanisms becomes evident. This is where Vector DBs come into play. Unlike traditional databases that store data in rows and columns, Vector DBs store data in high-dimensional space, allowing for more efficient and accurate similarity searches. Tools like Weaviate and Milvus are designed to handle massive datasets, making them ideal for tasks like image recognition, recommendation systems, and more. The rise of Vector DBs represents a broader trend in AI: the quest for more efficient, scalable, and versatile data handling solutions. As we navigate this evolution, it's clear that the combination of LLMs and Vector DBs will redefine how we store, access, and understand data in the AI-driven future. 
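To make the similarity-search idea concrete, here is a minimal sketch using sentence-transformers with FAISS as a local stand-in for a managed vector database such as Weaviate or Milvus; the model name and toy documents are illustrative assumptions, not part of the original write-up.

```python
import faiss
from sentence_transformers import SentenceTransformer

# Toy corpus standing in for whatever a vector database would hold.
docs = [
    "Llama 2 is a family of open foundation and fine-tuned chat models.",
    "Vector databases store embeddings for fast similarity search.",
    "Streamlit makes it easy to build small data web apps in Python.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(docs, convert_to_numpy=True, normalize_embeddings=True)

index = faiss.IndexFlatIP(emb.shape[1])   # inner product == cosine on normalized vectors
index.add(emb)

query = model.encode(["How do I search by meaning instead of keywords?"],
                     convert_to_numpy=True, normalize_embeddings=True)
scores, ids = index.search(query, k=2)    # top-2 nearest documents
print([docs[i] for i in ids[0]])
```

Normalizing the embeddings turns the inner-product index into a cosine-similarity search, which is the usual default for this kind of semantic retrieval.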
From Agents to OS The AI realm is abuzz with innovations, and one of the most intriguing shifts we're witnessing is the transition from LLM agents to using LLMs as Operating Systems (OS). Let's delve into this evolution and its implications. LLM agents, like AutoGPT, AgentGPT, BabyAGI, and HuggingGPT, have been groundbreaking in automating tasks based on user requests. These agents leverage the power of Language Models (LLMs) to understand and execute commands, making them invaluable in tasks ranging from content generation to data analysis. Their adaptability and intelligence have made them a staple in many AI toolkits. However, the vision for AI doesn't stop there. The concept of LLM as an OS is emerging as the next big thing. Imagine an operating system where the core is a language model, orchestrating everything around it. Such a system would not just execute tasks but would understand context, anticipate needs, and offer solutions in real time. It's like turning the LLM into the brain of the digital ecosystem, making devices and applications more intuitive and responsive than ever. The move towards LLM as OS signifies a paradigm shift in how we perceive and utilize AI. It's not just about automation anymore; it's about creating a seamless, intelligent interface", "98e9cbb20d5a2f5ab9d5d9712f9e66ef7123b584e1e1985cebef6bd4f41c0858": "the vision for AI doesn't stop there. The concept of LLM as an OS is emerging as the next big thing. Imagine an operating system where the core is a language model, orchestrating everything around it. Such a system would not just execute tasks but would understand context, anticipate needs, and offer solutions in real time. It's like turning the LLM into the brain of the digital ecosystem, making devices and applications more intuitive and responsive than ever. The move towards LLM as OS signifies a paradigm shift in how we perceive and utilize AI. It's not just about automation anymore; it's about creating a seamless, intelligent interface between humans and technology. As we stand on the brink of this transformation, the potential for LLM-driven OS to revolutionize our digital interactions is immense. From Fine-tuning to Plugins The world of LLMs is undergoing a transformative shift, moving from intricate fine-tuning processes to the more dynamic realm of plugins. Let's unpack this evolution. Historically, fine-tuning has been the cornerstone of LLM optimization. There are two primary ways to fine-tune LLMs: feeding data into the LLM in real-time and directly fine-tuning on the LLM. From a technical standpoint, this involves three methods: Transfer Learning: Adapting a pre-trained model to new tasks.Sequential Fine-tuning: Refining models in stages for specific tasks.Task-specific Fine-tuning: Tailoring models for a particular function. Moreover, LLM techniques like In-context learning, Few-shot learning, and Zero-shot learning have further enhanced the model's adaptability, allowing them to understand and generate content with minimal data. However, the future of LLMs is leaning towards plugins. With the introduction of tools like GPT-4 Plugins, the focus is on extending LLMs seamlessly. Instead of running LLMs as a service, they're envisioned as platforms. This means integrating LLMs with various tools, enhancing their capabilities, and offering a more modular and scalable approach to AI applications. 
The journey from fine-tuning to plugins represents a move from static optimization to dynamic adaptability, ensuring that LLMs remain at the forefront of AI innovation. In a Nutshell The AI domain is witnessing rapid shifts, with LLMs playing a central", "df6183049976174f912d271a7d08fda25e3086030c160fdc603face8a6000e00": "the future of LLMs is leaning towards plugins. With the introduction of tools like GPT-4 Plugins, the focus is on extending LLMs seamlessly. Instead of running LLMs as a service, they're envisioned as platforms. This means integrating LLMs with various tools, enhancing their capabilities, and offering a more modular and scalable approach to AI applications. The journey from fine-tuning to plugins represents a move from static optimization to dynamic adaptability, ensuring that LLMs remain at the forefront of AI innovation. In a Nutshell The AI domain is witnessing rapid shifts, with LLMs playing a central role. Initially, the move was from LLMs to Multimodal models, expanding from text to include images and sounds. Simultaneously, the trend shifted from LLM connections, which linked external data, to Vector Databases for efficient high-dimensional storage. Another evolution saw LLM agents, which automated tasks, transitioning towards LLMs as Operating Systems. This change aims for more intuitive, context-aware devices and applications. Furthermore, the traditional fine-tuning processes of LLMs are now being replaced by dynamic plugins, turning LLMs into platforms integrated with various tools. Leading this LLM revolution are OpenAI's GPT-4 and Meta's LLaMA2. Their pioneering efforts are setting the stage for an AI future that's more integrated, responsive, and attuned to human interactions. More Readings Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond: https://arxiv.org/abs/2304.13712; Sparks of Artificial General Intelligence: Early experiments with GPT-4: https://arxiv.org/abs/2303.12712; GPT4All-J: https://huggingface.co/nomic-ai/gpt4all-j; Introducing Code Llama, a state-of-the-art large language model for coding: https://ai.meta.com/blog/code-llama-large-language-model-coding/; Llama 2: Open Foundation and Fine-Tuned Chat Models: https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/", "de49ab9024a434ca1cd1efba258fbaa9a3e2d9a1bca3ab4a0349220cc1e2754f": "Private data to be used The example provided can be used with any dataset. I am using a data set that has Analyst recommendations from various stocks. For the purpose of demonstration, I have gathered publicly available analyst recommendations to showcase its capabilities. You can replace this with your own information to try this. Below is a partial extract of the information commonly found in these documents. If you wish to try it yourself, you can download analyst recommendations for your preferred stocks from online sources or access them through subscription platforms like Barron's. Although the example provided focuses on analyst recommendations, the underlying structure can be utilized to query various other types of documents in any industry as well. I have assembled such data for a few stocks for demonstration purposes. This includes Google, Microsoft, Meta, and Tesla. To facilitate easy access and updating of analysts' recommendations, all the recommendations can be organized into a designated folder. Each stock corresponds to a separate file within this folder. For example, if there are recommendations for 20 stocks, there will be 20 individual files. 
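As a rough illustration of that one-file-per-ticker layout, the sketch below uses a hypothetical analyst_recommendations/ folder; the folder name, tickers, and helper function are placeholders for illustration rather than the author's actual code.

```python
from pathlib import Path

# Hypothetical layout: one plain-text file of analyst recommendations per ticker.
data_dir = Path("analyst_recommendations")
data_dir.mkdir(exist_ok=True)

def append_recommendation(ticker: str, text: str) -> None:
    # New analyst notes are appended to that ticker's file,
    # leaving every other stock's file untouched.
    with open(data_dir / f"{ticker}.txt", "a", encoding="utf-8") as f:
        f.write(text.rstrip() + "\n")

append_recommendation("MSFT", "<latest analyst note for MSFT goes here>")
print(sorted(p.name for p in data_dir.glob("*.txt")))   # e.g. ['GOOGL.txt', 'MSFT.txt', ...]
```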
This organization enables convenient updating of information for each stock as new recommendations arrive, streamlining the process of managing and maintaining the most up-to-date data for each stock. Questions this Q&A bot application can answer The data we have for this application is stock market analyst recommendations for many stocks. Let's say you are looking for insight about Microsoft stock. You can ask any of the following questions as an example: What is the median target price for Microsoft (MSFT)?What is the highest price estimate for Microsoft (MSFT)?What is the lowest price estimate for Microsoft (MSFT)?How much percentage increase is expected in the stock price of Microsoft (MSFT)?How many analysts provided price forecasts for Microsoft (MSFT)?What is the current consensus among investment analysts regarding Microsoft (MSFT)?Has the consensus rating for Microsoft (MSFT) changed recently?When was the consensus rating last updated for Microsoft (MSFT)?Is the current recommendation for Microsoft (MSFT) to buy, sell, or hold the stock?Are there any recent analyst reports available for Microsoft (MSFT)? These questions cover various aspects of the stock analysis, including price forecasts, analyst recommendations, and recent changes in ratings. The", "15268fd9c2a45644a0c49ca1b4897b4fabfe3005fccee48af0acc7eea7dd0e9c": "much percentage increase is expected in the stock price of Microsoft (MSFT)?How many analysts provided price forecasts for Microsoft (MSFT)?What is the current consensus among investment analysts regarding Microsoft (MSFT)?Has the consensus rating for Microsoft (MSFT) changed recently?When was the consensus rating last updated for Microsoft (MSFT)?Is the current recommendation for Microsoft (MSFT) to buy, sell, or hold the stock?Are there any recent analyst reports available for Microsoft (MSFT)? These questions cover various aspects of the stock analysis, including price forecasts, analyst recommendations, and recent changes in ratings. The chat system can provide specific answers based on the information available in the financial documents. Please note that you can not only ask questions about an individual stock but can also ask comparative questions across stocks. For example, which stock has the most price increase? Here the system will compare the price increase across all the stocks and provide an answer. Quick summary of how the web application works This web-based application allows users to input their questions in a text box and receive answers based on insights gathered from multiple documents. For instance, users can inquire, \"What is the highest price estimate for Microsoft?\" and the application will query the relevant documents to provide an accurate response. Moreover, users can also compare stocks by asking questions such as, \"Which stock, Meta or Microsoft, has a higher percentage increase in the stock price?\" The application will analyze the data across the documents, enabling users to make informed investment decisions based on the comparative insights provided. Application Overview The application is built with LangChain and ChatGPT. Though it uses ChatGPT, we can also wire this to other LLMs as well. LangChain is an innovative framework designed to empower you in building sophisticated applications driven by large language models (LLMs). By offering a standardized interface, LangChain facilitates the seamless integration of various components, including LLMs, data sources, and actions. 
This streamlined approach accelerates the development of robust applications, enhanced by features such as chaining, data awareness, and agentic capabilities. To complement LangChain, the web application is built utilizing Streamlit, a Python library for creating interactive web applications and data dashboards. Streamlit's", "6d646836e0c2e6830a4c6d3147c3b1d28d3e92351cf0be1d27f5f3a18c520e3d": "Though it uses ChatGPT, we can also wire this to other LLMs as well. LangChain is an innovative framework designed to empower you in building sophisticated applications driven by large language models (LLMs). By offering a standardized interface, LangChain facilitates the seamless integration of various components, including LLMs, data sources, and actions. This streamlined approach accelerates the development of robust applications, enhanced by features such as chaining, data awareness, and agentic capabilities. To complement LangChain, the web application is built utilizing Streamlit, a Python library for creating interactive web applications and data dashboards. Streamlit's open-source nature and user-friendly features simplify the process of developing web apps with minimal effort. This has made it a popular choice among developers, data scientists, and machine learning engineers seeking to build engaging and accessible applications. Initial setup Install OpenAI, LangChain, and StreamLit Import the relevant packages Set the API keys Define the LLM to use Ingesting private documents We used Langchain to ingest data. LangChain offers a wide range of data ingestion methods, providing users with various options to load their data efficiently. It supports multiple formats, including text, images, PDFs, Word documents, and even data from URLs. In the current example, text files were utilized, but if you wish to work with a different format, you simply need to refer to the corresponding loader specifically tailored for that format. All the analysts' recommendations documents are stored in a dedicated folder. You have the flexibility to either refer to individual documents or retrieve all the documents within a specific folder. If you want to specify exact documents, you can do it the following way. To load the files you want to ingest, you can specify the path to each file individually. The loaded files can then be saved into a list. This list serves as the input that is sent to the vector database to store the data. The alternative approach is a more versatile method in which we can load all pertinent documents from a designated folder and store the file locations in a list for subsequent processing. This approach offers flexibility and allows for the efficient handling of multiple documents by capturing their locations in a centralized list, enabling seamless data retrieval and analysis. Load the documents", "b7eaf40d5ed90dbefc226732645cf49e5f98fb471a1b56a4151f646b60891738": "you want to specify exact documents, you can do it the following way. To load the files you want to ingest, you can specify the path to each file individually. The loaded files can then be saved into a list. This list serves as the input that is sent to the vector database to store the data. The alternative approach is a more versatile method in which we can load all pertinent documents from a designated folder and store the file locations in a list for subsequent processing. 
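The setup and ingestion steps just described (with the folder-based loading continued below) could look roughly like the following sketch, using LangChain's 2023-era Python API; the file paths, folder name, and model choice are assumptions for illustration, not the exact code from the application.

```python
import os
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import DirectoryLoader, TextLoader
from langchain.indexes import VectorstoreIndexCreator

os.environ["OPENAI_API_KEY"] = "sk-..."   # set your own key

# Define the LLM that will answer questions later on.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# Approach 1: point at specific recommendation files.
loaders = [
    TextLoader("analyst_recommendations/MSFT.txt"),
    TextLoader("analyst_recommendations/GOOGL.txt"),
]

# Approach 2: load every text file in the folder (this overrides the list above; pick one).
loaders = [DirectoryLoader("analyst_recommendations/", glob="*.txt", loader_cls=TextLoader)]

# Chunk, embed, and store the documents in a vector index for querying.
index = VectorstoreIndexCreator().from_loaders(loaders)
```

VectorstoreIndexCreator defaults to OpenAI embeddings with a Chroma store, which matches the chromadb-backed index described in the next passage.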
This approach offers flexibility and allows for the efficient handling of multiple documents by capturing their locations in a centralized list, enabling seamless data retrieval and analysis. Load the documents into the vector store. When dealing with a vast number of documents, it becomes inefficient to send all documents (analyst recommendations) to your large language model (LLM) when seeking answers to specific questions. For instance, if your question pertains to MSFT, it would be more cost-effective to only send document extracts that reference MSFT to your LLM for answering the question. This approach helps optimize resource utilization. To achieve this, all documents are split into chunks and stored in a vector database in a numeric format (embeddings). When a new question is posed, the system queries the vector database for relevant text chunks related to this question, which is then shared with the LLM to generate an appropriate response. Within the LangChain framework, the VectorstoreIndexCreator class serves as a utility for creating a vector store index. This index stores vector representations of the documents (in chromadb), enabling various text operations, such as finding similar documents based on a specific question. When a user asks a question, a similarity search is performed in the vector store to get document chunks relevant to the question. The question, along with the chunks are sent to OpenAI to get the response back. Now we are ready to query these documents. Setting up the web application The application is presented in the browser using Streamlit, providing a user-friendly interface. Within the application, a text box is available for users to enter their questions. Upon submitting the question by pressing enter, the application processes the input and generates a corresponding response. This response is then displayed below the text box, allowing users to conveniently view the relevant", "8bd2dacc5eca082fcea46f2e3aace5c8c3817dd817cffa9f1ab3800bd476a3d3": "When a user asks a question, a similarity search is performed in the vector store to get document chunks relevant to the question. The question, along with the chunks are sent to OpenAI to get the response back. Now we are ready to query these documents. Setting up the web application The application is presented in the browser using Streamlit, providing a user-friendly interface. Within the application, a text box is available for users to enter their questions. Upon submitting the question by pressing enter, the application processes the input and generates a corresponding response. This response is then displayed below the text box, allowing users to conveniently view the relevant information. Create a prompt based on the question asked by the user and display the response back to the user By calling index.query() with the specified parameters, you initiate the process of querying the vector database using the provided question. Vector database provides relevant text chunks that are relevant to the question asked. These text chunks, along with the original question, is passed to LLM. The LLM is invoked to analyze the question and generate a response based on the available data sent. The specific chaining process associated with the query is determined by the chain_type parameter, which is to use all the data (filtered by the question) sent to LLM. Now the entire application is ready, and let's take it for a spin next. 
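Wired together, the Streamlit front end and the index.query() call described above might look like this minimal sketch; the page title, widget label, and folder path are illustrative, and in practice the index would typically be cached rather than rebuilt on every run.

```python
import streamlit as st
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import DirectoryLoader, TextLoader
from langchain.indexes import VectorstoreIndexCreator

# Assumes OPENAI_API_KEY is already set in the environment.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# Rebuild (or cache) the vector index over the recommendation files.
loader = DirectoryLoader("analyst_recommendations/", glob="*.txt", loader_cls=TextLoader)
index = VectorstoreIndexCreator().from_loaders([loader])

st.title("Analyst Recommendations Q&A")
question = st.text_input("Enter your question:")
if question:
    # chain_type="stuff" sends all retrieved chunks to the LLM in a single prompt.
    st.write(index.query(question, llm=llm, chain_type="stuff"))
```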
Ask few questions Let's try few questions The range of questions encompasses diverse facets of stock analysis, encompassing price forecasts, analyst recommendations, and recent rating changes. The chat system excels in delivering precise answers by leveraging the information contained within the financial documents. The system extends beyond individual stock inquiries and accommodates comparative queries across multiple stocks. For instance, one can ask about the stock with the highest price increase, prompting the system to compare price increases across all stocks and provide a comprehensive response. This versatility allows users to gain insights and make informed decisions across a broader spectrum of stock analysis. Conclusion The development of a Q&A bot over private documents using OpenAI and LangChain represents a remarkable achievement in unlocking the invaluable knowledge hidden within private document repositories. This web-based Q&A bot has the potential to empower users from various industries, enabling efficient access and analysis of critical information and ultimately enhancing", "7d7e3d805418e033c4aa24a972a8358d33d94a60fef7af58a318efe9232be19b": "documents. The system extends beyond individual stock inquiries and accommodates comparative queries across multiple stocks. For instance, one can ask about the stock with the highest price increase, prompting the system to compare price increases across all stocks and provide a comprehensive response. This versatility allows users to gain insights and make informed decisions across a broader spectrum of stock analysis. Conclusion The development of a Q&A bot over private documents using OpenAI and LangChain represents a remarkable achievement in unlocking the invaluable knowledge hidden within private document repositories. This web-based Q&A bot has the potential to empower users from various industries, enabling efficient access and analysis of critical information and ultimately enhancing productivity and decision-making capabilities. While we showcased a finance example to illustrate the concept, the bot's functionality extends to any domain. Simply by providing a folder with the relevant privacy documents, users can engage in natural language conversations with the bot. Once the data is ingested into a vector database, users can seamlessly query and retrieve information, propelling the capabilities of intelligent document analysis to new heights.", "567b14c826413d4ff28ecb510609350966136f2d0914c2d28eda5d8b3e646e82": "Problem Statement Despite the pioneers like Amazon [2], many E-commerce platforms are still heavily relying on traditional retrieval techniques like TFIDF and BM25 for product search. Such sparse methods usually require customers to type explicit queries that match the product information and mostly struggle to achieve good relevance for queries that are colloquial and implicit. In consequence, the search engine either returns no result or results with low relevance ignoring the existence of the relevant ones, which harms the customer experience and business metrics. For instance, Ebay is returning \"No exact matches found\" for the query \"What are the best gifts for boys under 5?\". Although the \"Results matching fewer words\" solution avoids the \"no result\" situation, its search relevance has got the obvious potential to be improved. People might argue that it's rare for such queries to occur. 
However, it's not uncommon that many opportunities and advancements are actually driven by the use cases that are underestimated in the beginning. LLM-based Solution Today, thanks to the fast development of LLMs, one can quickly build prototypes without worrying about the effort needed to build in-house solutions from scratch. This enables my quick discovery to tackle the problem. As depicted in the image below, the idea is pretty straightforward. The LLM is leveraged to translate the raw query to an enhanced query that aims to contain the explicit product information for search. Potentially, the product range covered in the enhanced query could be broad for the raw query that is implicit and fuzzy. In consequence, sending the enhanced query directly to the keyword-based search engine will likely lead to poor results due to its ambiguity and uncertainty. As a solution, LLM embedding is adopted to address the semantic complexity. Specifically, the enhanced query is projected into the embedding space that contains the preprocessed product embeddings. Next, the product retrieval is done by comparing the similarity between the query embedding and product embeddings, which then generates the top-k products as search results. There is a wide range of techniques to implement the idea as there exist many options for each step. Here, I provide one example implementation based on Hugging Face and LangChain. The actual code is hosted on the Github repo below, with the details explained as follows. Generate the enhanced query First, the recently announced", "2652e0efd386340481f4aafc7721f97d5f2f4a87ab452b04f4952275cf5a9d9b": "As a solution, LLM embedding is adopted to address the semantic complexity. Specifically, the enhanced query is projected into the embedding space that contains the preprocessed product embeddings. Next, the product retrieval is done by comparing the similarity between the query embedding and product embeddings, which then generates the top-k products as search results. There is a wide range of techniques to implement the idea as there exist many options for each step. Here, I provide one example implementation based on Hugging Face and LangChain. The actual code is hosted on the Github repo below, with the details explained as follows. Generate the enhanced query First, the recently announced Llama 2 is adopted as the LLM to generate the enhanced query for a given raw query. As demonstrated below, the Hugging Face pipeline is used, considering its simplicity. It's worth noting that the pipeline itself is enough to accomplish the task so the use of LangChain is totally optional. The prompt template adopted here aims to generate relevant and diverse product names to address the fuzziness of the raw query. Create product embeddings Next, the sentence transformer and FAISS in LangChain are used to create and store the product embeddings based on the product titles in the inventory. Here, due to the lack of access to actual search engines, the offline Ebay product dataset \"products.csv\" is adopted as the mockup of the E-commerce product inventory. This dataset contains approximately 3,000 products covering a wide range of categories. Product retrieval When it comes to retrieval, the same sentence transformer model that encodes the products is used again to generate the query embedding for the enhanced query. Finally, the top-10 products are retrieved based on the similarity between the query embedding and product embeddings. 
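A compact sketch of that three-step pipeline is shown below, under a few assumptions: the Llama 2 chat checkpoint is gated on Hugging Face and needs a GPU (any local text-generation model can stand in), the prompt wording is illustrative, and products.csv is assumed to have a title column.

```python
import pandas as pd
from transformers import pipeline
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# 1) Query enhancement: ask an instruction-tuned Llama 2 for concrete product ideas.
generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")
raw_query = "What are the best gifts for boys under 5?"
prompt = f"List 10 concrete product names a shopper might buy for this request: {raw_query}"
enhanced_query = generator(prompt, max_new_tokens=128,
                           return_full_text=False)[0]["generated_text"]

# 2) Product embeddings over the mock inventory of product titles.
titles = pd.read_csv("products.csv")["title"].astype(str).tolist()
embedder = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
store = FAISS.from_texts(titles, embedder)

# 3) Retrieval: the top-10 product titles most similar to the enhanced query.
for doc in store.similarity_search(enhanced_query, k=10):
    print(doc.page_content)
```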
Showcase To demonstrate the effectiveness of this approach, let's look at the above-mentioned query \"What are the best gifts for boys under 5?\" and compare the LLM enhancement with the original Ebay search results presented in Figure 1. First, after receiving the raw query, Llama 2 generates 10 products as instructed by the prompt template. They look pretty impressive for boys' gift ideas although a better product-level granularity is expected. Next, let's have a look at the similarity match in the embedding space. What are", "34766658d6856917c5fd75bc7ed377030aaa94e6020424190e8f4a78b13cc0e5": "the top-10 products are retrieved based on the similarity between the query embedding and product embeddings. Showcase To demonstrate the effectiveness of this approach, let's look at the above-mentioned query \"What are the best gifts for boys under 5?\" and compare the LLM enhancement with the original Ebay search results presented in Figure 1. First, after receiving the raw query, Llama 2 generates 10 products as instructed by the prompt template. They look pretty impressive for boys' gift ideas although a better product-level granularity is expected. Next, let's have a look at the similarity match in the embedding space. What are retrieved from the product inventory mockup are not bad at all in comparison with the results of the real-world Ebay search engine in Figure 1. Due to the limited product range of the inventory mockup, the comparison is somewhat unfair but we are still able to observe the significant difference before and after applying LLM. Overall, the retrieval in embedding space achieves both relevance and diversity. Final thoughts After conducting the initial discovery, it is obvious that LLMs are a powerful tool to enhance the product search of E-commerce platforms. For this task, there are many future explorations to conduct, including prompt engineering for generating queries, product embeddings with enriched attributes, online latency optimization for LLM query enhancement, etc. Hope this blog could inspire the E-commerce platforms that need solutions to improve product search. References [1] Nayak, P. (2019) Understanding searches better than ever before, Google. Available at: https://blog.google/products/search/search-language-understanding-bert/ (Accessed: 09 August 2023).[2] Muhamed, A. et al. (no date) Web-scale semantic product search with large language models, Amazon Science. Available at: https://www.amazon.science/publications/web-scale-semantic-product-search-with-large-language-models (Accessed: 09 August 2023).", "5c3e9cc715caad19aa790a573f7e9b7e7e13e699694a5293fae7a1da112818ee": "Fine Tuning on Custom Domain Data All the popular models like GPT3/3.4/4 and LLAMA2 are trained primarily on the data scraped from the internet. Common Crawl, WebText, GitHub, StackOverflow etc: These are massive datasets of text and code that are crawled from the public web and a few curated like the QA dataset SQAD. The worldview and information the model has learned are also based on this data. However, this means that if we have some domain-specific data that the model has not seen, then it won't be able on its own to answer questions related to such data in case of Closed Book QA use-case or any other use case that depends on the specific domain data. For example, most online portals are adding virtual assistants for their customers, banks, e-commerce, customer support etc. And a huge if not the majority of data in the world still lives outside of the internet in enterprises. 
We have seen in Part 2 how LLMs can help address information retrieval use cases based on Vector space embeddings. But what if our use case is more high level? It needs domain \"understanding\", maybe some higher level reasoning tasks. This is where fine-tuning with custom data comes into play. I am not able to provide a use case where higher-level reasoning can be used. There are a few simpler ones, like training on custom issues and then asking it to reason on similar issues and possible solutions, but these are as of now not tested. So let's stick with a simpler use-case Closed-Book QA - the model answers questions based on the knowledge it internally has for now. The above is from a 2021 paper Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA? This is already outdated in the sense of the number and size of models and training released. The authors with 2021 models could not achieve great results and the great results they found in some studies described could be attributed to the high train and test overlap in datasets. There are also a lot of tutorials on the internet that try to portray this concept with toy datasets. The real trouble is making the model 'understand' the data first and not just parrot it out. Without understanding, it will parrot out the answer based on the", "cd053b50ba3d43b725ea4cb957a0d0bd8ad2f16aef47a87b56056d2891c237ce": "paper Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA? This is already outdated in the sense of the number and size of models and training released. The authors with 2021 models could not achieve great results and the great results they found in some studies described could be attributed to the high train and test overlap in datasets. There are also a lot of tutorials on the internet that try to portray this concept with toy datasets. The real trouble is making the model 'understand' the data first and not just parrot it out. Without understanding, it will parrot out the answer based on the similarity of the question in the training set, or both the question and answer. To prevent this, the authors have an intermediate step called 'Recite' where the model is made to recite/output the relevant passages and, after that, output the answer. Just to be clear, there is no doubt now (2023), especially with GPT3/4, LLAMA2 and similar models about the feasibility of this use case, that a model can understand the question, has some ability for causal reasoning, and can generalize to learn a world model from its training data, and to use both to create a well-formed answer to the question. Let's see the difficulties one by one however, of training a large model. First is the importance of the model size. This GIF from the Google AI blog illustrates this beautifully. It is relatively easy and cost-efficient to train or fine-tune a small model with our custom data, as the GPU and infrastructure requirements are very less. On the contrary, it needs huge fleets of GPUs and training infrastructure to load very large language models and fine-tune them (without quantisation) in a distributed way (e.g. see libraries like DeepSpeed) LLMs come in various sizes, based on the number of trainable parameters or weights. The smaller ones, which have less than 1 billion parameters (GPT2 124 M, Bloom 560M, Flan-T5 783 M ) etc can be trained on a laptop GPU with 8 to 15 GB GPU RAM ) For quite some time, this is what I tried. 
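For readers who want to picture what that small-model experiment involves, here is a minimal causal-LM fine-tuning sketch with Hugging Face Transformers; the model name and toy dataset are placeholders, and the masked-LM variant mentioned just below would instead use an encoder-style model with mlm=True.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"                       # ~124M parameters, fits a laptop GPU
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

texts = ["<your domain text chunks go here>"]        # placeholder training data
dataset = Dataset.from_dict({"text": texts}).map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=512),
    remove_columns=["text"])

# mlm=False gives the plain causal (next-token) objective used for decoder models.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```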
I tried to overfit a small test data set on decoder models like GPT2-small,", "be37a750110aa95083ba1f01b25fd79e195cfb09272724ae43bd363226419229": "load very large language models and fine-tune them (without quantisation) in a distributed way (e.g. see libraries like DeepSpeed) LLMs come in various sizes, based on the number of trainable parameters or weights. The smaller ones, which have less than 1 billion parameters (GPT2 124 M, Bloom 560M, Flan-T5 783 M ) etc can be trained on a laptop GPU with 8 to 15 GB GPU RAM ) For quite some time, this is what I tried. I tried to overfit a small test data set on decoder models like GPT2-small, GPT-Medium, and Bloom and encoder-decoder models like Flan-T5, thinking somehow that the understanding we see in ChatGPT ( see- unsupervised learning Part 1) may come in some form if we train on these smaller models. ( less than one billion parameters). As per the paper, I tried both Causal training, where the model is presented with only previous tokens, and Masked LM-based training, where the model is presented with full tokens, but a certain percentage of tokens are masked at random, and the model has to predict it. The next option was to fine-tune a large model with the data. However, this is extremely difficult to do, and even if cloud-based solutions are used, it would be pretty expensive. (What OpenAI provides now is Instruct Fine-Tuning, which we will cover later) It takes months of GPU fleet time and a specialized library and infrastructure to distribute training across multiple GPUs needed to train LLMs. For example, even a relatively small model like the BigScience Bloom 3 Billion model, even when the weights are loaded in 16 Bit cannot be trained with A100 on ColabPro with 40GB GPU RAM ( the highest you can get) as it goes out of memory. Solution - Fine-Tuning Large Models via Quantisation and Parameter Efficient Tuning The solution to this is to reduce the size of the models so that they can fit a commodity GPU and then fine-tune them. There are two parts to this- Quantisation and Parameter Efficient Tuning. The real magic of this is that a laptop with a sufficient recent GPU (having Tensor", "cb4e74d898bb0d095b09a2dd4df266921a0c17b3b27ff1b03bfa587843b4207d": "model like the BigScience Bloom 3 Billion model, even when the weights are loaded in 16 Bit cannot be trained with A100 on ColabPro with 40GB GPU RAM ( the highest you can get) as it goes out of memory. Solution - Fine-Tuning Large Models via Quantisation and Parameter Efficient Tuning The solution to this is to reduce the size of the models so that they can fit a commodity GPU and then fine-tune them. There are two parts to this- Quantisation and Parameter Efficient Tuning. The real magic of this is that a laptop with a sufficient recent GPU (having Tensor Cores), can run the 7 billion Llama2 pre-trained model open-sourced recently by Meta Research. Imagine the compressed knowledge and an NLU (Natural Language Understanding) model running on your local laptop. This is still a smallish model, but it's still capable of understanding and has sufficient world knowledge embedded in it to be quite useful. Imagine what a model like this or better models in the future could do if it could run in small servers or in cars, and leverage its causal reasoning and world model knowledge to supervise lower-level/specialist AI/ML systems. So we have now a way to fit reasonably large models (7B or more) in a single GPU, via Quantisation and then train them in a parameter-efficient way via LoRa/QLoRa. 
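A minimal sketch of that combination is shown below: the 7B base model is loaded in 4-bit with bitsandbytes and LoRA adapters are attached via PEFT. The model id, rank, and target modules are illustrative defaults, not the exact settings from the author's notebooks.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "meta-llama/Llama-2-7b-hf"        # gated on Hugging Face; any 7B causal model works similarly

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                   # quantise the frozen base weights to 4-bit (NF4)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)

# LoRA: train only small low-rank adapter matrices on top of the frozen base weights.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()       # typically well under 1% of the total weights
```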
Take 1: Un-supervised Training Fine-tuning with QLoRa Using the small training data and QLoRA, I first tried to train a large 7B Lamma2 model by feeding in the training text as is (Causal LM model training via UnSupervised learning). Note that this model was loaded in 4-bit, making it runnable on a single T4 GPU and trained with QLoRa. With QLoRA, only a fraction of the adapter weights are trained and summed with the existing frozen pre-trained weights of the model during inference. Here is an illustrative Colab notebook. You can see that training the model with just the text as is, does not result in proper output to questions. The answers are not affected by the training data. Take 2: Instruct Fine-tuning with QLoRa Instruction Tuning concept is a higher-level", "8cf94b9369ba8da18d02172b9cbf885afb60cddd0a2381a86a81ca8e6a9b10f9": "LM model training via UnSupervised learning). Note that this model was loaded in 4-bit, making it runnable on a single T4 GPU and trained with QLoRa. With QLoRA, only a fraction of the adapter weights are trained and summed with the existing frozen pre-trained weights of the model during inference. Here is an illustrative Colab notebook. You can see that training the model with just the text as is, does not result in proper output to questions. The answers are not affected by the training data. Take 2: Instruct Fine-tuning with QLoRa Instruction Tuning concept is a higher-level training concept introduced by this paper FineTuned Language Models Are Zero shot Learners (FLAN) We leverage the intuition that NLP tasks can be described via natural language instructions, such as \"Is the sentiment of this movie review positive or negative?\" or \"Translate 'how are you' into Chinese.\" We take a pre-trained language model of 137B parameters and perform instruction tuning ... Since we use QLoRa we are effectively closely following this paper - QLORA: Efficient Finetuning of Quantized LLMs concerning the training data set, the format that the authors used to train their Gauanco model This is the format for the Llama2 model and will be different for others. One of the hardest problems of training is finding or creating a good quality data set to train. In our case, converting the available training data set to the instruction data set. Since our use case is Closed Book QA, we need to convert this to a QA format. Using older NLP methods like NER (Named Entity Recognition) and then using that to create a QA dataset was not effective. This is where the Self-instruct concept could be used However previous to Llama2, the best-performing model was the GPT 3/4 model via ChatGPT or its API and using these models to do the same was expensive. The 7 billion model of Llama2 has sufficient NLU (Natural Language Understanding) to create output based on a particular format. Running this in 4-bit mode via Quantisation makes it feasible compute-wise to run this on a large data set and convert it to a QA dataset. This was the prompt used. The", "e20f03d2063b6e0b4f922c00d62de19ee6657670a26577b04168e0bfc7b1eb42": "and then using that to create a QA dataset was not effective. This is where the Self-instruct concept could be used However previous to Llama2, the best-performing model was the GPT 3/4 model via ChatGPT or its API and using these models to do the same was expensive. The 7 billion model of Llama2 has sufficient NLU (Natural Language Understanding) to create output based on a particular format. 
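The exact generation prompt is only referenced, not reproduced, in the text, so the following is an illustrative stand-in for how a sliding-window context might be turned into Q/A pairs with a Llama-2-style instruction prompt.

```python
# Illustrative stand-in for the QA-generation prompt (the author's actual prompt
# is not reproduced here). A sliding window of source text is passed as `context`,
# and the 4-bit Llama-2 model is asked to emit question/answer pairs.
QA_GEN_PROMPT = """[INST] You are preparing training data for a question answering assistant.
Read the context below and write 3 question-and-answer pairs that can be answered
from it alone. Format each pair as:
Q: <question>
A: <answer>

Context:
{context} [/INST]"""

def build_prompt(context: str) -> str:
    return QA_GEN_PROMPT.format(context=context)
```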
Running this in 4-bit mode via Quantisation makes it feasible compute-wise to run this on a large data set and convert it to a QA dataset. This was the prompt used. The context was a sliding window from the text dataset. Some minimal parsing and finetuning were done on the output of the model, and we could generate a QA dataset of the format below. This was fed to the QLoRA-based fine-tuning (Colab Notebook). We can see that the output from a fine-tuned 4-bit quantized llama2 7 B model is pretty good. Colab Notebook Trying to reduce hallucination via fine-tuning In the generated dataset, I added a specific tag `Source:8989REF`. The idea was that via attention, this token will be somehow associated with the text that we were training on. And then to use this hash somehow to tweak the prompt to control hallucination. Something like \"[INST] <>\\nYou are a helpful Question Answering Assistant. Please only answer from this reference Source:8989REF\" However, that turned out to be a very naive attempt. Also, note that the generated QA missed transforming training data related to Professor Thiersch's method to a proper QA dataset. These and other improvements need to be experimented with, as well as to train with some completely new data that the model has not seen to test more effectively. Update: Training with new data was done by writing an imaginary story with ChatGPT help and then creating an instruction tuning data set (colab notebook). The model was then trained and tested (colab notebook) with this generated instruct dataset. The results confirm that the model learns via Instruct tuning, not only the fed questions but other details and relations of the domain. Problems with hallucinations remain (Bordor, Lila characters who are", "892a22c623618faecd553782dd97454d8c081170c04598767f4a36f05a8a3bb2": "method to a proper QA dataset. These and other improvements need to be experimented with, as well as to train with some completely new data that the model has not seen to test more effectively. Update: Training with new data was done by writing an imaginary story with ChatGPT help and then creating an instruction tuning data set (colab notebook). The model was then trained and tested (colab notebook) with this generated instruct dataset. The results confirm that the model learns via Instruct tuning, not only the fed questions but other details and relations of the domain. Problems with hallucinations remain (Bordor, Lila characters who are not in the story). The LLama2 13B 4-bit fine-tuned model has better output than the 7B model. A lot more needs to be explored in Fine-tuning. One observation is that slight changes in prompts give different answers. Since the output is not deterministic (that is, with even the same prompt, it varies over time), it is all the more difficult to fine-tune prompts to give the most effective output. This needs to be studied more. Also to be updated are higher level use-cases that should be possible with the fine-tuned models. Fine Tuning on Custom Domain Data All the popular models like GPT3/3.4/4 and LLAMA2 are trained primarily on the data scraped from the internet. Common Crawl, WebText, GitHub, StackOverflow etc: These are massive datasets of text and code that are crawled from the public web and a few curated like the QA dataset SQAD. The worldview and information the model has learned are also based on this data. 
However, this means that if we have some domain-specific data that the model has not seen, then it won't be able on its own to answer questions related to such data in case of Closed Book QA use-case or any other use case that depends on the specific domain data. For example, most online portals are adding virtual assistants for their customers, banks, e-commerce, customer support etc. And a huge if not the majority of data in the world still lives outside of the internet in enterprises. We have seen in Part 2 how LLMs can help address information retrieval use cases based on Vector space embeddings. But what", "2c17b2d996a4178952175483443612704c5dd7315f5b30beb7fb099ab044d68d": "data. However, this means that if we have some domain-specific data that the model has not seen, then it won't be able on its own to answer questions related to such data in case of Closed Book QA use-case or any other use case that depends on the specific domain data. For example, most online portals are adding virtual assistants for their customers, banks, e-commerce, customer support etc. And a huge if not the majority of data in the world still lives outside of the internet in enterprises. We have seen in Part 2 how LLMs can help address information retrieval use cases based on Vector space embeddings. But what if our use case is more high level? It needs domain \"understanding\", maybe some higher level reasoning tasks. This is where fine-tuning with custom data comes into play. I am not able to provide a use case where higher-level reasoning can be used. There are a few simpler ones, like training on custom issues and then asking it to reason on similar issues and possible solutions, but these are as of now not tested. So let's stick with a simpler use-case Closed-Book QA - the model answers questions based on the knowledge it internally has for now. The above is from a 2021 paper Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA? This is already outdated in the sense of the number and size of models and training released. The authors with 2021 models could not achieve great results and the great results they found in some studies described could be attributed to the high train and test overlap in datasets. There are also a lot of tutorials on the internet that try to portray this concept with toy datasets. The real trouble is making the model 'understand' the data first and not just parrot it out. Without understanding, it will parrot out the answer based on the similarity of the question in the training set, or both the question and answer. To prevent this, the authors have an intermediate step called 'Recite' where the model is made to recite/output the relevant passages and, after that, output the answer. Just to be clear, there is no doubt now (2023), especially with GPT3/4, LLAMA2 and similar models about the feasibility of this use case,", "252e704b7a4e29a36e01e55086d2027184611e0f1bc3f4abd273b0246c5d0c43": "concept with toy datasets. The real trouble is making the model 'understand' the data first and not just parrot it out. Without understanding, it will parrot out the answer based on the similarity of the question in the training set, or both the question and answer. To prevent this, the authors have an intermediate step called 'Recite' where the model is made to recite/output the relevant passages and, after that, output the answer. 
Just to be clear, there is no doubt now (2023), especially with GPT3/4, LLAMA2 and similar models about the feasibility of this use case, that a model can understand the question, has some ability for causal reasoning, and can generalize to learn a world model from its training data, and to use both to create a well-formed answer to the question. Let's see the difficulties one by one however, of training a large model. First is the importance of the model size. This GIF from the Google AI blog illustrates this beautifully. It is relatively easy and cost-efficient to train or fine-tune a small model with our custom data, as the GPU and infrastructure requirements are very less. On the contrary, it needs huge fleets of GPUs and training infrastructure to load very large language models and fine-tune them (without quantisation) in a distributed way (e.g. see libraries like DeepSpeed) LLMs come in various sizes, based on the number of trainable parameters or weights. The smaller ones, which have less than 1 billion parameters (GPT2 124 M, Bloom 560M, Flan-T5 783 M ) etc can be trained on a laptop GPU with 8 to 15 GB GPU RAM ) For quite some time, this is what I tried. I tried to overfit a small test data set on decoder models like GPP2-small, GPT-Medium, and Bloom and encoder-decoder models like Flan-T5, thinking somehow that the understanding we see in ChatGPT ( see- unsupervised learning Part 1) may come in some form if we train on these smaller models. ( less than one billion parameters). As per the paper, I tried both Causal training, where the model is presented with only previous tokens, and Masked", "53485f82f10af2df87678b8b4977e4aea87ea25f43b07cf62276798120705d49": "a laptop GPU with 8 to 15 GB GPU RAM ) For quite some time, this is what I tried. I tried to overfit a small test data set on decoder models like GPP2-small, GPT-Medium, and Bloom and encoder-decoder models like Flan-T5, thinking somehow that the understanding we see in ChatGPT ( see- unsupervised learning Part 1) may come in some form if we train on these smaller models. ( less than one billion parameters). As per the paper, I tried both Causal training, where the model is presented with only previous tokens, and Masked LM-based training, where the model is presented with full tokens, but a certain percentage of tokens are masked in random, and the model has to predict it. The next option was to fine-tune a large model with the data. However, this is extremely difficult to do, and even if cloud-based solutions are used, it would be pretty expensive. (What OpenAI provides now is Instruct Fine-Tuning, which we will cover later) It takes months of GPU fleet time and a specialized library and infrastructure to distribute training across multiple GPUs needed to train LLMs. For example, even a relatively small model like the BigScience Bloom 3 Billion model, even when the weights are loaded in 16 Bit cannot be trained with A100 on ColabPro with 40GB GPU RAM ( the highest you can get) as it goes out of memory. Solution - Fine-Tuning Large Models via Qunaitsation and Parmeter Efficient Tuning The solution to this is to reduce the size of the models so that they can fit a commodity GPU and then fine-tune them. There are two parts to this- Quantisation and Parameter Efficient Tuning. The real magic of this is that a laptop with a sufficient recent GPU (having Tensor Cores), can run the 7 billion Lamma2 pre-trained model open-sourced recently by Meta Research. 
Imagine the compressed knowledge and an NLU (Natural Language Understanding) model running on your local laptop. It is still a smallish model, but it is capable of understanding and has sufficient world knowledge embedded in it to be quite useful. Imagine what a model like this, or better future models, could do if it could run on small servers or in cars and leverage its causal reasoning and world-model knowledge to supervise lower-level, specialist AI/ML systems. So we now have a way to fit reasonably large models (7B or more) in a single GPU via quantisation, and then to train them in a parameter-efficient way via LoRA/QLoRA.
Take 1: Unsupervised fine-tuning with QLoRA
Using the small training dataset and QLoRA, I first tried to train the large 7B Llama 2 model by feeding in the training text as is (causal LM training via unsupervised learning). Note that the model was loaded in 4-bit, making it runnable on a single T4 GPU, and trained with QLoRA. With QLoRA, only a small set of adapter weights is trained and summed with the existing frozen pre-trained weights of the model during inference. Here is an illustrative Colab notebook. You can see that training the model on the raw text alone does not result in proper answers to questions; the answers are not affected by the training data.
Take 2: Instruct fine-tuning with QLoRA
Instruction tuning is a higher-level training concept introduced in the paper Finetuned Language Models Are Zero-Shot Learners (FLAN): "We leverage the intuition that NLP tasks can be described via natural language instructions, such as 'Is the sentiment of this movie review positive or negative?' or 'Translate how are you into Chinese.' We take a pre-trained language model of 137B parameters and perform instruction tuning ..." Since we use QLoRA, we effectively follow the paper QLoRA: Efficient Finetuning of Quantized LLMs closely, in particular the training-data format the authors used to train their Guanaco model. This is the format for the Llama 2 model and will differ for others.
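To make the setup concrete, here is a minimal QLoRA sketch assuming the Hugging Face transformers, peft, and bitsandbytes libraries; the checkpoint name, LoRA rank, and target modules are illustrative choices, not the exact configuration from the notebooks.

```python
# Minimal QLoRA sketch: load Llama 2 7B in 4-bit and attach trainable LoRA adapters.
# Assumes access to a Llama 2 checkpoint (name below is illustrative) and a GPU runtime.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative checkpoint name

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit base weights (fits a single T4)
    bnb_4bit_quant_type="nf4",             # NormalFloat4, as in the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Only these low-rank adapter weights are trained; the 4-bit base stays frozen.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # illustrative choice of projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # typically well under 1% of total parameters
```

For the Llama 2 chat variants, prompts are wrapped as `<s>[INST] <<SYS>> ... <</SYS>> ... [/INST]`, which is the format referred to above.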
One of the hardest problems of training is finding or creating a good-quality dataset to train on; in our case, converting the available training data into an instruction dataset. Since our use case is Closed-Book QA, we need to convert the data to a QA format. Using older NLP methods like NER (Named Entity Recognition) to create a QA dataset was not effective. This is where the Self-Instruct concept could be used. However, before Llama 2, the best-performing models were GPT-3/4 via ChatGPT or their APIs, and using them for this was expensive. The 7B Llama 2 model has sufficient NLU (Natural Language Understanding) to produce output in a requested format, and running it in 4-bit mode via quantisation makes it compute-feasible to run over a large dataset and convert it to a QA dataset. This was the prompt used; the context was a sliding window over the text dataset. With some minimal parsing and fixing of the model's output, we could generate a QA dataset in the format below. This was fed to the QLoRA-based fine-tuning (Colab Notebook). We can see that the output from a fine-tuned, 4-bit quantized Llama 2 7B model is pretty good (Colab Notebook).
Trying to reduce hallucination via fine-tuning
In the generated dataset, I added a specific tag, `Source:8989REF`. The idea was that, via attention, this token would somehow become associated with the text we were training on, and that the tag could then be used in the prompt to control hallucination, something like: "[INST] <<SYS>>\nYou are a helpful Question Answering Assistant. Please only answer from this reference Source:8989REF". That turned out to be a very naive attempt. Also note that the generated QA dataset missed transforming the training data related to Professor Thiersch's method into proper QA pairs. These and other improvements need to be experimented with, as does training on completely new data the model has not seen, to test more effectively. Update: training with new data was done by writing an imaginary story with ChatGPT's help and then creating an instruction-tuning dataset from it (Colab notebook). The model was then trained and tested (Colab notebook) with this generated instruct dataset. The results confirm that via instruct tuning the model learns not only the fed questions but also other details and relations of the domain. Problems with hallucination remain (Bordor and Lila, characters who are not in the story). The Llama 2 13B 4-bit fine-tuned model produces better output than the 7B model. A lot more needs to be explored in fine-tuning. One observation is that slight changes in prompts give different answers.
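A minimal sketch of the kind of self-instruct-style QA generation loop described above, assuming a quantized Llama 2 7B loaded as in the earlier snippet; the prompt wording, window size, and `raw_training_text` variable are assumptions for the sketch, not the exact values used.

```python
# Illustrative QA generation over a sliding window of the source text.
# `model` and `tokenizer` are the 4-bit Llama 2 pipeline loaded earlier;
# `raw_training_text` is an assumed string holding the domain text.
def make_qa_prompt(context: str) -> str:
    return (
        "[INST] <<SYS>>\nYou are a helpful assistant that writes question-answer pairs.\n<</SYS>>\n"
        "Read the passage and produce one question and its answer, grounded only in the passage.\n\n"
        f"Passage:\n{context} [/INST]"
    )

def sliding_windows(text: str, size: int = 1200, stride: int = 800):
    for start in range(0, max(len(text) - size, 1), stride):
        yield text[start:start + size]

qa_pairs = []
for window in sliding_windows(raw_training_text):
    inputs = tokenizer(make_qa_prompt(window), return_tensors="pt").to("cuda")  # assumes a GPU runtime
    output = model.generate(**inputs, max_new_tokens=256)
    qa_pairs.append(tokenizer.decode(output[0], skip_special_tokens=True))
# qa_pairs then needs the "minimal parsing" mentioned above before being used for fine-tuning.
```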
Since the output is not deterministic (even with the same prompt it varies over time), it is all the more difficult to tune prompts to produce the most effective output. This needs to be studied more. Higher-level use cases that should be possible with the fine-tuned models also remain to be explored.
New Llama 2 model
In mid-July, Meta released its new family of pre-trained and fine-tuned models called Llama 2, with an open-source license that allows commercial use, to facilitate its adoption and extension. The base model was released together with a chat version, in sizes 7B, 13B, and 70B. Alongside the models, the corresponding papers were published describing their characteristics and relevant points of the learning process, which provide very interesting information on the subject. For pre-training, 40% more tokens were used, reaching 2T; the context length was doubled; and the grouped-query attention (GQA) technique was applied to speed up inference on the heavier 70B model. On top of the standard transformer architecture, RMSNorm normalization, SwiGLU activation, and rotary positional embeddings are used; the context length reaches 4096 tokens; and an Adam optimizer is applied with a cosine learning-rate schedule, a weight decay of 0.1, and gradient clipping.
The dataset for tuning
For our tuning process, we will use a dataset containing about 18,000 examples in which the model is asked to write Python code that solves a given task. It is an extraction of the original dataset [2] where only the Python examples are selected. Each row contains the description of the task to be solved, an example of data input to the task if applicable, and the code fragment that solves the task [3].
Creating the prompt
To carry out instruction fine-tuning, we must transform each of our data examples into an instruction, outlining its main sections: the task instruction, the optional input, and the expected output (a sketch of such a prompt template appears below).
Fine-tuning the model
To carry out this stage, we have used the Google Colab environment, where we developed a notebook that lets us run the training interactively, as well as a Python script to run the training in unattended mode.
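As an illustration of the prompt-creation step above, here is a minimal sketch of how each dataset row could be rendered into an instruction prompt; the section headers and the sample row are illustrative, not the exact template used in the original notebook.

```python
# Illustrative prompt building for instruction fine-tuning on the Python-code dataset.
# Field names (instruction, input, output) mirror the dataset description above.
def build_prompt(example: dict) -> str:
    instruction = f"### Instruction:\n{example['instruction']}"
    context = f"\n\n### Input:\n{example['input']}" if example.get("input") else ""
    response = f"\n\n### Response:\n{example['output']}"
    return instruction + context + response

sample = {
    "instruction": "Write a Python function that returns the n-th Fibonacci number.",
    "input": "n = 10",
    "output": "def fib(n):\n    a, b = 0, 1\n    for _ in range(n):\n        a, b = b, a + b\n    return a",
}
print(build_prompt(sample))
```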
For the first test runs, a T4 instance with high RAM capacity is enough, but when it comes to running the whole dataset for several epochs, we opted for an A100 instance to speed up the training and keep its execution time reasonable. In order to share the model, we log in to the Hugging Face Hub with the appropriate token, so that at the end of the whole process we can upload the model files and share them with other users.
Fine-tuning techniques: PEFT, LoRA, and QLoRA
In recent months, several papers have shown how PEFT techniques can be used to train large language models with a drastic reduction in RAM requirements, consequently allowing fine-tuning of these models on a single GPU of reasonable size. The usual procedure for training an LLM consists, first, of intensive pre-training on billions or trillions of tokens to obtain a foundation model, followed by fine-tuning of this model to specialize it on a downstream task. It is in this fine-tuning phase that PEFT has its purpose. Parameter-Efficient Fine-Tuning (PEFT) allows us to considerably reduce RAM and storage requirements by fine-tuning only a small number of additional parameters, with virtually all model parameters remaining frozen. PEFT has been found to generalize well with relatively low-volume datasets. Furthermore, it enhances the reusability and portability of the model: the small checkpoints obtained can easily be added to the base model, and the base model can be fine-tuned and reused in multiple scenarios by adding the PEFT parameters. Finally, since the base model is not adjusted, all the knowledge acquired in the pre-training phase is preserved, avoiding catastrophic forgetting. The most widely used PEFT techniques aim to keep the pre-trained base model untouched and add new layers or parameters on top of it. These layers are called "adapters", and the technique of adjusting them "adapter tuning": we add these layers to the pre-trained base model and train only the parameters of the new layers. A serious problem with this approach is that the extra layers increase latency in the inference phase, which makes the process inefficient in many scenarios. In the LoRA technique (Low-Rank Adaptation of Large Language Models), the idea is not to include new layers but to add values to the existing parameters in a way that avoids this latency problem at inference time. LoRA trains and stores the changes to the additional weights while freezing all the weights of the pre-trained model.
Therefore, we train a new weight matrix containing the changes to the pre-trained model's matrix, and this new matrix is decomposed into two low-rank matrices, as explained in the LoRA paper [5].
Merge the base model and the adapter weights
As mentioned, we have trained "modification weights" on top of the base model, so our final model requires merging the pre-trained model and the adapters into a single model. You can find and download the model in my Hugging Face account, edumunozsala/llama-27b-int4-python-code-20k. Give it a try!
Inferencing, or generating Python code
Finally, we will show how you can download the model from the Hugging Face Hub and call it to generate an accurate result:
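A minimal sketch of these two steps, assuming the peft and transformers libraries; the adapter directory is a hypothetical local path from the training run, and the prompt and sampling settings are illustrative.

```python
# Merge the trained LoRA adapter into the base model, then generate Python code with it.
# "./llama2-python-adapter" is a hypothetical local output directory from the QLoRA training run.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"       # illustrative base checkpoint name
adapter_dir = "./llama2-python-adapter"    # hypothetical path to the trained adapter

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_dir).merge_and_unload()  # bake adapters into the weights
tokenizer = AutoTokenizer.from_pretrained(base_id)

prompt = (
    "### Instruction:\nWrite a Python function that checks whether a string is a palindrome."
    "\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.3)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The merged weights can then be pushed to the Hub with `push_to_hub`, and a published checkpoint such as the one mentioned above can be downloaded and used in the same way.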
Thanks to Maxime Labonne for an excellent article [9] and to Philipp Schmid, who provides inspiring code [8]. Their articles are a must-read for everyone interested in Llama 2 and model fine-tuning. And that is all I have to mention. I hope you find this article useful; claps are welcome! You can follow me and subscribe to my articles, or connect with me via LinkedIn. The code is available in my GitHub repository.
References
[1] Llama 2 paper
[2] Original dataset on the Hugging Face Hub
[3] Dataset used for fine-tuning, on the Hugging Face Hub
[4] Fine-tuning a GPT: LoRA, by Chris Kuo/Dr. Dataman
[5] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen (2021). LoRA: Low-Rank Adaptation of Large Language Models. arXiv:2106.09685
[6] QLoRA: Efficient Finetuning of Quantized LLMs
[7] Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
[8] Extended Guide: Instruction-tune Llama 2, by Philipp Schmid
[9] Fine-Tune Your Own Llama 2 Model in a Colab Notebook, by Maxime Labonne
[10] My GitHub repository
New Moore's Laws: Achieving Zettascale Computing
As the traditional Moore's Law reaches its twilight, new laws are emerging to define the evolution of computing performance. The exponential growth of GPU performance and supercomputer systems has accelerated AI's advancements, with LLMs as a prime example. Despite their extensive training times, these LLMs are benefiting from the rapid growth of computational power. Moore's Law, which famously predicted that the number of transistors on a microchip would double approximately every two years, is being replaced by new performance-based laws: GPU performance doubles every 2.2 years, while supercomputer system performance doubles every 1.2 years. These advancements are shaping the way AI/ML technologies progress. Despite the rapidly increasing performance, training LLMs still takes anywhere from days to months. This extended duration speaks to the complexity and vast potential of these models. As computational power continues to soar, it will unlock new possibilities for AI research and development. In the coming decade, the AI landscape is set to enter the era of Zettascale Computing. As a result of the new Moore's Laws, AI performance is expected to dramatically outpace other computing advancements. This shift to Zettascale Computing will provide unprecedented processing capabilities, enabling further breakthroughs in AI and other domains. The new Moore's Laws, focusing on GPU and supercomputer performance, herald a new era for AI research and development. With the advent of Zettascale Computing, we can expect even more rapid growth in AI capabilities, impacting various industries and shaping the future of technology.
Generative AI Journey with State-of-the-Art LLMs
Generative AI (GAI) has experienced rapid advances in text-to-image, video, and 3D generation, but ChatGPT and GPT-4 took the world by storm. These are LLM-based GAI systems, like other state-of-the-art LLMs: Claude, Bard, LLaMA, Toolformer, Google USM, PaLM, NeMo, Databricks Dolly, and so on. They have revolutionized NLP, enabling a myriad of applications once thought to be unattainable. Despite their impressive capabilities and increasing computing power, LLMs face common challenges such as scalability, training efficiency, and the need for high-quality training data. It reportedly required over 3 million GPU hours across 3,072 GPUs to train GPT-3's 175 billion parameters over a period of several months. To address these common challenges, foundation models are emerging as a potential solution. These models aim to provide a solid base for AI development, enabling researchers and developers to build upon them and adapt them for various tasks more efficiently. By focusing on foundation models, the AI community can tackle the limitations posed by scalability, performance, training efficiency, and data quality, ultimately unlocking the full potential of LLMs and other large-scale models (LSMs) in diverse applications.
The Era of Foundation Models
Foundation models are pre-trained AI models serving as a basis for building diverse applications and tasks. Designed to be versatile, adaptable, and robust, they offer strong leverage across a wide range of use cases. The concept of foundation models was introduced by Stanford with two significant points: emergence and homogenization.
- Emergence: Referring to the implicit induction of a system's behavior, emergence is a source of both scientific excitement and concern about unforeseen consequences. Foundation models learn from vast amounts of data, developing intricate patterns and relationships that can exhibit surprising behaviors.
- Homogenization: Foundation models consolidate methodologies for building ML systems across various applications. While this homogenization provides strong leverage for many tasks, it also creates single points of failure, raising concerns about resilience and reliability.
The astounding success of GAI and the human-like ChatGPT have ushered in a new era of foundation models, laying the groundwork for large-scale models and the rise of artificial general intelligence (AGI). Foundation models have emerged to transform the digital world. Their impact is comparable to other milestones in digital evolution, such as the invention of electricity, the advent of the internet, and the rise of cloud computing. By bridging the gap between narrow AI and AGI, foundation models are shaping the future of AI research and development, opening up new possibilities and opportunities in the rapidly evolving digital landscape.
Key Characteristics of Foundation Models
Foundation models have rapidly become the core of AI. They share several key characteristics that highlight their potential and significance in shaping the future of AI.
- Pre-trained and Adaptable: A defining characteristic of foundation models is their pre-trained nature, allowing them to serve as a starting point for various applications and tasks. Through transfer learning and fine-tuning, these models can be adapted to address specific challenges and requirements, significantly reducing development time and resources.
- Scalability: Designed to be scalable, foundation models can handle vast amounts of data and grow in complexity as required. This scalability enables them to tackle a broad range of tasks and accommodate the ever-increasing demands of the AI landscape.
- Versatility: Foundation models boast remarkable versatility, as they can be employed across multiple domains and industries. From language and vision to healthcare and finance, these models serve as a basis for a wide range of applications.
- Self-Supervised Learning: A key aspect of foundation models is their ability to use self-supervised learning techniques. By leveraging large-scale, unlabeled data, these models can learn complex representations and features, greatly improving their performance on various tasks and reducing dependence on labeled data.
- Robustness: Foundation models are known for their robustness, demonstrating resilience in the face of noisy, incomplete, or even adversarial data. This robustness allows them to maintain high levels of performance and accuracy across different contexts and challenges.
- Interoperability: Interoperability is another critical characteristic of foundation models, as they can be easily integrated with existing systems and frameworks. This seamless integration facilitates collaboration between different AI models and components, streamlining the development process and fostering innovation.
- Generalization: The ability to generalize is a hallmark of foundation models, enabling them to perform well on unseen data and novel tasks. This characteristic allows them to adapt to a variety of challenges, making them an invaluable asset in AI research and development.
By understanding the key characteristics of foundation models, such as their pre-trained nature, adaptability, scalability, versatility, self-supervised learning capabilities, robustness, interoperability, and generalization, we can better appreciate their potential and impact on the future of AI.
Capabilities of Foundation Models Beyond LLMs
Foundation models have made a significant impact beyond LLMs, offering a versatile and powerful approach to solving complex problems across domains spanning language, vision, robotics, reasoning and search, interaction, and the philosophy of understanding.
- Language: Foundation models excel in language, demonstrating human-like comprehension and generation of text. From machine translation and sentiment analysis to summarization and question answering, these models are unlocking new possibilities in language-related applications and enhancing communication between humans and machines.
- Vision: In the realm of computer vision (CV), foundation models are transforming the way we analyze and interpret visual data. By effectively recognizing objects, detecting patterns, and segmenting images, these models are enabling advancements in fields such as autonomous vehicles, medical imaging, and surveillance systems.
- Robotics: By incorporating self-supervised learning and reinforcement learning techniques, foundation models are empowering robots to learn from their environments, adapt to new tasks, and interact more effectively with humans.
- Reasoning and Search: Foundation models are enhancing our ability to reason and search through vast amounts of data, extracting valuable insights and uncovering hidden connections. Their capabilities extend to logical reasoning, pattern recognition, and knowledge-graph exploration, enabling more informed decision-making and efficient problem-solving across numerous industries.
- Interaction: The interactive capabilities of foundation models facilitate more natural and intuitive communication between humans and machines. By understanding and generating human-like responses, these models pave the way for seamless collaboration and improved user experiences in applications such as chatbots, virtual assistants, and customer support systems.
- Philosophy of Understanding: At the core of foundation models lies the philosophy of understanding, aiming to uncover the underlying principles and mechanisms that enable machines to comprehend and interpret complex data.
The capabilities of foundation models span language, vision, robotics, reasoning and search, interaction, and the philosophy of understanding, highlighting their potential to reshape the AI landscape. By exploring these capabilities, we can foster responsible innovation and unlock the full potential of foundation models in addressing the world's most pressing challenges.
AI Engineering
AI engineering is a burgeoning discipline combining software engineering principles with AI techniques to design, build, and scale intelligent systems. As large-scale foundation models continue to revolutionize the AI landscape, AI engineering plays a pivotal role in their development and deployment. It offers the tools and techniques necessary to scale out large-scale models while maintaining their performance and adaptability.
Some aspects of scaling out these models through AI engineering include:
- Distributed Training: AI engineers harness the power of distributed computing to train large-scale models on vast amounts of data, accelerating the training process and improving model performance.
- Data Management: AI engineers ensure that the data used for training and fine-tuning foundation models is well organized, clean, and representative of the target domain.
- Resource Management: AI engineers optimize the use of computational resources, such as GPUs and TPUs, ensuring that large-scale models can be trained and deployed efficiently and cost-effectively.
- Model Compression and Pruning: AI engineers employ model compression and pruning techniques to reduce the size and complexity of large-scale models, making them more accessible and deployable across various platforms.
- Monitoring and Maintenance: AI engineers continuously monitor the performance of large-scale models, identifying potential issues and implementing the updates and improvements needed to ensure their ongoing success.
AI engineering is an essential discipline for building and scaling foundation models, providing the expertise and techniques needed to ensure their robustness, efficiency, and adaptability. As we continue to push AI boundaries, AI engineering will play a crucial role in unlocking the full potential of foundation models and shaping the future of AI research and development.
TL;DR
In closing, foundation models represent a critical milestone in the advancement of AI, providing a versatile and adaptable approach to solving complex problems across multiple domains. From language and vision to robotics and reasoning, these models are unlocking new possibilities and driving innovation across various industries. As we continue to explore the full potential of foundation models and their role in the evolution towards AGI, it is crucial to foster responsible and ethical AI development, ensuring these models are used to benefit humanity and address the most pressing challenges of our time. With foundation models as a solid basis, we can accelerate AI research and development, unlocking new frontiers and shaping the future of intelligent systems.
LLMs Papers
GPT-4 Technical Report: https://arxiv.org/abs/2303.08774
GPT-3: Language Models are Few-Shot Learners: https://arxiv.org/abs/2005.14165
Toolformer: Language Models Can Teach Themselves to Use Tools: https://arxiv.org/abs/2302.04761
LLaMA: Open and Efficient Foundation Language Models: https://arxiv.org/abs/2302.13971
Google USM: Scaling Automatic Speech Recognition Beyond 100 Languages: https://arxiv.org/abs/2303.01037
Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model: https://arxiv.org/abs/2201.11990
Foundation Models Resources
Reflections on Foundation Models: https://hai.stanford.edu/news/reflections-foundation-models
On the Opportunities and Risks of Foundation Models: https://arxiv.org/abs/2108.07258
GPTQ: Post-training quantization on generative models
In a groundbreaking paper [1], researchers unveiled GPTQ, a novel post-training quantization method with the potential to reshape the world of language-model compression. GPTQ is not only efficient enough to be applied to models boasting hundreds of billions of parameters, but it can also achieve remarkable precision by compressing these models to a mere 2, 3, or 4 bits per parameter without sacrificing significant accuracy. This cutting-edge technique is showcased by its ability to quantize massive models, such as OPT-175B and BLOOM-176B, in a matter of a few GPU hours while maintaining minimal perplexity, a stringent measure of accuracy. On the practical front, the researchers developed an execution harness that enables efficient operation of the compressed models for generative tasks. Remarkably, they achieved the milestone of running the compressed OPT-175B model on a single NVIDIA A100 GPU, or with only two of the more cost-effective NVIDIA A6000 GPUs. Additionally, bespoke GPU kernels optimized for compression result in significant speedups, further enhancing the practicality of these compressed models. What makes GPTQ stand out is its ability to quantize language models with hundreds of billions of parameters to the 3 to 4 bits-per-component range. This is a remarkable leap, as prior methods struggled to maintain accuracy below 8 bits and typically focused on smaller models. However, the study also highlights the complex trade-offs between perplexity, bit-width, and model size induced by compression. And it comes with limitations: GPTQ does not currently offer speedups for the actual multiplications, due to the lack of hardware support for mixed-precision operands on mainstream architectures, and activation quantization is not included in the current results, though it can be addressed through orthogonal techniques.
In sum, GPTQ's ability to compress extremely accurate language models to unprecedented levels marks a significant milestone in the field of machine learning and language modeling. It paves the way for more efficient and accessible applications of these colossal models while pointing toward further research possibilities in the realm of model compression.
When should you use GPTQ?
The answer will depend on each specific case and on the base model to be used, but an approach that is being applied to numerous models, and that is recommended by Hugging Face and the article I mentioned before, is the following:
1. Fine-tune the original LLM with bitsandbytes in 4-bit (nf4) and QLoRA for efficient fine-tuning.
2. Merge the adapter into the original model.
3. Quantize the resulting model with GPTQ 4-bit.
I ran the first two steps in my previous article [3], and now that the AutoGPTQ library is integrated with the Hugging Face ecosystem, we will execute the third step in an extremely simple way.
AutoGPTQ integrated with Hugging Face transformers
The AutoGPTQ library emerges as a powerful tool for quantizing Transformer models, employing the efficient GPTQ method. Some efforts, like GPTQ-for-LLaMa, ExLlama, and llama.cpp, focus on quantizing the Llama architecture, but AutoGPTQ distinguishes itself by offering seamless support for a diverse array of transformer architectures. The Hugging Face team has taken a significant step to make GPTQ more accessible by integrating it into the Transformers API, simplifying large language model (LLM) quantization for a wider audience. This integration includes essential optimization options, such as CUDA kernels, catering to common use cases. For users seeking more advanced quantization options, the AutoGPTQ library remains a valuable resource, offering capabilities like Triton kernels and fused-attention compatibility, ensuring versatility and adaptability in the world of transformer-model quantization. (Extracted from the Hugging Face blog article "Making LLMs lighter with AutoGPTQ and transformers" [5].)
Our approach to this task
First, we load our fine-tuned Llama 2 7B 4-bit Python coder in a Colab session using a T4 with extra RAM. The model is loaded in 4-bit with bitsandbytes, and we then run about 12 examples to measure the inference time. To perform a simple evaluation of inference-time performance, we took as examples those whose input text was longer than 500 characters, so as to better appreciate the impact of quantization during inference. You can find the code to load this model in the model card on the Hugging Face Hub. In my notebook, we describe how to perform inference on the examples mentioned.
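A minimal sketch of this baseline step, assuming transformers and bitsandbytes; the checkpoint is the fine-tuned model mentioned above (assumed to ship its own tokenizer), and the prompt and generation settings are illustrative.

```python
# Baseline: load the fine-tuned model in 4-bit with bitsandbytes and time one generation.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "edumunozsala/llama-27b-int4-python-code-20k"  # checkpoint mentioned above
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                                bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config,
                                             device_map="auto")

prompt = "### Instruction:\nWrite a Python function that flattens a nested list.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
start = time.time()
out = model.generate(**inputs, max_new_tokens=256)
print(f"inference time: {time.time() - start:.1f} s")
print(tokenizer.decode(out[0], skip_special_tokens=True))
```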
Quantize the model using auto-gptq, transformers, and optimum
GPTQ quantization consumes a lot of GPU VRAM, which is why we need to run it on an A100 GPU in Colab. It takes about 45 minutes to quantize the model, for less than $1 in Colab. You can find the code in this notebook in my repository. First, we need to install the libraries as recommended in the Hugging Face tutorial. The Optimum library, Hugging Face's toolkit for training and inference optimization, provides the integration of AutoGPTQ into Transformers. The GPTQ algorithm requires calibrating the quantized weights of the model by making inferences with it. To quantize a model using auto-gptq, we need to pass a dataset to the quantizer. This can be achieved either by passing one of the supported default datasets among ['wikitext2','c4','c4-new','ptb','ptb-new'] or a list of strings to be used as a custom dataset. Now you just need to load the model using a GPTQ configuration with the desired parameters; as usual when working with transformers, it is very easy. As mentioned, this code takes about 45 minutes to run and consumes a peak of 32 GB of GPU VRAM. "You will need a GPU to quantize a model. We will put the model in the CPU and move the modules back and forth to the GPU in order to quantize them. If you want to maximize your GPUs usage while using CPU offload, you can set device_map = 'auto'" [6], Hugging Face docs. The parameters are self-explanatory: 4-bit quantization, the C4 calibration dataset, and the tokenizer to use during quantization. The other two parameters take their default values:
- group_size: the group size to use for quantization. The recommended value is 128, and -1 uses per-column quantization.
- desc_act: whether to quantize columns in order of decreasing activation size. Setting it to False can significantly speed up inference, but perplexity may become slightly worse. Also known as act-order.
Once you have the model quantized, it is time to upload it to the Hugging Face Hub and share it with the community. In my experiment with GPTQ, the reduction in model size is striking: my fine-tuned Llama 2 7B model (trained in 4-bit) weighed 13.5 GB on disk, but after quantization its size dropped to just 3.9 GB, under a third of the original size.
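A minimal sketch of this quantization step, assuming transformers, optimum, and auto-gptq are installed; the parameter values mirror those discussed above (4-bit, C4 calibration set, group_size=128, desc_act=False), while the destination repo name is hypothetical.

```python
# Quantize the fine-tuned model to 4-bit GPTQ via the transformers GPTQConfig integration.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "edumunozsala/llama-27b-int4-python-code-20k"  # fine-tuned checkpoint mentioned above
tokenizer = AutoTokenizer.from_pretrained(model_id)

gptq_config = GPTQConfig(
    bits=4,            # 4-bit quantization
    dataset="c4",      # calibration dataset; a list of strings also works
    tokenizer=tokenizer,
    group_size=128,
    desc_act=False,
)

# Quantization runs while loading; expect a long run and a large VRAM peak, as noted above.
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=gptq_config, device_map="auto"
)

# Share the result on the Hub (repo name is hypothetical).
quantized_model.push_to_hub("llama-2-7b-python-coder-gptq-4bit")
```

Reloading later is simply `AutoModelForCausalLM.from_pretrained(<repo>, device_map="auto")`: the quantization settings are stored with the weights, and transformers picks them up as long as optimum and auto-gptq are installed.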
This is a very attractive property when deploying large language models.
Loading the GPTQ model from the Hugging Face Hub and making some inferences
Probably all of you know how to do this, but just in case you think it could be trickier than with other models, we will show that it works as usual. Remember that you need to load all the libraries, including optimum, accelerate and, of course, auto-gptq. Then you can load the tokenizer and the model in your notebook on a T4 GPU in Google Colab. Now we can check the GPU to confirm how much memory we are consuming and, indeed, we can see that the model occupies about 5 GB. We repeat the performance evaluation mentioned earlier, making inferences on a set of long examples to compare with the original model. Both inference processes were executed on a T4 GPU: the base model took about 17 to 19 seconds per inference, while the quantized model ran in about 8 to 9 seconds per inference, roughly half. All the code and examples are explained in the notebook in my repository. Any suggestion or bug fix is welcome.
References
[1] ICLR 2023 paper "GPTQ: Accurate Post-Training Quantization for Generative Pre-Trained Transformers"
[2] "GPTQ or bitsandbytes: Which Quantization Method to Use for LLMs - Examples with Llama 2" by Benjamin Marie
[3] "Fine-Tuning a Llama 2 7B Model for Python Code Generation" by Eduardo Muñoz
[4] Original fine-tuned model on Hugging Face: "edumunozsala/llama-27b-int4-python-code-20k"
[5] Hugging Face blog article "Making LLMs lighter with AutoGPTQ and transformers"
[6] Hugging Face official documentation for GPTQConfig
[7] "4-bit Quantization with GPTQ" by Maxime Labonne
LLaMA: Meta's new AI tool
According to the official release, LLaMA is a foundational language model developed to assist 'researchers and academics' in their work (as opposed to the average web user), to understand and study these NLP models. Leveraging AI in such a way could give researchers an edge in terms of time spent. You may not know this, but LLaMA is Meta's third LLM, after BlenderBot 3 and Galactica. However, both were shut down soon after release, and Meta stopped their further development, as they produced erroneous results. Before moving further, it is important to emphasize that LLaMA is NOT a chatbot like ChatGPT. As I mentioned before, it is a 'research tool' for researchers. We can expect the initial versions of LLaMA to be a bit more technical and indirect to use, as opposed to ChatGPT, which is very direct, interactive, and a lot easier to use. "Smaller, more performant models such as LLaMA enable ... research community who don't have access to large amounts of infrastructure to study these models ... further democratizing access in this important, fast-changing field," said Meta in its official blog.
Meta's effort at "democratizing" access could shed light on one of the critical issues of generative AI: toxicity and bias. ChatGPT and other LLMs (obviously, I am referring to Bing) have a track record of responding in ways that are toxic and, well... evil. The Verge and other major critics have covered this in much detail. Oh, and the community did get access, just not in the way Meta anticipated. On March 3rd, a downloadable torrent of the LLaMA system was posted on 4chan, an anonymous online forum known for its controversial content and diverse range of discussions, with nearly 222 million unique monthly visitors. LLaMA is currently not used in any of Meta's products, but Meta plans to make it available to researchers before they can use it in their own products. It is worth mentioning that Meta did not release LLaMA as a public chatbot; LLaMA is more of an open-source package that can be accessed by trusted authorities upon request.
Powerful LLMs: what to hope for
Whether to agree with Ladish's views or not is debatable. Personally, I feel open-sourcing AI models can only benefit the AI community, allowing it to scrutinize the models and improve them for the better. What do you think? After all, one of LLaMA's major goals is to 'democratize' access to such models. But this access, in the form of a leak, raised questions about how Meta handles its tools and conducts releases in public. Most of the users who got the leaked copies soon discovered that LLaMA was not at all similar to ChatGPT: "downloading" LLaMA will do very little for the average internet user, because it is a "raw" AI system that needs a decent amount of technical expertise to get up and running. As I write this, Meta has not yet acknowledged the leak publicly, nor commented on it. There are both positive and negative consequences to this leak. On the one hand, unrestricted access to LLaMA could help researchers understand how and why large language models work, which could lead to improvements in robustness, bias, and the toxic nature of LLMs.
This could really help reduce the potential for these troublesome machines to generate misinformation. On the other hand, the leak could lead to people misusing the model itself. It is not yet perfect, which is why Meta has not released it fully to the public. Risks such as spam and phishing could be really hard to tackle if such powerful machines are misused, so strong safeguards must be applied to the use of these models. We can see such tools, like the OpenAI Text Classifier, emerging, so there is hope. AI is exciting, no doubt, but a lot scarier if we lose control over it.
Introducing GPT4All
GPT4All is a large language model (LLM) chatbot developed by Nomic AI, the world's first information cartography company. It was fine-tuned from the LLaMA 7B model, the leaked large language model from Meta (aka Facebook). GPT4All is trained on a massive dataset of text and code, and it can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. GPT4All is available to the public on GitHub. The LLaMA code is available for commercial use under the GPL-3.0 license, but the weights are not, which effectively puts it in the same license class as GPT4All. Nomic is working on a GPT-J-based version of GPT4All with an open commercial license. GPT4All is not going to have a subscription fee, ever. GPT4All is Free4All. Although GPT4All is still in its early stages, it has already left a notable mark on the AI landscape. Its popularity and capabilities are expected to expand further in the future.
How to run GPT4All locally
The GPT4All README provides some details about its usage. Here we briefly demonstrate how to run GPT4All locally on an M1 Mac CPU:
1. Download gpt4all-lora-quantized.bin from the-eye.
2. Clone the repository, navigate to chat, and place the downloaded file there.
3. Run the M1 Mac command given in the README.
Now it is ready to run locally. Similar to ChatGPT, GPT4All has the ability to comprehend Chinese, a feature that Bard lacks. If you want to interact with GPT4All programmatically, you can install the nomic client using pip install nomic and drive it from a short Python script (a sketch follows below).
GPT4All demystified
GPT4All aims to provide a cost-effective and fine-tuned model for high-quality LLM results. The GPT4All model was fine-tuned from an instance of LLaMA 7B with LoRA on 437,605 post-processed examples for 4 epochs. Detailed model hyperparameters and training code can be found in the GitHub repository. The GPT4All developers collected about 1 million prompt-response pairs using the GPT-3.5-Turbo OpenAI API from various publicly available datasets.
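A minimal sketch of the programmatic access mentioned above; the open()/prompt() interface shown follows the project README of the time and is an assumption here, since the bindings may have changed in later releases.

```python
# Illustrative use of the nomic client to talk to a locally running GPT4All model.
# The GPT4All().open()/prompt() interface is assumed from the README of the time.
from nomic.gpt4all import GPT4All

m = GPT4All()
m.open()  # starts the local chat session
print(m.prompt("Explain what GPT4All is in two sentences."))
```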
After an extensive data preparation process, they narrowed the dataset down to a final subset of 437,605 high-quality prompt-response pairs. Developing GPT4All took approximately four days and incurred $800 in GPU expenses and $500 in OpenAI API fees. The final gpt4all-lora model can be trained on a Lambda Labs DGX A100 (8x 80 GB) in about 8 hours, at a total cost of $100. A preliminary evaluation compared GPT4All's perplexity with that of the best publicly known alpaca-lora model. The results showed that the fine-tuned GPT4All models exhibited lower perplexity in the self-instruct evaluation than Alpaca. This assessment was not exhaustive, however; users are encouraged to run the model on local CPUs to gain qualitative insights into its capabilities.
TL;DR
Considering how expensive LLMs are to train and serve, Meta's LLaMA is a foundation for accelerating the open-source LLM community. Stanford's Alpaca, based on LLaMA, offers an optimized smaller model with enhanced performance. Now GPT4All, also built on LLaMA, enables local execution. Generative AI is evolving rapidly every day. Thanks to Brandon Duderstadt for reviewing this article.
Inside Code Llama
The release of Code Llama does not include a single model but three variants, characterized by their parameter sizes of 7B, 13B, and 34B. Each of these models has been trained on an extensive pool of 500B tokens of code and code-related information. Notably, the 7B and 13B base and instruct models have been given fill-in-the-middle (FIM) capability, empowering them to insert code seamlessly into existing code, which equips them to handle tasks like code completion out of the box. The three models cater to different serving and latency requirements. The 7B model, for instance, can run on a single GPU. While the 34B model yields the best results and better coding assistance, the smaller 7B and 13B versions excel in speed, making them fitting for low-latency tasks such as real-time code completion. Meta AI's work further extends to two specialized adaptations of Code Llama: Code Llama - Python and Code Llama - Instruct. Code Llama - Python is a specialized derivation, fine-tuned on a substantial volume of Python code spanning 100B tokens. Given Python's central role in code-generation benchmarks and its significance within the AI community, this focused model adds utility. Code Llama - Instruct is an alignment and refinement of Code Llama through instruction fine-tuning: the model is given "natural language instruction" inputs paired with the anticipated outputs, which enhances its capacity to grasp human expectations in prompts. For code-generation tasks, it is advised to use the Code Llama - Instruct versions, as they have been calibrated to yield useful and safe natural language responses. Diving deeper into the Code Llama training and fine-tuning, there are a few aspects worth highlighting. 1) Dataset: Code Llama's training rests on a meticulously curated dataset enriched with publicly available code, offering a near-duplicate-free landscape.
The dataset consists of 500B tokens during the initial phase, starting from the 7B, 13B, and 34B", "ec9421d39825feed7e070c094dd2a261a3ce1ac756a615ac49ce62052495b70c": "This strategic methodology enhances the model's capacity to grasp human expectations in prompts. For endeavors involving code generation, it is advised to opt for Code Llama - Instruct versions, as they have been calibrated to yield useful and secure natural language responses. Deep diving into the Code Llama training and fine-tuning, there are a few aspects that are worth highlighting 1) DatasetLlama's training rests on a meticulously curated dataset enriched with publicly available code, offering a near-duplicate-free landscape. The dataset consists of 500B tokens during the initial phase, starting from the 7B, 13B, and 34B versions. A supplementary 8% of sample data is garnered from natural language datasets linked to code domains. 2) InfillingWithin the realm of Code Infilling, a pivotal task revolves around predicting missing segments within a program while being guided by contextual surroundings. Pragmatic applications encompass code completion within Integrated Development Environments (IDEs), type inference, and even the generation of in-code documentation such as docstrings. Operating in alignment with the concept of causal masking, a framework expounded by Aghajanyan et al. (2022) and Fried et al. (2023), Meta AI molds infilling models. The training process entails shifting parts of training sequences to the conclusion, paving the path for autoregressive predictions. In this endeavor, both the versatile 7B and 13B models undergo infilling-oriented training, echoing the strategies advised by Bavarian et al. (2022). 3) Long Context Fine-Tuning:Unraveling the intricacies of handling extensive sequences is a formidable pursuit in the realm of transformer-based language models. The pivotal challenges orbit around extrapolation - delving into sequence lengths beyond those encountered during training - and the quadratic complexity of attention passes that tilts the balance towards short-to-medium inputs for effective training. Meta AI steps forward with a unique solution, introducing the dedicated domain of long context fine-tuning (LCFT). Embracing sequences encompassing 16,384 tokens, a substantial leap from the 4,096 tokens featured in Llama 2's initial code training stages, LCFT", "2cdc07b5cd91addeca7dafc0fcac179c1303cece7ebe9f3e0a71701e8cf827d7": "Context Fine-Tuning:Unraveling the intricacies of handling extensive sequences is a formidable pursuit in the realm of transformer-based language models. The pivotal challenges orbit around extrapolation - delving into sequence lengths beyond those encountered during training - and the quadratic complexity of attention passes that tilts the balance towards short-to-medium inputs for effective training. Meta AI steps forward with a unique solution, introducing the dedicated domain of long context fine-tuning (LCFT). Embracing sequences encompassing 16,384 tokens, a substantial leap from the 4,096 tokens featured in Llama 2's initial code training stages, LCFT empowers models with extended-range capabilities. This strategic shift occurs within a fine-tuning phase, circumventing undue escalation in training costs. 4) Instruction Fine-Tuning:Code Llama's prowess extends to instruction fine-tuning, witnessed in the refined Code Llama - Instruct models. This iteration leverages Code Llama as its foundation, sculpted to aptly respond to queries. 
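Referring back to the infilling objective described in point 2 above, the core trick is to rearrange a training example so the hidden span is predicted at the end of the sequence by an ordinary left-to-right model. The sketch below is illustrative only; the `<PRE>`/`<SUF>`/`<MID>` sentinels are placeholders, not necessarily Code Llama's actual special tokens.

```python
# Illustrative fill-in-the-middle (FIM) example construction: the span to be
# predicted is moved to the end so an autoregressive model can generate it.
def make_fim_example(code: str, hole_start: int, hole_end: int) -> str:
    prefix = code[:hole_start]
    middle = code[hole_start:hole_end]   # the "hole" the model must fill in
    suffix = code[hole_end:]
    return f"<PRE>{prefix}<SUF>{suffix}<MID>{middle}"

snippet = "def add(a, b):\n    return a + b\n"
hole = snippet.index("return")
print(make_fim_example(snippet, hole, len(snippet)))
```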
Merging Supervised Fine-Tuning with an expansive pool of Rejection Sampling examples yields this instructive competence. 5) Self-Instruct: In the realm of datasets, Meta AI embarks on a proprietary journey, curating instances tethered to code-related tasks. In recognition of the resource-intensive nature of acquiring data from human annotators or through human feedback, a particular emphasis on self-instruction is embraced. The domain of coding tasks, steeped in the insights of professional developers, forms the canvas on which this innovative approach is painted. The Results: To evaluate Code Llama, Meta AI engaged two widely acknowledged coding benchmarks: HumanEval and Mostly Basic Python Programming (MBPP). The HumanEval benchmark systematically assesses the model's prowess in code completion via docstrings, while the MBPP benchmark scrutinizes the model's capacity to translate descriptions into executable code. The meticulous benchmarking endeavor unfolded illuminating results: Code Llama outshone open-source, code-centric Large Language Models (LLMs) and even outperformed its predecessor, Llama 2. For instance, in the case of Code Llama 34B, remarkable scores emerged - an impressive 53.7%", "ead0550fc9f66c4a1cff4596f41f5d6c6b374111f2a3b495ab2044ea95f6dada": "Results: To evaluate Code Llama, Meta AI engaged two widely acknowledged coding benchmarks: HumanEval and Mostly Basic Python Programming (MBPP). The HumanEval benchmark systematically assesses the model's prowess in code completion via docstrings, while the MBPP benchmark scrutinizes the model's capacity to translate descriptions into executable code. The meticulous benchmarking endeavor unfolded illuminating results: Code Llama outshone open-source, code-centric Large Language Models (LLMs) and even outperformed its predecessor, Llama 2. For instance, in the case of Code Llama 34B, remarkable scores emerged - an impressive 53.7% on the HumanEval benchmark and a formidable 56.2% on the MBPP benchmark. These scores stood as the highest amongst comparable state-of-the-art solutions, positioning Code Llama 34B on par with the notable capabilities of ChatGPT. Code Llama promises to be one of the most important code LLMs in the near future. It certainly helps reaffirm the value of open-source foundation models across different domains.", "f707756065d1f788b41fb97fcef81979e1fd241dbfa4034a24bec8e57b648482": "I. Llama 2: Revolutionizing Commercial Use Unlike its predecessor Llama 1, which was limited to research use, Llama 2 represents a major advancement as an open-source commercial model. Businesses can now integrate Llama 2 into products to create AI-powered applications. Availability on Azure and AWS facilitates fine-tuning and adoption. However, restrictions apply to prevent exploitation. Companies with over 700 million active daily users cannot use Llama 2. Additionally, its output cannot be used to improve other language models. II. Llama 2 Model Flavors Llama 2 is available in four different model sizes: 7 billion, 13 billion, 34 billion, and 70 billion parameters. While 7B, 13B, and 70B have already been released, the 34B model is still awaited. The pretrained variant, trained on a whopping 2 trillion tokens, boasts a context window of 4096 tokens, twice the size of its predecessor Llama 1. Meta also released a Llama 2 fine-tuned model for chat applications that was trained on over 1 million human annotations. Such extensive training comes at a cost, with the 70B model taking a staggering 1,720,320 GPU hours to train.
The context window's length determines the amount of content the model can process at once, making Llama 2 a powerful language model in terms of scale and efficiency. III. Safety Considerations: A Top Priority for Meta Meta's commitment to safety and alignment shines through in Llama 2's design. The model demonstrates exceptionally low AI safety violation percentages, surpassing even ChatGPT in safety benchmarks. Finding the right balance between helpfulness and safety when optimizing a model poses significant challenges. While a highly helpful model may be capable of answering any question, including sensitive ones like \"How do I build a bomb?\", it also raises concerns about potential misuse. Thus, striking the perfect equilibrium between providing useful information and ensuring safety is paramount. However, prioritizing safety to an extreme extent can lead to a model that struggles to effectively address a diverse range of questions. This limitation could hinder the model's practical applicability and user experience. Thus, achieving", "636f98cf8754c3a4759da02aa11a3f2aa7cdeb848a4980ec99300ece4a2e92fd": "The model demonstrates exceptionally low AI safety violation percentages, surpassing even ChatGPT in safety benchmarks. Finding the right balance between helpfulness and safety when optimizing a model poses significant challenges. While a highly helpful model may be capable of answering any question, including sensitive ones like \"How do I build a bomb?\", it also raises concerns about potential misuse. Thus, striking the perfect equilibrium between providing useful information and ensuring safety is paramount. However, prioritizing safety to an extreme extent can lead to a model that struggles to effectively address a diverse range of questions. This limitation could hinder the model's practical applicability and user experience. Thus, achieving an optimum balance that allows the model to be both helpful and safe is of utmost importance. To strike the right balance between helpfulness and safety, Meta employed two reward models - one for helpfulness and another for safety - to optimize the model's responses. The 34B parameter model has reported higher safety violations than other variants, possibly contributing to the delay in its release. IV. Helpfulness Comparison: Llama 2 Outperforms Competitors Llama 2 emerges as a strong contender in the open-source language model arena, outperforming its competitors in most categories. The 70B parameter model outperforms all other open-source models, while the 7B and 34B models outshine Falcon in all categories and MPT in all categories except coding. Despite being smaller, Llama 2's performance rivals that of ChatGPT 3.5, a significantly larger closed-source model. While GPT-4 and PaLM-2-L, with their larger size, outperform Llama 2, this is expected due to their capacity for handling complex language tasks. Llama 2's impressive ability to compete with larger models highlights its efficiency and potential in the market. However, Llama 2 does face challenges in coding and math problems, where models like GPT-4 excel, given their significantly larger size. GPT-4 performed significantly better than Llama 2 for coding (HumanEval benchmark) and math problem tasks (GSM8k benchmark).
Open-source AI technologies, like Llama 2, continue to advance, offering", "2f429ec2a936a3dcd37504333de59d17ccd6f07f944ae6f5057aa8d29668662b": "with their larger size, outperform Llama 2, this is expected due to their capacity for handling complex language tasks. Llama 2's impressive ability to compete with larger models highlights its efficiency and potential in the market. However, Llama 2 does face challenges in coding and math problems, where models like GPT-4 excel, given their significantly larger size. GPT-4 performed significantly better than Llama 2 for coding (HumanEval benchmark) and math problem tasks (GSM8k benchmark). Open-source AI technologies, like Llama 2, continue to advance, offering strong competition to closed-source models. V. Ghost Attention: Enhancing Conversational Continuity One unique feature in Llama 2 is Ghost Attention, which ensures continuity in conversations. This means that even after multiple interactions, the model remembers its initial instructions, ensuring more coherent and consistent responses throughout the conversation. This feature significantly enhances the user experience and makes Llama 2 a more reliable language model for interactive applications. In the example below, on the left, it forgets to use an emoji after a few conversations. On the right, with Ghost Attention, even after having many conversations, it will remember the context and continue to use emojis in its response. VI. Temporal Capability: A Leap in Information Organization Meta reported a groundbreaking temporal capability, where the model organizes information based on time relevance. Each question posed to the model is associated with a date, and it responds accordingly by considering the event date before which the question becomes irrelevant. For example, if you ask the question, \"How long ago did Barack Obama become president?\", it is only relevant after 2008. This temporal awareness allows Llama 2 to deliver more contextually accurate responses, enriching the user experience further. VII. Open Questions and Future Outlook Meta's open-sourcing of Llama 2 represents a seismic shift, now offering developers and researchers commercial access to a leading language model. With Llama 2 outperforming MosaicML's current MPT models, all eyes are on how Databricks will respond. Can MosaicML's next MPT iteration beat Llama 2? Is it worthwhile to compete", "9c39187e90964fb2503e9a6f9b6da69b965c9c8b53c57f3c0e4de3593e582bd9": "the question, \"How long ago did Barack Obama become president?\", it is only relevant after 2008. This temporal awareness allows Llama 2 to deliver more contextually accurate responses, enriching the user experience further. VII. Open Questions and Future Outlook Meta's open-sourcing of Llama 2 represents a seismic shift, now offering developers and researchers commercial access to a leading language model. With Llama 2 outperforming MosaicML's current MPT models, all eyes are on how Databricks will respond. Can MosaicML's next MPT iteration beat Llama 2? Is it worthwhile to compete with Llama 2 or join hands with the open-source community to make the open-source models better? Meanwhile, Microsoft's move to host Llama 2 on Azure despite having significant investment in ChatGPT raises interesting questions. Will users prefer the capabilities and transparency of an open-source model like Llama 2 over closed, proprietary options? The stakes are high, as Meta's bold democratization play stands to reshape preferences and partnerships in the AI space.
One thing is certain - the era of open language model competition has begun. VIII. Conclusion With the launch of Llama 2, Meta has achieved a landmark breakthrough in open-source language models, unleashing new potential through its commercial accessibility. Llama 2's formidable capabilities in natural language processing, along with robust safety protocols and temporal reasoning, set new benchmarks for the field. While select limitations around math and coding exist presently, Llama 2's strengths far outweigh its weaknesses. As Meta continues honing Llama technology, this latest innovation promises to be truly transformative. By open-sourcing such an advanced model, Meta is propelling democratization and proliferation of AI across industries. From healthcare to education and beyond, Llama 2 stands to shape the landscape by putting groundbreaking language modeling into the hands of all developers and researchers. The possibilities unlocked by this open-source approach signal a shift towards a more collaborative, creative AI future.", "60efdb2b99a3c7d843c7b470c81f561f033b31f0c5ba8a46fa6ca04c7cc421df": "What is Generative AI? Generative AI is a subfield of machine learning that involves training artificial intelligence models on large volumes of real-world data to generate new contents (text, image, code,...) that is comparable to what humans would create. This is achieved by training algorithms on large datasets to identify patterns and learn from them. Once the neural network has learned these patterns, it can generate new data that adheres to the same patterns. However, this process is computationally intensive. Fundamentally, a generative AI for NLP applications will process an enormous corpus on which it has been trained and respond to prompts with something that falls within the realm of probability, as learnt from the mentioned corpus. For example, autocomplete is a low-level form of generative AI. Advanced models like ChatGPT and DALL-E take the concept to a whole new level. Different model architectures, such as diffusion models and Transformer-based large language models (LLMs), can be employed for generative tasks such as image and language generation. Diffusion models are a type of generative AI model that can be used for a variety of tasks, including image generation, image denoising, and inpainting. Similarly, the Transformer architecture revolutionized the language domain. The new era of language models are Transformer-based, which is a type of deep learning architecture for natural language processing (NLP) tasks. They utilize a self-attention mechanism to transform the input sequence into a set of context-aware high dimensional vectors (also known as embeddings) that can be used for a variety of NLP tasks, including language generation, machine translation, and text classification. The most well-known transformer-based LLMs are the GPT family, developed by OpenAI. The primary advantage of transformer-based LLMs over traditional NLP models is that they are highly parallelizable and can handle long-range dependencies between words in a sentence more effectively. This makes them more suitable for tasks that require a deeper understanding of the context, such as text summarization or generating a coherent and fluent text. Let's explore the history and current state of generative AI and the key players shaping its future. The Generative AI Revolution Generative AI has been around for several years. 
One of the earliest examples", "17eeb609fad179ea13a99a4c7c967a3d4f78ad2a0f1fcb520ca3a74c9fdc3a49": "text classification. The most well-known transformer-based LLMs are the GPT family, developed by OpenAI. The primary advantage of transformer-based LLMs over traditional NLP models is that they are highly parallelizable and can handle long-range dependencies between words in a sentence more effectively. This makes them more suitable for tasks that require a deeper understanding of the context, such as text summarization or generating a coherent and fluent text. Let's explore the history and current state of generative AI and the key players shaping its future. The Generative AI Revolution Generative AI has been around for several years. One of the earliest examples is the Eliza chatbot developed by Joseph Weizenbaum in 1966. However, these early implementations relied on a rules-based approach that had several shortcomings, such as a limited vocabulary, lack of context, and overreliance on patterns. As a result, they were prone to frequent breakdowns, making it difficult to customize and expand these initial chatbots. Recently, significant progress has been made in AI and machine learning, resulting in the development of advanced generative AI systems. It's no coincidence that these breakthroughs have happened all at once. They're based on a new class of AI models that are incredibly flexible and powerful, surpassing anything we've seen before. In deep learning, there are three critical components that contributed the most to their recent success: scaling models, large datasets, and more compute power - all working together to bring us to this exciting point in AI advancement. Progress in GPUs and their application to Machine Learning GPUs are designed for parallel processing, making them well-suited for the computationally intensive tasks involved in training deep neural networks. Unlike CPUs, which focus on sequential processing, GPUs have thousands of smaller cores that can handle multiple tasks simultaneously, allowing for faster training of large networks. A key breakthrough for machine learning was the intuition that GPUs could be used for Neural Networks, together with software progress such as Nvidia's release of CUDA in 2007, a programming language that allowed GPUs to be used as general-purpose computers. Alexnet - 2012 - The Deep Learning Revolution The modern AI revolution began in 2012 with step change progress in deep learning and convolutional neural networks", "a03f36ef8f17234d2c3de94cfb8f7f9a2c8a9ef1e42a0af06b1f150f2eada805": "them well-suited for the computationally intensive tasks involved in training deep neural networks. Unlike CPUs, which focus on sequential processing, GPUs have thousands of smaller cores that can handle multiple tasks simultaneously, allowing for faster training of large networks. A key breakthrough for machine learning was the intuition that GPUs could be used for Neural Networks, together with software progress such as Nvidia's release of CUDA in 2007, a programming language that allowed GPUs to be used as general-purpose computers. Alexnet - 2012 - The Deep Learning Revolution The modern AI revolution began in 2012 with step change progress in deep learning and convolutional neural networks (CNNs), which were particularly effective in solving computer vision problems. Although CNNs had been around since the 1990s, they were not practical due to their intensive computing power requirements. 
However, in 2009, Stanford AI researchers introduced ImageNet, a labeled image dataset used to train computer vision algorithms, and a yearly challenge. In 2012, AlexNet combined CNNs trained on GPUs with ImageNet data to create the most advanced visual classifier at the time. The model outperformed the runner-up by a significant margin of nearly 11%! The success of CNNs, the ImageNet dataset, and GPUs drove significant progress in computer vision. Transformers: Attention Is All You Need (Google) - 2017 One critical area where deep learning lagged was natural language processing (NLP), which involves getting computers to understand and hold a coherent conversation with humans rather than translation or classification. NLP breakthroughs were needed to bridge this gap. Previously, researchers relied on models such as recurrent neural networks (RNNs) and long short-term memory (LSTM) to process and analyze time-based data. These models were proficient at recognizing short sequences such as spoken words but struggled with longer sentences and paragraphs. The architectural flaws of these models made them unable to capture the complexity and richness of ideas that arise when sentences are combined into larger bodies of text. A significant breakthrough in AI was the development of the \"Transformer\" model by Google with the very popular paper \"Attention Is All You Need\". This model represented a major milestone as it revolutionized the approach to translation problems by utilizing a mechanism called \"attention\":", "39372868ea13500bdbb9e2e4fcee00653c352e0a03e24e0d0a77a1b6e008221c": "on models such as recurrent neural networks (RNNs) and long short-term memory (LSTM) to process and analyze time-based data. These models were proficient at recognizing short sequences such as spoken words but struggled with longer sentences and paragraphs. The architectural flaws of these models made them unable to capture the complexity and richness of ideas that arise when sentences are combined into larger bodies of text. A significant breakthrough in AI was the development of the \"Transformer\" model by Google with the very popular paper \"Attention Is All You Need\". This model represented a major milestone as it revolutionized the approach to translation problems by utilizing a mechanism called \"attention\": a particular neural network that allowed the model to analyze the entire input sequence and determine relevance to each component of the output. In the years to come, Transformers have been found to be state-of-the-art models for many other NLP tasks as well, and recently also in other domains such as computer vision. Next word prediction, scale and fine tuning - BERT (Google) and GPT (OpenAI) family - 2018 With the advancement of Transformers, a key further breakthrough finding was the potential to train on unstructured data via a next-word prediction objective on website contents. This introduced models such as BERT and GPT-2. This delivered surprising capabilities and \"zero shot\" performance at completing new tasks the model hadn't been trained for. OpenAI also continued to probe whether the performance of these models would keep increasing with more scale and more training data. One of the major challenges faced by researchers was acquiring the right training data. ImageNet, a collection of one hundred thousand labeled images, required a significant human effort.
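As a concrete illustration of the attention mechanism described above, the sketch below implements scaled dot-product self-attention on toy vectors; the shapes and random inputs are illustrative assumptions, not the paper's code.

```python
# Scaled dot-product attention: each output position is a weighted mix of the
# value vectors, with weights given by how well its query matches every key.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))          # 3 tokens, 4-dimensional embeddings (toy)
print(scaled_dot_product_attention(x, x, x))   # self-attention over the sequence
```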
Despite the abundance of text available on the Internet, creating a meaningful dataset for teaching computers to work with human language beyond individual words is a time-consuming process. Additionally, labels created for one application using the same data may not apply to another task. With the advancements of BERT and the first iteration of GPT, we started to harness the immense amount of unstructured text data available on the internet and the computational power of GPUs. OpenAI further advanced this approach with their development of GPT-2 and GPT-3 models, which are short for \"generative pre-trained", "f798d2742de5deb90469cc4e0d63e17050960d3f94cd1d8004c3a1052dabd303": "of one hundred thousand labeled images, required a significant human effort. Despite the abundance of text available on the Internet, creating a meaningful dataset for teaching computers to work with human language beyond individual words is a time-consuming process. Additionally, labels created for one application using the same data may not apply to another task. With the advancements of BERT and the first iteration of GPT, we started to harness the immense amount of unstructured text data available on the internet and the computational power of GPUs. OpenAI further advanced this approach with their development of GPT-2 and GPT-3 models, which are short for \"generative pre-trained transformer.\" These models are specifically designed to generate new words in response to input and are pre-trained on vast amounts of text using the next-word prediction objective. Another key breakthrough in these large transformer models is the concept of \"fine tuning\" - or adapting a large model to new, more specific tasks or a new, smaller and targeted data set - to improve performance in a particular domain with far lower compute cost than training a new model from scratch. For example, a foundational language model like GPT-3 may be fine-tuned on a dataset of medical documents to create a specialized model for medical document processing. This model will be better at understanding medical terminology, identifying medical entities, and extracting relevant information from medical texts. Instruction Tuning - Instruct GPT and ChatGPT (OpenAI) - 2022 The most recent advancement which has led to the Generative AI landscape today is the concept of Instruction Tuning - taking a model which has just been trained to predict the next word of a text document - and teaching it (via fine tuning) to actually follow human instructions and preferences. This made it far easier to interact with these LLMs and to get them to answer questions and perform tasks without getting sidetracked by just trying to predict the next word.
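Before moving on, here is a rough sketch of the plain fine-tuning idea described above (adapting a pretrained model to a smaller, targeted dataset). The model name, the two-sentence "medical" corpus, and the hyperparameters are illustrative assumptions, not the recipe behind any production model.

```python
# Minimal domain fine-tuning sketch with Hugging Face Transformers:
# continue next-word prediction training of a small open model on toy domain text.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "distilgpt2"  # stand-in for a real foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

corpus = Dataset.from_dict({"text": [
    "Patient presents with elevated blood pressure; recommend follow-up in two weeks.",
    "MRI shows no acute abnormality; continue current medication.",
]})
tokenized = corpus.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                       batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-demo", num_train_epochs=1,
                           per_device_train_batch_size=2, logging_steps=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM
)
trainer.train()  # continues next-word prediction on the new domain text
```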
A fortunate feature of instruction tuning is that it not only helps to increase the accuracy and capabilities of these models, but also helps align them to human values and prevents them from generating undesired or dangerous content. OpenAI's specific technique for instruction tuning is called reinforcement learning with human feedback (RLHF), where humans are used to train the model by ranking its responses. Building on top of Instruction Tuning, OpenAI released ChatGPT - which reorganized instruction tuning into a dialogue format and created an easy-to-use interface for interacting with the AIs. This has catalyzed the mass awareness and adoption of Generative AI products and has led to the landscape we have today. The Current LLM Landscape The breakthroughs in Generative AI have left us with an extremely active and dynamic landscape of players. This consists of 1) AI hardware manufacturers such as Nvidia and Google, 2) AI cloud platforms such as Azure, AWS, Nvidia and Google, 3) open source platforms for accessing the full models, such as Hugging Face, 4) access to LLM models via API such as OpenAI, Cohere and Anthropic and 5) access to LLMs via consumer products such as ChatGPT and Bing. Additionally, there are many more breakthroughs happening each week in this universe, such as the release of multimodal models (that can understand both text and image), new model architectures (such as Mixture of Experts), and Agent Models (models that can set tasks and interact with each other and other tools). This all leads to many questions, such as: How will most people interact with LLMs? Who will be the leading players going forward? How fast will the capabilities of these models keep progressing? Are open source models dangerous because of the lack of control of their outputs and use, or are they beneficial due to democratizing access to this technology? 1. OpenAI's GPT Models Notable Models Task specific", "0616015c4c1671255a7809472a7ef05d5a698e0ee274e630cf212431b0f43808": "the release of multimodal models (that can understand both text and image), new model architectures (such as Mixture of Experts), and Agent Models (models that can set tasks and interact with each other and other tools). This all leads to many questions, such as: How will most people interact with LLMs? Who will be the leading players going forward? How fast will the capabilities of these models keep progressing? Are open source models dangerous because of the lack of control of their outputs and use, or are they beneficial due to democratizing access to this technology? 1. OpenAI's GPT Models Notable Models Task specific models Find model information here: https://platform.openai.com/docs/models/gpt-3 Image & Audio Models OpenAI, the company behind the GPT models, is an AI research and deployment company. The San Francisco-based lab was founded in 2015 as a nonprofit with the goal of building \"artificial general intelligence\" (AGI), which is essentially software as smart as humans. OpenAI conducts innovative research in various fields of AI, such as deep learning, natural language processing, computer vision, and robotics, and develops AI technologies and products intended to solve real-world problems. OpenAI transitioned into a for-profit company in 2019. The company plans to cap the profit of the investors at a fixed multiple of their investment (noted by Sam Altman as currently ranging between 7x and 100x depending on the investment round date and risk).
As per the WSJ OpenAI was initially funded by $130m of charity funding (Elon Musk tweeted he contributed $100m) and has since raised at least $13bn led by Microsoft (where OpenAI makes use of Azure cloud credits). With the Microsoft partnership, OpenAI's ChatGPT, along with Microsoft's own search AI, created an improved version of Bing and transformed Microsoft's Office productivity apps. In 2019, OpenAI released GPT-2, a model that could generate realistic human-like text in entire paragraphs with internal consistency, unlike any of the previous models. The next generation, GPT-3, launched in 2020, was trained with 175 billion parameters. GPT-3 is a", "760f192e2d4e2ef6d28bf514e6ce2283c939d6250becafd6c063450f5a803ba9": "he contributed $100m) and has since raised at least $13bn led by Microsoft (where OpenAI makes use of Azure cloud credits). With the Microsoft partnership, OpenAI's ChatGPT, along with Microsoft's own search AI, created an improved version of Bing and transformed Microsoft's Office productivity apps. In 2019, OpenAI released GPT-2, a model that could generate realistic human-like text in entire paragraphs with internal consistency, unlike any of the previous models. The next generation, GPT-3, launched in 2020, was trained with 175 billion parameters. GPT-3 is a multi-purpose language tool that users can access without requiring them to learn a programming language or other computer tools. In November 2022, OpenAI released ChatGPT, which is a superior version of the company's earlier text generation models with the capability to generate humanlike prose. After the success of ChatGPT (GPT 3.5), Open AI released GPT-4 in March 2023, which has multimodal capabilities. The model processes both image and text inputs for text generation. The model has a maximum token count of 32,768 capable of generating around 25,000 words as compared to GPT-3.5 which has 4,096 tokens context size. GPT-4 produces 40% more factual responses and its response rate for disallowed content is down by 82% as compared to previous models. (reported by OpenAI) 2. Google's Palm Models Google AI, formerly known as Google Research, is the AI research and development arm of Google. It was unveiled at Google I/O 2018. Google has contributed many of the most significant papers in breakthroughs in modern machine learning. Google's largest publicly disclosed model is its Pathways Language Model (PaLM) which has likely recently been rolled out in its Bard chatbot. PaLM has been used as a foundation model in several Google projects including the instruction tuned PaLM-Flan, and the recent PaLM-E (the first \"embodied\" multimodal language model). The pre-training of PaLM involved self-supervised learning drawing from a large text corpus that included multilingual web pages", "7785d72a4a2f8586d81a9b615dcd0f4cab7cf46f0c5cb64dce75188d7ff9ba80": "research and development arm of Google. It was unveiled at Google I/O 2018. Google has contributed many of the most significant papers in breakthroughs in modern machine learning. Google's largest publicly disclosed model is its Pathways Language Model (PaLM) which has likely recently been rolled out in its Bard chatbot. PaLM has been used as a foundation model in several Google projects including the instruction tuned PaLM-Flan, and the recent PaLM-E (the first \"embodied\" multimodal language model). 
The pre-training of PaLM involved self-supervised learning drawing from a large text corpus that included multilingual web pages (27%), English books (13%), open-source code repositories and source code from GitHub (5%), multilingual Wikipedia articles (4%), English news articles (1%), and other social media conversations (50%). PaLM excelled in 28 out of 29 NLP tasks in few-shot performance, beating prior larger models like GPT-3 and Chinchilla. PaLM variants scale up to 540 billion parameters (vs GPT-3 at 175 billion) and were trained on 780 billion tokens (vs GPT-3's 300bn) - totalling around 8x more training compute than GPT-3 (but likely considerably less than GPT-4). PaLM was trained across multiple TPU v4 pods. Being a dense decoder-only Transformer model, PaLM is trained on two TPU v4 pods connected over a data center network and uses a combination of model and data parallelism. Researchers used 3072 TPU v4 chips in each pod, attached to 768 hosts. This large TPU configuration allows for efficient scale training without using pipeline parallelism. The Pathways system allows for scaling a model across Google's thousands of Tensor Processing Unit chips. 3. DeepMind's Chinchilla Model DeepMind Technologies, founded in 2010, is a British AI research laboratory. It became a wholly owned subsidiary of Alphabet Inc. in 2015 after its acquisition by Google in 2014. DeepMind has created a neural network, or a Neural Turing machine, that tries to replicate the short-term memory of the human brain. In 2016,", "6071d5b83d417ed7b2f47ea8dd5a380039620c6850ce1c6b48c07cb45072afc1": "chips in each pod, attached to 768 hosts. This large TPU configuration allows for efficient scale training without using pipeline parallelism. The Pathways system allows for scaling a model across Google's thousands of Tensor Processing Unit chips. 3. DeepMind's Chinchilla Model DeepMind Technologies, founded in 2010, is a British AI research laboratory. It became a wholly owned subsidiary of Alphabet Inc. in 2015 after its acquisition by Google in 2014. DeepMind has created a neural network, or a Neural Turing machine, that tries to replicate the short-term memory of the human brain. In 2016, DeepMind's AlphaGo program defeated a human professional Go player, and their program AlphaZero defeated the most powerful programs in the games of Go and Shogi. The program acquired competence using reinforcement learning. In 2020, DeepMind's program AlphaFold started making advances in the problem of protein folding, and by July 2022, it had predicted over 200 million protein structures. In April 2022, Flamingo, a single visual language model program capable of describing any picture, was launched. Three months later, in July 2022, DeepNash, a model-free multi-agent reinforcement learning system, was announced. DeepMind developed a language model called Chinchilla AI in March 2022, which claimed to outperform GPT-3. A key breakthrough in the Chinchilla paper was that previous LLMs had been trained on too little data - for a given parameter size the optimum model should use far more training data than GPT-3. While more training data takes more time to gather, and leads to more training costs, achieving more capable models for a smaller parameter size has huge benefits for inference costs (the costs needed to run and use the finished model, which scale with parameter size). Chinchilla has 70B parameters (60% smaller than GPT-3) and was trained on 1.4 trillion tokens (4.7x GPT-3).
The average accuracy rate of Chinchilla AI is 67.5% on Measuring Massive Multitask Language Understanding (MMLU), and it outperforms other large language model platforms like Gopher", "1a6600c1f5e5914041ed40bc246aacf63a7ec90454c4cd6cbb1e72da559c1a0e": "While more training data takes more time to gather, and leads to more training costs, achieving more capable models for a smaller parameter size has huge benefits for inference costs (the costs needed to run and use the finished model, which scale with parameter size). Chinchilla has 70B parameters (60% smaller than GPT-3) and was trained on 1.4 trillion tokens (4.7x GPT-3). The average accuracy rate of Chinchilla AI is 67.5% on Measuring Massive Multitask Language Understanding (MMLU), and it outperforms other large language model platforms like Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B parameters) on a large range of downstream evaluation tasks. 4. Microsoft & Nvidia's Megatron Turing Model Nvidia is a company that designs GPUs and APIs for data science and high-performance computing, and SoCs for mobile computing and the automotive market. The company is a leading supplier of AI hardware and software. Additionally, Nvidia's CUDA API enables the creation of massively parallel programs that leverage GPUs. Developed by NVIDIA's Applied Deep Learning Research team in 2021, the Megatron-Turing model consists of 530 billion parameters and 270 billion training tokens. Nvidia has provided access via an Early Access program for its managed API service to its MT-NLG model. Nvidia has made many of its LLM and Generative AI models and services available through its new DGX Cloud platform. 5. Meta's LLaMA Models Meta AI, formerly known as Facebook Artificial Intelligence Research (FAIR), is an artificial intelligence laboratory that aims to share open-source frameworks, tools, libraries, and models for research exploration and large-scale production deployment. In 2018, they released the open-source PyText, a modeling framework focused on NLP systems. Then, in August 2022, they announced the release of BlenderBot 3, a chatbot designed to improve conversational skills and safety. In November 2022, Meta developed a large language model called Galactica, which assists scientists with tasks such as summarizing academic papers and annotating molecules and
It will be released under a non-commercial license to prevent misuse, and access will be granted to academic researchers, individuals, and organizations affiliated with the government, civil society, academia, and industry research facilities on a selective case-by-case basis. The sharing of codes and weights allows other researchers to test new approaches in LLMs. The LLaMA models have a range of 7 billion to 65 billion parameters. LLaMA-65B can be compared to DeepMind's Chinchilla and Google's PaLM. Publicly available unlabeled data was used to train these models, and training smaller foundational models require less computing power and resources. LLaMA 65B and 33B have been trained on 1.4 trillion tokens in 20 different languages, and according to the Facebook Artificial Intelligence Research (FAIR) team, the model's performance varies across languages. The data sources used for training included CCNet (67%), GitHub, Wikipedia, ArXiv, Stack Exchange, and books. LLaMA, like other large scale language models, has issues related to biased & toxic generation and hallucination. 6. Eleuther's GPT-Neo Models Founded in July 2020 by Connor Leahy, Sid Black, and Leo Gao, EleutherAI is a non-profit AI research lab", "4831172fc03283befa6bb0f0752b2cf8e5f59a22269e4657ecc09c39a37cfa44": "have been trained on 1.4 trillion tokens in 20 different languages, and according to the Facebook Artificial Intelligence Research (FAIR) team, the model's performance varies across languages. The data sources used for training included CCNet (67%), GitHub, Wikipedia, ArXiv, Stack Exchange, and books. LLaMA, like other large scale language models, has issues related to biased & toxic generation and hallucination. 6. Eleuther's GPT-Neo Models Founded in July 2020 by Connor Leahy, Sid Black, and Leo Gao, EleutherAI is a non-profit AI research lab The organization has emerged as a leading player in large-scale natural language processing research, with a focus on interpretability and alignment of large models. Their mission is to ensure that the ability to study foundation models is not limited to a few companies, promoting open science norms in NLP, and creating awareness about capabilities, limitations, and risks around these models. In December 2020, EleutherAI curated a dataset of diverse text for training LLMs called the Pile, which consisted of an 800GiB dataset. Subsequently, in March 2021, they released GPT-Neo models. EleutherAI also released GPT-J-6B in June 2021, which is a 6 billion parameter language model, making it the largest open-source GPT-3 like model at the time. Additionally, they combined CLIP with VQGAN to develop a free-to-use image generation model, which guided the foundation of Stability AI. EleutherAI also trains language models in other languages, such as Polyglot-Ko, which were trained in collaboration with the Korean NLP company TUNiB. EleutherAI used Google's TPU Research Cloud Program, but by 2021, they took funding from CoreWeave. The company also uses TensorFlow Research Cloud for cheaper computing resources. In February 2022, EleutherAI released the GPT-NeoX-20b model, which became the largest open-source language model of any type at the time. In January 2023, the company was formally incorporated as a non-profit research institute. EleutherAI's NLP", "3aff755a96b44a09548c6b1964e665e354a3344b6e2214098f4bff55ae3413c5": "language models in other languages, such as Polyglot-Ko, which were trained in collaboration with the Korean NLP company TUNiB. 
EleutherAI used Google's TPU Research Cloud Program, but by 2021, they took funding from CoreWeave. The company also uses TensorFlow Research Cloud for cheaper computing resources. In February 2022, EleutherAI released the GPT-NeoX-20B model, which became the largest open-source language model of any type at the time. In January 2023, the company was formally incorporated as a non-profit research institute. EleutherAI's NLP model, GPT-NeoX-20B, has 20 billion parameters and was trained using the company's GPT-NeoX framework and GPUs from CoreWeave. The GPT-NeoX-20B model has a 72% accuracy on LAMBADA sentence completion. When measured for zero-shot accuracy on STEM subjects using the Hendrycks Test Evaluation, it had an average of 28.98%. The model uses the Pile dataset for training, which consists of data from 22 sources that fall under the following 5 categories: academic writing (Pubmed Abstracts and PubMed Central, arXiv, FreeLaw, USPTO Backgrounds, PhilPapers, NIH Exporter), web-scrapes and Internet resources (CommonCrawl, OpenWebText2, StackExchange, Wikipedia-English), prose (BookCorpus2, Bibliotik, Project Gutenberg), dialogue (Youtube subtitles, Ubuntu IRC, OpenSubtitles, Hacker News, EuroParl), and miscellaneous (GitHub, the DeepMind Mathematics dataset, Enron Emails). GPT-NeoX-20B is publicly accessible and a pre-trained general-purpose autoregressive transformer decoder language model. It is a powerful few-shot reasoner with 44 layers, a hidden dimension size of 6144, and 64 heads. Additionally, it uses Rotary Positional Embeddings instead of learned positional embeddings, as found in GPT models. 7. Cohere's XLarge Founded in 2019 by Aidan Gomez, Ivan Zhang, and Nick Frosst, Toronto-based Cohere specializes in natural language", "88f631025eca5d5be38b898c11dd6e3403c3c5ed4d20b3824b8101d9842bd97f": "miscellaneous (GitHub, the DeepMind Mathematics dataset, Enron Emails). GPT-NeoX-20B is publicly accessible and a pre-trained general-purpose autoregressive transformer decoder language model. It is a powerful few-shot reasoner with 44 layers, a hidden dimension size of 6144, and 64 heads. Additionally, it uses Rotary Positional Embeddings instead of learned positional embeddings, as found in GPT models. 7. Cohere's XLarge Founded in 2019 by Aidan Gomez, Ivan Zhang, and Nick Frosst, Toronto-based Cohere specializes in natural language processing (NLP) models. Cohere has improved human-machine interactions and aided developers in performing tasks such as summarizing, classification, finding similarities in content, and building their own language models. Cohere's API helps users design tools for language comprehension and offers a backend toolkit for integration in multiple ways. Cohere provides two types of large language models: Generation Language Models and Representation Language Models. The company uses a foundation model to train AI systems on large-scale data, enabling them to learn from new data to perform various tasks. Generative AI aims to develop human-like creations through coding, and Cohere competes with similar model providers like OpenAI and Anthropic, with the point of differentiation being the focus on serving enterprise users in incorporating generative AI. Cohere's goal is to make NLP accessible to all while building machines that are safe to use. In September 2021, Cohere raised $40 million, and a few months later, in November 2021, Google Cloud announced its partnership with Cohere.
The company intends to use Cloud's TPU for the development and deployment of its products, and Sagemaker by Amazon also gives access to Cohere's language AI. Cohere powers Hyperwrite, which helps in quickly generating articles. AWS has also announced a partnership with Cohere AI. To date, Cohere has raised $170 million, and with the ongoing rush of funding in AI platforms, the Canadian startup is expected to be valued at $6 billion. Cohere is set to introduce a new dialogue model to aid enterprise users in generating text while engaging with the model to fine-tune the", "5a885d39087ac603de1179cb84c9367e9c95e5144ab50d02df66d85855fc0531": "Cloud announced its partnership with Cohere. The company intends to use Cloud's TPU for the development and deployment of its products, and Sagemaker by Amazon also gives access to Cohere's language AI. Cohere powers Hyperwrite, which helps in quickly generating articles. AWS has also announced a partnership with Cohere AI. To date, Cohere has raised $170 million, and with the ongoing rush of funding in AI platforms, the Canadian startup is expected to be valued at $6 billion. Cohere is set to introduce a new dialogue model to aid enterprise users in generating text while engaging with the model to fine-tune the output. Cohere's Xlarge model resembles ChatGPT but provides developers and businesses with access to this technology. Cohere's base model has 52 billion parameters compared to OpenAI's GPT-3 DaVinci model, which has 175B parameters. Cohere stresses on accuracy, speed, safety, cost, and ease of use for its users and has paid much attention to the product and its design, developing a cohesive model. 8. Anthropic AI's Claude Anthropic is an American AI startup and public benefit corporation founded in 2021 by Daniela Amodei and Dario Amodei, former members of OpenAI. The company specializes in developing AI systems and language models, with a particular focus on transformer architecture. Anthropic's research on the interpretability of machine learning systems covers fields ranging from natural language and interpretability to human feedback, scaling laws, reinforcement learning, and code generation, among others. The company stresses the application of responsible AI and presents itself as an AI safety and research company working towards building reliable, steerable, and interpretable AI systems. By 2022, Google had invested nearly $400 million in Anthropic, resulting in a formal partnership between the two companies and giving Google a 10% stake in Anthropic. Outside backing amounted to $580 million, with total investments in Anthropic exceeding $1 billion to date. Anthropic has developed a conversational large language model AI chatbot named Claude, which uses a messaging interface and a technique called constitutional AI to better align AI systems with human intentions. AnthropicLM v4-s3 is a", "e35033d47f453f283f231643acf63fbe3271357aeec257539c182fa907950681": "AI and presents itself as an AI safety and research company working towards building reliable, steerable, and interpretable AI systems. By 2022, Google had invested nearly $400 million in Anthropic, resulting in a formal partnership between the two companies and giving Google a 10% stake in Anthropic. Outside backing amounted to $580 million, with total investments in Anthropic exceeding $1 billion to date. 
Anthropic has developed a conversational large language model AI chatbot named Claude, which uses a messaging interface and a technique called constitutional AI to better align AI systems with human intentions. AnthropicLM v4-s3 is a 52-billion-parameter, autoregressive model, trained unsupervised on a large text corpus. The ten principles used by Anthropic are based on the concepts of beneficence, non-maleficence, and autonomy. Claude is capable of a variety of conversational and text-processing tasks, such as summarization, search, creative and collaborative writing, Q&A, and coding. It is easy to converse with, more steerable, and takes directions on personality, tone, and behavior. Anthropic offers two versions of Claude - Claude (Claude-v1) and Claude Instant. Claude-v1 is a powerful, state-of-the-art high-performance model capable of handling complex dialogue, creative content generation, and detailed instructions. Claude Instant is lighter, less expensive, and much faster, making it suitable for handling casual dialogues, text analysis, and summarization. However, Claude is an expensive platform compared to ChatGPT. Anthropic vouches for Claude to be an honest, helpful, and harmless AI system, and much less likely to produce harmful outputs than present chatbots, which have been known to be toxic, biased, use offensive language and hallucinate. According to Anthropic, Claude cannot access the internet and is designed to be self-contained and trained to avoid sexist, racist, and otherwise toxic outputs, along with preventing human engagement in illegal and unethical activities. However, compared to ChatGPT, Claude is poor at math and programming. Still, the platform has also been seen to hallucinate and provide dubious instructions. Another major concern is that it is possible to intrude upon Claude's built-in safety", "d7a598ee33c540081edd5c4c063a63253146868aa40ff9bbf79b6e60b42acd83": "helpful, and harmless AI system, and much less likely to produce harmful outputs than present chatbots, which have been known to be toxic, biased, use offensive language and hallucinate. According to Anthropic, Claude cannot access the internet and is designed to be self-contained and trained to avoid sexist, racist, and otherwise toxic outputs, along with preventing human engagement in illegal and unethical activities. However, compared to ChatGPT, Claude is poor at math and programming. Still, the platform has also been seen to hallucinate and provide dubious instructions. Another major concern is that it is possible to intrude upon Claude's built-in safety features through clever prompting. The embargo on media coverage of Claude was lifted in January 2023, and a waiting list of users who wanted early access to Claude was released in February. Claude is now available and accessible to users through the Poe app by Quora. Also, Discord Juni Tutor Bot, an online tutoring solution, is powered by Anthropic. Additionally, Claude has found integration with Notion, DuckDuckGo, RobinAI, Assembly AI, and others. 9. AI21's Jurassic Models AI21 Labs specializes in Natural Language Processing to develop generative AI models that can understand and generate text. The Tel Aviv-based startup was founded in 2017 by Yoav Shoham, Ori Goshen, and Amnon Shashua. AI21 has emerged as a rival to OpenAI. In 2019, the startup raised $9.5 million, and in October 2020; it launched Wordtune which was an AI-based writing app. AI21 Labs launched AI21 Studio and Jurassic-1 in August 2021. 
This was followed by Walden Catalyst investing $20 million in AI21 Labs in November, soon after which the company completed a $25 million series A round led by Pitango First. AI21 raised $64 million in the next round of funding. AI21 Labs launched Wordtune Spices in January 2023 and Jurassic-2 in March 2023. The Jurassic-1 model by AI21 Labs generates human-like texts and performs complex tasks like question answering, text classification, and others. The Jurassic-1 model comes in two sizes.", "5cd7638552ab4fc482002c5d33c8c439c5b08b3f2c49e0a56406cf922b760ae3": "app. AI21 Labs launched AI21 Studio and Jurassic-1 in August 2021. This was followed by Walden Catalyst investing $20 million in AI21 Labs in November, soon after which the company completed a $25 million series A round led by Pitango First. AI21 raised $64 million in the next round of funding. AI21 Labs launched Wordtune Spices in January 2023 and Jurassic-2 in March 2023. The Jurassic-1 model by AI21 Labs generates human-like texts and performs complex tasks like question answering, text classification, and others. The Jurassic-1 model comes in two sizes. Jurassic-1 Jumbo contains 178 billion parameters. The model uses a unique 250,000-token vocabulary and includes multi-word tokens, reducing the model's need to use a large number of tokens and thus improving computational efficiency and reducing latency. Jurassic-1 allows developers to train custom versions of the model with just 50-100 training examples, helping users to build customized applications and services. Jurassic-1 has been notably used by Latitude to scale production of its gaming world, by Harambee to create a custom chatbot to increase sign-ups for its youth employment programs, and by Verb to build a writing tool for authors. The next iteration of Jurassic (Jurassic-2) is a highly customizable language model. It has comprehensive instruction tuning on proprietary data, which gives it advanced instruction following capabilities. The model supports languages like Spanish, French, German, Portuguese, Italian, and Dutch. Compared to the Jurassic-1 model, it has up to 30% faster response time, significantly reducing latency. Jurassic-2 has three sizes, with each one having a separate instruction-tuned version - Large, Grande, and Jumbo. Jurassic-2 helps users to build virtual assistants and chatbots and helps in text simplification, content moderation, creative writing, etc. Jurassic-2 also has zero-shot instruction capabilities. The model boasts of the most current knowledge and up-to-date database, with training being based on data updated in the middle of 2022, as compared to ChatGPT, which had closed its database by the end of 2021. Jurassic-2 comes with five APIs built for businesses that want specifically tailored
The APIs include tools for paraphrasing, summarizing, checking grammar, segmenting long texts by topic, and recommending improvements. On Stanford's Holistic Evaluation of Language Models (HELM), Jurassic-2 Jumbo ranks second with an 86.8% win rate. Jurassic-2 is available for free until May 1st, 2023.

10. Baidu's ERNIE Model
Baidu, based in Beijing, is a prominent Chinese company that specializes in artificial intelligence. In 2019, Baidu launched a powerful AI language model named ERNIE (Enhanced Representation through Knowledge Integration), which has been open-sourced along with its code and pre-trained model based on PaddlePaddle. Since its inception, ERNIE has undergone significant improvements and can now execute a diverse array of tasks, such as language comprehension, language generation, and text-to-image generation. ERNIE was designed to enhance language representations by implementing knowledge masking strategies, such as entity-level masking and phrase-level masking. Baidu launched ERNIE 2.0 in July 2019, which introduced a continual pre-training framework that incrementally builds and learns tasks through constant multi-task learning. ERNIE 3.0 was unveiled in early 2021 and introduced a unified pretraining framework that allows collaborative pretraining among multi-task paradigms. Unlike models such as GPT-3, ERNIE 3.0 showcased task-agnostic zero-shot and few-shot learning capabilities and could be easily tailored for natural language understanding and generation tasks with zero-shot learning, few-shot learning, or fine-tuning. In late 2021, Baidu released ERNIE 3.0 Titan, a pre-training language model with 260 billion parameters trained on massive unstructured data. Baidu also developed ERNIE Bot, its latest large language model (LLM) and generative AI product. It is designed to serve as a foundational AI platform that can facilitate intelligent transformations in various industries, including finance, energy, media, and public affairs. Access to ERNIE Bot is currently limited to invited users, with the API expected to become available to enterprise clients through Baidu AI Cloud after application (as of March 16th). Baidu aims to use the capabilities of ERNIE Bot to revolutionize its search engine, which holds the dominant position in China. Moreover, it is anticipated that ERNIE Bot will improve the operational efficiency of various mainstream industries, including cloud computing, smart cars, and home appliances.

Hardware and Cloud Platforms
Nvidia's H100 Tensor Core, their ninth-generation data center GPU, contains 80 billion transistors and is optimized for large-scale AI and high-performance computing (HPC) models. The A100, the H100's predecessor, is one of the best GPUs for deep learning.
There is also Google's Tensor Processing Units (TPUs), custom-designed accelerator application-specific integrated circuits (ASICs) used for efficient machine learning workloads and tightly integrated with TensorFlow, Google's machine learning framework. Google Cloud Platform has opened availability of TPU v4 on Cloud, specifically designed to accelerate NLP workloads, and has also developed TPU v5 for internal use. Microsoft Azure also offers GPU instances powered by Nvidia GPUs, such as the A100 and P40, that can be used for various machine learning and deep learning workloads. Another key development is the partnership between Microsoft Azure and OpenAI, which has given OpenAI the resources to train both GPT-3 and GPT-4 and has made these models available to developers through Azure's cloud infrastructure. AWS provides access to GPUs such as the Amazon Elastic Compute Cloud (EC2) P3 instances, which offer up to 8 Nvidia V100 GPUs with 5,120 CUDA cores and 300 GB of GPU memory. AWS has also developed its own chips for inference (Inferentia) and training (Trainium). Several advanced models have been developed on these computing and cloud systems, including BERT, RoBERTa, Bloom, Megatron, and the GPT family. BERT is one of the first pre-trained models to incorporate the transformer architecture and achieved state-of-the-art scores on many NLP tasks. RoBERTa is a variant of BERT, trained on a much larger dataset with a more efficient training procedure. Lastly, Bloom is an open-access multilingual language model containing 176 billion parameters, trained on 384 A100 80GB GPUs. The increasing availability of specialized hardware for NLP tasks represents a significant development in cloud computing. With these tools, companies can now train and run models that were previously impossible to build.
A note on Open Source
Open-source LLM efforts have been progressing, both in terms of open datasets and open-source models available for anyone to fine-tune and use. The overall potential of open-source models is very promising: they provide deeper access to LLMs for everyone, not just through an API. However, there are real questions about the increased risks of models that haven't been aligned and that are easier to adapt for nefarious use cases such as misinformation. Efforts like EleutherAI's "The Pile" and LAION's LAION-5B dataset have facilitated rapid progress in text and image modeling. Many companies and groups are also making foundational models accessible with open-source datasets, such as BigScience's Bloom model and the strategic partnership between Hugging Face and Amazon Web Services (AWS), which increases the availability of open-source datasets and models hosted on Hugging Face. Stability AI also supports EleutherAI's work studying large language models, while LAION's project involves crowdsourcing annotations for its OpenAssistant ChatGPT replication project. Additionally, CarperAI has developed open-source RLHF workflows, ranging from human annotation with CHEESE to RLHF training using the trlX package.

Generative AI applied to other modalities
By some measures, consumer-facing generative AI has become the fastest-growing technology trend of all time, with various models emerging for image, text, and code generation. For example, Midjourney's Discord has attracted around 13 million members for image generation, while ChatGPT has reportedly gained over 100 million users within a few months of release. Software development use cases have also seen a significant rise, with over 1.2 million developers using GitHub Copilot's technical preview as of September.

1. Image Generation: DALL-E, Midjourney, Stable Diffusion, DreamStudio
The combination of models, data, and compute has provided an incredible set of tools for working with images. OpenAI's DALL-E is an AI system that uses deep learning and transformer language models to generate digital images from natural language descriptions. It employs a decoder-only transformer model that treats text and images as a single data stream containing up to 256 tokens for text and 1024 for images, which the neural network then models autoregressively. DALL-E is a 12-billion-parameter version of GPT-3. The model uses a causal mask for text tokens and sparse attention for image tokens. DALL-E 2 is capable of producing higher-resolution images and uses zero-shot visual reasoning. It can create anthropomorphized versions, fill in the blanks, and transform existing images. However, DALL-E uses public datasets as training data, which can affect its results and often leads to algorithmic biases.
Midjourney is an artificial intelligence program developed by Midjourney, Inc., an independent research lab. The platform uses natural language descriptions to generate images, and users create images with Discord bot commands on the official Discord server. On March 16, 2023, beta version 5 was released. Users generate images by typing the /imagine command followed by a prompt; the bot returns four images, from which the user selects the one they want to upscale. Midjourney Inc. is also developing a web interface. Stable Diffusion is an open-source image model funded by Stability AI that generates images from text and performs tasks like inpainting, outpainting, and image-to-image translation. It uses a latent diffusion model supported by EleutherAI and LAION. It requires a minimum of 8GB of VRAM, making it independent of cloud services. Stable Diffusion 2.0 was released in November 2022 and trained on pairs of images and captions from LAION-5B and its subsets. DreamStudio is the official online implementation and team interface API for Stable Diffusion, developed by Stability AI. DreamStudio and Stable Diffusion have slightly different interfaces even though they are applications of the same technology. The web app was launched in August 2022, replacing the free Discord bot. The web app offers better functionality and stability, using the Stable Diffusion algorithm to generate images based on the user's prompt. DreamStudio API access has an access fee. One of the key features of DreamStudio is its support for negative prompting. It also allows users to overpaint, copy, modify, and distribute images for commercial purposes.

2. Audio Generation: Whisper, AudioGen, AudioLM
Whisper, developed by OpenAI, is a versatile automatic speech recognition system that supports multilingual speech recognition, speech translation, and language identification. It was trained on 680,000 hours of multilingual and multitask supervised data using Python 3.9.9 and PyTorch 1.10.1, and the codebase is expected to be compatible with Python 3.8-3.10 and recent PyTorch versions.
It deploys an encoder-decoder transformer model that converts 30-second chunks of input audio to log-Mel spectrograms, which are then passed to an encoder. The decoder predicts the corresponding text caption, intermixed with special tokens that direct it to perform various tasks. Whisper provides an open-source model and inference code for speech-processing research and new application development. With nearly one-third of its dataset being non-English, Whisper outperforms the supervised state of the art on CoVoST2-to-English translation zero-shot. Google's AudioLM is a pure audio model that uses language modeling to generate high-quality audio without annotated data. It generates speech continuations that preserve the identity, prosody, and accent of the speaker and recording conditions, and can also generate coherent piano music continuations. The model demonstrates long-term consistency in syntax, harmony, rhythm, and melody, and has the potential to be extended to multilingual speech, polyphonic music, and audio events. AudioLM uses a hybrid tokenization scheme and a SoundStream neural codec to improve fidelity. The model achieved a 51.2% success rate from human raters, and an audio classifier with 98.6% accuracy was trained to detect synthetic speech generated by AudioLM. Currently, AudioLM is only available for research purposes and is not publicly available. Meta's AudioGen AI converts text prompts into audio files; it is the audio parallel of image-generating AI like DALL-E. It uses a language AI model and approximately 4,000 hours of training data to generate ambient sounds, sound events, and their compositions. Additionally, it can extend existing audio to create rudimentary music. The quality of the audio output has been rated at 70% via Amazon's Mechanical Turk platform. However, AudioGen currently cannot sequence sounds through time, and the ownership rights of the generated audio are unclear.

3. Search Engines: Neeva, You.com
Neeva is an AI-powered search engine that provides ad-free and private searches. It achieves this through its in-house LLMs and search stack, while also blocking third-party website trackers and not sharing user information. Neeva's unique feature is its AI summaries, which provide synthesized answers backed by cited authority. It also allows users to search personal email accounts, calendars, and cloud storage platforms. This feature combines the best aspects of LLMs, like ChatGPT, with authority and timeliness. However, it only functions with question queries and has limitations on the free version (the premium plan is priced at $4.95/mo). Neeva has over 2 million users and local-language versions in Germany, France, and Spain. You.com is a California-based search engine that uses multimodal conversational AI to group web results into website categories sorted by user preferences.
It was launched in public beta in November 2021 with a focus on privacy and personalization. It offers YouWrite, a text generator, and YouChat, a chatbot with community-built apps and blended LLMs. You.com does not collect users' personal information and offers personal and private search modes. The search results allow users to create content directly from the results page, building trust and reliability.

4. Code Generation: Copilot, Codex
GitHub Copilot is a tool that assists developers in programming by using AI to convert natural language into coding suggestions. It is powered by OpenAI Codex, which allows it to understand the developer's coding style and suggest context-specific solutions. When developers input their desired logic into the system, GitHub Copilot can generate code suggestions automatically. However, these suggestions are just that: suggestions, and it is up to the developer to decide whether to use them. OpenAI Codex is a natural language processing model based on GPT-3 that can generate working code in multiple programming languages such as Python, JavaScript, and Ruby, among others. To train Codex, billions of lines of source code from public sources were used, as well as natural language data, including code from GitHub repositories. It has a memory of 14KB for Python code and is a powerful, transformer-driven system that can effectively and efficiently carry out developers' tasks.

5. Text Generation: Jasper
Jasper.AI is a subscription-based text generation model that requires minimal input from the user and searches the web to generate the desired output. It is particularly useful for generating short copy text where character limits matter. The platform offers over 50 templates, including product descriptions, email subject lines, and Facebook headlines, among others. Additionally, it can help with generating ideas for blog posts and creating better outlines.
However, Jasper.AI does have some drawbacks, such as the absence of fact-checking and citation of sources, which can lead to hallucinations. Additionally, learning the command input needed to achieve the desired output may take some time.

Conclusion
Generative AI is a revolutionary technology that has the ability to transform many aspects of our lives. Keep in mind that there are still challenges in developing these models, such as the need for massive datasets and compute power, high training costs, and limited accessibility. Studies have revealed that many large language models are not adequately trained. Additionally, smaller datasets are still crucial for enhancing LLM performance on domain-specific tasks. Compute cost optimization is also essential, since generative models, especially large language models, are still expensive to both train and serve for inference. Big players in the industry are working on optimizing compute costs at every level. Safety and security remain pressing concerns in the development of generative AI, and key players are incorporating human feedback to make the models safer from the outset. Open-source alternatives are also necessary to increase access to next-generation LLM models for practitioners and independent scientists to push the boundaries forward.

Neural Networks
LLMs like ChatGPT are trained on huge amounts of publicly accessible text data from the internet using artificial neural networks. Artificial neural networks are machine learning algorithms designed to mimic, in an abstract way, the brain's structure and learning process. They are made up of layers of interconnected nodes, or "neurons," and through repeated training iterations on massive amounts of text data, the network learns the patterns in the text and the nuances of the language - enough to generate coherent words, sentences, or whole documents by itself. The artificial neural network is the main feature of a subset of machine learning called deep learning. It is very important in the field of AI due to its ability to capture intricate patterns and dependencies in data and to generalize from these patterns to make predictions on new, unseen data. In the context of language modeling, this means predicting what word should come next given a sequence of preceding words. Compared to conventional machine learning algorithms like linear regression, neural networks are able to represent and model non-linear relationships between different features present in large amounts of data through the use of nonlinear mathematical functions (the activation functions) in the neurons of the network's hidden layers. Neural networks have produced consumer tech that you've probably interacted with (and they are not exclusive to language tasks), such as unlocking your phone using facial recognition, the augmented reality feature in your Pokemon game, or the show suggestions on the Netflix home screen. Andrej Karpathy even argues that they can be a new and better way of writing software: for example, instead of hand-coding the logic of a program (if condition A is met, do x; if condition A is not met, do y), the neural network instead learns through examples from the training data that if it encounters 'condition A' in production, it should do x.
These conditions/logic are not defined by its creators; rather, the neural network adjusts itself (by tweaking its billions or even trillions of parameters - the weights and biases) to conform to this desired behavior. Nobody knows what each individual weight and bias does specifically, or how a single weight contributes to a specific change in the behavior of the artificial neural network. These parameters are changed en masse, as a unit, during training via gradient updates (discussed in more detail later). This is why you'll often hear machine learning models trained on neural networks described as 'black boxes': their inputs and outputs can be observed, but the internal workings - how it does what it does - are not easily understood. This is also the reason for discoveries of 'emergent' capabilities. As an LLM gets bigger and bigger (measured by its number of parameters), it starts coming out of training with unanticipated abilities. For example, GPT-2 was discovered to be good at language translation, GPT-3 was an excellent few-shot learner, and GPT-4 has shown sparks of artificial general intelligence (AGI). None of these were explicitly defined as a training goal - the main objective was to predict the next word in a sequence. Emergent behaviors are not unique to large neural networks. As a system gets bigger and more complex, the interactions between its individual components can lead to unexpected behaviors that cannot be fully explained by analyzing the properties of each component in isolation: a single ant is stupid, but a colony of ants can build very complex tunnel networks and wage war against other colonies. This phenomenon has been documented in systems like social insects (ants and bees), crowd behavior, and other biological ecosystems.

Pretraining Foundation Models
The first step in creating something like ChatGPT is pretraining a base model, or foundation model.
The goal of this step is to create a machine learning model that can autonomously generate coherent, human-like text (a phrase, sentence, or paragraph) by generating words in sequence based on its prediction of what word should come next given the preceding words. It's called pretraining because the output of this step - the base model - is still a raw product with limited practical applications, usually of interest only to researchers. Base models are trained further via the fine-tuning stages for specific tasks with real-world utility, like text translation, summarization, classification, etc. At the start of pretraining, the parameters of the neural network are set to random numerical values. The words in the massive internet text data are converted into numerical representations in the form of tokens (as integers) and embeddings (as vectors) before being fed to the neural network. Tokens and embeddings will be discussed in detail in the next part of this series, but for now, think of a token as the unique ID of a word in the model's vocabulary and the embedding as the meaning of that word. The model is given a word or words and is asked to predict the next word based on those preceding words. Then it is tested on unseen data and evaluated on the accuracy of its predictions against the 'ground truth' next word from a held-out dataset previously unseen by the model. Consider the example sentence in the training dataset: "I have to go to the store". This sentence might be used as follows: the model is given "I have" and is expected to predict "to". Then it's given "I have to" and is expected to predict "go". Then "I have to go" and is expected to predict "to". Finally, "I have to go to the" and is expected to predict "store". Going through the whole corpus of the training dataset like this, the model learns which words tend to appear after different sets of words. It learns the dependencies between "I" and "have", "have" and "to", and so on. In the testing step, the process is similar, but the sentences or texts used are ones the model has not been trained on. It's a way to check how well the model generalizes its language understanding to unseen data. Let's consider an unseen sentence from the test set: "She needs to head to the ___". Even though this exact sentence was not part of the training dataset, the model can use its understanding of similar contexts it encountered to make an educated prediction. For example, it has seen in the training sentence "I have to go to the store" that a phrase like "to go to the" or "to head to the" is often followed by a location or a destination.
Based on this, the model might predict "market", "store", "office", or other similar words, as they are common destinations in this kind of context. So, while the model was trained on "I have to go to the store" and variations of this text with similar meaning, it is able to generalize from that and understand that "She needs to head to the..." is likely to be followed by a similar type of word, even though this exact sentence was not part of its training data.

How Models 'Learn'
At the start of pretraining, the model would usually output nonsensical sequences of words when asked to make a prediction, as it hasn't 'learned' anything yet. In our example sentence earlier, it might generate the word 'apple' instead of the ground-truth next word, 'store'. Since LLMs are probabilistic, 'wrong' in this context means the model is assigning a higher probability (to be selected) to the word 'apple' compared to the expected word, 'store'. The ultimate goal is to have the model output 'store' every time it's asked to predict the word that comes next after the sequence "She needs to head to the ___". The difference between the actual and the expected (ground truth) next word is calculated using a 'loss function', where the greater the difference, the higher the 'loss' value. The loss is a single number that 'averages' the loss or error over all the predictions you asked the model to make. Through several iterations of these steps, the aim is to minimize the value of this 'loss' through processes called backpropagation and gradient descent optimization. The model 'learns', or improves its prediction ability, through these steps. You're probably wondering how you can 'calculate the difference between two words' to arrive at a loss value. Note that what goes through the neural network are not actual texts (words, sentences) but numerical representations of these texts - their tokens and embeddings. The numerical representations of a word sequence are processed through the layers of the network, and the output is a probability distribution over the vocabulary that determines what word comes next.
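As a minimal, illustrative sketch of what that distribution and its loss look like in code (a four-word toy vocabulary and made-up scores, not taken from any real model), here is the idea in PyTorch:

```python
import torch
import torch.nn.functional as F

# Toy vocabulary: each word is mapped to a token id.
vocab = {"store": 0, "apple": 1, "office": 2, "market": 3}

# Hypothetical raw scores (logits) the network might output for the word
# following "She needs to head to the".
logits = torch.tensor([[0.3, 0.8, 0.1, 0.2]])

# Softmax turns the logits into a probability distribution over the vocabulary.
probs = F.softmax(logits, dim=-1)

# The ground-truth next word is "store"; cross-entropy penalizes the model
# for putting probability mass on other words. This single number is the loss.
target = torch.tensor([vocab["store"]])
loss = F.cross_entropy(logits, target)
print(probs.tolist(), loss.item())
```

Averaged over many such predictions, this is the single loss number that training tries to drive down.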
An untrained model might assign a higher probability to the token id of the word 'apple' (say 0.8) compared to the token id of the ground-truth next word, 'store' (at 0.3). The neural network never encounters a single word or letter of any text; it works exclusively with numbers - basically a calculator with extra steps. Through backpropagation, the degree of the model's error (the loss value) is propagated backward through the neural network. It computes the derivative of the output with respect to each individual weight and bias, i.e. how sensitive the output is to changes in each specific parameter. For my people who didn't take differential calculus in school (such as myself), think of the model parameters (weights/biases) as adjustable knobs. These knobs are arbitrary, in the sense that you can't tell in what specific way each one governs the prediction ability of the model. The knobs, which can be rotated clockwise or counterclockwise, have different effects on the behavior of the output: knob A might increase the loss 3x when turned clockwise, while knob B reduces the loss by 1/8 when turned counterclockwise (and so on). All these knobs are checked (all billions of them) to get information on how sensitive the output is to adjustments of each knob - this numerical value is their derivative with respect to the output. Calculating these derivatives is called backpropagation. The output of backpropagation is a vector (a list of numbers) whose elements, or dimensions, are the parameters' individual derivatives. This vector is the gradient of the error with respect to the existing parameter values (or the current learnings) of the neural network. A vector has two properties: length (or magnitude) and direction. The gradient vector contains information on the direction in which the error or loss is increasing, and its magnitude signifies the steepness or rate of that increase. Think of the gradient vector as the map of a foggy hill you're descending: gradient descent optimization uses the information about direction and steepness from the gradient vector to reach the bottom of the hill (the minimum loss value) as efficiently as possible, by navigating the path with the greatest downward incline (the opposite direction of the gradient vector). This involves iteratively adjusting the values of the weights and biases of the network (by subtracting small values from them, scaled by the learning rate) en masse to reach this optimal state.
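To make the update step concrete, here is a minimal single-parameter sketch in PyTorch (a toy model with one 'knob', not actual LLM training code):

```python
import torch

# One "knob": a single trainable parameter in the tiny model y = w * x.
w = torch.tensor(2.0, requires_grad=True)
x, y_true = torch.tensor(3.0), torch.tensor(12.0)

loss = (w * x - y_true) ** 2       # squared-error loss for this toy example
loss.backward()                    # backpropagation: compute d(loss)/d(w)

learning_rate = 0.01
with torch.no_grad():
    w -= learning_rate * w.grad    # gradient descent: step against the gradient
    w.grad.zero_()                 # reset the gradient for the next iteration

print(w.item())                    # w has moved toward the loss-reducing value
```

An LLM does exactly this, except over billions of parameters at once, with the gradient vector supplying one such adjustment per knob.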
After these steps, the hope is that during the next training iteration, when the model is again asked to predict the next word for "She needs to head to the...", it will assign a higher probability to the word 'store'. This process is repeated many times until there is no significant change in the loss value, meaning the model's learning has stabilized, or reached convergence. So the TL;DR on how neural networks learn to communicate in English (and other languages) is: math, in serious amounts. Like oodles. It boils down to reducing the value of a single number (the loss value) generated from complex computations within the neural network, where, as this number gets smaller, the more 'fluent' or 'coherent' the language model becomes. The millions or billions of mathematical operations applied between matrices and vectors in the inner layers of the network somehow coalesce into a geometric model of the language. To help with intuition, we've anthropomorphized the model by using words like 'understand', 'seen', and 'learn', but in truth, it has no capacity to do any of these things. It's just an algorithm that outputs the next best token of a sequence based on a probability distribution, given a sampling method.

The Transformer
The Transformer is the breakthrough in natural language processing (NLP) research that gave us ChatGPT. It is a type of neural network architecture that utilizes a unique self-attention mechanism, famously introduced in the paper 'Attention is All You Need', which came out in 2017. Almost all state-of-the-art LLMs (like BERT and GPT-1) that came out after this paper were built on or use ideas from the transformer. It's hard to overstate the importance of this paper and its impact on deep learning. The architecture is now finding its way into vision tasks, making it truly multimodal and demonstrating its flexibility to handle other types of data. It also started the '...is all you need' memetic trend that even the Towards AI editorial team is unable to resist. Prior to transformers, the neural networks used in NLP that produced SOTA models relied on architectures with sequential data processing, e.g. recurrent neural networks (RNNs): during training, each word or token is processed by the network one after the other, in sequence.
Note that the order of words is important to preserving the context/meaning of a sequence: 'the cat ate the mouse' and 'the mouse ate the cat' are two sentences with two different meanings, even though they are made up of exactly the same words/tokens (albeit in a different order). One of the key innovations of the transformer is doing away with recurrence, or sequential token processing. Instead of processing tokens sequentially, it encodes the position information of each word (i.e. in which order a word appears in the sequence being processed) into its embedding before it is fed to the network's inner layers. More importantly, transformers solved the issue of long-term dependencies that neural nets like RNNs struggled with. Given a long enough sequence of words (e.g. a very long paragraph), RNNs will 'forget' the context of words they processed earlier in the sequence - this is called the vanishing gradient problem. RNNs store information on the relevance of the words in a sequence up to that point in what's called the hidden state at each sequential or time step. As an RNN processes a long sequence, the gradients corresponding to earlier time steps can become very small during backpropagation. This makes it challenging for the RNN to learn from the early parts of the sequence and can lead to the 'loss' of information about words processed earlier. This is problematic for a next-word prediction model, especially if those 'forgotten' words are important to the context of the sequence currently being generated. The transformer solves this limitation through the 'self-attention' mechanism. As with positional encoding, each word, through its embedding, is encoded with information on the degree to which it should 'attend to' the rest of the words in the sequence - no matter the length of the sequence or the relative distance of the attended word within it. This encoding is done simultaneously for all words in the sequence, allowing the transformer to preserve the context of any sequence.
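Here is a minimal sketch of scaled dot-product self-attention in PyTorch (random toy embeddings and untrained projection matrices, just to show the mechanics):

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of word embeddings x."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v           # project embeddings to queries, keys, values
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    weights = F.softmax(scores, dim=-1)           # how much each word attends to every other word
    return weights @ v                            # attention-weighted mix of the value vectors

seq_len, d_model = 5, 16                          # e.g. a 5-word sentence with 16-dim embeddings
x = torch.randn(seq_len, d_model)
w_q, w_k, w_v = (torch.randn(d_model, d_model) for _ in range(3))
z = self_attention(x, w_q, w_k, w_v)              # one contextualized vector per word
print(z.shape)                                    # torch.Size([5, 16])
```

The projection matrices are the 'learned' part: training adjusts them so the attention weights capture useful relationships between words.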
The degree to which one word should attend to other words is a 'learned' trait stored in the model weights and encoded into the word embeddings via matrix multiplications. These 'learnings' get adjusted during each training iteration as the model learns more about the relationships between words in the training data. The final output of the self-attention layer (often called the Z matrix) is a matrix of word embeddings encoded with information on the position of each word in the sequence (from the positional encoding step) and on how much each word should attend to all other words in the sequence. It is then fed to a traditional neural network like the one discussed earlier in the article (called the feed-forward neural network). These steps (attention + feed-forward, which together make up a transformer block) are repeated multiple times, once for each hidden layer of the transformer - 96 times for GPT-3, for example; a minimal sketch of such a block follows below. The transformation in each layer adds information to the model's 'knowledge' of how to best predict the next word in the sequence. According to the LLM scaling laws published by OpenAI, to train better models, increasing the number of parameters is 3x more important than increasing the size of the training data. (Note: DeepMind has since published a paper with a differing view.) This translates to a significant increase in computational requirements, as handling a larger number of parameters demands more complex calculations. Parallelization, the process of dividing a single task into multiple sub-tasks that can be processed simultaneously across multiple compute resources, becomes essential in dealing with this problem. Parallelization is difficult to achieve with RNNs, given their sequential nature. This is not an issue for transformers, which compute relationships between all elements in a sequence simultaneously rather than sequentially. It also means that they work well with GPUs, or video cards. Graphics rendering requires a large number of simple calculations happening concurrently, and the numerous small, efficient processing cores that a GPU has, designed for simultaneous operations, make it a good fit for tasks such as the matrix and vector operations that are central to deep learning. AI going 'mainstream' and the mad scramble to build larger and better models has been a boon to GPU manufacturers - NVIDIA specifically, whose stock price has grown 200% YTD as of this writing, making it the highest-performing stock this year and pushing its market cap to USD 1 trillion. It joins megacaps like Apple, Google, Microsoft, and Amazon in this exclusive club.
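Here is that sketch: a single transformer block (self-attention followed by a feed-forward network, with residual connections and layer normalization), using PyTorch's built-in attention module. The dimensions are toy values chosen for illustration; real GPT-style models also add causal masking and positional encodings not shown here.

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One attention + feed-forward block; GPT-3 stacks 96 of these."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x)   # self-attention over the sequence
        x = self.norm1(x + attn_out)       # residual connection + normalization
        return self.norm2(x + self.ff(x))  # feed-forward network + residual

x = torch.randn(1, 10, 64)                 # a batch of one 10-token sequence
print(TransformerBlock()(x).shape)         # torch.Size([1, 10, 64])
```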
The Transformer is a decidedly complex topic, and the explanation above wholesale left out important concepts in order to be more digestible to a broader audience. If you want to know more, I found these gentle yet significantly more fleshed-out introductions to the topic: Jay Alammar's illustrated transformer, Lili Jiang's potion analogy, or, if you want something more advanced, Karpathy's nanoGPT that babbles in Shakespeare-ish.

Fine-tuning 'chat' models like ChatGPT
The outputs of pretraining are base models, or foundation models. Examples of recently released text-generation foundation models are GPT-4, Bard, LLaMA 1 & 2, and Claude 1 & 2. Since base models already have extensive knowledge of the language from pretraining (the structure of sentences, relationships between words, etc.), you can leverage this knowledge to further train the model to do specific tasks: translation, summarization, or conversational assistants like ChatGPT. The underlying idea is that the model's general language understanding gained from pretraining can be used for a wide range of downstream tasks. This idea is called transfer learning. If you ask or prompt a base model a question, it will probably reply with another question; remember that it's trained to complete a word sequence by predicting the word that should come next given the previous words in the sequence. However, we can get a base model to answer questions by 'tricking' it into thinking that it's trying to complete a sequence. Using this idea, the model goes through another round of training with sets of prompt/completion pairs in a question-and-answer format. Instead of 'learning English' from random texts found on the internet by predicting what words come next after a set of words, the model 'learns' that to complete a prompt in 'question' form, the completion should be in 'answer' form. This is the supervised fine-tuning (SFT) stage. Pretraining, by contrast, uses a type of machine learning called self-supervised learning, where the model trains itself by creating the 'label', or ground-truth word it's trying to predict, from the training data itself.
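To make the 'tricking' idea concrete before returning to the pretraining example, here is a minimal sketch; the prompt wording and the prompt/completion pairs are invented for illustration and are not any particular model's actual template or dataset:

```python
# A base model only continues text, so a bare question often earns another question.
# Framing the input so that an answer is the natural continuation "tricks" it:
prompt = (
    "The following is a question from a user and the assistant's helpful answer.\n"
    "Question: What is the capital of France?\n"
    "Answer:"
)

# Supervised fine-tuning (SFT) makes that behavior the default by training on many
# human-curated prompt/completion pairs like these:
sft_examples = [
    {"prompt": "What is the capital of France?",
     "completion": "The capital of France is Paris."},
    {"prompt": "Summarize: The quick brown fox jumps over the lazy dog.",
     "completion": "A fox jumps over a dog."},
]
```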
Here is our example from earlier: the model is given "I have" and is expected to predict "to"; then it's given "I have to" and is expected to predict "go". The labels, or target words, 'to' and 'go' are created by the model itself as it goes through the corpus. Note that the target/ground-truth words are important, as they are the basis of the loss value - i.e. how good the current model's prediction is versus the target - and of the subsequent gradient updates. Compared to the pretraining phase, training data preparation in the fine-tuning stages can be labor intensive: it requires human labelers and reviewers to carefully annotate the 'labels', or target completions. However, since the model has already learned the general features of the language, it can quickly adapt to the language task it's being fine-tuned for, even with the limited availability of task-specific training data. This is one of the benefits of transfer learning and the motivation behind pretraining. According to Karpathy, 99 percent of the compute power and training time, and most of the data used to train an LLM, are spent during the pretraining phase, and only a fraction is used during the fine-tuning stages. Fine-tuning uses the same gradient update method outlined earlier, but this time the model is learning from a list of human-curated question/answer pairs that teach it how to structure its completions, i.e. 'what to say and how to say it'. It then goes through further fine-tuning stages, such as reward modeling and reinforcement learning from human feedback (RLHF), to train the model to output completions that cater more to human preference. In these stages, human labelers score the model's completions on attributes like truthfulness, helpfulness, harmlessness, and toxicity. The human-preferred completions get reinforced into the training, i.e. they will have a higher probability of appearing in the completions of the fine-tuned version of the model. The output of these fine-tuning steps is the 'assistant' or 'chat' models like ChatGPT. These are the 'retail' versions of the foundation models and are what you interact with when you go to the ChatGPT website. The GPT-3 base model (davinci) can be accessed via an API. The GPT-4 base model has not been released as an API as of this writing and is unlikely to be released by OpenAI, given their recent statements about competition and LLM safety. These fine-tuning steps are generally the same for all available commercial and open-source fine-tuned models.

End of Part 1
Note: Part 2 will talk about embeddings, which predate the LLM explosion but are equally fascinating - how embedding models are trained, and how sentence- or document-level embeddings (used in RAG systems) are generated from word embeddings. We will also discuss tokens and why they're needed.
We have implied in this post that token = word to simplify things, but a real-world token can be an individual character or letter, a subword, a whole word, a series of words, or all of these types in a single model vocabulary! If I got anything wrong, I'm happy to be corrected in the comments! :)

Resources/References:
3Blue1Brown. What is a neural network?
GeeksforGeeks. Artificial Neural Networks and its Applications
Jay Alammar. The Illustrated Transformer
Luis Serrano. What Are Transformer Models and How Do They Work?
Andrej Karpathy. The State of GPT
Andrej Karpathy. Let's build GPT: from scratch, in code, spelled out.

What Sets WizardCoder Apart
One might wonder what makes WizardCoder's performance on HumanEval so exceptional, especially considering its relatively compact size. To put it into perspective, compare WizardCoder-Python-34B with CodeLlama-Python-34B: the single most important factor behind such a large difference in HumanEval benchmark performance is the dataset the model was trained on.

The Power of Data: WizardCoder's Unique Dataset
One of the key factors contributing to WizardCoder's remarkable performance is its training dataset. Most models rely on a dataset structure that typically includes a solid base with a lot of simple instructions, a reduced amount of complex instructions, and a minimal amount of really complex instructions. To train a model for peak performance on evaluation benchmarks, the training dataset should strike a balance between simple, complex, and really complex instructions. This is where WizardCoder's dataset shines: it boasts a good amount of really complex instructions, a good amount of complex instructions, and a solid base with a lot of simple instructions. But there's a challenge: creating a dataset with complex instructions is inherently difficult, while simple instructions are readily available.

Evol-Instruct
Evol-Instruct is an evolutionary algorithm for generating diverse and complex instruction datasets using LLMs (GPT-4). It is designed to enhance the performance of LLMs by providing them with high-quality instructions that are difficult to create manually. In simple terms, Evol-Instruct is a complexity cascade of synthetically generated (GPT-4) instruction data.

Instruction Evolution
LLMs can make given instructions more complex and difficult using specific prompts. Additionally, they can generate entirely new instructions that are equally complex but completely different. Using this, we can iteratively evolve an initial instruction dataset, improving the difficulty level and expanding its richness and diversity.
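At a high level, the evolution process can be pictured as a loop like the sketch below. The helper function stands in for calls to an LLM such as GPT-4 with the in-depth and in-breadth prompts described in the next sections; it is a placeholder, not a real implementation:

```python
def evolve(instruction: str, strategy: str) -> str:
    # Placeholder: a real implementation would send the instruction to an LLM
    # (e.g. GPT-4) together with the in-depth or in-breadth evolution prompt.
    return f"{instruction} [evolved via {strategy}]"

def evol_instruct(seed_instructions, epochs=3):
    pool = list(seed_instructions)        # e.g. the 52K Alpaca instructions
    frontier = list(seed_instructions)
    for _ in range(epochs):
        evolved = []
        for instruction in frontier:
            evolved.append(evolve(instruction, "in-depth"))    # more complex / more difficult
            evolved.append(evolve(instruction, "in-breadth"))  # new topic, similar complexity
        # An elimination step (sketched further below) would filter failed evolutions here.
        pool.extend(evolved)
        frontier = evolved
    return pool

print(len(evol_instruct(["Write a function that reverses a string."], epochs=2)))
```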
A. Instruction Evolver The Instruction Evolver is an LLM that uses prompts to evolve (develop) instructions, with two types: in-depth evolving and in-breadth evolving. A base dataset is given (e.g., Alpaca: generated using self-instruct, or 70k ShareGPT (shared by real users)), and using this base dataset, we can create a more complex and diverse dataset. a) In-depth Evolving In-Depth Evolving enhances", "768b6fd49ab75703c694a6281d5b276114bdb094d335a9744d54685e6d5a7de8": "that are equally complex but completely different. Using this, we can iteratively evolve an initial instruction dataset, improving the difficulty level and expanding its richness and diversity. A. Instruction Evolver The Instruction Evolver is an LLM that uses prompts to evolve (develop) instructions, with two types: in-depth evolving and in-breadth evolving. A base dataset is given (e.g., Alpaca: generated using self-instruct, or 70k ShareGPT (shared by real users)), and using this base dataset, we can create a more complex and diverse dataset. a) In-depth Evolving In-Depth Evolving enhances instructions by making them more complex and difficult through five types of prompts: Prompt of In-depth Evolving In-depth Evolving aims to: (i) add constraints, (ii) deepen the instruction, (iii) concretize it (make it more specific), (iv) increase the reasoning steps, and (v) complicate the inputs. The core part of In-Depth Evolving's prompt is The example prompt of add constraints is: These prompts help generate a complex instruction dataset, with similar templates for the other types of In-depth Evolving. b) In-breadth Evolving In-breadth Evolving addresses the limitation of open-domain instruction finetune datasets (e.g., Alpaca, ShareGPT, etc.), which are often small in scale and lacking in topic and skill diversity. In-breadth Evolving solves this problem by designing a prompt to generate a completely new instruction based on the given instruction, requiring the new instruction to be more long-tailed. Prompt of In-breadth Evolving In-breadth Evolving aims to 1. enhance topic coverage, 2. enhance skill coverage, and 3. improve overall dataset diversity. The in-breadth prompt is as follows: B. Response Generation The same LLM is used to generate the corresponding responses for the evolved instructions using the prompt: C. Elimination Evolving (Instruction Eliminator) The evolved instruction may challenge the LLM to generate a response. Sometimes, when the generated response contains \"sorry\" and is relatively short in length (i.e., less than 80 words), it often indicates that the LLM struggles to respond", "1fd92d3abd050b65219d4ec0f87dc46ef657502096f3bc20daf012e5867e4755": "Evolving In-breadth Evolving aims to 1. enhance topic coverage, 2. enhance skill coverage, and 3. improve overall dataset diversity. The in-breadth prompt is as follows: B. Response Generation The same LLM is used to generate the corresponding responses for the evolved instructions using the prompt: C. Elimination Evolving (Instruction Eliminator) The evolved instruction may challenge the LLM to generate a response. Sometimes, when the generated response contains \"sorry\" and is relatively short in length (i.e., less than 80 words), it often indicates that the LLM struggles to respond to the evolved instruction. So, we can use this rule to make a judgment. Another elimination signal is a response that contains only punctuation and stop words.
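A minimal sketch of the elimination heuristics just described is below. The stop-word list and the 80-word threshold follow the description above, but the exact word list and rules here are illustrative rather than the paper's implementation.

```python
# Rough sketch of the Instruction Eliminator checks: apologetic short
# responses, or responses made only of punctuation and stop words, suggest
# the LLM could not handle the evolved instruction.
import string

STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "is", "it"}

def should_eliminate(response: str) -> bool:
    words = response.split()
    # Rule 1: an apologetic, relatively short response (< 80 words)
    # indicates the LLM struggled to answer the evolved instruction.
    if "sorry" in response.lower() and len(words) < 80:
        return True
    # Rule 2: the response contains only punctuation and stop words.
    meaningful = [
        w for w in words
        if w.strip(string.punctuation) != ""
        and w.strip(string.punctuation).lower() not in STOP_WORDS
    ]
    return len(meaningful) == 0

print(should_eliminate("Sorry, I cannot do that."))   # True
print(should_eliminate("def add(a, b): return a + b"))  # False
```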
D. Finetuning the LLM on the Evolved Instructions Once all evolutions are done, the initial instruction dataset (the 52K instruction dataset of Alpaca) is merged with the evolved instruction data from all epochs, and the samples are randomly shuffled to create the final fine-tuning dataset. This processing ensures an even distribution of instructions of varying difficulty levels in the dataset, making model fine-tuning smoother. The WizardLM authors validate Evol-Instruct by fine-tuning the open-source LLaMA 7B on evolved instructions and evaluating its performance; the resulting model is named WizardLM. Evol-Instruct works by generating a pool of initial instructions (the 52K instruction dataset of Alpaca), which are then evolved through a series of steps to create more complex and diverse instructions. Once the instruction pool is generated, it is used to fine-tune an LLM, resulting in a new model called WizardCoder. The fine-tuning process involves training the LLM on the instruction data to improve its ability to generate coherent and fluent text in response to various inputs. Prompt Format For WizardCoder, the prompt should be as follows: Best Use Cases WizardCoder can be used for a variety of code-related tasks, including code generation, code completion, and code summarization. Here are some examples of input prompts that can be used with the model: Code generation: Given a description of a programming task, generate the corresponding code. Example input: \"Write a Python function that", "4f53f92bee39a35bd0d6b4b8f81a7d596c5a6d109338bcae2fa0f21b3237f73a": "an LLM, resulting in a new model called WizardCoder. The fine-tuning process involves training the LLM on the instruction data to improve its ability to generate coherent and fluent text in response to various inputs. Prompt Format For WizardCoder, the prompt should be as follows: Best Use Cases WizardCoder can be used for a variety of code-related tasks, including code generation, code completion, and code summarization. Here are some examples of input prompts that can be used with the model: Code generation: Given a description of a programming task, generate the corresponding code. Example input: \"Write a Python function that takes a list of integers as input and returns the sum of all even numbers in the list.\" Code completion: Given an incomplete code snippet, complete the code. Example input: \"def multiply(a, b): \\n return a * b _\" Code summarization: Given a long code snippet, generate a summary of the code. Example input: \"Write a Python program that reads a CSV file and calculates the average of a specific column.\" The 34B model is not just a coding assistant; it's a powerhouse capable of: Automating DevOps Scripts: Generate shell scripts or Python scripts for automating tasks. Data Analysis: Generate Python code for data preprocessing, analysis, and visualization. Machine Learning Pipelines: Generate end-to-end ML pipelines, from data collection to model deployment. Web Scraping: Generate code for web scraping tasks. API Development: Generate boilerplate code for RESTful APIs. Blockchain: Generate smart contracts for Ethereum or other blockchain platforms. Evaluation WizardCoder beats all other open-source Code LLMs, attaining state-of-the-art (SOTA) performance, according to experimental findings from four code-generation benchmarks, including HumanEval, HumanEval+, MBPP, and DS-1000. WizardCoder-Python-34B has demonstrated exceptional performance on code-related tasks.
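As a small aside, the merge-and-shuffle step described in section D above can be sketched in a few lines of Python; the function and variable names here are illustrative, not taken from the WizardLM/WizardCoder code.

```python
# Sketch of assembling the final fine-tuning set: merge the seed
# instructions with every epoch's evolved data, then shuffle so that
# difficulty levels are evenly distributed across the dataset.
import random

def build_finetuning_set(seed_pairs, evolved_pairs_per_epoch, seed=42):
    dataset = list(seed_pairs)                  # e.g. the 52K Alpaca pairs
    for epoch_pairs in evolved_pairs_per_epoch:  # evolved data from each epoch
        dataset.extend(epoch_pairs)
    random.Random(seed).shuffle(dataset)         # even difficulty distribution
    return dataset
```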
The model has outperformed other open-source and closed LLMs on prominent code generation benchmarks, including HumanEval (73.2%), HumanEval+, and MBPP(61.2%). WizardCoder-Python-34B-V1.0 attains the second position in HumanEval Benchmarks, surpassing GPT4", "c48c271334c619c6f14c1c84025caf638cb3b61b898bdd6c6be317a698ea4db7": "other open-source Code LLMs, attaining state-of-the-art (SOTA) performance, according to experimental findings from four code-generating benchmarks, including HumanEval, HumanEval+, MBPP, and DS-100. WizardCoder-Python-34B has demonstrated exceptional performance on code-related tasks. The model has outperformed other open-source and closed LLMs on prominent code generation benchmarks, including HumanEval (73.2%), HumanEval+, and MBPP(61.2%). WizardCoder-Python-34B-V1.0 attains the second position in HumanEval Benchmarks, surpassing GPT4 (2023/03/15, 73.2 vs. 67.0), ChatGPT-3.5 (73.2 vs. 72.5) and Claude2 (73.2 vs. 71.2). WizardCoder-15B-v1.0 model achieves the 57.3 pass@1 on the HumanEval Benchmarks, which is 22.3 points higher than the SOTA open-source Code LLMs, including StarCoder, CodeGen, CodeGee, and CodeT5+. Additionally, WizardCoder significantly outperforms all the open-source Code LLMs with instructions fine-tuning, including InstructCodeT5+, StarCoder-GPTeacher, and Instruct-Codegen-16B. In conclusion, WizardCoder's success is attributed to its unique dataset and the innovative use of Evol-Instruct to enhance instruction complexity, leading to its outstanding performance across various code-related tasks and benchmarks. References YouTube: WizardCoder 34B: Complex Fine-Tuning Explained GitHub Paper: WizardLM- Empowering Large Language Models to Follow Complex Instructions Paper: WizardCoder: Empowering Code Large Language Models with Evol-Instruct" }, "relevant_docs": { "243aa579-a9ef-45d4-9b31-a5028a4fc982": [ "4ab5bd897f01474fc9b0049f95e31edae3ccd9e74d0f0acd3932b50a74d608b6" ], "7b62af16-2525-4ec8-be02-0a24292997bb": [ "4ab5bd897f01474fc9b0049f95e31edae3ccd9e74d0f0acd3932b50a74d608b6" ], "f31f38df-d8c4-404e-a7e1-4560c6a01e9b": [ "4ab5bd897f01474fc9b0049f95e31edae3ccd9e74d0f0acd3932b50a74d608b6" ], "9f483f37-bc72-4155-b560-97ac6e67e31d": [ "4ab5bd897f01474fc9b0049f95e31edae3ccd9e74d0f0acd3932b50a74d608b6" ], "e017ef5c-ea94-4eb3-8aa8-7dbd435a5a2a": [ "4ab5bd897f01474fc9b0049f95e31edae3ccd9e74d0f0acd3932b50a74d608b6" ], "5d74b1f2-742e-4831-82ac-c4e9a3ef670f": [ "4ab5bd897f01474fc9b0049f95e31edae3ccd9e74d0f0acd3932b50a74d608b6" ], "9f5d663d-1ee5-4a6f-902c-da3331abe1c7": [ "e470fa0d001e50b3ec3088022462a94ea7c87dd80106411b7d120f90b379e977" ], "4e9d8820-1ef6-4f15-a8f9-2e382770def0": [ "e470fa0d001e50b3ec3088022462a94ea7c87dd80106411b7d120f90b379e977" ], "5fe168b8-1c13-4e50-8252-5defb5eaff49": [ "e470fa0d001e50b3ec3088022462a94ea7c87dd80106411b7d120f90b379e977" ], "c0300fcf-46d5-4d18-a533-6348053e5a18": [ "e470fa0d001e50b3ec3088022462a94ea7c87dd80106411b7d120f90b379e977" ], "cea8b601-d18c-44fa-8ee2-2839b35563ab": [ "e470fa0d001e50b3ec3088022462a94ea7c87dd80106411b7d120f90b379e977" ], "5305a67e-e2be-48e0-85d9-b496cc54275e": [ "4b3a13a10f7ea2464249fb6aa64e9f403f8151daf24133dbcffbfa0e01fa0d74" ], "bf95258d-987a-4455-9b48-14b2bcd60289": [ "4b3a13a10f7ea2464249fb6aa64e9f403f8151daf24133dbcffbfa0e01fa0d74" ], "ae6f37fe-8a44-44e7-afec-fc989f72825c": [ "4b3a13a10f7ea2464249fb6aa64e9f403f8151daf24133dbcffbfa0e01fa0d74" ], "fa7ccac1-22ab-4d8c-a826-c14e1c1de935": [ "4b3a13a10f7ea2464249fb6aa64e9f403f8151daf24133dbcffbfa0e01fa0d74" ], "858fbc09-86ee-471a-b5de-85b7336646d9": 
[ "4b3a13a10f7ea2464249fb6aa64e9f403f8151daf24133dbcffbfa0e01fa0d74" ], "c1fe6048-1afd-413f-b66f-1e435a15c0fb": [ "4b3a13a10f7ea2464249fb6aa64e9f403f8151daf24133dbcffbfa0e01fa0d74" ], "ae388eb1-a321-4944-aa42-8bad2efd20bc": [ "4b3a13a10f7ea2464249fb6aa64e9f403f8151daf24133dbcffbfa0e01fa0d74" ], "fab93cba-489c-4498-b172-a727be03c4ae": [ "4b3a13a10f7ea2464249fb6aa64e9f403f8151daf24133dbcffbfa0e01fa0d74" ], "2033b91f-132b-4048-bac1-b6fa17b4086b": [ "4b3a13a10f7ea2464249fb6aa64e9f403f8151daf24133dbcffbfa0e01fa0d74" ], "cfa2de30-1199-491b-a4da-72f1c5b0b430": [ "4b3a13a10f7ea2464249fb6aa64e9f403f8151daf24133dbcffbfa0e01fa0d74" ], "c34da090-a3ad-4016-95e0-8869a89467d9": [ "98e9cbb20d5a2f5ab9d5d9712f9e66ef7123b584e1e1985cebef6bd4f41c0858" ], "1bf4fd7d-3405-459c-b423-74a7ffa68aa2": [ "98e9cbb20d5a2f5ab9d5d9712f9e66ef7123b584e1e1985cebef6bd4f41c0858" ], "cc0e6ff8-f27d-4808-9053-d9202de32bf7": [ "98e9cbb20d5a2f5ab9d5d9712f9e66ef7123b584e1e1985cebef6bd4f41c0858" ], "1b11a1e8-2254-4a05-8a41-73318b88a579": [ "98e9cbb20d5a2f5ab9d5d9712f9e66ef7123b584e1e1985cebef6bd4f41c0858" ], "635d0716-5b83-4684-b4bc-df7e7f0dc863": [ "98e9cbb20d5a2f5ab9d5d9712f9e66ef7123b584e1e1985cebef6bd4f41c0858" ], "50d051a8-bb38-49a5-ba53-2796c608ccca": [ "df6183049976174f912d271a7d08fda25e3086030c160fdc603face8a6000e00" ], "2e43d520-1b5b-4a4a-92bc-924a17a3185a": [ "df6183049976174f912d271a7d08fda25e3086030c160fdc603face8a6000e00" ], "d3ed15a1-3ec6-47a4-814a-24bdf82d8a08": [ "df6183049976174f912d271a7d08fda25e3086030c160fdc603face8a6000e00" ], "da5a76f3-98a2-4d45-b5cd-2757594f0fba": [ "df6183049976174f912d271a7d08fda25e3086030c160fdc603face8a6000e00" ], "81ab10f4-7441-40ba-9b06-3363ce49e8cb": [ "df6183049976174f912d271a7d08fda25e3086030c160fdc603face8a6000e00" ], "ad3a7531-38de-4e08-a6c7-22f9cab8bf7b": [ "df6183049976174f912d271a7d08fda25e3086030c160fdc603face8a6000e00" ], "d5e84653-31e8-496e-8923-8bdb4fd0ea12": [ "df6183049976174f912d271a7d08fda25e3086030c160fdc603face8a6000e00" ], "829f2429-4553-4ab5-b9d2-eecbcb4b6718": [ "df6183049976174f912d271a7d08fda25e3086030c160fdc603face8a6000e00" ], "7636d48b-9d7f-44b1-ad64-265910a33d21": [ "df6183049976174f912d271a7d08fda25e3086030c160fdc603face8a6000e00" ], "6d541245-a5a3-4774-a958-f35d95ea0e4c": [ "df6183049976174f912d271a7d08fda25e3086030c160fdc603face8a6000e00" ], "1695952a-a7fe-4b29-8026-9e231ef9dabd": [ "de49ab9024a434ca1cd1efba258fbaa9a3e2d9a1bca3ab4a0349220cc1e2754f" ], "e6b94f32-80b5-4fff-8a33-4ca4f1767260": [ "de49ab9024a434ca1cd1efba258fbaa9a3e2d9a1bca3ab4a0349220cc1e2754f" ], "cb142f7f-43a3-4db9-87a3-b094bd7cf813": [ "de49ab9024a434ca1cd1efba258fbaa9a3e2d9a1bca3ab4a0349220cc1e2754f" ], "27652216-84df-47a4-aa16-c2a629b608be": [ "de49ab9024a434ca1cd1efba258fbaa9a3e2d9a1bca3ab4a0349220cc1e2754f" ], "ada00d31-b42a-451c-8199-742a8f749528": [ "de49ab9024a434ca1cd1efba258fbaa9a3e2d9a1bca3ab4a0349220cc1e2754f" ], "dc0cc2d8-95e5-46c9-a04f-281b4b2cf377": [ "de49ab9024a434ca1cd1efba258fbaa9a3e2d9a1bca3ab4a0349220cc1e2754f" ], "370edbff-3e64-4fdc-91ea-f3b618d76bb4": [ "de49ab9024a434ca1cd1efba258fbaa9a3e2d9a1bca3ab4a0349220cc1e2754f" ], "8d263218-833a-425f-985f-20d8171721f6": [ "de49ab9024a434ca1cd1efba258fbaa9a3e2d9a1bca3ab4a0349220cc1e2754f" ], "aad10d48-5471-4650-9592-330033df2332": [ "de49ab9024a434ca1cd1efba258fbaa9a3e2d9a1bca3ab4a0349220cc1e2754f" ], "ef7528a9-cd3f-4cd5-ada3-cc1d1ddf46b3": [ "de49ab9024a434ca1cd1efba258fbaa9a3e2d9a1bca3ab4a0349220cc1e2754f" ], "7478f0a0-60fe-4e70-a19a-83500ea963fb": [ "15268fd9c2a45644a0c49ca1b4897b4fabfe3005fccee48af0acc7eea7dd0e9c" ], 
"fadfe6f6-8fd2-4bf6-9e01-1ed2a5c9d39d": [ "15268fd9c2a45644a0c49ca1b4897b4fabfe3005fccee48af0acc7eea7dd0e9c" ], "de63ef8f-c982-4c04-9b07-f6589beecf0c": [ "15268fd9c2a45644a0c49ca1b4897b4fabfe3005fccee48af0acc7eea7dd0e9c" ], "e5609b94-374c-4046-af05-67551084fb35": [ "15268fd9c2a45644a0c49ca1b4897b4fabfe3005fccee48af0acc7eea7dd0e9c" ], "3bad1bda-b270-41fa-9cb8-8d54116169d6": [ "15268fd9c2a45644a0c49ca1b4897b4fabfe3005fccee48af0acc7eea7dd0e9c" ], "a79e9da2-3c2f-4f77-bd2c-49119635dc17": [ "6d646836e0c2e6830a4c6d3147c3b1d28d3e92351cf0be1d27f5f3a18c520e3d" ], "a084df16-10ef-4732-bcf6-39d151a1db33": [ "6d646836e0c2e6830a4c6d3147c3b1d28d3e92351cf0be1d27f5f3a18c520e3d" ], "27ca7f7d-0b1c-42e7-a339-3419aeae695d": [ "6d646836e0c2e6830a4c6d3147c3b1d28d3e92351cf0be1d27f5f3a18c520e3d" ], "222b57fa-ff23-4c13-ad98-5d7a3406eb8a": [ "6d646836e0c2e6830a4c6d3147c3b1d28d3e92351cf0be1d27f5f3a18c520e3d" ], "d69a415b-da07-4f18-95fd-e6f78e6b0bed": [ "6d646836e0c2e6830a4c6d3147c3b1d28d3e92351cf0be1d27f5f3a18c520e3d" ], "8d96d6b6-bd8d-46f3-9649-26ed22838fdb": [ "6d646836e0c2e6830a4c6d3147c3b1d28d3e92351cf0be1d27f5f3a18c520e3d" ], "5c79c2a7-4daa-4abc-b940-9a98b4f15b78": [ "6d646836e0c2e6830a4c6d3147c3b1d28d3e92351cf0be1d27f5f3a18c520e3d" ], "f27b7dd1-9bc7-4d47-b1fb-9f68e4c14368": [ "6d646836e0c2e6830a4c6d3147c3b1d28d3e92351cf0be1d27f5f3a18c520e3d" ], "f2198fcc-68a2-4624-bc68-0a8317c4d15c": [ "6d646836e0c2e6830a4c6d3147c3b1d28d3e92351cf0be1d27f5f3a18c520e3d" ], "3b6c58c6-d9b1-4654-a335-78433567b156": [ "6d646836e0c2e6830a4c6d3147c3b1d28d3e92351cf0be1d27f5f3a18c520e3d" ], "88ee5364-50c1-4b33-87b8-0536b1213857": [ "b7eaf40d5ed90dbefc226732645cf49e5f98fb471a1b56a4151f646b60891738" ], "59b8e4f0-3700-4f9f-96ff-d3c8e8148380": [ "b7eaf40d5ed90dbefc226732645cf49e5f98fb471a1b56a4151f646b60891738" ], "4a07ca52-5a6e-42ba-a32c-d5d1dd583050": [ "b7eaf40d5ed90dbefc226732645cf49e5f98fb471a1b56a4151f646b60891738" ], "d0f0aa8d-24ea-46da-9365-98416340c3d9": [ "b7eaf40d5ed90dbefc226732645cf49e5f98fb471a1b56a4151f646b60891738" ], "52983063-0c02-4649-8f9d-e18c772b69ad": [ "b7eaf40d5ed90dbefc226732645cf49e5f98fb471a1b56a4151f646b60891738" ], "37a9982d-e911-4f4d-ada8-7012195f5c9f": [ "b7eaf40d5ed90dbefc226732645cf49e5f98fb471a1b56a4151f646b60891738" ], "7fc08e64-4956-429f-b739-27abeff5c560": [ "b7eaf40d5ed90dbefc226732645cf49e5f98fb471a1b56a4151f646b60891738" ], "a735c483-cdd5-43ba-9487-8100a4f3e7c7": [ "b7eaf40d5ed90dbefc226732645cf49e5f98fb471a1b56a4151f646b60891738" ], "c3d252fe-16f5-4324-bf9b-eb8b51e9dfe2": [ "b7eaf40d5ed90dbefc226732645cf49e5f98fb471a1b56a4151f646b60891738" ], "6ce3662b-8fe6-41c0-b7d8-e28c6196c753": [ "b7eaf40d5ed90dbefc226732645cf49e5f98fb471a1b56a4151f646b60891738" ], "0e44d863-1cd0-4fda-afcf-251a224d14c7": [ "8bd2dacc5eca082fcea46f2e3aace5c8c3817dd817cffa9f1ab3800bd476a3d3" ], "b7721e66-f104-41aa-8b73-65aca2546180": [ "8bd2dacc5eca082fcea46f2e3aace5c8c3817dd817cffa9f1ab3800bd476a3d3" ], "3b0a6880-9de2-48ea-a1d7-adf3df8abb6d": [ "8bd2dacc5eca082fcea46f2e3aace5c8c3817dd817cffa9f1ab3800bd476a3d3" ], "f822adb7-a33f-4173-9e9f-9b87c4456c59": [ "8bd2dacc5eca082fcea46f2e3aace5c8c3817dd817cffa9f1ab3800bd476a3d3" ], "372be9f6-ba2b-4a18-9745-5e213fae2b1a": [ "8bd2dacc5eca082fcea46f2e3aace5c8c3817dd817cffa9f1ab3800bd476a3d3" ], "992b5f16-1939-45af-b144-fbb6b21be149": [ "8bd2dacc5eca082fcea46f2e3aace5c8c3817dd817cffa9f1ab3800bd476a3d3" ], "723ce5ae-bbdb-4129-b5f4-62004a6f3edb": [ "8bd2dacc5eca082fcea46f2e3aace5c8c3817dd817cffa9f1ab3800bd476a3d3" ], "200cbe5d-818d-49a7-a27c-103b80cf44a4": [ 
"8bd2dacc5eca082fcea46f2e3aace5c8c3817dd817cffa9f1ab3800bd476a3d3" ], "de7847a0-202e-4965-ad7d-15a652737d46": [ "8bd2dacc5eca082fcea46f2e3aace5c8c3817dd817cffa9f1ab3800bd476a3d3" ], "6a517f18-34b0-4530-94c4-c1f598ed9886": [ "8bd2dacc5eca082fcea46f2e3aace5c8c3817dd817cffa9f1ab3800bd476a3d3" ], "82108654-4c9b-42fe-995e-e3fe16ed49d2": [ "7d7e3d805418e033c4aa24a972a8358d33d94a60fef7af58a318efe9232be19b" ], "8bf2e25d-518e-467d-99ea-84bf48746cda": [ "567b14c826413d4ff28ecb510609350966136f2d0914c2d28eda5d8b3e646e82" ], "bd844d10-9ed8-41cf-9f20-8458e63fa085": [ "567b14c826413d4ff28ecb510609350966136f2d0914c2d28eda5d8b3e646e82" ], "02a18800-d21d-430a-865d-9136bd7b31ad": [ "567b14c826413d4ff28ecb510609350966136f2d0914c2d28eda5d8b3e646e82" ], "ba25f6e3-924f-4c6d-b32f-789dfd007750": [ "567b14c826413d4ff28ecb510609350966136f2d0914c2d28eda5d8b3e646e82" ], "c0b842ef-3b6d-4948-8c4a-23ff451ca20e": [ "567b14c826413d4ff28ecb510609350966136f2d0914c2d28eda5d8b3e646e82" ], "664ecee0-85b6-4358-8fad-54a91946d762": [ "567b14c826413d4ff28ecb510609350966136f2d0914c2d28eda5d8b3e646e82" ], "c6eab763-e0fd-4752-a35f-3cfc224235da": [ "567b14c826413d4ff28ecb510609350966136f2d0914c2d28eda5d8b3e646e82" ], "7db6d66c-4bd2-49aa-a58e-7f236a938523": [ "567b14c826413d4ff28ecb510609350966136f2d0914c2d28eda5d8b3e646e82" ], "1ceb4fc3-5528-498c-8c72-f6e380b5592f": [ "567b14c826413d4ff28ecb510609350966136f2d0914c2d28eda5d8b3e646e82" ], "2d69aedd-98e0-421f-b9d7-b963a54357c6": [ "567b14c826413d4ff28ecb510609350966136f2d0914c2d28eda5d8b3e646e82" ], "5c0eaeb7-e4f0-480a-a4e5-14ba8af5b57e": [ "2652e0efd386340481f4aafc7721f97d5f2f4a87ab452b04f4952275cf5a9d9b" ], "c6365c48-cbb7-42ff-b13f-16d729d77db0": [ "2652e0efd386340481f4aafc7721f97d5f2f4a87ab452b04f4952275cf5a9d9b" ], "e663bb1a-f846-46a2-86f0-15276012daf3": [ "2652e0efd386340481f4aafc7721f97d5f2f4a87ab452b04f4952275cf5a9d9b" ], "cad68d61-b813-439f-ac9a-7bb70d919332": [ "2652e0efd386340481f4aafc7721f97d5f2f4a87ab452b04f4952275cf5a9d9b" ], "7d2e657c-6cc0-4bdd-b43d-eb76fe1ccef5": [ "2652e0efd386340481f4aafc7721f97d5f2f4a87ab452b04f4952275cf5a9d9b" ], "76279a31-7688-4151-8550-f3f990a5bb02": [ "2652e0efd386340481f4aafc7721f97d5f2f4a87ab452b04f4952275cf5a9d9b" ], "a6ae346e-322e-45c3-a3a9-bb8345fb347b": [ "2652e0efd386340481f4aafc7721f97d5f2f4a87ab452b04f4952275cf5a9d9b" ], "eaf81e0f-6099-4dfd-93cf-094f5c425fc6": [ "2652e0efd386340481f4aafc7721f97d5f2f4a87ab452b04f4952275cf5a9d9b" ], "4c43b674-a3ee-4b62-b34d-797d3a4a349c": [ "2652e0efd386340481f4aafc7721f97d5f2f4a87ab452b04f4952275cf5a9d9b" ], "d7d14c74-ebac-4d86-a422-63df5407576d": [ "2652e0efd386340481f4aafc7721f97d5f2f4a87ab452b04f4952275cf5a9d9b" ], "8b23a9b3-42cd-41cd-ba44-b40b26b9c8bf": [ "34766658d6856917c5fd75bc7ed377030aaa94e6020424190e8f4a78b13cc0e5" ], "87c22e42-d91c-48ab-88f5-f01b2cd15daa": [ "34766658d6856917c5fd75bc7ed377030aaa94e6020424190e8f4a78b13cc0e5" ], "6483b47c-5b51-42e5-ab80-17584162971f": [ "34766658d6856917c5fd75bc7ed377030aaa94e6020424190e8f4a78b13cc0e5" ], "495023b1-1e00-46fb-bc0a-911902eca927": [ "34766658d6856917c5fd75bc7ed377030aaa94e6020424190e8f4a78b13cc0e5" ], "6aecc26b-9a6c-4540-8808-5997de3f86f8": [ "34766658d6856917c5fd75bc7ed377030aaa94e6020424190e8f4a78b13cc0e5" ], "51b01eae-6086-4bdc-bad4-554c1ebe9117": [ "34766658d6856917c5fd75bc7ed377030aaa94e6020424190e8f4a78b13cc0e5" ], "cef39516-a21b-485e-b108-40124f2f7723": [ "34766658d6856917c5fd75bc7ed377030aaa94e6020424190e8f4a78b13cc0e5" ], "1335bf22-d2d7-405f-b553-58ebcaa36096": [ "34766658d6856917c5fd75bc7ed377030aaa94e6020424190e8f4a78b13cc0e5" ], 
"5999d29e-673e-406a-b1d8-98761fc61268": [ "34766658d6856917c5fd75bc7ed377030aaa94e6020424190e8f4a78b13cc0e5" ], "b58398f7-67fe-46b3-8324-1efffdfd7ec9": [ "34766658d6856917c5fd75bc7ed377030aaa94e6020424190e8f4a78b13cc0e5" ], "82b03482-d056-4c54-8995-4e31c04d35dc": [ "5c3e9cc715caad19aa790a573f7e9b7e7e13e699694a5293fae7a1da112818ee" ], "745ccacf-076d-418e-92c5-7c53af25c9ca": [ "5c3e9cc715caad19aa790a573f7e9b7e7e13e699694a5293fae7a1da112818ee" ], "3f977785-1bca-46a9-83e1-2ddde707a6c5": [ "5c3e9cc715caad19aa790a573f7e9b7e7e13e699694a5293fae7a1da112818ee" ], "40ddf905-7767-4927-929a-27f7e824a019": [ "5c3e9cc715caad19aa790a573f7e9b7e7e13e699694a5293fae7a1da112818ee" ], "7f490f61-29f2-4977-957f-235a379c59d9": [ "5c3e9cc715caad19aa790a573f7e9b7e7e13e699694a5293fae7a1da112818ee" ], "73faec18-4634-4c68-a3c7-a91b0553111f": [ "5c3e9cc715caad19aa790a573f7e9b7e7e13e699694a5293fae7a1da112818ee" ], "ef30349a-ef3b-4d61-a200-c7ee331419a1": [ "5c3e9cc715caad19aa790a573f7e9b7e7e13e699694a5293fae7a1da112818ee" ], "fdc63905-4d33-429f-8305-317bbe36af47": [ "5c3e9cc715caad19aa790a573f7e9b7e7e13e699694a5293fae7a1da112818ee" ], "c6c95df2-503a-4309-934b-f25dfd5d134f": [ "5c3e9cc715caad19aa790a573f7e9b7e7e13e699694a5293fae7a1da112818ee" ], "4217518e-39c9-44e1-b2b9-66d5deb5e736": [ "5c3e9cc715caad19aa790a573f7e9b7e7e13e699694a5293fae7a1da112818ee" ], "ca67decb-ef89-4f62-b38e-1f920a467a5d": [ "cd053b50ba3d43b725ea4cb957a0d0bd8ad2f16aef47a87b56056d2891c237ce" ], "b4d020a8-ebd3-492d-a2eb-8bd17dcdbe49": [ "cd053b50ba3d43b725ea4cb957a0d0bd8ad2f16aef47a87b56056d2891c237ce" ], "12ffa782-903e-4f43-98e4-da856b795a35": [ "cd053b50ba3d43b725ea4cb957a0d0bd8ad2f16aef47a87b56056d2891c237ce" ], "52364c26-dcf1-4fb8-81a0-08112e840c5c": [ "cd053b50ba3d43b725ea4cb957a0d0bd8ad2f16aef47a87b56056d2891c237ce" ], "95b77bf8-16fa-4eab-ba2d-7355c3ada900": [ "cd053b50ba3d43b725ea4cb957a0d0bd8ad2f16aef47a87b56056d2891c237ce" ], "0a6d9bc9-e816-4cc8-9e5c-6d4bdd36ac4c": [ "be37a750110aa95083ba1f01b25fd79e195cfb09272724ae43bd363226419229" ], "945f8a0b-2e6c-44de-ace2-8817ac49abfc": [ "be37a750110aa95083ba1f01b25fd79e195cfb09272724ae43bd363226419229" ], "fb510a04-ac4e-4baa-95dc-5911baadb5cf": [ "be37a750110aa95083ba1f01b25fd79e195cfb09272724ae43bd363226419229" ], "f24f5fac-f385-4505-b51f-847c015c8487": [ "be37a750110aa95083ba1f01b25fd79e195cfb09272724ae43bd363226419229" ], "6476cf5c-9a3a-4387-be00-b9ae9a5204d6": [ "be37a750110aa95083ba1f01b25fd79e195cfb09272724ae43bd363226419229" ], "589aa372-a0c7-4427-a219-287ce0130ffa": [ "be37a750110aa95083ba1f01b25fd79e195cfb09272724ae43bd363226419229" ], "65576667-0111-4e63-b733-264756cbd0a3": [ "be37a750110aa95083ba1f01b25fd79e195cfb09272724ae43bd363226419229" ], "6377553e-7c3b-42a0-9608-8c16544b428e": [ "be37a750110aa95083ba1f01b25fd79e195cfb09272724ae43bd363226419229" ], "8e828a96-ce28-43a4-a038-4349d7e2e1e9": [ "be37a750110aa95083ba1f01b25fd79e195cfb09272724ae43bd363226419229" ], "edf530de-605d-4d53-a50f-9176f44974a8": [ "be37a750110aa95083ba1f01b25fd79e195cfb09272724ae43bd363226419229" ], "194fb2ae-7416-40b6-a0ab-2b475c6c3f2a": [ "cb4e74d898bb0d095b09a2dd4df266921a0c17b3b27ff1b03bfa587843b4207d" ], "15bb043e-2be2-4747-9c46-28d3dba5741d": [ "cb4e74d898bb0d095b09a2dd4df266921a0c17b3b27ff1b03bfa587843b4207d" ], "0354aa3e-5107-487e-a75d-7645f024b6a7": [ "cb4e74d898bb0d095b09a2dd4df266921a0c17b3b27ff1b03bfa587843b4207d" ], "17f92e8b-6195-4658-80e7-02820779dcde": [ "cb4e74d898bb0d095b09a2dd4df266921a0c17b3b27ff1b03bfa587843b4207d" ], "b45c4675-4af9-4115-abcf-acb115947505": [ 
"cb4e74d898bb0d095b09a2dd4df266921a0c17b3b27ff1b03bfa587843b4207d" ], "684de7b9-38bd-4728-9141-995cc393e4a4": [ "cb4e74d898bb0d095b09a2dd4df266921a0c17b3b27ff1b03bfa587843b4207d" ], "82b8e771-ac90-4882-8b30-6bd28e562f58": [ "cb4e74d898bb0d095b09a2dd4df266921a0c17b3b27ff1b03bfa587843b4207d" ], "eb0ecc73-3ab5-4f19-b692-607b77dca2df": [ "cb4e74d898bb0d095b09a2dd4df266921a0c17b3b27ff1b03bfa587843b4207d" ], "b27c58db-b096-450b-89c4-fbec2538b865": [ "8cf94b9369ba8da18d02172b9cbf885afb60cddd0a2381a86a81ca8e6a9b10f9" ], "79d11385-90a3-47c2-8a83-decaeda5f910": [ "8cf94b9369ba8da18d02172b9cbf885afb60cddd0a2381a86a81ca8e6a9b10f9" ], "ba74b7f4-306c-45dd-afa3-59cf545d8813": [ "8cf94b9369ba8da18d02172b9cbf885afb60cddd0a2381a86a81ca8e6a9b10f9" ], "75eb288b-35b1-4fd1-8010-2697e4123b05": [ "8cf94b9369ba8da18d02172b9cbf885afb60cddd0a2381a86a81ca8e6a9b10f9" ], "ba7a92cb-172f-4e01-bd01-ed6e721b3b50": [ "8cf94b9369ba8da18d02172b9cbf885afb60cddd0a2381a86a81ca8e6a9b10f9" ], "5f99854b-b456-423c-9a3b-cb42adc2dd16": [ "8cf94b9369ba8da18d02172b9cbf885afb60cddd0a2381a86a81ca8e6a9b10f9" ], "7612678c-2b10-4ea0-ad20-175672320b2c": [ "8cf94b9369ba8da18d02172b9cbf885afb60cddd0a2381a86a81ca8e6a9b10f9" ], "3d354e56-1a6b-4f06-bba5-9b419901cade": [ "8cf94b9369ba8da18d02172b9cbf885afb60cddd0a2381a86a81ca8e6a9b10f9" ], "53cca198-2cd5-4ed2-aee3-43006b221e17": [ "8cf94b9369ba8da18d02172b9cbf885afb60cddd0a2381a86a81ca8e6a9b10f9" ], "93281044-aec3-4b9a-87d2-e7a25a7e53ab": [ "8cf94b9369ba8da18d02172b9cbf885afb60cddd0a2381a86a81ca8e6a9b10f9" ], "f45dc043-52ea-4aa3-9aec-867ce1c50d8b": [ "e20f03d2063b6e0b4f922c00d62de19ee6657670a26577b04168e0bfc7b1eb42" ], "411e998b-0ae2-4926-915a-5ab79bd375e0": [ "e20f03d2063b6e0b4f922c00d62de19ee6657670a26577b04168e0bfc7b1eb42" ], "aeb8aea1-e968-4ae0-b8da-915959d6ffe5": [ "e20f03d2063b6e0b4f922c00d62de19ee6657670a26577b04168e0bfc7b1eb42" ], "a7e8b7ad-05b5-49ad-b389-aeabc5e6f021": [ "e20f03d2063b6e0b4f922c00d62de19ee6657670a26577b04168e0bfc7b1eb42" ], "7f7024bc-56e3-443d-8f9f-0a5d63b18b96": [ "e20f03d2063b6e0b4f922c00d62de19ee6657670a26577b04168e0bfc7b1eb42" ], "c6d6e385-4783-4ba7-b83f-88b492a73160": [ "e20f03d2063b6e0b4f922c00d62de19ee6657670a26577b04168e0bfc7b1eb42" ], "1253bfe1-ce02-4534-b9fd-8b8f7d68c8dc": [ "e20f03d2063b6e0b4f922c00d62de19ee6657670a26577b04168e0bfc7b1eb42" ], "5252f766-a345-4ce8-a366-92bfa97a8804": [ "892a22c623618faecd553782dd97454d8c081170c04598767f4a36f05a8a3bb2" ], "6faa562b-ae7b-421c-b5d3-b25e97c49afc": [ "892a22c623618faecd553782dd97454d8c081170c04598767f4a36f05a8a3bb2" ], "d19c6d03-433e-415a-83a4-1d73147c6fa5": [ "892a22c623618faecd553782dd97454d8c081170c04598767f4a36f05a8a3bb2" ], "58a0571f-04b1-4204-9792-3de3a9971ade": [ "892a22c623618faecd553782dd97454d8c081170c04598767f4a36f05a8a3bb2" ], "5383aa4d-7764-4541-a95a-d0fa40defe42": [ "892a22c623618faecd553782dd97454d8c081170c04598767f4a36f05a8a3bb2" ], "0dcb9371-a21e-42a3-adef-284196543732": [ "892a22c623618faecd553782dd97454d8c081170c04598767f4a36f05a8a3bb2" ], "ebcd0860-e5e0-44f2-9967-2a2471609f3b": [ "892a22c623618faecd553782dd97454d8c081170c04598767f4a36f05a8a3bb2" ], "52f15db9-fdd2-4ce1-bd1e-20e6b4470ec6": [ "892a22c623618faecd553782dd97454d8c081170c04598767f4a36f05a8a3bb2" ], "b262a6e2-0e71-4d1a-8b58-1545b5d8e50e": [ "892a22c623618faecd553782dd97454d8c081170c04598767f4a36f05a8a3bb2" ], "e0d6a930-3c72-45e2-8f31-eddccd9177af": [ "892a22c623618faecd553782dd97454d8c081170c04598767f4a36f05a8a3bb2" ], "a275aa45-490d-4ac5-ade9-2d74329403c2": [ "2c17b2d996a4178952175483443612704c5dd7315f5b30beb7fb099ab044d68d" ], 
"ba6a49e9-ab6b-40cb-b33f-f6d5c53685c0": [ "2c17b2d996a4178952175483443612704c5dd7315f5b30beb7fb099ab044d68d" ], "39248aff-56f4-435f-977e-f8cab4925067": [ "2c17b2d996a4178952175483443612704c5dd7315f5b30beb7fb099ab044d68d" ], "cdc9f971-be96-4e91-af4e-33f1727a39fe": [ "2c17b2d996a4178952175483443612704c5dd7315f5b30beb7fb099ab044d68d" ], "324bae9a-0e13-4a32-aeb2-5facf8df73be": [ "2c17b2d996a4178952175483443612704c5dd7315f5b30beb7fb099ab044d68d" ], "ed37f5f8-0445-44bd-aa31-a32fe16232ea": [ "2c17b2d996a4178952175483443612704c5dd7315f5b30beb7fb099ab044d68d" ], "7b9cdc1a-1aa7-4277-a0af-817d525079e5": [ "2c17b2d996a4178952175483443612704c5dd7315f5b30beb7fb099ab044d68d" ], "d9c419e9-d183-40f8-a28f-5d5f35cd7dc5": [ "2c17b2d996a4178952175483443612704c5dd7315f5b30beb7fb099ab044d68d" ], "9dad5d74-a662-439a-add2-48218cf16154": [ "2c17b2d996a4178952175483443612704c5dd7315f5b30beb7fb099ab044d68d" ], "b44fecbf-9b16-42c5-983c-0576ed4d40bb": [ "2c17b2d996a4178952175483443612704c5dd7315f5b30beb7fb099ab044d68d" ], "b35f823d-bd13-492b-8bbf-e347fe5223a2": [ "252e704b7a4e29a36e01e55086d2027184611e0f1bc3f4abd273b0246c5d0c43" ], "4bdf0a4e-8374-477b-9136-918d837f5f42": [ "252e704b7a4e29a36e01e55086d2027184611e0f1bc3f4abd273b0246c5d0c43" ], "ca9997c5-c45c-44ea-91a5-2acce2d16a4c": [ "252e704b7a4e29a36e01e55086d2027184611e0f1bc3f4abd273b0246c5d0c43" ], "addbfe80-1777-4470-a94c-18cd2d2fbf5c": [ "252e704b7a4e29a36e01e55086d2027184611e0f1bc3f4abd273b0246c5d0c43" ], "41b92d2b-097e-417d-a8bc-c20692bf7d10": [ "252e704b7a4e29a36e01e55086d2027184611e0f1bc3f4abd273b0246c5d0c43" ], "50b79b4c-f4f7-42d3-bf8c-d3988eee4bfd": [ "252e704b7a4e29a36e01e55086d2027184611e0f1bc3f4abd273b0246c5d0c43" ], "bfcb7682-e7b1-4bf6-ba68-f07b18ec8a8f": [ "252e704b7a4e29a36e01e55086d2027184611e0f1bc3f4abd273b0246c5d0c43" ], "794d14a7-ace6-4883-88ee-a191f2cf0ee8": [ "53485f82f10af2df87678b8b4977e4aea87ea25f43b07cf62276798120705d49" ], "46724109-b187-497f-9afa-3218fe1a7266": [ "53485f82f10af2df87678b8b4977e4aea87ea25f43b07cf62276798120705d49" ], "29b71cea-f96c-4587-a30f-7501101e105f": [ "53485f82f10af2df87678b8b4977e4aea87ea25f43b07cf62276798120705d49" ], "075cdb61-c673-4b6e-abf7-b466a1aa0fdc": [ "53485f82f10af2df87678b8b4977e4aea87ea25f43b07cf62276798120705d49" ], "85f1b2ef-e467-4aa1-b07a-4b4d8aafe2c7": [ "53485f82f10af2df87678b8b4977e4aea87ea25f43b07cf62276798120705d49" ], "cfe759fe-15d4-40b8-b5e3-909c2f884dce": [ "58a0db6126e2d6858f1cf7cb3e95df17f07b67d02f2ef02567d04358279a6276" ], "6494dac2-a279-4637-b2e8-2bf78b39e1cc": [ "58a0db6126e2d6858f1cf7cb3e95df17f07b67d02f2ef02567d04358279a6276" ], "75fb6e7e-c7e0-4d9f-9d6c-7bd960bc36cf": [ "58a0db6126e2d6858f1cf7cb3e95df17f07b67d02f2ef02567d04358279a6276" ], "4b1c7d83-5461-406a-b228-fcdfe96a4757": [ "58a0db6126e2d6858f1cf7cb3e95df17f07b67d02f2ef02567d04358279a6276" ], "83c6035b-9246-4393-92d6-7ce28d85244a": [ "58a0db6126e2d6858f1cf7cb3e95df17f07b67d02f2ef02567d04358279a6276" ], "f3afacc0-cf22-443b-a5a1-1715a02c0166": [ "ef0097732e6eed361247a1081f21a3688bdcfff0d8ec6db66c2bfd6381359bf0" ], "88dc5024-c0d5-45a2-92a5-80fe804e3215": [ "ef0097732e6eed361247a1081f21a3688bdcfff0d8ec6db66c2bfd6381359bf0" ], "d54df81f-f141-41c9-bcef-4a5a63ff3615": [ "ef0097732e6eed361247a1081f21a3688bdcfff0d8ec6db66c2bfd6381359bf0" ], "c80a1f4a-ab53-42f8-b86f-8618d4bb5ab6": [ "ef0097732e6eed361247a1081f21a3688bdcfff0d8ec6db66c2bfd6381359bf0" ], "c042df38-a83d-4889-8491-cc9d8103f138": [ "ef0097732e6eed361247a1081f21a3688bdcfff0d8ec6db66c2bfd6381359bf0" ], "67e45811-15fc-45f0-ad47-855af1b33b41": [ 
"ef0097732e6eed361247a1081f21a3688bdcfff0d8ec6db66c2bfd6381359bf0" ], "65a7c1e1-db25-4d91-9fc5-d7e8c34ef225": [ "ef0097732e6eed361247a1081f21a3688bdcfff0d8ec6db66c2bfd6381359bf0" ], "48ce41d3-9aae-4805-929a-4c5eeb5322e0": [ "ef0097732e6eed361247a1081f21a3688bdcfff0d8ec6db66c2bfd6381359bf0" ], "eac35e01-9043-4fd0-a057-857cb557d5f5": [ "ef0097732e6eed361247a1081f21a3688bdcfff0d8ec6db66c2bfd6381359bf0" ], "2872a536-b393-4ea8-bbfe-ddaaf37464ef": [ "ef0097732e6eed361247a1081f21a3688bdcfff0d8ec6db66c2bfd6381359bf0" ], "2ab9a13e-8d90-4c32-b532-ae870c1af18f": [ "b5eeda2ed7d31c3d4f55c6dd4d95f8c3bc0c4a14e3ef371f92770f124632dbef" ], "f12483d7-66c0-404f-900f-9b5916f685b7": [ "b5eeda2ed7d31c3d4f55c6dd4d95f8c3bc0c4a14e3ef371f92770f124632dbef" ], "10687fa0-766c-4c01-9a74-3016d6b0d7e2": [ "b5eeda2ed7d31c3d4f55c6dd4d95f8c3bc0c4a14e3ef371f92770f124632dbef" ], "c7bccab9-9f2d-459b-a6fb-e681d999e951": [ "b5eeda2ed7d31c3d4f55c6dd4d95f8c3bc0c4a14e3ef371f92770f124632dbef" ], "187f62c4-d5ea-4d9f-a356-dd9a68a2bd7c": [ "b5eeda2ed7d31c3d4f55c6dd4d95f8c3bc0c4a14e3ef371f92770f124632dbef" ], "052a879f-8079-4288-90e7-6a6fa2c0df6a": [ "b5eeda2ed7d31c3d4f55c6dd4d95f8c3bc0c4a14e3ef371f92770f124632dbef" ], "21688709-9c73-418a-938f-8a0103619292": [ "b5eeda2ed7d31c3d4f55c6dd4d95f8c3bc0c4a14e3ef371f92770f124632dbef" ], "59721eb2-fd20-45b5-a6e6-96e274293d51": [ "b5eeda2ed7d31c3d4f55c6dd4d95f8c3bc0c4a14e3ef371f92770f124632dbef" ], "02fc24ae-6d5d-4c84-8e6e-1996802fc717": [ "b5eeda2ed7d31c3d4f55c6dd4d95f8c3bc0c4a14e3ef371f92770f124632dbef" ], "150c2f00-25fe-45f3-8fa8-82c2136b86e8": [ "b5eeda2ed7d31c3d4f55c6dd4d95f8c3bc0c4a14e3ef371f92770f124632dbef" ], "3659d53d-c5e4-49c0-9c03-69018fd74ab7": [ "fde5e2f6b5535701bf6cf2a3e296bc595289512bb47225748f69904a512d5bea" ], "0fca48a3-4d64-4b46-8a4f-557051c71ba8": [ "fde5e2f6b5535701bf6cf2a3e296bc595289512bb47225748f69904a512d5bea" ], "b3d8e3eb-be87-4e38-ac1b-a4c63326dbce": [ "fde5e2f6b5535701bf6cf2a3e296bc595289512bb47225748f69904a512d5bea" ], "09229886-907d-4fb6-a154-24c1530dfba7": [ "fde5e2f6b5535701bf6cf2a3e296bc595289512bb47225748f69904a512d5bea" ], "adefa977-b7c5-44f6-a08f-0b2147b3d05e": [ "fde5e2f6b5535701bf6cf2a3e296bc595289512bb47225748f69904a512d5bea" ], "a13a455b-7f02-4a43-9c2d-d067ab38ef12": [ "fde5e2f6b5535701bf6cf2a3e296bc595289512bb47225748f69904a512d5bea" ], "d74d7582-39d3-483a-bb5f-31f0c24c1e19": [ "8286d989740b78f10c31a1452ad639165b59b8973eaa014edbd0ed888adccd8d" ], "7b979fb2-cded-4bee-82c4-839678313754": [ "8286d989740b78f10c31a1452ad639165b59b8973eaa014edbd0ed888adccd8d" ], "eb240ff4-e3e8-4bca-b6b5-9f323889e58e": [ "8286d989740b78f10c31a1452ad639165b59b8973eaa014edbd0ed888adccd8d" ], "4ed18d32-1144-48a1-a07c-71b3f42b347c": [ "8286d989740b78f10c31a1452ad639165b59b8973eaa014edbd0ed888adccd8d" ], "953bf848-6987-4f3c-a838-62266c4d07ff": [ "8286d989740b78f10c31a1452ad639165b59b8973eaa014edbd0ed888adccd8d" ], "915ad456-3234-41c8-a776-65a81b9346fe": [ "8286d989740b78f10c31a1452ad639165b59b8973eaa014edbd0ed888adccd8d" ], "91c0baad-e6eb-439f-8627-048998523db7": [ "8286d989740b78f10c31a1452ad639165b59b8973eaa014edbd0ed888adccd8d" ], "3615cca0-2bbc-4669-a2dd-c830121574c1": [ "8286d989740b78f10c31a1452ad639165b59b8973eaa014edbd0ed888adccd8d" ], "37041a7a-4bb2-4a11-bae6-45b9ecd98017": [ "8286d989740b78f10c31a1452ad639165b59b8973eaa014edbd0ed888adccd8d" ], "b992db86-6502-4748-b1fe-0ebb411ad046": [ "8286d989740b78f10c31a1452ad639165b59b8973eaa014edbd0ed888adccd8d" ], "1a135bdd-7ce9-49a9-8b33-c4fa2d3501b6": [ "582874015861ee2f137ad3ae2e4de45f2fd52aba9c92bb8589416d210d89c3eb" ], 
"f98d9839-0c56-4bb4-8284-99019fc9b67f": [ "582874015861ee2f137ad3ae2e4de45f2fd52aba9c92bb8589416d210d89c3eb" ], "6776fc85-0e03-4fcb-b1f4-bda339c21a6a": [ "582874015861ee2f137ad3ae2e4de45f2fd52aba9c92bb8589416d210d89c3eb" ], "ec4760be-308e-4f80-ae5a-a6f46c61e01b": [ "582874015861ee2f137ad3ae2e4de45f2fd52aba9c92bb8589416d210d89c3eb" ], "4d6d050a-582f-47ce-87df-09b2ea5993b0": [ "582874015861ee2f137ad3ae2e4de45f2fd52aba9c92bb8589416d210d89c3eb" ], "e59ad353-124b-4f91-9bc4-cb035c417221": [ "582874015861ee2f137ad3ae2e4de45f2fd52aba9c92bb8589416d210d89c3eb" ], "16be711f-8564-41a9-8afc-bd8e29309565": [ "582874015861ee2f137ad3ae2e4de45f2fd52aba9c92bb8589416d210d89c3eb" ], "5d772c63-777f-4a90-80d6-dbc783f988eb": [ "582874015861ee2f137ad3ae2e4de45f2fd52aba9c92bb8589416d210d89c3eb" ], "291dd070-87ca-4dc8-b1a1-6c2974aa6490": [ "582874015861ee2f137ad3ae2e4de45f2fd52aba9c92bb8589416d210d89c3eb" ], "ca27a1a0-4fa1-4577-a61d-b08cf80b0dd8": [ "582874015861ee2f137ad3ae2e4de45f2fd52aba9c92bb8589416d210d89c3eb" ], "33030676-7448-49e0-9670-5d1c301e5c70": [ "9afbdeaea403deb0f61cdc3bca5b4a96afe98f4166b36b4f8606cc41a7c0a4c1" ], "cfc4c8ad-30f1-47e7-8baa-7b7aa777f408": [ "9afbdeaea403deb0f61cdc3bca5b4a96afe98f4166b36b4f8606cc41a7c0a4c1" ], "9344374b-45cb-4732-a5a5-e390bd39475d": [ "9afbdeaea403deb0f61cdc3bca5b4a96afe98f4166b36b4f8606cc41a7c0a4c1" ], "05801a0b-d3cd-4ae0-a733-fb5800844028": [ "9afbdeaea403deb0f61cdc3bca5b4a96afe98f4166b36b4f8606cc41a7c0a4c1" ], "c8582643-20e7-4a0e-ac02-ead6ced37152": [ "9afbdeaea403deb0f61cdc3bca5b4a96afe98f4166b36b4f8606cc41a7c0a4c1" ], "53e2850b-e588-43bc-b3dd-a8eed68e7ce3": [ "9afbdeaea403deb0f61cdc3bca5b4a96afe98f4166b36b4f8606cc41a7c0a4c1" ], "f1d0a075-463e-4115-b44f-b41c9c9a7c19": [ "9afbdeaea403deb0f61cdc3bca5b4a96afe98f4166b36b4f8606cc41a7c0a4c1" ], "f132d59f-c86c-4618-b705-d48ca686adf0": [ "9afbdeaea403deb0f61cdc3bca5b4a96afe98f4166b36b4f8606cc41a7c0a4c1" ], "8281cd57-69b2-4a89-a57a-9d2f4d89f007": [ "b9507b49d2385ea4d7c6d656582f18b1a7ae64d6075ce9e2654788b4e8bcae8a" ], "9f2a6f68-e1ed-4fc9-8843-91888bd0629a": [ "b9507b49d2385ea4d7c6d656582f18b1a7ae64d6075ce9e2654788b4e8bcae8a" ], "b220bb91-0c8b-48cc-9cfa-0802d60f91b3": [ "b9507b49d2385ea4d7c6d656582f18b1a7ae64d6075ce9e2654788b4e8bcae8a" ], "a91d677a-671b-488e-a5fe-0b406c3a4eb7": [ "b9507b49d2385ea4d7c6d656582f18b1a7ae64d6075ce9e2654788b4e8bcae8a" ], "a52e8cc1-defd-47a2-91a0-f300cb47784f": [ "b9507b49d2385ea4d7c6d656582f18b1a7ae64d6075ce9e2654788b4e8bcae8a" ], "d381e506-d8d6-4251-ad1a-648d08f3f400": [ "b9507b49d2385ea4d7c6d656582f18b1a7ae64d6075ce9e2654788b4e8bcae8a" ], "8d1b8fdb-de07-4520-bda2-125b701d0b98": [ "b9507b49d2385ea4d7c6d656582f18b1a7ae64d6075ce9e2654788b4e8bcae8a" ], "77a47343-e614-4009-8f78-b9cd7dd31bf1": [ "b9507b49d2385ea4d7c6d656582f18b1a7ae64d6075ce9e2654788b4e8bcae8a" ], "6d40a95d-b8f0-4b42-a698-ba05301a2e99": [ "b9507b49d2385ea4d7c6d656582f18b1a7ae64d6075ce9e2654788b4e8bcae8a" ], "05870a10-e822-45a1-8aa4-b482dd6c2f6f": [ "b9507b49d2385ea4d7c6d656582f18b1a7ae64d6075ce9e2654788b4e8bcae8a" ], "5b3334bb-3a99-40fe-a06a-30ce94da5c69": [ "6f7cf5db827c32c8a735dd896b580efb8bdfb7fd45ebf0acd1e9eed5020b502f" ], "08db1e69-1228-4051-96fa-9add10343e36": [ "6f7cf5db827c32c8a735dd896b580efb8bdfb7fd45ebf0acd1e9eed5020b502f" ], "ec1b5986-683a-42eb-aae5-bad40916eb82": [ "6f7cf5db827c32c8a735dd896b580efb8bdfb7fd45ebf0acd1e9eed5020b502f" ], "1154bdc9-dd50-41fc-bab1-0844b6602da2": [ "6f7cf5db827c32c8a735dd896b580efb8bdfb7fd45ebf0acd1e9eed5020b502f" ], "2bd20d66-1b51-4dbf-a549-fc58bfc41468": [ 
"6f7cf5db827c32c8a735dd896b580efb8bdfb7fd45ebf0acd1e9eed5020b502f" ], "2df44f3d-352d-44b7-859b-8c576e1dca9e": [ "6f7cf5db827c32c8a735dd896b580efb8bdfb7fd45ebf0acd1e9eed5020b502f" ], "bfb97b11-22bf-4dc5-b1e7-67d4b947372e": [ "38a78f7bcbaeae4dc00110122fa4fc83108ba82f7fcf0fb5a724195c6ef83c85" ], "d5128435-d710-429a-a4ce-c367503a016c": [ "38a78f7bcbaeae4dc00110122fa4fc83108ba82f7fcf0fb5a724195c6ef83c85" ], "ec781e9f-4a6c-40f0-9c7b-dcce45820a9b": [ "38a78f7bcbaeae4dc00110122fa4fc83108ba82f7fcf0fb5a724195c6ef83c85" ], "beb99bb4-ff83-490f-aa00-f2bc77886b45": [ "38a78f7bcbaeae4dc00110122fa4fc83108ba82f7fcf0fb5a724195c6ef83c85" ], "3a53ab1b-3167-48ad-8260-939ce05718b2": [ "38a78f7bcbaeae4dc00110122fa4fc83108ba82f7fcf0fb5a724195c6ef83c85" ], "61901d8c-de48-45c5-9b05-291d9abc67e7": [ "38a78f7bcbaeae4dc00110122fa4fc83108ba82f7fcf0fb5a724195c6ef83c85" ], "bf7080f2-1703-4630-a538-ad70d568629d": [ "38a78f7bcbaeae4dc00110122fa4fc83108ba82f7fcf0fb5a724195c6ef83c85" ], "23030dad-1f89-4ff6-a837-3fd087575c4c": [ "38a78f7bcbaeae4dc00110122fa4fc83108ba82f7fcf0fb5a724195c6ef83c85" ], "e374158c-fd76-4cdd-9f65-edb5978d62fe": [ "38a78f7bcbaeae4dc00110122fa4fc83108ba82f7fcf0fb5a724195c6ef83c85" ], "8e521a68-6ecd-4812-bd81-0853b2c1f88a": [ "38a78f7bcbaeae4dc00110122fa4fc83108ba82f7fcf0fb5a724195c6ef83c85" ], "41df3732-63c2-4877-9a16-0f24edd9975e": [ "91e2d8c23a72466a8cd6bbc3a16ac0814a43b7c79b4da0345de6f1a5923efa19" ], "87bf2c71-ee72-4dde-a38c-d8ac9fb5d822": [ "91e2d8c23a72466a8cd6bbc3a16ac0814a43b7c79b4da0345de6f1a5923efa19" ], "89cf2cdc-a512-4ded-bb6d-9389a20d531c": [ "91e2d8c23a72466a8cd6bbc3a16ac0814a43b7c79b4da0345de6f1a5923efa19" ], "ac2afabd-3d45-4355-91ec-360e281f7edc": [ "91e2d8c23a72466a8cd6bbc3a16ac0814a43b7c79b4da0345de6f1a5923efa19" ], "e88fda3f-d819-4524-8d76-9686953923a0": [ "91e2d8c23a72466a8cd6bbc3a16ac0814a43b7c79b4da0345de6f1a5923efa19" ], "53dc2583-781d-4596-b0f1-171324d80625": [ "91e2d8c23a72466a8cd6bbc3a16ac0814a43b7c79b4da0345de6f1a5923efa19" ], "efae5e98-b9c5-48a8-acad-da9165463c3b": [ "91e2d8c23a72466a8cd6bbc3a16ac0814a43b7c79b4da0345de6f1a5923efa19" ], "c9fe874f-d002-418e-8c4f-accf85db8830": [ "91e2d8c23a72466a8cd6bbc3a16ac0814a43b7c79b4da0345de6f1a5923efa19" ], "451c0363-c1af-44fd-a7ea-9e802204081d": [ "76e1d67063713ec4c9c1c5d055a7edfb5ba0c445910200373d4675be0b941707" ], "b2325882-dbe0-41a4-92e8-24c6f93732e9": [ "76e1d67063713ec4c9c1c5d055a7edfb5ba0c445910200373d4675be0b941707" ], "54ff999b-f69b-4abb-a686-50c55ecfc966": [ "76e1d67063713ec4c9c1c5d055a7edfb5ba0c445910200373d4675be0b941707" ], "71ab9f66-a755-4636-bb87-c18984f556ca": [ "76e1d67063713ec4c9c1c5d055a7edfb5ba0c445910200373d4675be0b941707" ], "499ad8a4-bb01-4364-a0b1-dedd76c3e23f": [ "76e1d67063713ec4c9c1c5d055a7edfb5ba0c445910200373d4675be0b941707" ], "3fa455b9-1f20-47af-a4be-59833f6f0c5f": [ "76e1d67063713ec4c9c1c5d055a7edfb5ba0c445910200373d4675be0b941707" ], "288db2a1-9560-4015-9448-adfe995de237": [ "76e1d67063713ec4c9c1c5d055a7edfb5ba0c445910200373d4675be0b941707" ], "5bfd2998-bd37-44ea-af30-a8a30b51f6be": [ "77c2dba0b606fc7d48f48508f56d5864ff9632a8179fee36c88446fd36ad4bc4" ], "a24b54f4-6c29-469f-8fc8-fd0471d4f27c": [ "77c2dba0b606fc7d48f48508f56d5864ff9632a8179fee36c88446fd36ad4bc4" ], "5007a78b-4e0a-415f-ada0-93453978abe2": [ "77c2dba0b606fc7d48f48508f56d5864ff9632a8179fee36c88446fd36ad4bc4" ], "beaac0a0-dcf6-4f3e-8f79-b1f7866e655a": [ "77c2dba0b606fc7d48f48508f56d5864ff9632a8179fee36c88446fd36ad4bc4" ], "f541009a-2fb4-4a29-a0fb-8c9582500499": [ "77c2dba0b606fc7d48f48508f56d5864ff9632a8179fee36c88446fd36ad4bc4" ], 
"ca0de3e2-136f-4ae1-8338-07e73d37537b": [ "77c2dba0b606fc7d48f48508f56d5864ff9632a8179fee36c88446fd36ad4bc4" ], "e2efd559-bcea-48bc-8c84-b8ad7aff2636": [ "77c2dba0b606fc7d48f48508f56d5864ff9632a8179fee36c88446fd36ad4bc4" ], "2925134e-805e-4acf-a734-b7605879a0b8": [ "77c2dba0b606fc7d48f48508f56d5864ff9632a8179fee36c88446fd36ad4bc4" ], "c6fdd337-7ce6-4e0b-8599-31ed8c7bd7d8": [ "2bd15c93602c4856e678814faac0cbfc4e2881bacbcbccbe0dc756f764aa438f" ], "2151bfb2-4fae-4d91-a835-93da09744fa1": [ "2bd15c93602c4856e678814faac0cbfc4e2881bacbcbccbe0dc756f764aa438f" ], "a8d4afe8-8959-4905-956d-825502f76652": [ "2bd15c93602c4856e678814faac0cbfc4e2881bacbcbccbe0dc756f764aa438f" ], "ef17388d-1879-453a-89fd-c128249f624b": [ "2bd15c93602c4856e678814faac0cbfc4e2881bacbcbccbe0dc756f764aa438f" ], "eddb397e-ddec-4d72-9be5-b3cd9eff791b": [ "2bd15c93602c4856e678814faac0cbfc4e2881bacbcbccbe0dc756f764aa438f" ], "47fe7a00-1b80-4902-8440-56a1100702d9": [ "2bd15c93602c4856e678814faac0cbfc4e2881bacbcbccbe0dc756f764aa438f" ], "94144c6b-f32b-4b7a-86cc-2debae62a754": [ "2bd15c93602c4856e678814faac0cbfc4e2881bacbcbccbe0dc756f764aa438f" ], "001c262a-04b6-4f9a-b211-2e7bb5cfd37e": [ "2bd15c93602c4856e678814faac0cbfc4e2881bacbcbccbe0dc756f764aa438f" ], "bc69d2e7-e9a7-40d7-b1db-f5054dc1a579": [ "2bd15c93602c4856e678814faac0cbfc4e2881bacbcbccbe0dc756f764aa438f" ], "9b87bf9d-85b3-4493-ad63-aa1fbc1586e6": [ "2bd15c93602c4856e678814faac0cbfc4e2881bacbcbccbe0dc756f764aa438f" ], "8353781b-ae6e-4263-9ba7-2bfbcb7716f9": [ "7c0ff552ae4caad1b5fa1914f8c5ea0c907705192580cc127e76b245221805c1" ], "7bccbe23-199e-4703-9013-8ce8183fd76c": [ "7c0ff552ae4caad1b5fa1914f8c5ea0c907705192580cc127e76b245221805c1" ], "4407d9b7-1865-442b-9180-e6302c48c942": [ "7c0ff552ae4caad1b5fa1914f8c5ea0c907705192580cc127e76b245221805c1" ], "5d42b302-53d0-44ca-b83a-60aa6f60cfb7": [ "7c0ff552ae4caad1b5fa1914f8c5ea0c907705192580cc127e76b245221805c1" ], "839c023b-fe8a-4eb4-8ee8-72e4548fde63": [ "7c0ff552ae4caad1b5fa1914f8c5ea0c907705192580cc127e76b245221805c1" ], "56e24dbe-cf03-4591-9d67-1084bc91b722": [ "7c0ff552ae4caad1b5fa1914f8c5ea0c907705192580cc127e76b245221805c1" ], "a530257e-e50d-431a-a0ea-5a2ac0c28bc4": [ "7c0ff552ae4caad1b5fa1914f8c5ea0c907705192580cc127e76b245221805c1" ], "3f3359d0-7472-4bdf-a0b6-a227e729cfe5": [ "7c0ff552ae4caad1b5fa1914f8c5ea0c907705192580cc127e76b245221805c1" ], "0aee13b0-822a-4b7a-827a-3cf8300e1538": [ "7c0ff552ae4caad1b5fa1914f8c5ea0c907705192580cc127e76b245221805c1" ], "8f7359ed-3b74-4af7-a8b4-409c5bd298b0": [ "7c0ff552ae4caad1b5fa1914f8c5ea0c907705192580cc127e76b245221805c1" ], "9747304f-f32e-4f24-84ff-fada76d17d60": [ "375a1bf6285e85964292342467b055c3d4558095abcea5dcc7bf2532178e174e" ], "cd97928a-4ec2-4c85-8d55-a0ae89c8a270": [ "375a1bf6285e85964292342467b055c3d4558095abcea5dcc7bf2532178e174e" ], "5f0a0e9b-85e7-4745-aace-ed1732be1523": [ "375a1bf6285e85964292342467b055c3d4558095abcea5dcc7bf2532178e174e" ], "8c74df67-2a1f-4d7e-ae3d-ff1d9a153043": [ "375a1bf6285e85964292342467b055c3d4558095abcea5dcc7bf2532178e174e" ], "ac312983-7ede-41ef-928c-4c1a7515b3c0": [ "375a1bf6285e85964292342467b055c3d4558095abcea5dcc7bf2532178e174e" ], "b9a5cfcd-c1e7-41a2-881a-d223989baa4b": [ "375a1bf6285e85964292342467b055c3d4558095abcea5dcc7bf2532178e174e" ], "8231fe8b-4eb1-4b27-a0b1-f77aeea730b7": [ "375a1bf6285e85964292342467b055c3d4558095abcea5dcc7bf2532178e174e" ], "ebffd4c6-db5e-4397-9579-c7f14f605a22": [ "375a1bf6285e85964292342467b055c3d4558095abcea5dcc7bf2532178e174e" ], "38d678ec-02b5-4c23-a51d-dc098dc7d930": [ 
"85fa508670e60b79935ae7d11f981ff814dab38ff9f7a17058bbd7bf814e2a39" ], "284d44f5-ffce-4f9f-af2f-5adfb0841855": [ "85fa508670e60b79935ae7d11f981ff814dab38ff9f7a17058bbd7bf814e2a39" ], "86a86d5c-b68a-42bb-83ff-cf2bbe5d2135": [ "85fa508670e60b79935ae7d11f981ff814dab38ff9f7a17058bbd7bf814e2a39" ], "67a4c3a2-ad70-48df-bcd4-6d6d7fbb1351": [ "85fa508670e60b79935ae7d11f981ff814dab38ff9f7a17058bbd7bf814e2a39" ], "3e2be401-57f0-4ebc-b7d6-92ad2955e39c": [ "85fa508670e60b79935ae7d11f981ff814dab38ff9f7a17058bbd7bf814e2a39" ], "9bcdbb29-24a9-4e32-a636-689fc71e2b42": [ "461b41572a7f92eaaa2db5ea8256e8f0977afac42f70f555a131d5fdfcaa4f9b" ], "ae73c44f-6801-424c-b482-abdab766bade": [ "461b41572a7f92eaaa2db5ea8256e8f0977afac42f70f555a131d5fdfcaa4f9b" ], "f4cb0f3a-61eb-48b0-a792-b82302190b06": [ "461b41572a7f92eaaa2db5ea8256e8f0977afac42f70f555a131d5fdfcaa4f9b" ], "3292ae44-c14a-4d0c-afa1-5433ce835489": [ "461b41572a7f92eaaa2db5ea8256e8f0977afac42f70f555a131d5fdfcaa4f9b" ], "a5c55093-e349-42a6-93b5-561b244bd978": [ "461b41572a7f92eaaa2db5ea8256e8f0977afac42f70f555a131d5fdfcaa4f9b" ], "b4a5fb12-b393-4722-9b65-b99f327399b8": [ "461b41572a7f92eaaa2db5ea8256e8f0977afac42f70f555a131d5fdfcaa4f9b" ], "a7ea1303-3182-42d8-9c96-a11d2ca03718": [ "461b41572a7f92eaaa2db5ea8256e8f0977afac42f70f555a131d5fdfcaa4f9b" ], "56a1cf05-8f15-4d20-a97a-401312e9198e": [ "461b41572a7f92eaaa2db5ea8256e8f0977afac42f70f555a131d5fdfcaa4f9b" ], "0952ee8b-cf01-499e-87d8-a0deeda1395f": [ "461b41572a7f92eaaa2db5ea8256e8f0977afac42f70f555a131d5fdfcaa4f9b" ], "aabdd808-a68d-4eca-a1f8-9e5e6941a646": [ "461b41572a7f92eaaa2db5ea8256e8f0977afac42f70f555a131d5fdfcaa4f9b" ], "62afd36c-6276-419a-be02-a863c8054827": [ "b24bf1b2c3832f0f7d7afc99952a709167373f45b65aac0dc9bc29b2b1f6dca7" ], "e78bcdbc-4813-40f4-a4b0-ce82e36b26dd": [ "b24bf1b2c3832f0f7d7afc99952a709167373f45b65aac0dc9bc29b2b1f6dca7" ], "6db424ee-f80e-46de-af77-0de5eaac67b6": [ "b24bf1b2c3832f0f7d7afc99952a709167373f45b65aac0dc9bc29b2b1f6dca7" ], "df9bf7ab-3963-4723-98f5-14cb7e18e767": [ "b24bf1b2c3832f0f7d7afc99952a709167373f45b65aac0dc9bc29b2b1f6dca7" ], "003c7810-37eb-4897-ab0d-deff4ebef2e6": [ "b24bf1b2c3832f0f7d7afc99952a709167373f45b65aac0dc9bc29b2b1f6dca7" ], "f9a8577a-e18f-43d7-8da3-1d9a48e0a3d8": [ "b24bf1b2c3832f0f7d7afc99952a709167373f45b65aac0dc9bc29b2b1f6dca7" ], "bdfe288c-389e-478f-8b6a-0079a341fd46": [ "b24bf1b2c3832f0f7d7afc99952a709167373f45b65aac0dc9bc29b2b1f6dca7" ], "2deb8212-48a9-4c72-8b1d-2ac16aaed171": [ "b24bf1b2c3832f0f7d7afc99952a709167373f45b65aac0dc9bc29b2b1f6dca7" ], "43be575d-1f6b-4494-9834-5743019cb8c5": [ "b24bf1b2c3832f0f7d7afc99952a709167373f45b65aac0dc9bc29b2b1f6dca7" ], "592aedba-e9ac-45cc-a67f-e544c762cfa7": [ "b24bf1b2c3832f0f7d7afc99952a709167373f45b65aac0dc9bc29b2b1f6dca7" ], "2c3d793a-04ec-465a-992a-f3ccdc280698": [ "56b9500b587e8bf188557f35d5d3743ea2df8c2bdbe84be4fe25ce51f299bcca" ], "ca2228d3-36e8-4c94-b8b0-a60088fa78bd": [ "56b9500b587e8bf188557f35d5d3743ea2df8c2bdbe84be4fe25ce51f299bcca" ], "de0dbe1e-8a76-4c3d-8b94-eb08bee8868a": [ "56b9500b587e8bf188557f35d5d3743ea2df8c2bdbe84be4fe25ce51f299bcca" ], "9c629a45-19e9-4d4b-82e0-676c71a6a9bc": [ "56b9500b587e8bf188557f35d5d3743ea2df8c2bdbe84be4fe25ce51f299bcca" ], "b6abba07-0495-4017-a5f0-89d3c2806ec0": [ "56b9500b587e8bf188557f35d5d3743ea2df8c2bdbe84be4fe25ce51f299bcca" ], "df4a2d89-20c6-45d6-b9ca-5cd91fc0f0b0": [ "56b9500b587e8bf188557f35d5d3743ea2df8c2bdbe84be4fe25ce51f299bcca" ], "0652be9a-9da8-488d-b63a-98064d571336": [ "56b9500b587e8bf188557f35d5d3743ea2df8c2bdbe84be4fe25ce51f299bcca" ], 
"bd1824d7-6a1f-4ae6-bed9-d65758f26d22": [ "56b9500b587e8bf188557f35d5d3743ea2df8c2bdbe84be4fe25ce51f299bcca" ], "60d240a2-1ba9-4c1c-87f8-79fcbd546f5f": [ "56b9500b587e8bf188557f35d5d3743ea2df8c2bdbe84be4fe25ce51f299bcca" ], "7f93f6da-e3d9-406a-a475-f41338823b7a": [ "56b9500b587e8bf188557f35d5d3743ea2df8c2bdbe84be4fe25ce51f299bcca" ], "a31ec0ff-e208-4174-bc05-08c9df3dc975": [ "3b98a6582d3cd433a8e0d36c3acd40ccb64838c97eb454923d0a2c75e2d4bf35" ], "4adda408-364a-4785-8a3c-dcf42b9006e0": [ "3b98a6582d3cd433a8e0d36c3acd40ccb64838c97eb454923d0a2c75e2d4bf35" ], "4fc2c2f0-36f4-407f-93d5-2ec635ec24c3": [ "3b98a6582d3cd433a8e0d36c3acd40ccb64838c97eb454923d0a2c75e2d4bf35" ], "125121a9-4e12-4723-bc6c-8d5cdceb4649": [ "3b98a6582d3cd433a8e0d36c3acd40ccb64838c97eb454923d0a2c75e2d4bf35" ], "b9c1d069-ef7f-4b41-9437-e18165ec5e47": [ "3b98a6582d3cd433a8e0d36c3acd40ccb64838c97eb454923d0a2c75e2d4bf35" ], "62e58574-e5d5-4564-b7e6-38923053c2b1": [ "3b98a6582d3cd433a8e0d36c3acd40ccb64838c97eb454923d0a2c75e2d4bf35" ], "beeca607-0de7-4dbc-a130-5c2f503f96ed": [ "3b98a6582d3cd433a8e0d36c3acd40ccb64838c97eb454923d0a2c75e2d4bf35" ], "d4b4c178-3523-4d32-ac9c-7b4ce4a61b61": [ "3b98a6582d3cd433a8e0d36c3acd40ccb64838c97eb454923d0a2c75e2d4bf35" ], "97d81d59-fa63-468f-9486-7bcdeb5cb9bb": [ "3b98a6582d3cd433a8e0d36c3acd40ccb64838c97eb454923d0a2c75e2d4bf35" ], "eba4cdeb-dd25-4ff0-a0af-1906a3b27ffa": [ "3b98a6582d3cd433a8e0d36c3acd40ccb64838c97eb454923d0a2c75e2d4bf35" ], "78186c11-d875-4315-8ac0-d6137b3c1604": [ "e97bbe3d37bacb34902b4db67351799f1309541d4879e53b97fad08a4417304f" ], "75c28f2b-32fc-4613-b150-ef9a48e59737": [ "e97bbe3d37bacb34902b4db67351799f1309541d4879e53b97fad08a4417304f" ], "ff86ae5f-c134-4950-8c64-6beebfc47c67": [ "e97bbe3d37bacb34902b4db67351799f1309541d4879e53b97fad08a4417304f" ], "c8bd6b2f-83e7-424b-9f4d-f727a981e75b": [ "e97bbe3d37bacb34902b4db67351799f1309541d4879e53b97fad08a4417304f" ], "5ec34627-fe89-4306-a291-41c5f6a4ada1": [ "e97bbe3d37bacb34902b4db67351799f1309541d4879e53b97fad08a4417304f" ], "a7687172-1660-42c6-86bf-ceee43f812fb": [ "e97bbe3d37bacb34902b4db67351799f1309541d4879e53b97fad08a4417304f" ], "c3fed643-389e-437a-adde-d215903c4894": [ "e97bbe3d37bacb34902b4db67351799f1309541d4879e53b97fad08a4417304f" ], "36c85b25-fe24-42bf-a7f0-faed90766ece": [ "ab651375c4bf52b30d0d709c5c1ac7c52e75399b0cdc1f1139c3d54cda15d0f4" ], "4102b74a-a3ec-4c78-b299-4ca6862e1a4d": [ "ab651375c4bf52b30d0d709c5c1ac7c52e75399b0cdc1f1139c3d54cda15d0f4" ], "bd9489fb-9a9f-410e-b6da-092b6d4c78a9": [ "ab651375c4bf52b30d0d709c5c1ac7c52e75399b0cdc1f1139c3d54cda15d0f4" ], "1c31fd29-fb1c-45d0-8822-c9dbed591db6": [ "ab651375c4bf52b30d0d709c5c1ac7c52e75399b0cdc1f1139c3d54cda15d0f4" ], "4639867c-7186-4562-a0b6-222f27c3b028": [ "ab651375c4bf52b30d0d709c5c1ac7c52e75399b0cdc1f1139c3d54cda15d0f4" ], "d2938311-953b-4b13-aa44-d73cb9000d14": [ "ab651375c4bf52b30d0d709c5c1ac7c52e75399b0cdc1f1139c3d54cda15d0f4" ], "19456a47-9c65-441e-ade5-92258438a23c": [ "ab651375c4bf52b30d0d709c5c1ac7c52e75399b0cdc1f1139c3d54cda15d0f4" ], "589cd5ae-11ab-48e3-bd31-45a368e74073": [ "ab651375c4bf52b30d0d709c5c1ac7c52e75399b0cdc1f1139c3d54cda15d0f4" ], "e48842bb-9e09-45f8-a4c7-ddfe3c95155b": [ "ab651375c4bf52b30d0d709c5c1ac7c52e75399b0cdc1f1139c3d54cda15d0f4" ], "9a314b77-c2ad-4785-9001-6704ca2e9e75": [ "ab651375c4bf52b30d0d709c5c1ac7c52e75399b0cdc1f1139c3d54cda15d0f4" ], "c7539f18-7ca4-4472-bd67-1547ef2184ea": [ "1cce366d677f8376bf12534e297b90b03aca077aec33e886104e9e5ea7d51e5f" ], "925a4956-1722-4902-8731-3c24fa644f3e": [ 
"663f0f64773dbe54b88622b68e405f1567fce9a775e5c2b56243bd5ac7c14b64" ], "5da04889-afb4-4605-b604-29d921182b9a": [ "663f0f64773dbe54b88622b68e405f1567fce9a775e5c2b56243bd5ac7c14b64" ], "d90677f7-435b-40f9-845e-ba3aba39aec7": [ "663f0f64773dbe54b88622b68e405f1567fce9a775e5c2b56243bd5ac7c14b64" ], "528b8ac4-c95b-47a4-aeff-a43daf751076": [ "663f0f64773dbe54b88622b68e405f1567fce9a775e5c2b56243bd5ac7c14b64" ], "8f03977a-079e-47d5-859e-43e2686fa941": [ "663f0f64773dbe54b88622b68e405f1567fce9a775e5c2b56243bd5ac7c14b64" ], "f7e35567-daea-422f-be90-ff25b4096474": [ "663f0f64773dbe54b88622b68e405f1567fce9a775e5c2b56243bd5ac7c14b64" ], "e3e5d5bc-69c1-4bb1-a2cb-0325538a20ac": [ "663f0f64773dbe54b88622b68e405f1567fce9a775e5c2b56243bd5ac7c14b64" ], "d7a1dc4e-794d-4a15-b55b-98708e38d1d9": [ "663f0f64773dbe54b88622b68e405f1567fce9a775e5c2b56243bd5ac7c14b64" ], "aa0a445f-9701-4608-8629-746387cd799f": [ "663f0f64773dbe54b88622b68e405f1567fce9a775e5c2b56243bd5ac7c14b64" ], "7fad798d-b345-4647-adb6-626044c9cacb": [ "663f0f64773dbe54b88622b68e405f1567fce9a775e5c2b56243bd5ac7c14b64" ], "de54119d-e04e-434f-8308-681dfbf7f970": [ "abaa50b7db8fbf0dab4c81d1b4d70df618de96dd409f6b6c049ccc90dfb3d349" ], "3f760ce0-6852-4e38-b18d-8b4c51916596": [ "abaa50b7db8fbf0dab4c81d1b4d70df618de96dd409f6b6c049ccc90dfb3d349" ], "d0e7510b-9aa0-4ae5-8f39-250d1e47bb04": [ "abaa50b7db8fbf0dab4c81d1b4d70df618de96dd409f6b6c049ccc90dfb3d349" ], "366e105a-cb20-4e2c-8bbe-0ad25a10d165": [ "abaa50b7db8fbf0dab4c81d1b4d70df618de96dd409f6b6c049ccc90dfb3d349" ], "630266d9-ce8f-467f-92d8-710b00d7c64a": [ "abaa50b7db8fbf0dab4c81d1b4d70df618de96dd409f6b6c049ccc90dfb3d349" ], "27af08f3-0778-4a2b-a2f6-0b05815be4ae": [ "abaa50b7db8fbf0dab4c81d1b4d70df618de96dd409f6b6c049ccc90dfb3d349" ], "9a63a52b-e577-4d90-b5df-fc599f3f7baa": [ "abaa50b7db8fbf0dab4c81d1b4d70df618de96dd409f6b6c049ccc90dfb3d349" ], "19adb054-76c6-4782-9c02-16b95229d3dc": [ "abaa50b7db8fbf0dab4c81d1b4d70df618de96dd409f6b6c049ccc90dfb3d349" ], "3466c3d1-2f58-4fd9-891f-b36a00b78680": [ "abaa50b7db8fbf0dab4c81d1b4d70df618de96dd409f6b6c049ccc90dfb3d349" ], "7accb6b8-bdc0-47db-a50e-380b0285d17c": [ "abaa50b7db8fbf0dab4c81d1b4d70df618de96dd409f6b6c049ccc90dfb3d349" ], "c79289c6-9353-48d8-b851-3981aa15f1b7": [ "2d5974c99c10fd2236a171b4b213b6f9b7dbaa9888d35bd6ebf63135b5540d3d" ], "1fa91681-f03d-42b7-af28-ec803ee9ddb8": [ "2d5974c99c10fd2236a171b4b213b6f9b7dbaa9888d35bd6ebf63135b5540d3d" ], "714cbcce-c26f-4e69-8a78-e0002cdd78be": [ "2d5974c99c10fd2236a171b4b213b6f9b7dbaa9888d35bd6ebf63135b5540d3d" ], "131b9e27-6076-4f98-82d4-265c5fd73ffc": [ "2d5974c99c10fd2236a171b4b213b6f9b7dbaa9888d35bd6ebf63135b5540d3d" ], "d2460fc5-b818-4401-bcde-b0848ce4a474": [ "2d5974c99c10fd2236a171b4b213b6f9b7dbaa9888d35bd6ebf63135b5540d3d" ], "35331e63-f0f6-423b-ba7b-8bcb69ae5533": [ "2d5974c99c10fd2236a171b4b213b6f9b7dbaa9888d35bd6ebf63135b5540d3d" ], "eff40c0a-c7d5-4626-ae5c-18d0750145b5": [ "2d5974c99c10fd2236a171b4b213b6f9b7dbaa9888d35bd6ebf63135b5540d3d" ], "1072ab1c-7fc5-4bb7-b52c-75856acc8cf7": [ "2d5974c99c10fd2236a171b4b213b6f9b7dbaa9888d35bd6ebf63135b5540d3d" ], "5f650314-c5a3-4670-a2e9-75f87283bc07": [ "2d5974c99c10fd2236a171b4b213b6f9b7dbaa9888d35bd6ebf63135b5540d3d" ], "0e949a55-b37a-4101-9bd8-2e87c19af966": [ "2d5974c99c10fd2236a171b4b213b6f9b7dbaa9888d35bd6ebf63135b5540d3d" ], "f2d97dd9-505e-47e1-bcc1-7512cd441049": [ "ec9421d39825feed7e070c094dd2a261a3ce1ac756a615ac49ce62052495b70c" ], "473e07b4-002b-4001-89d5-637fdc0c149a": [ "ec9421d39825feed7e070c094dd2a261a3ce1ac756a615ac49ce62052495b70c" ], 
"8f766a48-11d9-4b65-9717-030ec49d0527": [ "ec9421d39825feed7e070c094dd2a261a3ce1ac756a615ac49ce62052495b70c" ], "08243684-d92c-4299-934e-c9ec21107f79": [ "ec9421d39825feed7e070c094dd2a261a3ce1ac756a615ac49ce62052495b70c" ], "8cdb5518-f9bd-4e39-b7e4-0261c0a79e78": [ "ec9421d39825feed7e070c094dd2a261a3ce1ac756a615ac49ce62052495b70c" ], "1efdbfdc-1a36-43f2-83c4-c556023adcef": [ "ec9421d39825feed7e070c094dd2a261a3ce1ac756a615ac49ce62052495b70c" ], "3e857e60-cb77-446c-8467-e6989e5b6a46": [ "2cdc07b5cd91addeca7dafc0fcac179c1303cece7ebe9f3e0a71701e8cf827d7" ], "77bebcd7-ffae-4989-8947-a93bfca47a9d": [ "2cdc07b5cd91addeca7dafc0fcac179c1303cece7ebe9f3e0a71701e8cf827d7" ], "5b791597-944e-4d9a-a98b-d7762fe0b979": [ "2cdc07b5cd91addeca7dafc0fcac179c1303cece7ebe9f3e0a71701e8cf827d7" ], "15c71053-4157-40a6-b94e-d1924f9119b3": [ "2cdc07b5cd91addeca7dafc0fcac179c1303cece7ebe9f3e0a71701e8cf827d7" ], "dcd38c37-c2b5-4140-824e-144922117305": [ "2cdc07b5cd91addeca7dafc0fcac179c1303cece7ebe9f3e0a71701e8cf827d7" ], "3448bcc6-6e95-4cf4-8c3e-46b4c9902a69": [ "ead0550fc9f66c4a1cff4596f41f5d6c6b374111f2a3b495ab2044ea95f6dada" ], "d0546d50-16c8-4d36-ac5a-4170208e301b": [ "ead0550fc9f66c4a1cff4596f41f5d6c6b374111f2a3b495ab2044ea95f6dada" ], "5df6a4e8-3b87-440c-88ab-7c3ca4242494": [ "ead0550fc9f66c4a1cff4596f41f5d6c6b374111f2a3b495ab2044ea95f6dada" ], "58755a31-bd68-4d01-8543-0e688c8338a1": [ "ead0550fc9f66c4a1cff4596f41f5d6c6b374111f2a3b495ab2044ea95f6dada" ], "0926908e-23e3-4475-a18b-0f607ecc612c": [ "ead0550fc9f66c4a1cff4596f41f5d6c6b374111f2a3b495ab2044ea95f6dada" ], "8d2a6c64-0da1-4f83-8cb3-edf1f28d4858": [ "f707756065d1f788b41fb97fcef81979e1fd241dbfa4034a24bec8e57b648482" ], "1deaa36c-6346-459b-8d9b-7de5bf053ef1": [ "f707756065d1f788b41fb97fcef81979e1fd241dbfa4034a24bec8e57b648482" ], "cb0a27f8-14f6-4f45-b4fc-8abbed8803b9": [ "f707756065d1f788b41fb97fcef81979e1fd241dbfa4034a24bec8e57b648482" ], "94312564-2389-487f-9105-480f1e37c8e9": [ "f707756065d1f788b41fb97fcef81979e1fd241dbfa4034a24bec8e57b648482" ], "b5331a2c-8f81-4cc2-a8e7-e399543243e3": [ "f707756065d1f788b41fb97fcef81979e1fd241dbfa4034a24bec8e57b648482" ], "56e2c925-aad3-4e2a-8b8c-6ad8219f9385": [ "f707756065d1f788b41fb97fcef81979e1fd241dbfa4034a24bec8e57b648482" ], "e6b3248d-59b8-41ba-8b2d-284f4ae8d31f": [ "f707756065d1f788b41fb97fcef81979e1fd241dbfa4034a24bec8e57b648482" ], "0974f8a5-5950-45fa-b688-29bffb1d7b1b": [ "f707756065d1f788b41fb97fcef81979e1fd241dbfa4034a24bec8e57b648482" ], "8093d04f-2c5c-47d9-876f-946dde659659": [ "f707756065d1f788b41fb97fcef81979e1fd241dbfa4034a24bec8e57b648482" ], "6c14f6c3-9cac-41c8-8070-d15366356138": [ "f707756065d1f788b41fb97fcef81979e1fd241dbfa4034a24bec8e57b648482" ], "92267346-14f0-411b-b3a4-26f75a59f171": [ "636f98cf8754c3a4759da02aa11a3f2aa7cdeb848a4980ec99300ece4a2e92fd" ], "133d8aaf-75ad-4309-adc8-939a7c58d6f2": [ "636f98cf8754c3a4759da02aa11a3f2aa7cdeb848a4980ec99300ece4a2e92fd" ], "1e815e07-81c2-4776-8d9b-a67515f730ef": [ "636f98cf8754c3a4759da02aa11a3f2aa7cdeb848a4980ec99300ece4a2e92fd" ], "4ce67274-507c-4048-95d3-1643825715f3": [ "636f98cf8754c3a4759da02aa11a3f2aa7cdeb848a4980ec99300ece4a2e92fd" ], "a1da755f-0429-4172-90ad-5cbcdb9928ff": [ "636f98cf8754c3a4759da02aa11a3f2aa7cdeb848a4980ec99300ece4a2e92fd" ], "18f5994e-1a4c-4b79-92cb-59074a4176c8": [ "636f98cf8754c3a4759da02aa11a3f2aa7cdeb848a4980ec99300ece4a2e92fd" ], "504f89b3-0a29-4da6-a161-7cd706837a3c": [ "636f98cf8754c3a4759da02aa11a3f2aa7cdeb848a4980ec99300ece4a2e92fd" ], "2d4fa91c-99f2-4a34-a9cf-e771666e8655": [ 
"636f98cf8754c3a4759da02aa11a3f2aa7cdeb848a4980ec99300ece4a2e92fd" ], "a686d02d-fbe9-46aa-b260-90927dc8132f": [ "636f98cf8754c3a4759da02aa11a3f2aa7cdeb848a4980ec99300ece4a2e92fd" ], "e405de4a-00d7-407e-bc95-c9f00ebb7975": [ "636f98cf8754c3a4759da02aa11a3f2aa7cdeb848a4980ec99300ece4a2e92fd" ], "a9a9daf2-c76e-49cc-9796-3acd2c98c52b": [ "2f429ec2a936a3dcd37504333de59d17ccd6f07f944ae6f5057aa8d29668662b" ], "f45f3ae8-768d-4d24-87c3-35e1ebe98687": [ "2f429ec2a936a3dcd37504333de59d17ccd6f07f944ae6f5057aa8d29668662b" ], "4b136829-dbdf-4d27-a383-72a1ec65b7ca": [ "2f429ec2a936a3dcd37504333de59d17ccd6f07f944ae6f5057aa8d29668662b" ], "9e642ceb-2763-4987-8f1b-897f31ec0dc6": [ "2f429ec2a936a3dcd37504333de59d17ccd6f07f944ae6f5057aa8d29668662b" ], "a5262825-dd80-458d-87a4-dd943a473bc3": [ "2f429ec2a936a3dcd37504333de59d17ccd6f07f944ae6f5057aa8d29668662b" ], "ec5d7d6c-02c8-45ef-81f3-0263c960001c": [ "9c39187e90964fb2503e9a6f9b6da69b965c9c8b53c57f3c0e4de3593e582bd9" ], "33782dcc-bcfe-4811-ba9d-6e69d913692f": [ "9c39187e90964fb2503e9a6f9b6da69b965c9c8b53c57f3c0e4de3593e582bd9" ], "9bff3e07-a994-4e3c-b4d8-0267a3057f17": [ "9c39187e90964fb2503e9a6f9b6da69b965c9c8b53c57f3c0e4de3593e582bd9" ], "e48a6a3f-94f1-4951-a1c1-0904d49f6a42": [ "9c39187e90964fb2503e9a6f9b6da69b965c9c8b53c57f3c0e4de3593e582bd9" ], "da012f33-6b38-41c8-9ffd-5989c1d74a6b": [ "9c39187e90964fb2503e9a6f9b6da69b965c9c8b53c57f3c0e4de3593e582bd9" ], "42d59926-830d-47a3-a10c-b68d5aadd7ed": [ "9c39187e90964fb2503e9a6f9b6da69b965c9c8b53c57f3c0e4de3593e582bd9" ], "b7825e35-2248-4dd4-8864-4496d9784f76": [ "9c39187e90964fb2503e9a6f9b6da69b965c9c8b53c57f3c0e4de3593e582bd9" ], "43dbcfd7-6a5d-4eb0-b2a1-224ef169f0f1": [ "9c39187e90964fb2503e9a6f9b6da69b965c9c8b53c57f3c0e4de3593e582bd9" ], "3773eea9-ac6f-4235-97df-7c2bc609b024": [ "9c39187e90964fb2503e9a6f9b6da69b965c9c8b53c57f3c0e4de3593e582bd9" ], "b62d6ad5-ea0c-40d7-89e2-02b8f8009dcd": [ "9c39187e90964fb2503e9a6f9b6da69b965c9c8b53c57f3c0e4de3593e582bd9" ], "7634741a-bc14-48d2-b8eb-5e3d185872e3": [ "60efdb2b99a3c7d843c7b470c81f561f033b31f0c5ba8a46fa6ca04c7cc421df" ], "2ab9ed95-623c-423e-a420-71ec8c922460": [ "60efdb2b99a3c7d843c7b470c81f561f033b31f0c5ba8a46fa6ca04c7cc421df" ], "96d50690-2055-4c0f-a1c6-f7ab62e23e73": [ "60efdb2b99a3c7d843c7b470c81f561f033b31f0c5ba8a46fa6ca04c7cc421df" ], "085cf51d-346e-49f7-9209-0c91def31b1b": [ "60efdb2b99a3c7d843c7b470c81f561f033b31f0c5ba8a46fa6ca04c7cc421df" ], "37b89b20-6366-4deb-b926-145927fbd40e": [ "60efdb2b99a3c7d843c7b470c81f561f033b31f0c5ba8a46fa6ca04c7cc421df" ], "6fcc2ade-04c7-4079-a420-35af5dd0097c": [ "60efdb2b99a3c7d843c7b470c81f561f033b31f0c5ba8a46fa6ca04c7cc421df" ], "c6b60a4d-f275-46d8-871f-98cb1b5d55a2": [ "60efdb2b99a3c7d843c7b470c81f561f033b31f0c5ba8a46fa6ca04c7cc421df" ], "b0c5155b-f104-4aad-aa68-8e683d276bc2": [ "60efdb2b99a3c7d843c7b470c81f561f033b31f0c5ba8a46fa6ca04c7cc421df" ], "f3a1ffc3-c234-43ea-849e-dffd22c132bf": [ "60efdb2b99a3c7d843c7b470c81f561f033b31f0c5ba8a46fa6ca04c7cc421df" ], "98210cc8-73b2-4d05-8d87-1bd18fbb9503": [ "60efdb2b99a3c7d843c7b470c81f561f033b31f0c5ba8a46fa6ca04c7cc421df" ], "5869f6a6-45fe-4a1e-bcb5-eb0210d134cb": [ "17eeb609fad179ea13a99a4c7c967a3d4f78ad2a0f1fcb520ca3a74c9fdc3a49" ], "39040e8d-bba3-4016-b522-39b69c4826f2": [ "17eeb609fad179ea13a99a4c7c967a3d4f78ad2a0f1fcb520ca3a74c9fdc3a49" ], "ee384c2c-a021-4f04-b015-769ba7f321fb": [ "17eeb609fad179ea13a99a4c7c967a3d4f78ad2a0f1fcb520ca3a74c9fdc3a49" ], "d5e96307-5843-466d-940a-b2301fa1c25d": [ "17eeb609fad179ea13a99a4c7c967a3d4f78ad2a0f1fcb520ca3a74c9fdc3a49" ], 
"94d7a0d9-e2be-44ac-8225-b0af91664576": [ "17eeb609fad179ea13a99a4c7c967a3d4f78ad2a0f1fcb520ca3a74c9fdc3a49" ], "28973571-3231-4df0-890f-b77aca020732": [ "17eeb609fad179ea13a99a4c7c967a3d4f78ad2a0f1fcb520ca3a74c9fdc3a49" ], "a5c5de91-d9e6-434b-89a3-4cf7fcdc8b9b": [ "17eeb609fad179ea13a99a4c7c967a3d4f78ad2a0f1fcb520ca3a74c9fdc3a49" ], "6d82dc62-484c-420b-9b85-54b9ad078340": [ "17eeb609fad179ea13a99a4c7c967a3d4f78ad2a0f1fcb520ca3a74c9fdc3a49" ], "ae5dda95-3566-4e84-b1d5-edc5c2ed2fec": [ "17eeb609fad179ea13a99a4c7c967a3d4f78ad2a0f1fcb520ca3a74c9fdc3a49" ], "47954fe6-60c8-461b-ac15-53482439d18c": [ "17eeb609fad179ea13a99a4c7c967a3d4f78ad2a0f1fcb520ca3a74c9fdc3a49" ], "85f51464-f90a-468c-accf-82950b2e87ac": [ "a03f36ef8f17234d2c3de94cfb8f7f9a2c8a9ef1e42a0af06b1f150f2eada805" ], "d92943c0-bc17-4998-9b87-c05d0727af9b": [ "a03f36ef8f17234d2c3de94cfb8f7f9a2c8a9ef1e42a0af06b1f150f2eada805" ], "f23c71e2-8abc-411b-ae5c-bee9f4707b3d": [ "a03f36ef8f17234d2c3de94cfb8f7f9a2c8a9ef1e42a0af06b1f150f2eada805" ], "1d41a2e7-666f-493e-a681-ecc0a54770e1": [ "a03f36ef8f17234d2c3de94cfb8f7f9a2c8a9ef1e42a0af06b1f150f2eada805" ], "a5bb38b7-4091-4f45-bda3-7448463746a9": [ "a03f36ef8f17234d2c3de94cfb8f7f9a2c8a9ef1e42a0af06b1f150f2eada805" ], "efbe8ee9-ea68-4563-ab4f-fcb67774ffb6": [ "a03f36ef8f17234d2c3de94cfb8f7f9a2c8a9ef1e42a0af06b1f150f2eada805" ], "6be49186-946b-4ed2-a342-763ba0ae69ba": [ "a03f36ef8f17234d2c3de94cfb8f7f9a2c8a9ef1e42a0af06b1f150f2eada805" ], "3f7b59db-6eab-4d12-a504-b2cf290c76e4": [ "a03f36ef8f17234d2c3de94cfb8f7f9a2c8a9ef1e42a0af06b1f150f2eada805" ], "595b1e2e-face-45eb-b7bd-531d555a789f": [ "a03f36ef8f17234d2c3de94cfb8f7f9a2c8a9ef1e42a0af06b1f150f2eada805" ], "2b17b99b-31d9-4d5d-80d5-7ebe849f5e1a": [ "a03f36ef8f17234d2c3de94cfb8f7f9a2c8a9ef1e42a0af06b1f150f2eada805" ], "acb676fb-e076-48a1-9d45-38d888f87768": [ "39372868ea13500bdbb9e2e4fcee00653c352e0a03e24e0d0a77a1b6e008221c" ], "7d7700c3-d15e-454b-8228-887841bf593a": [ "39372868ea13500bdbb9e2e4fcee00653c352e0a03e24e0d0a77a1b6e008221c" ], "9795a659-f502-4904-96de-7f47a0ff2a44": [ "39372868ea13500bdbb9e2e4fcee00653c352e0a03e24e0d0a77a1b6e008221c" ], "0b501dc3-bf2c-4a8b-9925-4030b703ee5d": [ "39372868ea13500bdbb9e2e4fcee00653c352e0a03e24e0d0a77a1b6e008221c" ], "a0724f11-6e22-4b5f-965b-f5790e963a1d": [ "39372868ea13500bdbb9e2e4fcee00653c352e0a03e24e0d0a77a1b6e008221c" ], "b7b7386d-c26b-496c-93ab-40898a8f9d0a": [ "39372868ea13500bdbb9e2e4fcee00653c352e0a03e24e0d0a77a1b6e008221c" ], "2362adaf-8156-4b09-91a3-69e1008b72a9": [ "39372868ea13500bdbb9e2e4fcee00653c352e0a03e24e0d0a77a1b6e008221c" ], "ba4a6a3c-2c0d-4ebf-94f5-63be5da31dcc": [ "39372868ea13500bdbb9e2e4fcee00653c352e0a03e24e0d0a77a1b6e008221c" ], "95ebdd63-ed74-4f6f-b87a-7ecd9af63fa1": [ "39372868ea13500bdbb9e2e4fcee00653c352e0a03e24e0d0a77a1b6e008221c" ], "78b5054e-0920-4026-828f-fdc1ab212be2": [ "39372868ea13500bdbb9e2e4fcee00653c352e0a03e24e0d0a77a1b6e008221c" ], "ae77025b-4e91-475b-a0e0-77242408a2c5": [ "f798d2742de5deb90469cc4e0d63e17050960d3f94cd1d8004c3a1052dabd303" ], "f1e5d696-206c-4dcb-b80b-65265e1f47be": [ "f798d2742de5deb90469cc4e0d63e17050960d3f94cd1d8004c3a1052dabd303" ], "9c1dc96c-392c-439c-943b-3b9cf2cedddd": [ "f798d2742de5deb90469cc4e0d63e17050960d3f94cd1d8004c3a1052dabd303" ], "470b146c-045f-4dca-84f2-5a8068090106": [ "f798d2742de5deb90469cc4e0d63e17050960d3f94cd1d8004c3a1052dabd303" ], "5bf0b560-c262-49fe-abca-836428bc55f7": [ "f798d2742de5deb90469cc4e0d63e17050960d3f94cd1d8004c3a1052dabd303" ], "680a5913-d7db-43db-9c2f-c3937c86d94b": [ 
"e1b95557b9446b4d7a16a21a8e8c2acfcca5d67c510fe0a515369d030649e222" ], "7cf9e061-f143-4e59-b4c3-e6d1dbf2a67b": [ "e1b95557b9446b4d7a16a21a8e8c2acfcca5d67c510fe0a515369d030649e222" ], "c1a4d9f7-6f4b-4780-a1a0-e1ec6ca2fdf1": [ "e1b95557b9446b4d7a16a21a8e8c2acfcca5d67c510fe0a515369d030649e222" ], "2e7b17e1-9758-4030-ac2c-12a904eb7778": [ "e1b95557b9446b4d7a16a21a8e8c2acfcca5d67c510fe0a515369d030649e222" ], "041e4db3-cd81-456f-bb26-a3d7f3af94af": [ "e1b95557b9446b4d7a16a21a8e8c2acfcca5d67c510fe0a515369d030649e222" ], "73c8b04c-715e-474d-a9b2-ee375203cf6e": [ "e1b95557b9446b4d7a16a21a8e8c2acfcca5d67c510fe0a515369d030649e222" ], "3d5591d3-b25d-4027-82b0-eca2f1f7ecd6": [ "e1b95557b9446b4d7a16a21a8e8c2acfcca5d67c510fe0a515369d030649e222" ], "6fbbd452-b179-4b29-b3d9-56fc6e25bd12": [ "e1b95557b9446b4d7a16a21a8e8c2acfcca5d67c510fe0a515369d030649e222" ], "52a08f8a-0098-41fd-99f9-e91031e26963": [ "0616015c4c1671255a7809472a7ef05d5a698e0ee274e630cf212431b0f43808" ], "4b871b31-43a3-40cf-9054-1333645bd1eb": [ "0616015c4c1671255a7809472a7ef05d5a698e0ee274e630cf212431b0f43808" ], "b34e2821-c76a-44ef-a17c-e52a0e8c531b": [ "0616015c4c1671255a7809472a7ef05d5a698e0ee274e630cf212431b0f43808" ], "7fdcdb19-7340-45d0-8589-b8ee0b8a02ae": [ "0616015c4c1671255a7809472a7ef05d5a698e0ee274e630cf212431b0f43808" ], "bd3f6076-7449-44d9-9d14-9306af40199a": [ "0616015c4c1671255a7809472a7ef05d5a698e0ee274e630cf212431b0f43808" ], "a5c4a0cf-8103-47b4-af5d-9bf7f9e2ef66": [ "0616015c4c1671255a7809472a7ef05d5a698e0ee274e630cf212431b0f43808" ], "2ea1873e-7b39-4852-9947-3f10aacceb70": [ "0616015c4c1671255a7809472a7ef05d5a698e0ee274e630cf212431b0f43808" ], "a4c47dbf-366a-4cb1-9d99-0311ddb0a1cd": [ "0616015c4c1671255a7809472a7ef05d5a698e0ee274e630cf212431b0f43808" ], "3108de15-43e3-46ee-9a61-e3afa333d8bf": [ "0616015c4c1671255a7809472a7ef05d5a698e0ee274e630cf212431b0f43808" ], "be2545cb-5064-42dc-958d-a1c0201c523e": [ "0616015c4c1671255a7809472a7ef05d5a698e0ee274e630cf212431b0f43808" ], "46256a75-fef8-4d0f-baa3-755ad3e2a28c": [ "760f192e2d4e2ef6d28bf514e6ce2283c939d6250becafd6c063450f5a803ba9" ], "a27950ed-384a-46ec-a33c-3db3150783aa": [ "760f192e2d4e2ef6d28bf514e6ce2283c939d6250becafd6c063450f5a803ba9" ], "57263c9d-7694-437b-8000-f9e0dfc45364": [ "760f192e2d4e2ef6d28bf514e6ce2283c939d6250becafd6c063450f5a803ba9" ], "28c0010a-a5d4-4efc-90e5-d580f847ce26": [ "760f192e2d4e2ef6d28bf514e6ce2283c939d6250becafd6c063450f5a803ba9" ], "0e8f5d08-c465-45b4-ab7f-c4e34a1f5e7a": [ "760f192e2d4e2ef6d28bf514e6ce2283c939d6250becafd6c063450f5a803ba9" ], "d0ba6b7b-fdc5-4286-85b3-299bac3dffc3": [ "760f192e2d4e2ef6d28bf514e6ce2283c939d6250becafd6c063450f5a803ba9" ], "ca271d63-1afc-43d1-8cc6-62d158cbc263": [ "760f192e2d4e2ef6d28bf514e6ce2283c939d6250becafd6c063450f5a803ba9" ], "c9010b88-1885-4df7-8c31-053103816b27": [ "760f192e2d4e2ef6d28bf514e6ce2283c939d6250becafd6c063450f5a803ba9" ], "45f924f7-5bcc-4f7d-a01e-db111dd42e4d": [ "760f192e2d4e2ef6d28bf514e6ce2283c939d6250becafd6c063450f5a803ba9" ], "b2b05d15-f0e5-4f1a-934d-0b75fe383b51": [ "760f192e2d4e2ef6d28bf514e6ce2283c939d6250becafd6c063450f5a803ba9" ], "65b4e0af-6beb-47f5-a688-f363baf2bfce": [ "7785d72a4a2f8586d81a9b615dcd0f4cab7cf46f0c5cb64dce75188d7ff9ba80" ], "9cb25ceb-ab15-4d18-8621-2ce2d21a5fce": [ "7785d72a4a2f8586d81a9b615dcd0f4cab7cf46f0c5cb64dce75188d7ff9ba80" ], "65d9d10d-ce06-4ace-90e8-7e1fb6ea1f5f": [ "7785d72a4a2f8586d81a9b615dcd0f4cab7cf46f0c5cb64dce75188d7ff9ba80" ], "87216062-1098-4e29-b6ef-6711aac9682c": [ "7785d72a4a2f8586d81a9b615dcd0f4cab7cf46f0c5cb64dce75188d7ff9ba80" ], 
"d1681027-fc9f-40b0-951e-bbaf27a7cc1e": [ "7785d72a4a2f8586d81a9b615dcd0f4cab7cf46f0c5cb64dce75188d7ff9ba80" ], "d0691ab8-3e44-4efa-9e00-201cb04da401": [ "7785d72a4a2f8586d81a9b615dcd0f4cab7cf46f0c5cb64dce75188d7ff9ba80" ], "c9b46b6b-51ac-4329-b770-621963705da8": [ "7785d72a4a2f8586d81a9b615dcd0f4cab7cf46f0c5cb64dce75188d7ff9ba80" ], "31508774-8e6a-4c45-b8d9-01bdc02c94eb": [ "7785d72a4a2f8586d81a9b615dcd0f4cab7cf46f0c5cb64dce75188d7ff9ba80" ], "f4eca483-270c-4e07-a259-e14c67108e86": [ "7785d72a4a2f8586d81a9b615dcd0f4cab7cf46f0c5cb64dce75188d7ff9ba80" ], "0afcceeb-d08a-46a4-955a-bec1db823bdc": [ "7785d72a4a2f8586d81a9b615dcd0f4cab7cf46f0c5cb64dce75188d7ff9ba80" ], "1277f80c-1161-4529-8768-02f6310854db": [ "6071d5b83d417ed7b2f47ea8dd5a380039620c6850ce1c6b48c07cb45072afc1" ], "c9942a3d-c065-46f1-823c-a2d3a0efb490": [ "6071d5b83d417ed7b2f47ea8dd5a380039620c6850ce1c6b48c07cb45072afc1" ], "e17d0629-cf85-4c91-b550-f4a843b1ba1f": [ "6071d5b83d417ed7b2f47ea8dd5a380039620c6850ce1c6b48c07cb45072afc1" ], "aed59082-a01b-4a61-8e49-c7f0f09cc78f": [ "6071d5b83d417ed7b2f47ea8dd5a380039620c6850ce1c6b48c07cb45072afc1" ], "f3179b86-d958-448d-afe2-8d1dc121b702": [ "6071d5b83d417ed7b2f47ea8dd5a380039620c6850ce1c6b48c07cb45072afc1" ], "d3d1ff9e-681e-4d09-a520-efbd0aca785e": [ "6071d5b83d417ed7b2f47ea8dd5a380039620c6850ce1c6b48c07cb45072afc1" ], "7c8aa3be-934e-4842-9175-7a1d3ddcd01b": [ "6071d5b83d417ed7b2f47ea8dd5a380039620c6850ce1c6b48c07cb45072afc1" ], "d904489f-7f99-4f9b-b778-660e1e6543b1": [ "6071d5b83d417ed7b2f47ea8dd5a380039620c6850ce1c6b48c07cb45072afc1" ], "0d60dd22-e460-42d4-9a4c-ab265f4bafa0": [ "1a6600c1f5e5914041ed40bc246aacf63a7ec90454c4cd6cbb1e72da559c1a0e" ], "da8dbf74-a52e-4902-807f-4b046105b45b": [ "1a6600c1f5e5914041ed40bc246aacf63a7ec90454c4cd6cbb1e72da559c1a0e" ], "7ebdee83-e2e4-4f89-b025-9bae04322f95": [ "1a6600c1f5e5914041ed40bc246aacf63a7ec90454c4cd6cbb1e72da559c1a0e" ], "1eeca274-3cf5-4c01-ade4-66e0abc3fb6d": [ "1a6600c1f5e5914041ed40bc246aacf63a7ec90454c4cd6cbb1e72da559c1a0e" ], "d4e78b30-78b1-471b-a428-512ea31b31d6": [ "1a6600c1f5e5914041ed40bc246aacf63a7ec90454c4cd6cbb1e72da559c1a0e" ], "576ed9ac-572d-4b79-a76f-0f1d4bdcfb5e": [ "1a6600c1f5e5914041ed40bc246aacf63a7ec90454c4cd6cbb1e72da559c1a0e" ], "99f8f7b0-0830-4a3b-84b3-0efc6c399c06": [ "1a6600c1f5e5914041ed40bc246aacf63a7ec90454c4cd6cbb1e72da559c1a0e" ], "17497e26-3a2a-481b-94d5-5a423f3dc8c0": [ "1a6600c1f5e5914041ed40bc246aacf63a7ec90454c4cd6cbb1e72da559c1a0e" ], "f09dd32e-a859-45c5-98e6-a8b535d79228": [ "1a6600c1f5e5914041ed40bc246aacf63a7ec90454c4cd6cbb1e72da559c1a0e" ], "3f06139b-8325-41d8-a613-f7b5b0372b75": [ "1a6600c1f5e5914041ed40bc246aacf63a7ec90454c4cd6cbb1e72da559c1a0e" ], "eb5f0613-ead0-4f96-ab26-cb931b42b132": [ "1a6600c1f5e5914041ed40bc246aacf63a7ec90454c4cd6cbb1e72da559c1a0e" ], "fc224d5a-4c01-42de-b141-331bdba781a4": [ "1a6600c1f5e5914041ed40bc246aacf63a7ec90454c4cd6cbb1e72da559c1a0e" ], "0d4ca200-7e16-48c8-bab3-1eccd0d08a7e": [ "2a289d03d479ea73b91440355275ca7a0aa8e70e2513608bbaef404a93e0a101" ], "8535dcbc-84fc-4902-80ef-3fba21ebc4fc": [ "2a289d03d479ea73b91440355275ca7a0aa8e70e2513608bbaef404a93e0a101" ], "202d877f-735e-41d7-b26c-59f9eb75d3b2": [ "2a289d03d479ea73b91440355275ca7a0aa8e70e2513608bbaef404a93e0a101" ], "3cb05b28-07b3-487b-bfb7-ed9fda029cd8": [ "2a289d03d479ea73b91440355275ca7a0aa8e70e2513608bbaef404a93e0a101" ], "aeab3182-2e98-4ee2-9e99-2c5496b05131": [ "2a289d03d479ea73b91440355275ca7a0aa8e70e2513608bbaef404a93e0a101" ], "1aae2f40-1bf9-4a0e-af74-5b482995f0e7": [ 
"2a289d03d479ea73b91440355275ca7a0aa8e70e2513608bbaef404a93e0a101" ], "51c38c29-c374-49a0-9dee-d61725418c8a": [ "2a289d03d479ea73b91440355275ca7a0aa8e70e2513608bbaef404a93e0a101" ], "f504d403-54d3-45f4-bd67-96f7980f2f2d": [ "2a289d03d479ea73b91440355275ca7a0aa8e70e2513608bbaef404a93e0a101" ], "00aaaf2f-94ea-4550-89e3-1f7d0a612697": [ "2a289d03d479ea73b91440355275ca7a0aa8e70e2513608bbaef404a93e0a101" ], "ed118a91-f123-4fd5-a59d-2bd6a8d1fa25": [ "2a289d03d479ea73b91440355275ca7a0aa8e70e2513608bbaef404a93e0a101" ], "60eb145b-6054-4ac2-ba76-8c84b9bedf52": [ "4831172fc03283befa6bb0f0752b2cf8e5f59a22269e4657ecc09c39a37cfa44" ], "b509fda6-8f12-4205-9ae6-d1f47380c87f": [ "4831172fc03283befa6bb0f0752b2cf8e5f59a22269e4657ecc09c39a37cfa44" ], "4a21b5f5-155e-4e05-9d83-d8ad208a9291": [ "4831172fc03283befa6bb0f0752b2cf8e5f59a22269e4657ecc09c39a37cfa44" ], "95039793-be2a-4a0f-b288-2a0ff2dd3359": [ "4831172fc03283befa6bb0f0752b2cf8e5f59a22269e4657ecc09c39a37cfa44" ], "7f395a44-71f3-47a8-beca-dc31b3be48d3": [ "4831172fc03283befa6bb0f0752b2cf8e5f59a22269e4657ecc09c39a37cfa44" ], "e1051a5d-137c-440c-b909-3541dca24bfb": [ "4831172fc03283befa6bb0f0752b2cf8e5f59a22269e4657ecc09c39a37cfa44" ], "5efde3ea-d12c-4db1-ad4d-d4690126d1eb": [ "4831172fc03283befa6bb0f0752b2cf8e5f59a22269e4657ecc09c39a37cfa44" ], "035be70e-e5be-43ed-b4db-1ed486856ff4": [ "4831172fc03283befa6bb0f0752b2cf8e5f59a22269e4657ecc09c39a37cfa44" ], "211ff822-9a0e-4b7a-ab3d-5495a5a468c5": [ "4831172fc03283befa6bb0f0752b2cf8e5f59a22269e4657ecc09c39a37cfa44" ], "de01d6b0-abfe-4dd8-aef0-bd263512ade6": [ "4831172fc03283befa6bb0f0752b2cf8e5f59a22269e4657ecc09c39a37cfa44" ], "5ea40dcf-1e3f-416a-91bd-3a0fb1302d8e": [ "3aff755a96b44a09548c6b1964e665e354a3344b6e2214098f4bff55ae3413c5" ], "0d2ec195-ba0f-42dd-80bb-e75f8f85fff8": [ "3aff755a96b44a09548c6b1964e665e354a3344b6e2214098f4bff55ae3413c5" ], "cc1742e9-fa0a-47e7-b9f5-3223c601a1c7": [ "3aff755a96b44a09548c6b1964e665e354a3344b6e2214098f4bff55ae3413c5" ], "822200f4-5fbd-437e-8ce8-a2845a129e1a": [ "3aff755a96b44a09548c6b1964e665e354a3344b6e2214098f4bff55ae3413c5" ], "346d4f82-a98e-41a7-a095-e275eeefaa3a": [ "3aff755a96b44a09548c6b1964e665e354a3344b6e2214098f4bff55ae3413c5" ], "12b59791-9a76-4b48-84ed-ba0df6225087": [ "3aff755a96b44a09548c6b1964e665e354a3344b6e2214098f4bff55ae3413c5" ], "c27215a2-f2c4-4832-bbc2-676d5e33e8e4": [ "3aff755a96b44a09548c6b1964e665e354a3344b6e2214098f4bff55ae3413c5" ], "42420951-90e2-429f-a5db-d3eba40b4fc2": [ "3aff755a96b44a09548c6b1964e665e354a3344b6e2214098f4bff55ae3413c5" ], "09e16361-f8d7-4131-896a-ad3afff20993": [ "3aff755a96b44a09548c6b1964e665e354a3344b6e2214098f4bff55ae3413c5" ], "bbf6eac5-1e90-4e8f-9394-82b998e618b6": [ "3aff755a96b44a09548c6b1964e665e354a3344b6e2214098f4bff55ae3413c5" ], "8c5c3769-8cf3-486a-a822-f3e2ca278d25": [ "88f631025eca5d5be38b898c11dd6e3403c3c5ed4d20b3824b8101d9842bd97f" ], "b90cc661-0f3b-483e-a714-21bb0057a1fc": [ "88f631025eca5d5be38b898c11dd6e3403c3c5ed4d20b3824b8101d9842bd97f" ], "be627025-cf38-4080-b0d8-1d564c405026": [ "88f631025eca5d5be38b898c11dd6e3403c3c5ed4d20b3824b8101d9842bd97f" ], "f431befb-9632-4f92-9da6-de670d58bb6a": [ "88f631025eca5d5be38b898c11dd6e3403c3c5ed4d20b3824b8101d9842bd97f" ], "d1ab81d3-d723-4d2b-8763-298a312717a1": [ "88f631025eca5d5be38b898c11dd6e3403c3c5ed4d20b3824b8101d9842bd97f" ], "e9ec5dac-672b-4938-bbc3-2839043c641f": [ "88f631025eca5d5be38b898c11dd6e3403c3c5ed4d20b3824b8101d9842bd97f" ], "1b589f6e-8fe0-4acc-80bb-e4df9a054a52": [ "88f631025eca5d5be38b898c11dd6e3403c3c5ed4d20b3824b8101d9842bd97f" ], 
"672a6f8b-2253-48df-a0cb-794669879513": [ "88f631025eca5d5be38b898c11dd6e3403c3c5ed4d20b3824b8101d9842bd97f" ], "fdb9e1a3-1292-456b-aa3a-4882345b503e": [ "88f631025eca5d5be38b898c11dd6e3403c3c5ed4d20b3824b8101d9842bd97f" ], "db3884ba-204f-4a62-bf67-221fc556fdc3": [ "5a885d39087ac603de1179cb84c9367e9c95e5144ab50d02df66d85855fc0531" ], "f346ba96-54da-4d3c-afdf-63eb4a80c7b9": [ "5a885d39087ac603de1179cb84c9367e9c95e5144ab50d02df66d85855fc0531" ], "050513ad-837c-4c91-863c-bdff7d16d700": [ "5a885d39087ac603de1179cb84c9367e9c95e5144ab50d02df66d85855fc0531" ], "f787cad4-a870-4d09-bcb9-ce96caf11728": [ "5a885d39087ac603de1179cb84c9367e9c95e5144ab50d02df66d85855fc0531" ], "19272547-4336-4dee-af14-2e30ce3b408a": [ "5a885d39087ac603de1179cb84c9367e9c95e5144ab50d02df66d85855fc0531" ], "77335e18-57eb-4ecd-ab96-e592ec26cc1d": [ "5a885d39087ac603de1179cb84c9367e9c95e5144ab50d02df66d85855fc0531" ], "0d1c9643-4a65-477c-8da1-9b02287e97c0": [ "5a885d39087ac603de1179cb84c9367e9c95e5144ab50d02df66d85855fc0531" ], "d788e897-be37-4a85-b94f-ee1549485c1d": [ "5a885d39087ac603de1179cb84c9367e9c95e5144ab50d02df66d85855fc0531" ], "ad6231a5-8168-447c-9b4a-783205e350aa": [ "e35033d47f453f283f231643acf63fbe3271357aeec257539c182fa907950681" ], "16de0b1c-45a9-472f-add7-8e4a30bfb8ef": [ "e35033d47f453f283f231643acf63fbe3271357aeec257539c182fa907950681" ], "86322b87-e3b0-4a63-83c6-33e0f905a6a3": [ "e35033d47f453f283f231643acf63fbe3271357aeec257539c182fa907950681" ], "02b4d0f6-8140-4425-b502-0ae9206f4d20": [ "e35033d47f453f283f231643acf63fbe3271357aeec257539c182fa907950681" ], "d69bead5-7ec8-48d1-bea3-b2422eff2dd8": [ "e35033d47f453f283f231643acf63fbe3271357aeec257539c182fa907950681" ], "b8dc7237-7cde-495e-b5f9-75e5114aa02a": [ "e35033d47f453f283f231643acf63fbe3271357aeec257539c182fa907950681" ], "378e7e38-efc2-40b2-80f8-94ab2d6b756e": [ "e35033d47f453f283f231643acf63fbe3271357aeec257539c182fa907950681" ], "e20c99cc-7a0e-4626-a15e-972de9433472": [ "e35033d47f453f283f231643acf63fbe3271357aeec257539c182fa907950681" ], "5fc63179-5e0f-4f74-a0fe-282456216b11": [ "e35033d47f453f283f231643acf63fbe3271357aeec257539c182fa907950681" ], "65d83987-5509-463e-91e2-d4b9319ff84f": [ "e35033d47f453f283f231643acf63fbe3271357aeec257539c182fa907950681" ], "68c65b4e-6e57-4adc-b2ac-c2735ebd9cbc": [ "d7a598ee33c540081edd5c4c063a63253146868aa40ff9bbf79b6e60b42acd83" ], "a47cf168-58f4-41a6-b0db-22fcee176538": [ "d7a598ee33c540081edd5c4c063a63253146868aa40ff9bbf79b6e60b42acd83" ], "647afdba-c7ba-4835-8992-74cf4ac74dbb": [ "d7a598ee33c540081edd5c4c063a63253146868aa40ff9bbf79b6e60b42acd83" ], "93ba7d74-cc70-4b8c-b334-2a740202338d": [ "d7a598ee33c540081edd5c4c063a63253146868aa40ff9bbf79b6e60b42acd83" ], "061154cd-f61a-46bf-a5b5-33ecd9de1224": [ "d7a598ee33c540081edd5c4c063a63253146868aa40ff9bbf79b6e60b42acd83" ], "5cf54d2f-cb91-46be-8b67-33ea03344b07": [ "d7a598ee33c540081edd5c4c063a63253146868aa40ff9bbf79b6e60b42acd83" ], "5264ed0c-555e-437a-8a3e-1c990a0c8b75": [ "d7a598ee33c540081edd5c4c063a63253146868aa40ff9bbf79b6e60b42acd83" ], "795e6730-68a3-4430-bc92-b7c0c2f8742b": [ "d7a598ee33c540081edd5c4c063a63253146868aa40ff9bbf79b6e60b42acd83" ], "578f031b-2582-4b47-9ee9-27163336659e": [ "d7a598ee33c540081edd5c4c063a63253146868aa40ff9bbf79b6e60b42acd83" ], "bbdbc7c5-bc4e-4f7c-8000-1bfaef5976e7": [ "d7a598ee33c540081edd5c4c063a63253146868aa40ff9bbf79b6e60b42acd83" ], "e5b1c4c8-647e-434f-8707-3404022dd820": [ "5cd7638552ab4fc482002c5d33c8c439c5b08b3f2c49e0a56406cf922b760ae3" ], "3138ab5f-eefa-4d2e-944e-384d5a78b9d2": [ 
"5cd7638552ab4fc482002c5d33c8c439c5b08b3f2c49e0a56406cf922b760ae3" ], "4d579d52-a6d1-4a16-8cf4-83b9707ed5e8": [ "5cd7638552ab4fc482002c5d33c8c439c5b08b3f2c49e0a56406cf922b760ae3" ], "7fd4ce59-6a29-4db2-ae2f-85067b634fb6": [ "5cd7638552ab4fc482002c5d33c8c439c5b08b3f2c49e0a56406cf922b760ae3" ], "c3e7d428-9dc9-4294-b8fe-c869a3f6aa46": [ "5cd7638552ab4fc482002c5d33c8c439c5b08b3f2c49e0a56406cf922b760ae3" ], "9fc91922-a8cb-430d-b9ca-6fdb04c5cd36": [ "5cd7638552ab4fc482002c5d33c8c439c5b08b3f2c49e0a56406cf922b760ae3" ], "53000e9d-9dc4-420a-8b30-a1f85f2a34f0": [ "5cd7638552ab4fc482002c5d33c8c439c5b08b3f2c49e0a56406cf922b760ae3" ], "8d71a839-4e71-49ad-ae19-9c6f0dc433e6": [ "a29added10bfc900adfdc0ff58762fc8d1c9c4c8dfe8685644e15e40f3c05f99" ], "26696beb-e52a-4355-88a1-dc5eddd471a5": [ "a29added10bfc900adfdc0ff58762fc8d1c9c4c8dfe8685644e15e40f3c05f99" ], "8dcc95df-2d57-4f53-aedc-095d8a2ba4c9": [ "a29added10bfc900adfdc0ff58762fc8d1c9c4c8dfe8685644e15e40f3c05f99" ], "d1871f51-da6e-4e43-8c07-7b363c9c44cc": [ "a29added10bfc900adfdc0ff58762fc8d1c9c4c8dfe8685644e15e40f3c05f99" ], "67ad9fe2-2f25-4677-ac1d-4bf65fd68607": [ "a29added10bfc900adfdc0ff58762fc8d1c9c4c8dfe8685644e15e40f3c05f99" ], "ffde2142-0cfe-44bc-8f5a-11455db2ec0f": [ "a29added10bfc900adfdc0ff58762fc8d1c9c4c8dfe8685644e15e40f3c05f99" ], "6ef5711e-e246-4067-bb50-bb9719b70587": [ "a29added10bfc900adfdc0ff58762fc8d1c9c4c8dfe8685644e15e40f3c05f99" ], "6321c716-b2ee-4d3c-b7e0-50462736e922": [ "a29added10bfc900adfdc0ff58762fc8d1c9c4c8dfe8685644e15e40f3c05f99" ], "98be5170-3e52-4dc0-bb4f-1c746cc9b53f": [ "a29added10bfc900adfdc0ff58762fc8d1c9c4c8dfe8685644e15e40f3c05f99" ], "b5f9087c-62ff-46ce-a428-e3dab99370d0": [ "a29added10bfc900adfdc0ff58762fc8d1c9c4c8dfe8685644e15e40f3c05f99" ], "4c4f898b-0550-4551-907f-6ed2d9961a68": [ "f93d692e26a4d3c2ff0160a00974e511af0bee82761a35378646090609b5096a" ], "3219b63b-2ada-4777-bdc7-0efd520ff735": [ "f93d692e26a4d3c2ff0160a00974e511af0bee82761a35378646090609b5096a" ], "b3c4fb2b-ec7b-450e-a828-12d37ecb01b8": [ "f93d692e26a4d3c2ff0160a00974e511af0bee82761a35378646090609b5096a" ], "8a8eeaa5-94d2-4776-94d1-1c7f43022cdb": [ "f93d692e26a4d3c2ff0160a00974e511af0bee82761a35378646090609b5096a" ], "8f8cae8d-0fd0-4975-a631-7dfe363df63a": [ "f93d692e26a4d3c2ff0160a00974e511af0bee82761a35378646090609b5096a" ], "ca3f726e-455b-4503-9326-498a185503f2": [ "f93d692e26a4d3c2ff0160a00974e511af0bee82761a35378646090609b5096a" ], "a79a885b-58a0-4d26-abfe-713dcf00bf0b": [ "f93d692e26a4d3c2ff0160a00974e511af0bee82761a35378646090609b5096a" ], "239d8b04-7356-447b-931c-524d414704e6": [ "f93d692e26a4d3c2ff0160a00974e511af0bee82761a35378646090609b5096a" ], "45b0ed8c-0bc9-409e-a7bf-f0379a23ab79": [ "f93d692e26a4d3c2ff0160a00974e511af0bee82761a35378646090609b5096a" ], "e6964209-fa22-4a39-880f-ffd95cdd48cf": [ "f93d692e26a4d3c2ff0160a00974e511af0bee82761a35378646090609b5096a" ], "0c297862-7a1e-44fb-b6e5-f0bd81716a00": [ "0e0d626360827416b55ec93f1854f38733746ad1133e5bc2da2e9f166800d363" ], "43487332-1d1d-4412-a963-243b0b29c71c": [ "0e0d626360827416b55ec93f1854f38733746ad1133e5bc2da2e9f166800d363" ], "81a9f085-8262-47e0-aeb3-aa69f1e51cf0": [ "0e0d626360827416b55ec93f1854f38733746ad1133e5bc2da2e9f166800d363" ], "dd0ac7fd-5758-4303-a187-aeee2e44b2ac": [ "0e0d626360827416b55ec93f1854f38733746ad1133e5bc2da2e9f166800d363" ], "da9a9dbd-39c6-42c9-a537-772e61f5fbc5": [ "0e0d626360827416b55ec93f1854f38733746ad1133e5bc2da2e9f166800d363" ], "eca3ce12-d0d3-4a98-be43-fa0652e0e598": [ "26e563cb86fc604f89f608f0aea428af8574a9bf1733a4799348c5f82ae33df3" ], 
"e8e3ada0-e32c-45fb-920a-eb97a86be967": [ "26e563cb86fc604f89f608f0aea428af8574a9bf1733a4799348c5f82ae33df3" ], "1143bcbb-80c7-4df9-bb17-186bd88e4097": [ "26e563cb86fc604f89f608f0aea428af8574a9bf1733a4799348c5f82ae33df3" ], "9d0a0ec5-d649-45cc-bfa0-6955b6785bf5": [ "26e563cb86fc604f89f608f0aea428af8574a9bf1733a4799348c5f82ae33df3" ], "68f73351-d85e-460b-9322-c679e53b3da1": [ "26e563cb86fc604f89f608f0aea428af8574a9bf1733a4799348c5f82ae33df3" ], "2ae28963-37f7-4c74-b82e-800c9563a433": [ "26e563cb86fc604f89f608f0aea428af8574a9bf1733a4799348c5f82ae33df3" ], "42e52a1c-ca40-4cf9-8ae9-e4fa35d4fd1a": [ "26e563cb86fc604f89f608f0aea428af8574a9bf1733a4799348c5f82ae33df3" ], "e4289919-2ecb-4877-abd1-55b62c6d8385": [ "26e563cb86fc604f89f608f0aea428af8574a9bf1733a4799348c5f82ae33df3" ], "059c9bfb-27e3-4f6f-90b8-a33f9a5a1a65": [ "56e759ee4c342c6abb3d8d8c1c7b95109933108c2b82332b1770f9beaad3f6b9" ], "b9a17eec-2ebf-426d-8095-bd2c4c3504e1": [ "56e759ee4c342c6abb3d8d8c1c7b95109933108c2b82332b1770f9beaad3f6b9" ], "0480086a-0153-4b2e-8cbf-511d3547b1b7": [ "56e759ee4c342c6abb3d8d8c1c7b95109933108c2b82332b1770f9beaad3f6b9" ], "04e1acbf-8404-4181-a1a5-ab4c683b1711": [ "56e759ee4c342c6abb3d8d8c1c7b95109933108c2b82332b1770f9beaad3f6b9" ], "736f8826-fe05-4a66-b4e8-7d916b2c8645": [ "56e759ee4c342c6abb3d8d8c1c7b95109933108c2b82332b1770f9beaad3f6b9" ], "7cdd0423-f0bb-479e-970a-2e128d8c8474": [ "56e759ee4c342c6abb3d8d8c1c7b95109933108c2b82332b1770f9beaad3f6b9" ], "4bbb2226-30f4-4cff-aa89-7d7f92d14184": [ "56e759ee4c342c6abb3d8d8c1c7b95109933108c2b82332b1770f9beaad3f6b9" ], "ebabafaf-c7b3-493a-8ba8-37d9ff4b81c1": [ "56e759ee4c342c6abb3d8d8c1c7b95109933108c2b82332b1770f9beaad3f6b9" ], "ce5b309d-b03a-4916-8bdc-48342b22c79b": [ "56e759ee4c342c6abb3d8d8c1c7b95109933108c2b82332b1770f9beaad3f6b9" ], "f57b7cec-86db-4747-949e-50bf5f67e2c0": [ "56e759ee4c342c6abb3d8d8c1c7b95109933108c2b82332b1770f9beaad3f6b9" ], "52974393-9046-4cab-a6b1-d9fed62db27a": [ "1349ea9d513a7e5b9f88b2ee83b38f0e641abc53336a3c1d16d1d40c77086fa0" ], "27c02c88-9884-4466-8a50-b8f4e8dd7b8b": [ "1349ea9d513a7e5b9f88b2ee83b38f0e641abc53336a3c1d16d1d40c77086fa0" ], "aea5f85a-1522-4ebe-9944-a80570b013f4": [ "1349ea9d513a7e5b9f88b2ee83b38f0e641abc53336a3c1d16d1d40c77086fa0" ], "2b0169d3-8565-4a50-8cc6-a107ff21d0f5": [ "1349ea9d513a7e5b9f88b2ee83b38f0e641abc53336a3c1d16d1d40c77086fa0" ], "bfd36cc6-f615-4123-8f19-f5fe0a50a39f": [ "1349ea9d513a7e5b9f88b2ee83b38f0e641abc53336a3c1d16d1d40c77086fa0" ], "78e6ace7-f4a6-4d2c-bf19-6ad3c5428614": [ "1349ea9d513a7e5b9f88b2ee83b38f0e641abc53336a3c1d16d1d40c77086fa0" ], "53dc5192-c1fa-462e-b016-5b8af208d355": [ "1349ea9d513a7e5b9f88b2ee83b38f0e641abc53336a3c1d16d1d40c77086fa0" ], "13792f38-a238-42c1-b4b6-3f4d74c5d9af": [ "1349ea9d513a7e5b9f88b2ee83b38f0e641abc53336a3c1d16d1d40c77086fa0" ], "7e940d4d-d815-4b10-92bf-3aef5d27971e": [ "1349ea9d513a7e5b9f88b2ee83b38f0e641abc53336a3c1d16d1d40c77086fa0" ], "f19cd821-7ee1-44b8-924c-f577ecc9aaf9": [ "1349ea9d513a7e5b9f88b2ee83b38f0e641abc53336a3c1d16d1d40c77086fa0" ], "5e3527a0-5bb0-4fe8-a356-d2565c5ebe6c": [ "2a0250238b14774dba4d228d65aa639a690b139438efccceb60c765e468a3a0c" ], "6737fb3a-c5c1-438b-827b-fa8dd3d507e8": [ "2a0250238b14774dba4d228d65aa639a690b139438efccceb60c765e468a3a0c" ], "dace7eef-9d55-4b6b-814f-488374b1b32e": [ "2a0250238b14774dba4d228d65aa639a690b139438efccceb60c765e468a3a0c" ], "30382fd2-4c9e-4cad-8d6d-94cdc8bb3a84": [ "2a0250238b14774dba4d228d65aa639a690b139438efccceb60c765e468a3a0c" ], "504d73a0-5e70-41d0-b09f-f3db18184846": [ 
"f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "6da3c0bc-e5d1-4114-9a71-262610e9237a": [ "f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "b981df39-67a1-4d86-8578-70b1446f1882": [ "f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "952188a9-78bb-4f3a-a1fd-5648ba8e0b89": [ "f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "6986ab0a-147e-4c29-b180-f105bd6c3f7d": [ "f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "b9062f64-45f4-4fea-a659-3653ed200ea1": [ "f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "03ead958-0652-44c6-9fdd-d565d27ee683": [ "f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "fbdc2b91-c100-43f6-a70e-46e20ffd6be9": [ "f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "321f8a79-e392-44d5-8bd3-f1a9f371f5f4": [ "f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "167bd3c3-bf06-4fca-bfe0-63df3ee2e942": [ "f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "127ab66b-5e19-4f2d-97d9-85eaaaa42241": [ "f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "b43a9820-c555-4c4e-9d9c-9e171ffc3815": [ "f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "c6a86a86-1dd9-4dde-83f4-8c0848ea7de6": [ "f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "5ebbd052-af2f-4ea5-9571-5eb5af5d0e3e": [ "f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "3272eecb-9614-49fa-a4aa-7dacee576014": [ "f3241b6351791403dffc989cd90a6f9e3cfc889ca10db61c7ad2a8dc574d6aad" ], "f8ced64a-251d-466f-bbd2-867d62a83de2": [ "9616970be61997dc658720f9c1a161f947ab3eb514b1b7f03e2958608ae921b8" ], "d7850b04-3a99-4902-9e4c-59e8fc08d0c4": [ "9616970be61997dc658720f9c1a161f947ab3eb514b1b7f03e2958608ae921b8" ], "78b6f259-c504-4701-a787-83b2e77aad55": [ "9616970be61997dc658720f9c1a161f947ab3eb514b1b7f03e2958608ae921b8" ], "4f2e24da-9554-4092-bcb0-9d55451ada3e": [ "9616970be61997dc658720f9c1a161f947ab3eb514b1b7f03e2958608ae921b8" ], "76704cb5-2e55-41de-be52-f6e97b69aacf": [ "9616970be61997dc658720f9c1a161f947ab3eb514b1b7f03e2958608ae921b8" ], "5d177782-b609-48f4-ac51-4204fd3abbe7": [ "fb9fdab3042bb38ee9eef9fb28ba206ce937f565dabba093db46254ff19b4bbf" ], "0850066e-8366-4e08-a8f4-cede380860b2": [ "fb9fdab3042bb38ee9eef9fb28ba206ce937f565dabba093db46254ff19b4bbf" ], "d2623872-e0ce-446d-a2db-a2e3f7b45f7e": [ "fb9fdab3042bb38ee9eef9fb28ba206ce937f565dabba093db46254ff19b4bbf" ], "fa961796-5530-4dcf-85ab-5a9339ac994f": [ "fb9fdab3042bb38ee9eef9fb28ba206ce937f565dabba093db46254ff19b4bbf" ], "dd34d499-d92a-4183-8e4f-fc8188f4bd80": [ "fb9fdab3042bb38ee9eef9fb28ba206ce937f565dabba093db46254ff19b4bbf" ], "aaea47fe-2cae-4f99-9445-67f783947df3": [ "fb9fdab3042bb38ee9eef9fb28ba206ce937f565dabba093db46254ff19b4bbf" ], "b865dfb6-51a0-4aaf-8275-aa812f83583d": [ "fb9fdab3042bb38ee9eef9fb28ba206ce937f565dabba093db46254ff19b4bbf" ], "bfb08106-c36a-4289-8567-0e182af73126": [ "28ea3bd2ef701fec4a9a9ce6d6a08b6336853e96095168f5bf418536c4bae297" ], "1c10ac76-c6af-4a1e-85be-4618f6d76da3": [ "28ea3bd2ef701fec4a9a9ce6d6a08b6336853e96095168f5bf418536c4bae297" ], "932b9fe4-2493-4c40-8ec7-f785d18a3fc7": [ "28ea3bd2ef701fec4a9a9ce6d6a08b6336853e96095168f5bf418536c4bae297" ], "327261e9-b333-4c02-8441-3c2229b0efa7": [ "28ea3bd2ef701fec4a9a9ce6d6a08b6336853e96095168f5bf418536c4bae297" ], "bb0cea7f-5ff2-4200-9cc8-9c724f452130": [ "28ea3bd2ef701fec4a9a9ce6d6a08b6336853e96095168f5bf418536c4bae297" ], 
"c20273ed-4cb8-4f8a-8687-235fadd606ae": [ "28ea3bd2ef701fec4a9a9ce6d6a08b6336853e96095168f5bf418536c4bae297" ], "95377f9b-c1ca-49a1-ac20-76f8a8525f9b": [ "28ea3bd2ef701fec4a9a9ce6d6a08b6336853e96095168f5bf418536c4bae297" ], "801f65fc-e57c-46cf-bd0c-b6a7666e1667": [ "cffcf0520b313d6fca6d9eaa67ba7e7e8f5519c10ca0f589304e60d7cb2c4ba6" ], "347743e8-32f2-4b0e-bdbc-2264c24853f4": [ "cffcf0520b313d6fca6d9eaa67ba7e7e8f5519c10ca0f589304e60d7cb2c4ba6" ], "0400b1e4-4546-4afd-b3d8-5b75fcda46e0": [ "cffcf0520b313d6fca6d9eaa67ba7e7e8f5519c10ca0f589304e60d7cb2c4ba6" ], "f171a883-843e-4223-92ca-3b32276ef0df": [ "cffcf0520b313d6fca6d9eaa67ba7e7e8f5519c10ca0f589304e60d7cb2c4ba6" ], "7bf43357-0e9d-4576-85fc-3c6c13431d25": [ "cffcf0520b313d6fca6d9eaa67ba7e7e8f5519c10ca0f589304e60d7cb2c4ba6" ], "f0ba5ee6-19c1-4ff8-94c0-f992e584f722": [ "3556481522a40f4b76b6fd91eccdd281bafe748fe38b36f96e90bc59133ce1cd" ], "12bae8b2-de6c-4475-9cd6-d2ec91f0ad4d": [ "3556481522a40f4b76b6fd91eccdd281bafe748fe38b36f96e90bc59133ce1cd" ], "bc0a56dd-1f50-4ea8-8a1a-44b12093324f": [ "3556481522a40f4b76b6fd91eccdd281bafe748fe38b36f96e90bc59133ce1cd" ], "1e367c83-c086-4796-b55e-da28c9aded4d": [ "3556481522a40f4b76b6fd91eccdd281bafe748fe38b36f96e90bc59133ce1cd" ], "9972f2e6-63c2-401f-865c-8fc38b8645a8": [ "3556481522a40f4b76b6fd91eccdd281bafe748fe38b36f96e90bc59133ce1cd" ], "1b62431a-55fe-48af-91ab-a59caf3fb039": [ "83ee49bd21571b536cb61b6cd61e2f7fc4f3ddbf1e5aa3635dae85468997ff48" ], "60f264f3-76d7-4f21-b5de-d67ddd25c963": [ "83ee49bd21571b536cb61b6cd61e2f7fc4f3ddbf1e5aa3635dae85468997ff48" ], "986d0501-2f32-40f2-828b-3b55b478c88f": [ "83ee49bd21571b536cb61b6cd61e2f7fc4f3ddbf1e5aa3635dae85468997ff48" ], "9d01837c-1e31-4890-8819-51137e39d9ee": [ "83ee49bd21571b536cb61b6cd61e2f7fc4f3ddbf1e5aa3635dae85468997ff48" ], "6968e1e5-0401-4fa1-917c-7557806ec8df": [ "83ee49bd21571b536cb61b6cd61e2f7fc4f3ddbf1e5aa3635dae85468997ff48" ], "9dfd7af2-9a1b-4c29-9500-9f5aa939af80": [ "83ee49bd21571b536cb61b6cd61e2f7fc4f3ddbf1e5aa3635dae85468997ff48" ], "f9a12d88-6f73-46b0-8236-c0cc65390dfb": [ "83ee49bd21571b536cb61b6cd61e2f7fc4f3ddbf1e5aa3635dae85468997ff48" ], "df634ce8-bcfe-4357-9f0c-d26be7c6851d": [ "83ee49bd21571b536cb61b6cd61e2f7fc4f3ddbf1e5aa3635dae85468997ff48" ], "dc9ef659-063f-436b-b8bd-7639c42a1680": [ "f7431c90fc9173a2c7ea066166097632a365b1e761ce6ce669f29c8628110cbb" ], "51a92dec-e4ab-4c49-969f-c59a7978a35d": [ "f7431c90fc9173a2c7ea066166097632a365b1e761ce6ce669f29c8628110cbb" ], "38e625d8-5662-4058-bc66-b73cafd5f2a9": [ "f7431c90fc9173a2c7ea066166097632a365b1e761ce6ce669f29c8628110cbb" ], "05df988d-eb24-41c1-b756-d147a7900a06": [ "f7431c90fc9173a2c7ea066166097632a365b1e761ce6ce669f29c8628110cbb" ], "bd002364-77f0-4b76-8dcc-36831b83d8e3": [ "f7431c90fc9173a2c7ea066166097632a365b1e761ce6ce669f29c8628110cbb" ], "42da00c2-d1b1-421f-ada6-ef1405791143": [ "dc8d9ca20449f03e0d752cf9c75faf84369193d95951bacba9dddbf0d7fe9273" ], "37196b5a-886f-49d8-92d7-597d128a66ff": [ "dc8d9ca20449f03e0d752cf9c75faf84369193d95951bacba9dddbf0d7fe9273" ], "d8a3c287-4a59-43cd-9e99-8c53034d5429": [ "dc8d9ca20449f03e0d752cf9c75faf84369193d95951bacba9dddbf0d7fe9273" ], "50b4f66c-9148-4132-b5a1-58a27dcb6bc4": [ "dc8d9ca20449f03e0d752cf9c75faf84369193d95951bacba9dddbf0d7fe9273" ], "4a7e9f3d-4c2e-4e8f-adfd-aff363efad62": [ "dc8d9ca20449f03e0d752cf9c75faf84369193d95951bacba9dddbf0d7fe9273" ], "d0f7cc10-fa5d-4f09-8277-4c4cf7ffa0d5": [ "dc8d9ca20449f03e0d752cf9c75faf84369193d95951bacba9dddbf0d7fe9273" ], "870163ef-6455-44be-80b0-b3976f82b221": [ 
"dc8d9ca20449f03e0d752cf9c75faf84369193d95951bacba9dddbf0d7fe9273" ], "5b9c1dab-cc25-4dc4-b81e-191cb26e87b4": [ "dc8d9ca20449f03e0d752cf9c75faf84369193d95951bacba9dddbf0d7fe9273" ], "bf38d745-ea32-4fea-9f9e-bdf5fa9e2d3b": [ "dc8d9ca20449f03e0d752cf9c75faf84369193d95951bacba9dddbf0d7fe9273" ], "72119568-aed1-459d-8fcf-431f15b8f004": [ "dc8d9ca20449f03e0d752cf9c75faf84369193d95951bacba9dddbf0d7fe9273" ], "a9b0d4a4-ce61-4f19-9111-aaf99562aff3": [ "82622d900f8c83000f0c75145351d26374833ab456bc70d31b71d1e65daef9f0" ], "0656d0f5-c162-406a-80a1-6bea6f49cd2d": [ "82622d900f8c83000f0c75145351d26374833ab456bc70d31b71d1e65daef9f0" ], "4557dea8-8489-481c-ba14-85cdf643a754": [ "82622d900f8c83000f0c75145351d26374833ab456bc70d31b71d1e65daef9f0" ], "01eaeec8-c7b1-4e3f-9198-71f2f85dbfdb": [ "82622d900f8c83000f0c75145351d26374833ab456bc70d31b71d1e65daef9f0" ], "7ba033da-6e04-41fc-9b42-c02fcd2f7dc1": [ "82622d900f8c83000f0c75145351d26374833ab456bc70d31b71d1e65daef9f0" ], "744bb3a7-2255-41f0-bb7f-c81d2e294eb2": [ "82622d900f8c83000f0c75145351d26374833ab456bc70d31b71d1e65daef9f0" ], "7747cd2c-cfa6-4837-80a8-672dffe2f7f6": [ "82622d900f8c83000f0c75145351d26374833ab456bc70d31b71d1e65daef9f0" ], "aeb7e28f-d58b-499e-b6b1-9f7f55771b28": [ "82622d900f8c83000f0c75145351d26374833ab456bc70d31b71d1e65daef9f0" ], "6f8d257c-5d8b-484a-bcc9-cb520eaf6851": [ "5fee65260a21f9ee0835dab817fba3ef6e79f253ef990d148f963dbdee91cdf3" ], "f1263d30-15a5-43dc-a861-e761303696f0": [ "5fee65260a21f9ee0835dab817fba3ef6e79f253ef990d148f963dbdee91cdf3" ], "e5e1134b-cc1c-473b-842f-de5d5e7b8708": [ "5fee65260a21f9ee0835dab817fba3ef6e79f253ef990d148f963dbdee91cdf3" ], "0e1f3750-ca0a-4f82-a730-8e5287a590d2": [ "5fee65260a21f9ee0835dab817fba3ef6e79f253ef990d148f963dbdee91cdf3" ], "fa12424d-d5a5-4e50-b122-8aa102372d5a": [ "5fee65260a21f9ee0835dab817fba3ef6e79f253ef990d148f963dbdee91cdf3" ], "3671cd0e-a471-4329-b495-f796e7b08ffd": [ "5fee65260a21f9ee0835dab817fba3ef6e79f253ef990d148f963dbdee91cdf3" ], "d2c94032-929d-4cfd-9ae2-f6ba03dbc647": [ "5fee65260a21f9ee0835dab817fba3ef6e79f253ef990d148f963dbdee91cdf3" ], "d6b135e2-d58b-465d-b279-2755788e0754": [ "5fee65260a21f9ee0835dab817fba3ef6e79f253ef990d148f963dbdee91cdf3" ], "e1ea5a36-852d-4621-83e6-7a94ffbda55e": [ "5fee65260a21f9ee0835dab817fba3ef6e79f253ef990d148f963dbdee91cdf3" ], "15f31f5e-dced-4d73-add8-44f1fd133344": [ "5fee65260a21f9ee0835dab817fba3ef6e79f253ef990d148f963dbdee91cdf3" ], "71d43304-19c7-47a0-ab0f-95295b1d5b8a": [ "1cda21ed21749db131dbc066728d54413387dfd2ad7115cd9026182cecdc7003" ], "4ee5e640-8ee3-4c0d-9fa6-38d98cd1361e": [ "1cda21ed21749db131dbc066728d54413387dfd2ad7115cd9026182cecdc7003" ], "1987c57e-ec71-43c1-8379-ef9e89456a7c": [ "1cda21ed21749db131dbc066728d54413387dfd2ad7115cd9026182cecdc7003" ], "939964b7-d72f-444c-8f5d-06a5025eb93a": [ "1cda21ed21749db131dbc066728d54413387dfd2ad7115cd9026182cecdc7003" ], "f016839e-a079-43ae-b1b1-ae394bdc9694": [ "1cda21ed21749db131dbc066728d54413387dfd2ad7115cd9026182cecdc7003" ], "1560b9c2-30fa-4bb3-869a-16d53907050a": [ "1cda21ed21749db131dbc066728d54413387dfd2ad7115cd9026182cecdc7003" ], "f6c5e552-ca88-4453-9577-e185ecf0c55d": [ "1cda21ed21749db131dbc066728d54413387dfd2ad7115cd9026182cecdc7003" ], "72ddc631-9733-4c60-a062-aaa7682d0783": [ "1cda21ed21749db131dbc066728d54413387dfd2ad7115cd9026182cecdc7003" ], "265b0a7b-1b85-4bf4-acce-d0779669c03a": [ "1cda21ed21749db131dbc066728d54413387dfd2ad7115cd9026182cecdc7003" ], "f6d671b1-de4f-4582-90a6-0f45f032e023": [ "1cda21ed21749db131dbc066728d54413387dfd2ad7115cd9026182cecdc7003" ], 
"38ed311e-6eea-4741-852b-2ec3e7ebed9f": [ "e21ae95d09ae67ca4dabf8a75cd3a16f78a96dbdc9b8051418ac6a94f535f48e" ], "66670fc6-1624-404d-a8f6-0f066c638b22": [ "e21ae95d09ae67ca4dabf8a75cd3a16f78a96dbdc9b8051418ac6a94f535f48e" ], "902f6470-9913-40fc-b3fc-5aaba937c2c0": [ "e21ae95d09ae67ca4dabf8a75cd3a16f78a96dbdc9b8051418ac6a94f535f48e" ], "1cc9050d-1ebc-4a8d-9563-8272014405e7": [ "e21ae95d09ae67ca4dabf8a75cd3a16f78a96dbdc9b8051418ac6a94f535f48e" ], "f74db401-9112-409d-adf7-818901c32de3": [ "e21ae95d09ae67ca4dabf8a75cd3a16f78a96dbdc9b8051418ac6a94f535f48e" ], "07887b0e-14dd-4a22-b639-c2158daf194e": [ "e21ae95d09ae67ca4dabf8a75cd3a16f78a96dbdc9b8051418ac6a94f535f48e" ], "0ec711db-2f9c-482a-82f0-803a48c8c9bd": [ "e21ae95d09ae67ca4dabf8a75cd3a16f78a96dbdc9b8051418ac6a94f535f48e" ], "e6173e5b-1fed-4ead-a359-7a80019c0a50": [ "99cfcef446616528546ea91078e562427e097d84b2f37628c9487419f9d88716" ], "a8fd0e10-bdf5-4b1b-9dac-032128906026": [ "99cfcef446616528546ea91078e562427e097d84b2f37628c9487419f9d88716" ], "a311d23f-6115-4537-bfd5-ff6d6b4f17b8": [ "99cfcef446616528546ea91078e562427e097d84b2f37628c9487419f9d88716" ], "cbdaf0e5-f7f8-438b-9a20-ab02086e58f9": [ "99cfcef446616528546ea91078e562427e097d84b2f37628c9487419f9d88716" ], "b7eceb09-32b6-47eb-bdc8-91b05082ce9f": [ "99cfcef446616528546ea91078e562427e097d84b2f37628c9487419f9d88716" ], "0c8a2183-24e8-4015-9b8a-acf557365274": [ "99cfcef446616528546ea91078e562427e097d84b2f37628c9487419f9d88716" ], "26ff775d-8839-40db-b812-3d0165d83042": [ "99cfcef446616528546ea91078e562427e097d84b2f37628c9487419f9d88716" ], "4171cae8-919f-4a63-b27d-a3e32164d5d3": [ "1e76535804dbe9eee7314ea79e14e8f3aadf04dd1d561c411eef8e8e76e177df" ], "2b99d28c-1a48-47b8-9cae-9852aa6ed616": [ "1e76535804dbe9eee7314ea79e14e8f3aadf04dd1d561c411eef8e8e76e177df" ], "b219710d-a6e2-4327-aead-58b0fe7d55d7": [ "1e76535804dbe9eee7314ea79e14e8f3aadf04dd1d561c411eef8e8e76e177df" ], "e3899405-8a4d-4c9b-941b-4a92ded5f0a8": [ "1e76535804dbe9eee7314ea79e14e8f3aadf04dd1d561c411eef8e8e76e177df" ], "f06ac511-50bc-41a6-b92f-63ed9c51005c": [ "1e76535804dbe9eee7314ea79e14e8f3aadf04dd1d561c411eef8e8e76e177df" ], "73dbf6e7-9358-40b6-afea-7e197d566cec": [ "1e76535804dbe9eee7314ea79e14e8f3aadf04dd1d561c411eef8e8e76e177df" ], "a4b703b5-2b95-4fd9-b25e-893c0c5663a6": [ "1e76535804dbe9eee7314ea79e14e8f3aadf04dd1d561c411eef8e8e76e177df" ], "05e4d0db-73db-4316-b4e9-e76bcdfb2fd1": [ "985e22cfddd3066e0bd6c04fe826ffb13b281b7afbc9b762fab5f4a10ec75161" ], "86fc1032-a4f4-4b37-be17-64601bca525b": [ "985e22cfddd3066e0bd6c04fe826ffb13b281b7afbc9b762fab5f4a10ec75161" ], "55136798-d2d7-418e-8dad-19f35d865b8a": [ "985e22cfddd3066e0bd6c04fe826ffb13b281b7afbc9b762fab5f4a10ec75161" ], "5cbe99f6-5ae3-41c8-9541-7faf3f67e4e2": [ "985e22cfddd3066e0bd6c04fe826ffb13b281b7afbc9b762fab5f4a10ec75161" ], "8286f854-5813-4930-abdd-cc890291f54b": [ "985e22cfddd3066e0bd6c04fe826ffb13b281b7afbc9b762fab5f4a10ec75161" ], "c64fb14a-528f-4075-a9d4-ceb7cd7bfc5b": [ "985e22cfddd3066e0bd6c04fe826ffb13b281b7afbc9b762fab5f4a10ec75161" ], "5bf69c05-d72a-4243-b785-1db4ed7082a2": [ "985e22cfddd3066e0bd6c04fe826ffb13b281b7afbc9b762fab5f4a10ec75161" ], "f921e1e8-52d6-4575-8239-664162d87628": [ "985e22cfddd3066e0bd6c04fe826ffb13b281b7afbc9b762fab5f4a10ec75161" ], "9ca8fda2-d349-4b0b-a1ef-6a4b71f672a0": [ "985e22cfddd3066e0bd6c04fe826ffb13b281b7afbc9b762fab5f4a10ec75161" ], "1abc88ee-7e97-47ac-b599-70255b133cc2": [ "985e22cfddd3066e0bd6c04fe826ffb13b281b7afbc9b762fab5f4a10ec75161" ], "d5251308-e729-4f81-9112-97bdc83f5cb6": [ 
"fb68a1e3255fef744120387b55c4fa859fe0e5908bc5906729867c2ad4821217" ], "13ab7603-e4ff-473c-8a76-92316241816e": [ "fb68a1e3255fef744120387b55c4fa859fe0e5908bc5906729867c2ad4821217" ], "7cb61154-df67-4100-9534-6835492f3de1": [ "fb68a1e3255fef744120387b55c4fa859fe0e5908bc5906729867c2ad4821217" ], "62fc62ea-da8a-4430-946a-f6288c994a36": [ "fb68a1e3255fef744120387b55c4fa859fe0e5908bc5906729867c2ad4821217" ], "8e401b08-ae44-4c17-97e9-030a05f835e0": [ "fb68a1e3255fef744120387b55c4fa859fe0e5908bc5906729867c2ad4821217" ], "26da5364-5ab2-47d6-8821-07762f5c0c8d": [ "fb68a1e3255fef744120387b55c4fa859fe0e5908bc5906729867c2ad4821217" ], "c04850fe-6e60-402c-85c7-3a9690b3162d": [ "fb68a1e3255fef744120387b55c4fa859fe0e5908bc5906729867c2ad4821217" ], "74944eee-b12b-4e9f-9afd-5b9e76c2de7c": [ "fb68a1e3255fef744120387b55c4fa859fe0e5908bc5906729867c2ad4821217" ], "8aa86b02-6fd9-43fe-9cff-495f414eb5bb": [ "fb68a1e3255fef744120387b55c4fa859fe0e5908bc5906729867c2ad4821217" ], "2d225f0f-8f52-4f53-8a77-5129ac73c185": [ "fb68a1e3255fef744120387b55c4fa859fe0e5908bc5906729867c2ad4821217" ], "c98aa852-6ad0-4d03-8a28-bd280e359abe": [ "c5c2d7f9960994a3810cf19a7b98f28516c020fb2ac3b52d624aaf09872bb6b7" ], "34fe0f65-33d0-498b-8233-2c5e077cd2ac": [ "c5c2d7f9960994a3810cf19a7b98f28516c020fb2ac3b52d624aaf09872bb6b7" ], "880bf74b-f8a9-4133-b27a-7cd0dd1029f9": [ "c5c2d7f9960994a3810cf19a7b98f28516c020fb2ac3b52d624aaf09872bb6b7" ], "25dd3f61-3894-45e6-9d26-c861fad6c921": [ "c5c2d7f9960994a3810cf19a7b98f28516c020fb2ac3b52d624aaf09872bb6b7" ], "a923e376-e21c-49d5-8fa1-46e8bc3ce48b": [ "c5c2d7f9960994a3810cf19a7b98f28516c020fb2ac3b52d624aaf09872bb6b7" ], "3e5ac65d-34a4-4b28-9d23-b9186d1df3ed": [ "c5c2d7f9960994a3810cf19a7b98f28516c020fb2ac3b52d624aaf09872bb6b7" ], "e1d76ee3-e307-4f43-ba85-53d18eaf9344": [ "c5c2d7f9960994a3810cf19a7b98f28516c020fb2ac3b52d624aaf09872bb6b7" ], "0eebd526-b4ad-458e-9d26-1641a4340346": [ "c5c2d7f9960994a3810cf19a7b98f28516c020fb2ac3b52d624aaf09872bb6b7" ], "5c08b37a-6bc3-41fe-8fa7-00549c0fa8c0": [ "c5c2d7f9960994a3810cf19a7b98f28516c020fb2ac3b52d624aaf09872bb6b7" ], "89cf7fdf-2493-4ae6-8bc5-226a01b305ca": [ "c5c2d7f9960994a3810cf19a7b98f28516c020fb2ac3b52d624aaf09872bb6b7" ], "910c428e-5d1d-44ca-96ba-dca1eb5080b9": [ "4f72cdfed884a4e385c723b952c4616a7fb2dbbf6f7482828b33161735e77331" ], "a5f8afdc-a0d6-48eb-addc-38d6f13776f9": [ "4f72cdfed884a4e385c723b952c4616a7fb2dbbf6f7482828b33161735e77331" ], "862cddc9-5d17-49fd-aa9e-4636b413875f": [ "4f72cdfed884a4e385c723b952c4616a7fb2dbbf6f7482828b33161735e77331" ], "574b710e-8c98-46b9-a7e7-865e97ba0ff9": [ "4f72cdfed884a4e385c723b952c4616a7fb2dbbf6f7482828b33161735e77331" ], "e2ef37a0-254f-407f-831f-4c4ba6843043": [ "4f72cdfed884a4e385c723b952c4616a7fb2dbbf6f7482828b33161735e77331" ], "201e187c-8548-415c-b703-d12c58c744fc": [ "ef5a3fdf1f8b689057e76000b4e2d10a98aaf2490066141277b0fd5db350bca9" ], "69f94354-6a65-464f-b7cc-dcd2c5f4a9fd": [ "ef5a3fdf1f8b689057e76000b4e2d10a98aaf2490066141277b0fd5db350bca9" ], "711bb0fe-6650-4175-9555-961263e5b9f1": [ "ef5a3fdf1f8b689057e76000b4e2d10a98aaf2490066141277b0fd5db350bca9" ], "742ba2c4-d64a-4f71-a748-954e6130c15b": [ "ef5a3fdf1f8b689057e76000b4e2d10a98aaf2490066141277b0fd5db350bca9" ], "4afa4c47-bd0b-4754-aef0-0effc7f9dfc8": [ "ef5a3fdf1f8b689057e76000b4e2d10a98aaf2490066141277b0fd5db350bca9" ], "ce782026-e90b-4bc1-abbf-74b73688dadb": [ "768b6fd49ab75703c694a6281d5b276114bdb094d335a9744d54685e6d5a7de8" ], "891d4c09-3c4e-4aaf-a8f9-d6e5dbba53b5": [ "768b6fd49ab75703c694a6281d5b276114bdb094d335a9744d54685e6d5a7de8" ], 
"d2d226a4-88a7-4e62-b576-873f626b6a24": [ "768b6fd49ab75703c694a6281d5b276114bdb094d335a9744d54685e6d5a7de8" ], "f69b28f9-b274-470d-9768-94cd736c3d6f": [ "768b6fd49ab75703c694a6281d5b276114bdb094d335a9744d54685e6d5a7de8" ], "6b1f7afc-6f12-4bbd-b073-b396fdb61c12": [ "768b6fd49ab75703c694a6281d5b276114bdb094d335a9744d54685e6d5a7de8" ], "dbb94892-54e2-4c7e-bbc4-bc6d80970610": [ "768b6fd49ab75703c694a6281d5b276114bdb094d335a9744d54685e6d5a7de8" ], "c430488f-47c9-4115-b400-d8a4ba5485f3": [ "1fd92d3abd050b65219d4ec0f87dc46ef657502096f3bc20daf012e5867e4755" ], "3f351db5-654a-4f0a-b20e-c1b4df0e990d": [ "1fd92d3abd050b65219d4ec0f87dc46ef657502096f3bc20daf012e5867e4755" ], "7f16aeb5-5343-4d36-a90e-ca533170464c": [ "1fd92d3abd050b65219d4ec0f87dc46ef657502096f3bc20daf012e5867e4755" ], "0101ad8a-4afe-4ad0-aa63-06a2418609ac": [ "1fd92d3abd050b65219d4ec0f87dc46ef657502096f3bc20daf012e5867e4755" ], "98fe3cee-cd13-4b3a-963d-b9c159f02b0d": [ "1fd92d3abd050b65219d4ec0f87dc46ef657502096f3bc20daf012e5867e4755" ], "a06b2bfc-98d9-4a37-ad2a-f33746878c2e": [ "1fd92d3abd050b65219d4ec0f87dc46ef657502096f3bc20daf012e5867e4755" ], "b1826a0a-ea61-4e15-89f1-c8797d6ec651": [ "1fd92d3abd050b65219d4ec0f87dc46ef657502096f3bc20daf012e5867e4755" ], "c3e049f4-e7c6-4d4d-8ed7-5983f61ceebe": [ "4f53f92bee39a35bd0d6b4b8f81a7d596c5a6d109338bcae2fa0f21b3237f73a" ], "b12b2ce2-7512-47b1-8610-080eb022c124": [ "4f53f92bee39a35bd0d6b4b8f81a7d596c5a6d109338bcae2fa0f21b3237f73a" ], "a3e0a869-c884-46a9-a930-28a68700c77a": [ "4f53f92bee39a35bd0d6b4b8f81a7d596c5a6d109338bcae2fa0f21b3237f73a" ], "cf3fc1b3-4de0-45a1-97a5-939fff8640e0": [ "4f53f92bee39a35bd0d6b4b8f81a7d596c5a6d109338bcae2fa0f21b3237f73a" ], "d347cf20-2a84-4839-8587-8b9941a13bbb": [ "4f53f92bee39a35bd0d6b4b8f81a7d596c5a6d109338bcae2fa0f21b3237f73a" ], "f8a7c459-17b2-4f98-ab1f-7d36cd92e387": [ "4f53f92bee39a35bd0d6b4b8f81a7d596c5a6d109338bcae2fa0f21b3237f73a" ], "8b18d12b-8c27-426f-9994-3b54e4f22de2": [ "4f53f92bee39a35bd0d6b4b8f81a7d596c5a6d109338bcae2fa0f21b3237f73a" ], "293a1876-cbfd-4c4e-a15f-d8b805f3e490": [ "4f53f92bee39a35bd0d6b4b8f81a7d596c5a6d109338bcae2fa0f21b3237f73a" ], "0965308b-9224-4925-9313-9069247e1610": [ "c48c271334c619c6f14c1c84025caf638cb3b61b898bdd6c6be317a698ea4db7" ], "6a5f46a9-0c9c-44dc-bec8-d026f528bbed": [ "c48c271334c619c6f14c1c84025caf638cb3b61b898bdd6c6be317a698ea4db7" ], "01625788-8e04-426e-b434-7438dfd1b3e1": [ "c48c271334c619c6f14c1c84025caf638cb3b61b898bdd6c6be317a698ea4db7" ], "65022071-d30a-4c78-8efa-6cc722c181b6": [ "c48c271334c619c6f14c1c84025caf638cb3b61b898bdd6c6be317a698ea4db7" ], "9ce0385a-c4b2-4962-9552-1904378f9ba5": [ "c48c271334c619c6f14c1c84025caf638cb3b61b898bdd6c6be317a698ea4db7" ], "e8e90dd0-ef8a-440e-b5ba-bd2e139985cf": [ "c48c271334c619c6f14c1c84025caf638cb3b61b898bdd6c6be317a698ea4db7" ], "1394007b-ad81-4d00-8659-b06f0eda8acc": [ "c48c271334c619c6f14c1c84025caf638cb3b61b898bdd6c6be317a698ea4db7" ], "0073b091-7985-4a51-9dc4-7debc0cc95cd": [ "c48c271334c619c6f14c1c84025caf638cb3b61b898bdd6c6be317a698ea4db7" ] }, "mode": "text" }