Dataset schema: id (string, 14-16 chars), text (string, 31-2.73k chars), metadata (dict)
e4c8db52ee25-627
Affirmative.\r\n \r\n PATRICE\r\n Have you handed in your Resignation\r\n as a Undercover Detective for The\r\n Colorado Springs Police Department?\r\n \r\n RON STALLWORTH\r\n Negative. Truth be told I\'ve always\r\n wanted to be a Cop...and I\'m still\r\n for The
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/imsdb.html" }
e4c8db52ee25-628
for The Liberation for My People.\r\n \r\n PATRICE\r\n My Conscience won\'t let me Sleep with\r\n The Enemy.\r\n \r\n RON STALLWORTH\r\n Enemy? I\'m a Black Man that saved\r\n your life.\r\n \r\n
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/imsdb.html" }
e4c8db52ee25-629
PATRICE\r\n You\'re absolutely right, and I Thank\r\n you for it.\r\n \r\n Patrice Kisses Ron on the cheek. Good Bye. WE HEAR a KNOCK on\r\n Ron\'s DOOR. Ron, who is startled, slowly rises. We HEAR\r\n another KNOCK.\r\n \r\n QUICK FLASHES - of a an OLD TIME KLAN RALLY. Ron moves\r\n quietly to pull out his SERVICE REVOLVER from the COUNTER\r\n DRAWER. WE HEAR ANOTHER KNOCK on the DOOR. Patrice stands\r\n behind
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/imsdb.html" }
e4c8db52ee25-630
Patrice stands\r\n behind him.\r\n \r\n QUICK FLASHES - BLACK BODY HANGING FROM A TREE (STRANGE\r\n FRUIT) Ron slowly moves to the DOOR. Ron has his SERVICE\r\n REVOLVER up and aimed ready to fire. Ron swings open the\r\n DOOR.\r\n ANGLE - HALLWAY\r\n \r\n CU - RON\'S POV\r\n \r\n WE TRACK DOWN THE EMPTY HALLWAY PANNING OUT THE WINDOW.\r\n \r\n CLOSE - RON AND PATRICE\r\n \r\n Looking in
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/imsdb.html" }
e4c8db52ee25-631
\r\n Looking in the distance: The Rolling Hills surrounding The\r\n Neighborhood lead towards Pike\'s Peak, which sits on the\r\n horizon like a King on A Throne.\r\n \r\n WE SEE: Something Burning.\r\n \r\n CLOSER-- WE SEE a CROSS, its Flames dancing, sending embers\r\n into The BLACK, Colorado Sky.\r\n OMITTED\r\n \r\n EXT. UVA CAMPUS - NIGHT\r\n \r\n WE SEE FOOTAGE of NEO-NAZIS, ALT RIGHT, THE KLAN, NEO-\r\n
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/imsdb.html" }
e4c8db52ee25-632
ALT RIGHT, THE KLAN, NEO-\r\n CONFEDERATES AND WHITE NATIONALISTS MARCHING, HOLDING UP\r\n THEIR TIKI TORCHES, CHANTING.\r\n \r\n AMERICAN TERRORISTS\r\n YOU WILL NOT REPLACE US!!!\r\n JEWS WILL NOT REPLACE US!!!\r\n BLOOD AND SOIL!!!\r\n \r\n CUT TO BLACK.\r\n \r\n
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/imsdb.html" }
e4c8db52ee25-633
\r\n FINI.\r\n\r\n\r\n\n\n\n\nBlacKkKlansman\nWriters: Charlie Wachtel, David Rabinowitz, Kevin Willmott, Spike Lee\nGenres: Crime, Drama\n', lookup_str='', metadata={'source': 'https://imsdb.com/scripts/BlacKkKlansman.html'}, lookup_index=0)]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/imsdb.html" }
d2155c8e3156-0
URL# This covers how to load HTML documents from a list of URLs into a document format that we can use downstream. from langchain.document_loaders import UnstructuredURLLoader urls = [ "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023", "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023" ] loader = UnstructuredURLLoader(urls=urls) data = loader.load() Selenium URL Loader# This covers how to load HTML documents from a list of URLs using the SeleniumURLLoader. Using Selenium allows us to load pages that require JavaScript to render. Setup# To use the SeleniumURLLoader, you will need to install selenium and unstructured. from langchain.document_loaders import SeleniumURLLoader urls = [ "https://www.youtube.com/watch?v=dQw4w9WgXcQ", "https://goo.gl/maps/NDSHwePEyaHMFGwh8" ] loader = SeleniumURLLoader(urls=urls) data = loader.load()
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/url.html" }
7f76b1acdb19-0
PDF# This covers how to load PDFs into a document format that we can use downstream. Using PyPDF# Load a PDF using pypdf into an array of documents, where each document contains the page content and metadata with the page number. from langchain.document_loaders import PyPDFLoader loader = PyPDFLoader("example_data/layout-parser-paper.pdf") pages = loader.load_and_split() pages[0]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-1
Document(page_content='LayoutParser : A Uni\x0ced Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1( \x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1Allen Institute for AI\nshannons@allenai.org\n2Brown University\nruochen zhang@brown.edu\n3Harvard University\nfmelissadell,jacob carlson g@fas.harvard.edu\n4University of Washington\nbcgl@cs.washington.edu\n5University of Waterloo\nw422li@uwaterloo.ca\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model con\x0cgurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\ne\x0borts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-2
processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser , an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io .\nKeywords: Document Image Analysis ·Deep Learning ·Layout Analysis\n·Character Recognition ·Open Source library ·Toolkit.\n1 Introduction\nDeep Learning(DL)-based approaches are the state-of-the-art for
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-3
Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classi\x0ccation [ 11,arXiv:2103.15348v2 [cs.CV] 21 Jun 2021', lookup_str='', metadata={'source': 'example_data/layout-parser-paper.pdf', 'page': '0'}, lookup_index=0)
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-4
An advantage of this approach is that documents can be retrieved with page numbers. from langchain.vectorstores import FAISS from langchain.embeddings.openai import OpenAIEmbeddings faiss_index = FAISS.from_documents(pages, OpenAIEmbeddings()) docs = faiss_index.similarity_search("How will the community be engaged?", k=2) for doc in docs: print(str(doc.metadata["page"]) + ":", doc.page_content) 9: 10 Z. Shen et al. Fig. 4: Illustration of (a) the original historical Japanese document with layout detection results and (b) a recreated version of the document image that achieves much better character recognition recall. The reorganization algorithm rearranges the tokens based on the their detected bounding boxes given a maximum allowed height. 4LayoutParser Community Platform Another focus of LayoutParser is promoting the reusability of layout detection models and full digitization pipelines. Similar to many existing deep learning libraries, LayoutParser comes with a community model hub for distributing layout models. End-users can upload their self-trained models to the model hub, and these models can be loaded into a similar interface as the currently available LayoutParser pre-trained models. For example, the model trained on the News Navigator dataset [17] has been incorporated in the model hub. Beyond DL models, LayoutParser also promotes the sharing of entire doc- ument digitization pipelines. For example, sometimes the pipeline requires the combination of multiple DL models to achieve better accuracy. Currently, pipelines are mainly described in academic papers and implementations are often not pub- licly available. To this end, the LayoutParser community platform also enables the sharing of layout pipelines to promote the discussion and reuse of techniques. For each shared pipeline, it has a dedicated project page, with links to the source
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-5
For each shared pipeline, it has a dedicated project page, with links to the source code, documentation, and an outline of the approaches. A discussion panel is provided for exchanging ideas. Combined with the core LayoutParser library, users can easily build reusable components based on the shared pipelines and apply them to solve their unique problems. 5 Use Cases The core objective of LayoutParser is to make it easier to create both large-scale and light-weight document digitization pipelines. Large-scale document processing 3: 4 Z. Shen et al. Efficient Data AnnotationC u s t o m i z e d M o d e l T r a i n i n gModel Cust omizationDI A Model HubDI A Pipeline SharingCommunity PlatformLa y out Detection ModelsDocument Images T h e C o r e L a y o u t P a r s e r L i b r a r yOCR ModuleSt or age & VisualizationLa y out Data Structur e Fig. 1: The overall architecture of LayoutParser . For an input document image, the core LayoutParser library provides a set of o-the-shelf tools for layout detection, OCR, visualization, and storage, backed by a carefully designed layout data structure. LayoutParser also supports high level customization via ecient layout annotation and model training functions. These improve model accuracy on the target samples. The community platform enables the easy sharing of DIA models and whole digitization pipelines to promote reusability and reproducibility. A collection of detailed documentation, tutorials and exemplar projects make LayoutParser easy to learn and use. AllenNLP [ 8] and transformers [ 34] have provided the community with complete DL-based support for developing and deploying models for general computer vision and natural language processing problems. LayoutParser , on the other
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-6
vision and natural language processing problems. LayoutParser , on the other hand, specializes specically in DIA tasks. LayoutParser is also equipped with a community platform inspired by established model hubs such as Torch Hub [23] andTensorFlow Hub [1]. It enables the sharing of pretrained models as well as full document processing pipelines that are unique to DIA tasks. There have been a variety of document data collections to facilitate the development of DL models. Some examples include PRImA [ 3](magazine layouts), PubLayNet [ 38](academic paper layouts), Table Bank [ 18](tables in academic papers), Newspaper Navigator Dataset [ 16,17](newspaper gure layouts) and HJDataset [31](historical Japanese document layouts). A spectrum of models trained on these datasets are currently available in the LayoutParser model zoo to support dierent use cases. 3 The Core LayoutParser Library At the core of LayoutParser is an o-the-shelf toolkit that streamlines DL- based document image analysis. Five components support a simple interface with comprehensive functionalities: 1) The layout detection models enable using pre-trained or self-trained DL models for layout detection with just four lines of code. 2) The detected layout information is stored in carefully engineered Using Unstructured# from langchain.document_loaders import UnstructuredPDFLoader loader = UnstructuredPDFLoader("example_data/layout-parser-paper.pdf") data = loader.load() Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredPDFLoader("example_data/layout-parser-paper.pdf", mode="elements") data = loader.load() data[0]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-7
Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 (�), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\nshannons@allenai.org\n2 Brown University\nruochen zhang@brown.edu\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\nbcgl@cs.washington.edu\n5 University of Waterloo\nw422li@uwaterloo.ca\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-8
processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\n· Character Recognition · Open Source library · Toolkit.\n1\nIntroduction\nDeep Learning(DL)-based approaches are the state-of-the-art
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-9
Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [11,\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021\n', lookup_str='', metadata={'file_path': 'example_data/layout-parser-paper.pdf', 'page_number': 1, 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'creator': 'LaTeX with hyperref', 'producer': 'pdfTeX-1.40.21', 'creationDate': 'D:20210622012710Z', 'modDate': 'D:20210622012710Z', 'trapped': '', 'encryption': None}, lookup_index=0)
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-10
Fetching remote PDFs using Unstructured# This covers how to load online PDFs into a document format that we can use downstream. This can be used for various online PDF sites such as https://open.umn.edu/opentextbooks/textbooks/ and https://arxiv.org/archive/ Note: all other PDF loaders can also be used to fetch remote PDFs, but OnlinePDFLoader is a legacy function that works specifically with UnstructuredPDFLoader. from langchain.document_loaders import OnlinePDFLoader loader = OnlinePDFLoader("https://arxiv.org/pdf/2302.03803.pdf") data = loader.load() print(data)
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-11
[Document(page_content='A WEAK ( k, k ) -LEFSCHETZ THEOREM FOR PROJECTIVE TORIC ORBIFOLDS\n\nWilliam D. Montoya\n\nInstituto de Matem´atica, Estat´ıstica e Computa¸c˜ao Cient´ıfica,\n\nIn [3] we proved that, under suitable conditions, on a very general codimension s quasi- smooth intersection subvariety X in a projective toric orbifold P d Σ with d + s = 2 ( k + 1 ) the Hodge conjecture holds, that is, every ( p, p ) -cohomology class, under the Poincar´e duality is a rational linear combination of fundamental classes of algebraic subvarieties of X . The proof of the above-mentioned result relies, for p ≠ d + 1 − s , on a Lefschetz\n\nKeywords: (1,1)- Lefschetz theorem, Hodge conjecture, toric varieties, complete intersection Email: wmontoya@ime.unicamp.br\n\ntheorem ([7]) and the Hard Lefschetz theorem for projective orbifolds ([11]). When p =
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-12
theorem for projective orbifolds ([11]). When p = d + 1 − s the proof relies on the Cayley trick, a trick which associates to X a quasi-smooth hypersurface Y in a projective vector bundle, and the Cayley Proposition (4.3) which gives an isomorphism of some primitive cohomologies (4.2) of X and Y . The Cayley trick, following the philosophy of Mavlyutov in [7], reduces results known for quasi-smooth hypersurfaces to quasi-smooth intersection subvarieties. The idea in this paper goes the other way around, we translate some results for quasi-smooth intersection subvarieties to\n\nAcknowledgement. I thank Prof. Ugo Bruzzo and Tiago Fonseca for useful discus- sions. I also acknowledge support from FAPESP postdoctoral grant No. 2019/23499-7.\n\nLet M be a free abelian group of rank d , let N = Hom ( M, Z ) , and N R = N ⊗ Z R .\n\nif there exist k
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-13
N ⊗ Z R .\n\nif there exist k linearly independent primitive elements e\n\n, . . . , e k ∈ N such that σ = { µ\n\ne\n\n+ ⋯ + µ k e k } . • The generators e i are integral if for every i and any nonnegative rational number µ the product µe i is in N only if µ is an integer. • Given two rational simplicial cones σ , σ ′ one says that σ ′ is a face of σ ( σ ′ < σ ) if the set of integral generators of σ ′ is a subset of the set of integral generators of σ . • A finite set Σ = { σ\n\n, . . . , σ t } of rational simplicial cones is called a rational simplicial complete d -dimensional fan if:\n\nall faces of cones in Σ are in Σ ;\n\nif σ, σ ′ ∈ Σ then σ ∩ σ ′ < σ and σ ∩ σ ′ < σ ′
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-14
< σ and σ ∩ σ ′ < σ ′ ;\n\nN R = σ\n\n∪ ⋅ ⋅ ⋅ ∪ σ t .\n\nA rational simplicial complete d -dimensional fan Σ defines a d -dimensional toric variety P d Σ having only orbifold singularities which we assume to be projective. Moreover, T ∶ = N ⊗ Z C ∗ ≃ ( C ∗ ) d is the torus action on P d Σ . We denote by Σ ( i ) the i -dimensional cones\n\nFor a cone σ ∈ Σ, ˆ σ is the set of 1-dimensional cone in Σ that are not contained in σ\n\nand x ˆ σ ∶ = ∏ ρ ∈ ˆ σ x ρ is the associated monomial in S .\n\nDefinition 2.2. The irrelevant ideal of P d Σ is the monomial ideal B Σ ∶ =< x ˆ σ ∣ σ ∈ Σ > and the zero locus Z ( Σ ) ∶ = V (
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-15
locus Z ( Σ ) ∶ = V ( B Σ ) in the affine space A d ∶ = Spec ( S ) is the irrelevant locus.\n\nProposition 2.3 (Theorem 5.1.11 [5]) . The toric variety P d Σ is a categorical quotient A d ∖ Z ( Σ ) by the group Hom ( Cl ( Σ ) , C ∗ ) and the group action is induced by the Cl ( Σ ) - grading of S .\n\nNow we give a brief introduction to complex orbifolds and we mention the needed theorems for the next section. Namely: de Rham theorem and Dolbeault theorem for complex orbifolds.\n\nDefinition 2.4. A complex orbifold of complex dimension d is a singular complex space whose singularities are locally isomorphic to quotient singularities C d / G , for finite sub- groups G ⊂ Gl ( d, C ) .\n\nDefinition 2.5. A differential form on a complex orbifold
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-16
A differential form on a complex orbifold Z is defined locally at z ∈ Z as a G -invariant differential form on C d where G ⊂ Gl ( d, C ) and Z is locally isomorphic to d\n\nRoughly speaking the local geometry of orbifolds reduces to local G -invariant geometry.\n\nWe have a complex of differential forms ( A ● ( Z ) , d ) and a double complex ( A ● , ● ( Z ) , ∂, ¯ ∂ ) of bigraded differential forms which define the de Rham and the Dolbeault cohomology groups (for a fixed p ∈ N ) respectively:\n\n(1,1)-Lefschetz theorem for projective toric orbifolds\n\nDefinition 3.1. A subvariety X ⊂ P d Σ is quasi-smooth if V ( I X ) ⊂ A #Σ ( 1 ) is smooth outside\n\nExample 3.2 . Quasi-smooth hypersurfaces or more generally
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-17
. Quasi-smooth hypersurfaces or more generally quasi-smooth intersection sub-\n\nExample 3.2 . Quasi-smooth hypersurfaces or more generally quasi-smooth intersection sub- varieties are quasi-smooth subvarieties (see [2] or [7] for more details).\n\nRemark 3.3 . Quasi-smooth subvarieties are suborbifolds of P d Σ in the sense of Satake in [8]. Intuitively speaking they are subvarieties whose only singularities come from the ambient\n\nProof. From the exponential short exact sequence\n\nwe have a long exact sequence in cohomology\n\nH 1 (O ∗ X ) → H 2 ( X, Z ) → H 2 (O X ) ≃ H 0 , 2 ( X )\n\nwhere the last isomorphisms is due to Steenbrink in [9]. Now, it is enough to prove the commutativity of the next diagram\n\nwhere the last isomorphisms is due to Steenbrink in [9]. Now,\n\nH 2 ( X, Z ) / / H 2 ( X, O X ) ≃
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-18
/ H 2 ( X, O X ) ≃ Dolbeault H 2 ( X, C ) deRham ≃ H 2 dR ( X, C ) / / H 0 , 2 ¯ ∂ ( X )\n\nof the proof follows as the ( 1 , 1 ) -Lefschetz theorem in [6].\n\nRemark 3.5 . For k = 1 and P d Σ as the projective space, we recover the classical ( 1 , 1 ) - Lefschetz theorem.\n\nBy the Hard Lefschetz Theorem for projective orbifolds (see [11] for details) we\n\nBy the Hard Lefschetz Theorem for projective orbifolds (see [11] for details) we get an isomorphism of cohomologies :\n\ngiven by the Lefschetz morphism and since it is a morphism of Hodge structures, we have:\n\nH 1 , 1 ( X, Q ) ≃ H dim X − 1 , dim X − 1 ( X, Q )\n\nCorollary 3.6. If the dimension of X is 1 , 2 or
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-19
If the dimension of X is 1 , 2 or 3 . The Hodge conjecture holds on X\n\nProof. If the dim C X = 1 the result is clear by the Hard Lefschetz theorem for projective orbifolds. The dimension 2 and 3 cases are covered by Theorem 3.5 and the Hard Lefschetz.\n\nCayley trick and Cayley proposition\n\nThe Cayley trick is a way to associate to a quasi-smooth intersection subvariety a quasi- smooth hypersurface. Let L 1 , . . . , L s be line bundles on P d Σ and let π ∶ P ( E ) → P d Σ be the projective space bundle associated to the vector bundle E = L 1 ⊕ ⋯ ⊕ L s . It is known that P ( E ) is a ( d + s − 1 ) -dimensional simplicial toric variety whose fan depends on the degrees of the line bundles and the fan Σ. Furthermore, if the Cox ring, without considering the grading, of P
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-20
Cox ring, without considering the grading, of P d Σ is C [ x 1 , . . . , x m ] then the Cox ring of P ( E ) is\n\nMoreover for X a quasi-smooth intersection subvariety cut off by f 1 , . . . , f s with deg ( f i ) = [ L i ] we relate the hypersurface Y cut off by F = y 1 f 1 + ⋅ ⋅ ⋅ + y s f s which turns out to be quasi-smooth. For more details see Section 2 in [7].\n\nWe will denote P ( E ) as P d + s − 1 Σ ,X to keep track of its relation with X and P d Σ .\n\nThe following is a key remark.\n\nRemark 4.1 . There is a morphism ι ∶ X → Y ⊂ P d + s − 1 Σ ,X . Moreover every point z ∶ = ( x, y ) ∈ Y with y ≠ 0 has
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-21
y ) ∈ Y with y ≠ 0 has a preimage. Hence for any subvariety W = V ( I W ) ⊂ X ⊂ P d Σ there exists W ′ ⊂ Y ⊂ P d + s − 1 Σ ,X such that π ( W ′ ) = W , i.e., W ′ = { z = ( x, y ) ∣ x ∈ W } .\n\nFor X ⊂ P d Σ a quasi-smooth intersection variety the morphism in cohomology induced by the inclusion i ∗ ∶ H d − s ( P d Σ , C ) → H d − s ( X, C ) is injective by Proposition 1.4 in [7].\n\nDefinition 4.2. The primitive cohomology of H d − s prim ( X ) is the quotient H d − s ( X, C )/ i ∗ ( H d − s ( P d Σ , C )) and H d − s prim ( X, Q ) with rational
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-22
− s prim ( X, Q ) with rational coefficients.\n\nH d − s ( P d Σ , C ) and H d − s ( X, C ) have pure Hodge structures, and the morphism i ∗ is com- patible with them, so that H d − s prim ( X ) gets a pure Hodge structure.\n\nThe next Proposition is the Cayley proposition.\n\nProposition 4.3. [Proposition 2.3 in [3] ] Let X = X 1 ∩⋅ ⋅ ⋅∩ X s be a quasi-smooth intersec- tion subvariety in P d Σ cut off by homogeneous polynomials f 1 . . . f s . Then for p ≠ d + s − 1 2 , d + s − 3 2\n\nRemark 4.5 . The above isomorphisms are also true with rational coefficients since H ● ( X, C ) = H ● ( X, Q ) ⊗ Q C . See the beginning of Section 7.1 in
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-23
C . See the beginning of Section 7.1 in [10] for more details.\n\nTheorem 5.1. Let Y = { F = y 1 f 1 + ⋯ + y k f k = 0 } ⊂ P 2 k + 1 Σ ,X be the quasi-smooth hypersurface associated to the quasi-smooth intersection surface X = X f 1 ∩ ⋅ ⋅ ⋅ ∩ X f k ⊂ P k + 2 Σ . Then on Y the Hodge conjecture holds.\n\nthe Hodge conjecture holds.\n\nProof. If H k,k prim ( X, Q ) = 0 we are done. So let us assume H k,k prim ( X, Q ) ≠ 0. By the Cayley proposition H k,k prim ( Y, Q ) ≃ H 1 , 1 prim ( X, Q ) and by the ( 1 , 1 ) -Lefschetz theorem for projective\n\ntoric orbifolds there is a non-zero algebraic basis λ C 1 , . . . , λ C n with
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-24
1 , . . . , λ C n with rational coefficients of H 1 , 1 prim ( X, Q ) , that is, there are n ∶ = h 1 , 1 prim ( X, Q ) algebraic curves C 1 , . . . , C n in X such that under the Poincar´e duality the class in homology [ C i ] goes to λ C i , [ C i ] ↦ λ C i . Recall that the Cox ring of P k + 2 is contained in the Cox ring of P 2 k + 1 Σ ,X without considering the grading. Considering the grading we have that if α ∈ Cl ( P k + 2 Σ ) then ( α, 0 ) ∈ Cl ( P 2 k + 1 Σ ,X ) . So the polynomials defining C i ⊂ P k + 2 Σ can be interpreted in P 2 k + 1 X, Σ but with different degree. Moreover, by Remark 4.1 each
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-25
degree. Moreover, by Remark 4.1 each C i is contained in Y = { F = y 1 f 1 + ⋯ + y k f k = 0 } and\n\nfurthermore it has codimension k .\n\nClaim: { C i } ni = 1 is a basis of prim ( ) . It is enough to prove that λ C i is different from zero in H k,k prim ( Y, Q ) or equivalently that the cohomology classes { λ C i } ni = 1 do not come from the ambient space. By contradiction, let us assume that there exists a j and C ⊂ P 2 k + 1 Σ ,X such that λ C ∈ H k,k ( P 2 k + 1 Σ ,X , Q ) with i ∗ ( λ C ) = λ C j or in terms of homology there exists a ( k + 2 ) -dimensional algebraic subvariety V ⊂ P 2 k + 1 Σ ,X such that V ∩ Y = C j so
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-26
,X such that V ∩ Y = C j so they are equal as a homology class of P 2 k + 1 Σ ,X ,i.e., [ V ∩ Y ] = [ C j ] . It is easy to check that π ( V ) ∩ X = C j as a subvariety of P k + 2 Σ where π ∶ ( x, y ) ↦ x . Hence [ π ( V ) ∩ X ] = [ C j ] which is equivalent to say that λ C j comes from P k + 2 Σ which contradicts the choice of [ C j ] .\n\nRemark 5.2 . Into the proof of the previous theorem, the key fact was that on X the Hodge conjecture holds and we translate it to Y by contradiction. So, using an analogous argument we have:\n\nargument we have:\n\nProposition 5.3. Let Y = { F = y 1 f s +⋯+ y s f s = 0 } ⊂ P 2 k + 1 Σ
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-27
0 } ⊂ P 2 k + 1 Σ ,X be the quasi-smooth hypersurface associated to a quasi-smooth intersection subvariety X = X f 1 ∩ ⋅ ⋅ ⋅ ∩ X f s ⊂ P d Σ such that d + s = 2 ( k + 1 ) . If the Hodge conjecture holds on X then it holds as well on Y .\n\nCorollary 5.4. If the dimension of Y is 2 s − 1 , 2 s or 2 s + 1 then the Hodge conjecture holds on Y .\n\nProof. By Proposition 5.3 and Corollary 3.6.\n\n[\n\n] Angella, D. Cohomologies of certain orbifolds. Journal of Geometry and Physics\n\n(\n\n),\n\n–\n\n[\n\n] Batyrev, V. V., and Cox, D. A. On the Hodge structure of projective hypersur- faces in toric varieties. Duke Mathematical Journal\n\n,\n\n(Aug\n\n). [\n\n] Bruzzo, U., and Montoya, W. On the Hodge
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-28
U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S˜ao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (\n\n). [\n\n] Caramello Jr, F. C. Introduction to orbifolds. a\n\niv:\n\nv\n\n(\n\n). [\n\n] Cox, D., Little, J., and Schenck, H. Toric varieties, vol.\n\nAmerican Math- ematical Soc.,\n\n[\n\n] Griffiths, P., and Harris, J. Principles of Algebraic Geometry. John Wiley & Sons, Ltd,\n\n[\n\n] Mavlyutov, A. R. Cohomology of complete intersections in toric varieties. Pub- lished in Pacific J. of Math.\n\nNo.\n\n(\n\n),\n\n–\n\n[\n\n] Satake, I. On a Generalization of the Notion of Manifold. Proceedings of the National Academy of Sciences of the United States of America\n\n,\n\n(\n\n),\n\n–\n\n[\n\n] Steenbrink,
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-29
Steenbrink, J. H. M. Intersection form for quasi-homogeneous singularities. Com- positio Mathematica\n\n,\n\n(\n\n),\n\n–\n\n[\n\n] Voisin, C. Hodge Theory and Complex Algebraic Geometry I, vol.\n\nof Cambridge Studies in Advanced Mathematics . Cambridge University Press,\n\n[\n\n] Wang, Z. Z., and Zaffran, D. A remark on the Hard Lefschetz theorem for K¨ahler orbifolds. Proceedings of the American Mathematical Society\n\n,\n\n(Aug\n\n).\n\n[2] Batyrev, V. V., and Cox, D. A. On the Hodge structure of projective hypersur- faces in toric varieties. Duke Mathematical Journal 75, 2 (Aug 1994).\n\n[\n\n] Bruzzo, U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S˜ao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (\n\n).\n\n[3] Bruzzo, U., and Montoya, W. On the Hodge
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-30
U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S˜ao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (2021).\n\nA. R. Cohomology of complete intersections in toric varieties. Pub-', lookup_str='', metadata={'source': '/var/folders/ph/hhm7_zyx4l13k3v8z02dwp1w0000gn/T/tmpgq0ckaja/online_file.pdf'}, lookup_index=0)]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-31
Using PDFMiner# from langchain.document_loaders import PDFMinerLoader loader = PDFMinerLoader("example_data/layout-parser-paper.pdf") data = loader.load() Using PyMuPDF# This is the fastest of the PDF parsing options; it returns one document per page and provides detailed metadata about the PDF and its pages. from langchain.document_loaders import PyMuPDFLoader loader = PyMuPDFLoader("example_data/layout-parser-paper.pdf") data = loader.load() data[0]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-32
Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 (�), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\nshannons@allenai.org\n2 Brown University\nruochen zhang@brown.edu\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\nbcgl@cs.washington.edu\n5 University of Waterloo\nw422li@uwaterloo.ca\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-33
processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\n· Character Recognition · Open Source library · Toolkit.\n1\nIntroduction\nDeep Learning(DL)-based approaches are the state-of-the-art
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-34
Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [11,\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021\n', lookup_str='', metadata={'file_path': 'example_data/layout-parser-paper.pdf', 'page_number': 1, 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'creator': 'LaTeX with hyperref', 'producer': 'pdfTeX-1.40.21', 'creationDate': 'D:20210622012710Z', 'modDate': 'D:20210622012710Z', 'trapped': '', 'encryption': None}, lookup_index=0)
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
7f76b1acdb19-35
Additionally, you can pass any of the options from the PyMuPDF documentation as keyword arguments in the load call, and they will be passed along to the get_text() call.
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html" }
02449ad9b5e9-0
Telegram# This notebook covers how to load data from Telegram into a format that can be ingested into LangChain. from langchain.document_loaders import TelegramChatLoader loader = TelegramChatLoader("example_data/telegram.json") loader.load() [Document(page_content="Henry on 2020-01-01T00:00:02: It's 2020...\n\nHenry on 2020-01-01T00:00:04: Fireworks!\n\nGrace 🧤 ðŸ\x8d’ on 2020-01-01T00:00:05: You're a minute late!\n\n", lookup_str='', metadata={'source': 'example_data/telegram.json'}, lookup_index=0)]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/telegram.html" }
ac224c4ccf4b-0
DataFrame Loader# This notebook goes over how to load data from a pandas DataFrame. import pandas as pd df = pd.read_csv('example_data/mlb_teams_2012.csv') df.head()
       Team  "Payroll (millions)"  "Wins"
0  Nationals                 81.34      98
1       Reds                 82.20      97
2    Yankees                197.96      95
3     Giants                117.62      94
4     Braves                 83.31      94
from langchain.document_loaders import DataFrameLoader loader = DataFrameLoader(df, page_content_column="Team") loader.load() [Document(page_content='Nationals', metadata={' "Payroll (millions)"': 81.34, ' "Wins"': 98}), Document(page_content='Reds', metadata={' "Payroll (millions)"': 82.2, ' "Wins"': 97}), Document(page_content='Yankees', metadata={' "Payroll (millions)"': 197.96, ' "Wins"': 95}), Document(page_content='Giants', metadata={' "Payroll (millions)"': 117.62, ' "Wins"': 94}), Document(page_content='Braves', metadata={' "Payroll (millions)"': 83.31, ' "Wins"': 94}), Document(page_content='Athletics', metadata={' "Payroll (millions)"': 55.37, ' "Wins"': 94}), Document(page_content='Rangers', metadata={' "Payroll (millions)"': 120.51, ' "Wins"': 93}), Document(page_content='Orioles', metadata={' "Payroll (millions)"': 81.43, ' "Wins"': 93}),
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/dataframe.html" }
ac224c4ccf4b-1
Document(page_content='Rays', metadata={' "Payroll (millions)"': 64.17, ' "Wins"': 90}), Document(page_content='Angels', metadata={' "Payroll (millions)"': 154.49, ' "Wins"': 89}), Document(page_content='Tigers', metadata={' "Payroll (millions)"': 132.3, ' "Wins"': 88}), Document(page_content='Cardinals', metadata={' "Payroll (millions)"': 110.3, ' "Wins"': 88}), Document(page_content='Dodgers', metadata={' "Payroll (millions)"': 95.14, ' "Wins"': 86}), Document(page_content='White Sox', metadata={' "Payroll (millions)"': 96.92, ' "Wins"': 85}), Document(page_content='Brewers', metadata={' "Payroll (millions)"': 97.65, ' "Wins"': 83}), Document(page_content='Phillies', metadata={' "Payroll (millions)"': 174.54, ' "Wins"': 81}), Document(page_content='Diamondbacks', metadata={' "Payroll (millions)"': 74.28, ' "Wins"': 81}), Document(page_content='Pirates', metadata={' "Payroll (millions)"': 63.43, ' "Wins"': 79}), Document(page_content='Padres', metadata={' "Payroll (millions)"': 55.24, ' "Wins"': 76}), Document(page_content='Mariners', metadata={' "Payroll (millions)"': 81.97, ' "Wins"': 75}),
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/dataframe.html" }
ac224c4ccf4b-2
Document(page_content='Mets', metadata={' "Payroll (millions)"': 93.35, ' "Wins"': 74}), Document(page_content='Blue Jays', metadata={' "Payroll (millions)"': 75.48, ' "Wins"': 73}), Document(page_content='Royals', metadata={' "Payroll (millions)"': 60.91, ' "Wins"': 72}), Document(page_content='Marlins', metadata={' "Payroll (millions)"': 118.07, ' "Wins"': 69}), Document(page_content='Red Sox', metadata={' "Payroll (millions)"': 173.18, ' "Wins"': 69}), Document(page_content='Indians', metadata={' "Payroll (millions)"': 78.43, ' "Wins"': 68}), Document(page_content='Twins', metadata={' "Payroll (millions)"': 94.08, ' "Wins"': 66}), Document(page_content='Rockies', metadata={' "Payroll (millions)"': 78.06, ' "Wins"': 64}), Document(page_content='Cubs', metadata={' "Payroll (millions)"': 88.19, ' "Wins"': 61}), Document(page_content='Astros', metadata={' "Payroll (millions)"': 60.65, ' "Wins"': 55})]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/dataframe.html" }
7d7847c6ee73-0
GitBook# How to pull page data from any GitBook. from langchain.document_loaders import GitbookLoader loader = GitbookLoader("https://docs.gitbook.com") Load from single GitBook page# page_data = loader.load() page_data [Document(page_content='Introduction to GitBook\nGitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.\nWe want to help \nteams to work more efficiently\n by creating a simple yet powerful platform for them to \nshare their knowledge\n.\nOur mission is to make a \nuser-friendly\n and \ncollaborative\n product for everyone to create, edit and share knowledge through documentation.\nPublish your documentation in 5 easy steps\nImport\n\nMove your existing content to GitBook with ease.\nGit Sync\n\nBenefit from our bi-directional synchronisation with GitHub and GitLab.\nOrganise your content\n\nCreate pages and spaces and organize them into collections\nCollaborate\n\nInvite other users and collaborate asynchronously with ease.\nPublish your docs\n\nShare your documentation with selected users or with everyone.\nNext\n - Getting started\nOverview\nLast modified \n3mo ago', lookup_str='', metadata={'source': 'https://docs.gitbook.com', 'title': 'Introduction to GitBook'}, lookup_index=0)] Load from all paths in a given GitBook# For this to work, the GitbookLoader needs to be initialized with the root path (https://docs.gitbook.com in this example) and have load_all_paths set to True. loader = GitbookLoader("https://docs.gitbook.com", load_all_paths=True) all_pages_data = loader.load()
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html" }
7d7847c6ee73-1
all_pages_data = loader.load() Fetching text from https://docs.gitbook.com/ Fetching text from https://docs.gitbook.com/getting-started/overview Fetching text from https://docs.gitbook.com/getting-started/import Fetching text from https://docs.gitbook.com/getting-started/git-sync Fetching text from https://docs.gitbook.com/getting-started/content-structure Fetching text from https://docs.gitbook.com/getting-started/collaboration Fetching text from https://docs.gitbook.com/getting-started/publishing Fetching text from https://docs.gitbook.com/tour/quick-find Fetching text from https://docs.gitbook.com/tour/editor Fetching text from https://docs.gitbook.com/tour/customization Fetching text from https://docs.gitbook.com/tour/member-management Fetching text from https://docs.gitbook.com/tour/pdf-export Fetching text from https://docs.gitbook.com/tour/activity-history Fetching text from https://docs.gitbook.com/tour/insights Fetching text from https://docs.gitbook.com/tour/notifications Fetching text from https://docs.gitbook.com/tour/internationalization Fetching text from https://docs.gitbook.com/tour/keyboard-shortcuts Fetching text from https://docs.gitbook.com/tour/seo Fetching text from https://docs.gitbook.com/advanced-guides/custom-domain Fetching text from https://docs.gitbook.com/advanced-guides/advanced-sharing-and-security Fetching text from https://docs.gitbook.com/advanced-guides/integrations Fetching text from https://docs.gitbook.com/billing-and-admin/account-settings Fetching text from https://docs.gitbook.com/billing-and-admin/plans Fetching text from https://docs.gitbook.com/troubleshooting/faqs Fetching text from https://docs.gitbook.com/troubleshooting/hard-refresh
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html" }
7d7847c6ee73-2
Fetching text from https://docs.gitbook.com/troubleshooting/hard-refresh Fetching text from https://docs.gitbook.com/troubleshooting/report-bugs Fetching text from https://docs.gitbook.com/troubleshooting/connectivity-issues Fetching text from https://docs.gitbook.com/troubleshooting/support print(f"fetched {len(all_pages_data)} documents.") # show second document all_pages_data[2] fetched 28 documents.
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html" }
7d7847c6ee73-3
Document(page_content="Import\nFind out how to easily migrate your existing documentation and which formats are supported.\nThe import function allows you to migrate and unify existing documentation in GitBook. You can choose to import single or multiple pages although limits apply. \nPermissions\nAll members with editor permission or above can use the import feature.\nSupported formats\nGitBook supports imports from websites or files that are:\nMarkdown (.md or .markdown)\nHTML (.html)\nMicrosoft Word (.docx).\nWe also support import from:\nConfluence\nNotion\nGitHub Wiki\nQuip\nDropbox Paper\nGoogle Docs\nYou can also upload a ZIP\n \ncontaining HTML or Markdown files when \nimporting multiple pages.\nNote: this feature is in beta.\nFeel free to suggest import sources we don't support yet and \nlet us know\n if you have any issues.\nImport panel\nWhen you create a new space, you'll have the option to import content straight away:\nThe new page menu\nImport a page or subpage by selecting \nImport Page\n from the New Page menu, or \nImport Subpage\n in the page action menu, found in the table
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html" }
7d7847c6ee73-4
in the page action menu, found in the table of contents:\nImport from the page action menu\nWhen you choose your input source, instructions will explain how to proceed.\nAlthough GitBook supports importing content from different kinds of sources, the end result might be different from your source due to differences in product features and document format.\nLimits\nGitBook currently has the following limits for imported content:\nThe maximum number of pages that can be uploaded in a single import is \n20.\nThe maximum number of files (images etc.) that can be uploaded in a single import is \n20.\nGetting started - \nPrevious\nOverview\nNext\n - Getting started\nGit Sync\nLast modified \n4mo ago", lookup_str='', metadata={'source': 'https://docs.gitbook.com/getting-started/import', 'title': 'Import'}, lookup_index=0)
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/gitbook.html" }
39b394319130-0
Images# This covers how to load images such as JPGs or PNGs into a document format that we can use downstream. Using Unstructured# from langchain.document_loaders.image import UnstructuredImageLoader loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg") data = loader.load() data[0]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html" }
39b394319130-1
Document(page_content="LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n\n\n‘Zxjiang Shen' (F3}, Ruochen Zhang”, Melissa Dell*, Benjamin Charles Germain\nLeet, Jacob Carlson, and Weining LiF\n\n\nsugehen\n\nshangthrows, et\n\n“Abstract. Recent advanocs in document image analysis (DIA) have been\n‘pimarliy driven bythe application of neural networks dell roar\n{uteomer could be aly deployed in production and extended fo farther\n[nvetigtion. However, various factory ke lcely organize codebanee\nsnd sophisticated modal cnigurations compat the ey ree of\n‘erin! innovation by wide sence, Though there have been sng\n‘Hors to improve reuablty and simplify deep lees (DL) mode\n‘aon, sone of them ae optimized for challenge inthe demain of DIA,\nThis roprscte a major gap in the extng fol, sw DIA i eal to\nscademic research acon wie range of dpi in the social ssencee\n[rary for streamlining the sage of DL in DIA research and
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html" }
39b394319130-2
streamlining the sage of DL in DIA research and appicn\n‘tons The core LayoutFaraer brary comes with a sch of simple and\nIntative interfaee or applying and eutomiing DI. odel fr Inyo de\npltfom for sharing both protrined modes an fal document dist\n{ation pipeline We demonutate that LayootPareer shea fr both\nlightweight and lrgeseledgtieation pipelines in eal-word uae ces\nThe leary pblely smal at Btspe://layost-pareergsthab So\n\n\n\n‘Keywords: Document Image Analysis» Deep Learning Layout Analysis\n‘Character Renguition - Open Serres dary « Tol\n\n\nIntroduction\n\n\n‘Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndoctiment image analysis (DIA) tea including document image clasiffeation [I]\n", lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg'}, lookup_index=0)
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html" }
39b394319130-3
Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredImageLoader("layout-parser-paper-fast.jpg", mode="elements") data = loader.load() data[0] Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.jpg', 'filename': 'layout-parser-paper-fast.jpg', 'page_number': 1, 'category': 'Title'}, lookup_index=0)
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/image.html" }
f148d87c90bc-0
Google Drive# This notebook covers how to load documents from Google Drive. Currently, only Google Docs are supported. Prerequisites# Create a Google Cloud project or use an existing project Enable the Google Drive API Authorize credentials for desktop app pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib 🧑 Instructions for ingesting your Google Docs data# By default, the GoogleDriveLoader expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_file keyword argument. Same thing with token.json. Note that token.json will be created automatically the first time you use the loader. GoogleDriveLoader can load from a list of Google Docs document ids or a folder id. You can obtain your folder and document id from the URL: Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is "1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5" Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is "1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw" from langchain.document_loaders import GoogleDriveLoader loader = GoogleDriveLoader(folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5") docs = loader.load()
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/googledrive.html" }
888f597cc180-0
YouTube# How to load documents from YouTube transcripts. from langchain.document_loaders import YoutubeLoader # !pip install youtube-transcript-api loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=QsYGlZkevEg") loader.load() Add video info# # ! pip install pytube loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True) loader.load() YouTube loader from Google Cloud# Prerequisites# Create a Google Cloud project or use an existing project Enable the YouTube API Authorize credentials for desktop app pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib youtube-transcript-api 🧑 Instructions for ingesting your YouTube data# By default, the GoogleApiClient expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_path keyword argument. Same thing with token.json. Note that token.json will be created automatically the first time you use the loader. GoogleApiYoutubeLoader can load from a list of YouTube video ids or a channel name. Note: depending on your setup, the service_account_path may also need to be set. See here for more details. from langchain.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader # Init the GoogleApiClient from pathlib import Path google_api_client = GoogleApiClient(credentials_path=Path("your_path_creds.json")) # Use a Channel
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/youtube.html" }
888f597cc180-1
# Use a Channel youtube_loader_channel = GoogleApiYoutubeLoader(google_api_client=google_api_client, channel_name="Reducible", captions_language="en") # Use Youtube Ids youtube_loader_ids = GoogleApiYoutubeLoader(google_api_client=google_api_client, video_ids=["TrdevFK_am4"], add_video_info=True) # returns a list of Documents youtube_loader_channel.load()
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/youtube.html" }
5a122111cfe6-0
CoNLL-U# This is an example of how to load a file in CoNLL-U format. The whole file is treated as one document. The example data (conllu.conllu) is based on one of the standard UD/CoNLL-U examples. from langchain.document_loaders import CoNLLULoader loader = CoNLLULoader("example_data/conllu.conllu") document = loader.load() document
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/CoNLL-U.html" }
90566b674f8b-0
Apify Dataset# This notebook shows how to load Apify datasets into LangChain. Apify Dataset is a scalable, append-only storage with sequential access, built for storing structured web scraping results such as a list of products or Google SERPs, which can then be exported to formats like JSON, CSV, or Excel. Datasets are mainly used to save the results of Apify Actors, serverless cloud programs for various web scraping, crawling, and data extraction use cases. Prerequisites# You need to have an existing dataset on the Apify platform. If you don’t have one, please first check out this notebook on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs. First, import ApifyDatasetLoader into your source code: from langchain.document_loaders import ApifyDatasetLoader from langchain.document_loaders.base import Document Then provide a function that maps Apify dataset record fields to LangChain Document format. For example, if your dataset items are structured like this: { "url": "https://apify.com", "text": "Apify is the best web scraping and automation platform." } The mapping function in the code below will convert them to LangChain Document format, so that you can use them further with any LLM model (e.g. for question answering). loader = ApifyDatasetLoader( dataset_id="your-dataset-id", dataset_mapping_function=lambda dataset_item: Document( page_content=dataset_item["text"], metadata={"source": dataset_item["url"]} ), ) data = loader.load() An example with question answering# In this example, we use data from a dataset to answer a question.
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/apify_dataset.html" }
90566b674f8b-1
from langchain.docstore.document import Document from langchain.document_loaders import ApifyDatasetLoader from langchain.indexes import VectorstoreIndexCreator loader = ApifyDatasetLoader( dataset_id="your-dataset-id", dataset_mapping_function=lambda item: Document( page_content=item["text"] or "", metadata={"source": item["url"]} ), ) index = VectorstoreIndexCreator().from_loaders([loader]) query = "What is Apify?" result = index.query_with_sources(query) print(result["answer"]) print(result["sources"]) Apify is a platform for developing, running, and sharing serverless cloud programs. It enables users to create web scraping and automation tools and publish them on the Apify platform. https://docs.apify.com/platform/actors, https://docs.apify.com/platform/actors/running/actors-in-store, https://docs.apify.com/platform/security, https://docs.apify.com/platform/actors/examples
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/apify_dataset.html" }
646661607df5-0
s3 File# This covers how to load document objects from an s3 file object. from langchain.document_loaders import S3FileLoader #!pip install boto3 loader = S3FileLoader("testing-hwc", "fake.docx") loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/s3_file.html" }
b2db3ead3d97-0
HTML# This covers how to load HTML documents into a document format that we can use downstream. from langchain.document_loaders import UnstructuredHTMLLoader loader = UnstructuredHTMLLoader("example_data/fake-content.html") data = loader.load() data [Document(page_content='My First Heading\n\nMy first paragraph.', lookup_str='', metadata={'source': 'example_data/fake-content.html'}, lookup_index=0)] Loading HTML with BeautifulSoup4# We can also use BeautifulSoup4 to load HTML documents using the BSHTMLLoader. This will extract the text from the HTML into page_content, and the page title as title into metadata. from langchain.document_loaders import BSHTMLLoader loader = BSHTMLLoader("example_data/fake-content.html") data = loader.load() data [Document(page_content='\n\nTest Title\n\n\nMy First Heading\nMy first paragraph.\n\n\n', lookup_str='', metadata={'source': 'example_data/fake-content.html', 'title': 'Test Title'}, lookup_index=0)]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/html.html" }
26706249fca0-0
EPubs# This covers how to load .epub documents into a document format that we can use downstream. You’ll need to install the pandoc package for this loader to work. from langchain.document_loaders import UnstructuredEPubLoader loader = UnstructuredEPubLoader("winter-sports.epub") data = loader.load() Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredEPubLoader("winter-sports.epub", mode="elements") data = loader.load() data[0] Document(page_content='The Project Gutenberg eBook of Winter Sports in\nSwitzerland, by E. F. Benson', lookup_str='', metadata={'source': 'winter-sports.epub', 'page_number': 1, 'category': 'Title'}, lookup_index=0)
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/epub.html" }
d3a9ece2140d-0
Notion DB Loader# NotionDBLoader is a Python class for loading content from a Notion database. It retrieves pages from the database, reads their content, and returns a list of Document objects. Requirements# A Notion Database Notion Integration Token Setup# 1. Create a Notion Table Database# Create a new table database in Notion. You can add any columns to the database, and they will be treated as metadata. For example, you can add the following columns: Title: set Title as the default property. Categories: a multi-select property to store categories associated with the page. Keywords: a multi-select property to store keywords associated with the page. Add your content to the body of each page in the database. The NotionDBLoader will extract the content and metadata from these pages. 2. Create a Notion Integration# To create a Notion Integration, follow these steps: Visit the Notion Developers page at https://www.notion.com/my-integrations and log in with your Notion account. Click on the “+ New integration” button. Give your integration a name and choose the workspace where your database is located. Select the required capabilities; this loader only needs the Read content capability. Click the “Submit” button to create the integration. Once the integration is created, you’ll be provided with an Integration Token (API key). Copy this token and keep it safe, as you’ll need it to use the NotionDBLoader. 3. Connect the Integration to the Database#
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notiondb.html" }
d3a9ece2140d-1
To connect your integration to the database, follow these steps: Open your database in Notion. Click on the three-dot menu icon in the top right corner of the database view. Click on the “+ New integration” button. Find your integration; you may need to start typing its name in the search box. Click on the “Connect” button to connect the integration to the database. 4. Get the Database ID# To get the database ID, follow these steps: Open your database in Notion. Click on the three-dot menu icon in the top right corner of the database view. Select “Copy link” from the menu to copy the database URL to your clipboard. The database ID is the long string of alphanumeric characters found in the URL. It typically looks like this: https://www.notion.so/username/8935f9d140a04f95a872520c4f123456?v=…. In this example, the database ID is 8935f9d140a04f95a872520c4f123456. With the database properly set up and the integration token and database ID in hand, you can now use the NotionDBLoader code to load content and metadata from your Notion database. Usage# NotionDBLoader is part of the langchain package’s document loaders. You can use it as follows: from getpass import getpass NOTION_TOKEN = getpass() DATABASE_ID = getpass() ········ ········ from langchain.document_loaders import NotionDBLoader loader = NotionDBLoader(NOTION_TOKEN, DATABASE_ID) docs = loader.load() print(docs)
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notiondb.html" }
562ea008d6da-0
Notebook# This notebook covers how to load data from an .ipynb notebook into a format suitable for LangChain. from langchain.document_loaders import NotebookLoader loader = NotebookLoader("example_data/notebook.ipynb", include_outputs=True, max_output_length=20, remove_newline=True) NotebookLoader.load() loads the .ipynb notebook file into a Document object. Parameters: include_outputs (bool): whether to include cell outputs in the resulting document (default is False). max_output_length (int): the maximum number of characters to include from each cell output (default is 10). remove_newline (bool): whether to remove newline characters from the cell sources and outputs (default is False). traceback (bool): whether to include the full traceback (default is False). loader.load()
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notebook.html" }
562ea008d6da-1
[Document(page_content='\'markdown\' cell: \'[\'# Notebook\', \'\', \'This notebook covers how to load data from an .ipynb notebook into a format suitable by LangChain.\']\'\n\n \'code\' cell: \'[\'from langchain.document_loaders import NotebookLoader\']\'\n\n \'code\' cell: \'[\'loader = NotebookLoader("example_data/notebook.ipynb")\']\'\n\n \'markdown\' cell: \'[\'`NotebookLoader.load()` loads the `.ipynb` notebook file into a `Document` object.\', \'\', \'**Parameters**:\', \'\', \'* `include_outputs` (bool): whether to include cell outputs in the resulting document (default is False).\', \'* `max_output_length` (int): the maximum number of characters to include from each cell output (default is 10).\', \'* `remove_newline` (bool): whether to remove newline characters from the cell sources and outputs (default is False).\', \'* `traceback` (bool): whether to include full traceback (default is False).\']\'\n\n \'code\' cell: \'[\'loader.load(include_outputs=True, max_output_length=20, remove_newline=True)\']\'\n\n', lookup_str='', metadata={'source': 'example_data/notebook.ipynb'}, lookup_index=0)]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/notebook.html" }
3d68e88119e4-0
Word Documents# This covers how to load Word documents into a document format that we can use downstream. from langchain.document_loaders import UnstructuredWordDocumentLoader loader = UnstructuredWordDocumentLoader("example_data/fake.docx") data = loader.load() data [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx'}, lookup_index=0)] Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredWordDocumentLoader("example_data/fake.docx", mode="elements") data = loader.load() data[0] Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx', 'filename': 'fake.docx', 'category': 'Title'}, lookup_index=0)
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/word_document.html" }
7b147ea505cc-0
Figma# This notebook covers how to load data from the Figma REST API into a format that can be ingested into LangChain, along with example usage for code generation. import os from langchain.document_loaders.figma import FigmaFileLoader from langchain.text_splitter import CharacterTextSplitter from langchain.chat_models import ChatOpenAI from langchain.indexes import VectorstoreIndexCreator from langchain.chains import ConversationChain, LLMChain from langchain.memory import ConversationBufferWindowMemory from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate, ) The Figma API requires an access token, node_ids, and a file key. The file key can be pulled from the URL. https://www.figma.com/file/{filekey}/sampleFilename Node IDs are also available in the URL. Click on anything and look for the ‘?node-id={node_id}’ param. Access token instructions are in the Figma help center article: https://help.figma.com/hc/en-us/articles/8085703771159-Manage-personal-access-tokens figma_loader = FigmaFileLoader( os.environ.get('ACCESS_TOKEN'), os.environ.get('NODE_IDS'), os.environ.get('FILE_KEY') ) # see https://python.langchain.com/en/latest/modules/indexes/getting_started.html for more details index = VectorstoreIndexCreator().from_loaders([figma_loader]) figma_doc_retriever = index.vectorstore.as_retriever() def generate_code(human_input): # I have no idea if the John Carmack thing makes for better code. YMMV.
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html" }
7b147ea505cc-1
# See https://python.langchain.com/en/latest/modules/models/chat/getting_started.html for chat info system_prompt_template = """You are expert coder John Carmack. Use the provided design context to create idiomatic HTML/CSS code based on the user request. Everything must be inline in one file and your response must be directly renderable by the browser. Figma file nodes and metadata: {context}""" human_prompt_template = "Code the {text}. Ensure it's mobile responsive" system_message_prompt = SystemMessagePromptTemplate.from_template(system_prompt_template) human_message_prompt = HumanMessagePromptTemplate.from_template(human_prompt_template) # delete the gpt-4 model_name to use the default gpt-3.5-turbo for faster results gpt_4 = ChatOpenAI(temperature=.02, model_name='gpt-4') # Use the retriever's 'get_relevant_documents' method if needed to filter down longer docs relevant_nodes = figma_doc_retriever.get_relevant_documents(human_input) conversation = [system_message_prompt, human_message_prompt] chat_prompt = ChatPromptTemplate.from_messages(conversation) response = gpt_4(chat_prompt.format_prompt( context=relevant_nodes, text=human_input).to_messages()) return response response = generate_code("page top header") Returns the following in response.content:
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html" }
7b147ea505cc-2
<!DOCTYPE html>\n<html lang="en">\n<head>\n <meta charset="UTF-8">\n <meta name="viewport" content="width=device-width, initial-scale=1.0">\n <style>\n @import url(\'https://fonts.googleapis.com/css2?family=DM+Sans:wght@500;700&family=Inter:wght@600&display=swap\');\n\n body {\n margin: 0;\n font-family: \'DM Sans\', sans-serif;\n }\n\n .header {\n display: flex;\n justify-content: space-between;\n align-items: center;\n padding: 20px;\n background-color: #fff;\n box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);\n }\n\n .header h1 {\n font-size: 16px;\n font-weight: 700;\n
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html" }
7b147ea505cc-3
font-weight: 700;\n margin: 0;\n }\n\n .header nav {\n display: flex;\n align-items: center;\n }\n\n .header nav a {\n font-size: 14px;\n font-weight: 500;\n text-decoration: none;\n color: #000;\n margin-left: 20px;\n }\n\n @media (max-width: 768px) {\n .header nav {\n display: none;\n }\n }\n </style>\n</head>\n<body>\n <header class="header">\n <h1>Company Contact</h1>\n <nav>\n <a href="#">Lorem Ipsum</a>\n
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html" }
7b147ea505cc-4
Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n <a href="#">Lorem Ipsum</a>\n </nav>\n </header>\n</body>\n</html>
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/figma.html" }
87174ab5b3a6-0
Azure Blob Storage Container# This covers how to load document objects from a container on Azure Blob Storage. from langchain.document_loaders import AzureBlobStorageContainerLoader #!pip install azure-storage-blob loader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>") loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)] Specifying a prefix# You can also specify a prefix for more fine-grained control over which files to load. loader = AzureBlobStorageContainerLoader(conn_str="<conn_str>", container="<container>", prefix="<prefix>") loader.load() [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/azure_blob_storage_container.html" }
a6ffc682e9f1-0
Unstructured File Loader# This notebook covers how to use Unstructured to load files of many types. Unstructured currently supports loading of text files, PowerPoints, HTML, PDFs, images, and more. # # Install package !pip install "unstructured[local-inference]" !pip install "detectron2@git+https://github.com/facebookresearch/detectron2.git@v0.6#egg=detectron2" !pip install layoutparser[layoutmodels,tesseract] # # Install other dependencies # # https://github.com/Unstructured-IO/unstructured/blob/main/docs/source/installing.rst # !brew install libmagic # !brew install poppler # !brew install tesseract # # If parsing xml / html documents: # !brew install libxml2 # !brew install libxslt # import nltk # nltk.download('punkt') from langchain.document_loaders import UnstructuredFileLoader loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt") docs = loader.load() docs[0].page_content[:400] 'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\n\nLast year COVID-19 kept us apart. This year we are finally together again.\n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.\n\nWith a duty to one another to the American people to the Constit' Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html" }
a6ffc682e9f1-1
loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt", mode="elements") docs = loader.load() docs[:5] [Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Last year COVID-19 kept us apart. This year we are finally together again.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='With a duty to one another to the American people to the Constitution.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0), Document(page_content='And with an unwavering resolve that freedom will always triumph over tyranny.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)] Define a Partitioning Strategy# Unstructured document loader allow users to pass in a strategy parameter that lets unstructured know how to partitioning the document. Currently supported strategies are "hi_res" (the default) and "fast". Hi res partitioning strategies are more accurate, but take longer to process. Fast strategies partition the document more quickly, but trade-off accuracy. Not all document types have separate hi res and fast partitioning strategies. For those document types, the strategy kwarg is ignored. In some cases, the high res strategy will fallback to fast if there is a dependency missing (i.e. a model for document partitioning). You can see how to apply a strategy to an UnstructuredFileLoader below.
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html" }
a6ffc682e9f1-2
from langchain.document_loaders import UnstructuredFileLoader loader = UnstructuredFileLoader("layout-parser-paper-fast.pdf", strategy="fast", mode="elements") docs = loader.load() docs[:5] [Document(page_content='1', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='0', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0), Document(page_content='n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'Title'}, lookup_index=0)] PDF Example# Processing PDF documents works exactly the same way. Unstructured detects the file type and extracts the same types of elements. !wget https://raw.githubusercontent.com/Unstructured-IO/unstructured/main/example-docs/layout-parser-paper.pdf -P "../../" loader = UnstructuredFileLoader("./example_data/layout-parser-paper.pdf", mode="elements") docs = loader.load() docs[:5]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html" }
a6ffc682e9f1-3
docs = loader.load() docs[:5] [Document(page_content='LayoutParser : A Unified Toolkit for Deep Learning Based Document Image Analysis', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Zejiang Shen 1 ( (ea)\n ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and Weining Li 5', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Allen Institute for AI shannons@allenai.org', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Brown University ruochen zhang@brown.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0), Document(page_content='Harvard University { melissadell,jacob carlson } @fas.harvard.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0)]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html" }
50de35d9d741-0
Markdown# This covers how to load Markdown documents into a document format that we can use downstream. from langchain.document_loaders import UnstructuredMarkdownLoader loader = UnstructuredMarkdownLoader("../../../../README.md") data = loader.load() data
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html" }
50de35d9d741-1
[Document(page_content="ð\x9f¦\x9cï¸\x8fð\x9f”\x97 LangChain\n\nâ\x9a¡ Building applications with LLMs through composability â\x9a¡\n\nProduction Support: As you move your LangChains into production, we'd love to offer more comprehensive support.\nPlease fill out this form and we'll set up a dedicated support Slack channel.\n\nQuick Install\n\npip install langchain\n\nð\x9f¤” What is this?\n\nLarge language models (LLMs) are emerging as a transformative technology, enabling\ndevelopers to build applications that they previously could not.\nBut using these LLMs in isolation is often not enough to\ncreate a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.\n\nThis library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:\n\nâ\x9d“ Question Answering over specific documents\n\nDocumentation\n\nEnd-to-end Example: Question Answering over Notion Database\n\nð\x9f’¬ Chatbots\n\nDocumentation\n\nEnd-to-end Example:
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html" }
50de35d9d741-2
Chatbots\n\nDocumentation\n\nEnd-to-end Example: Chat-LangChain\n\n🤖 Agents\n\nDocumentation\n\nEnd-to-end Example: GPT+WolframAlpha\n\n📖 Documentation\n\nPlease see here for full documentation on:\n\nGetting started (installation, setting up the environment, simple examples)\n\nHow-To examples (demos, integrations, helper functions)\n\nReference (full API docs)\n Resources (high-level explanation of core concepts)\n\n🚀 What can this help with?\n\nThere are six main areas that LangChain is designed to help with.\nThese are, in increasing order of complexity:\n\n📃 LLMs and Prompts:\n\nThis includes prompt management, prompt optimization, generic interface for all LLMs, and common utilities for working with LLMs.\n\n🔗 Chains:\n\nChains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html" }
50de35d9d741-3
chains for common applications.\n\n📚 Data Augmented Generation:\n\nData Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.\n\n🤖 Agents:\n\nAgents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.\n\n🧠 Memory:\n\nMemory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\n\n🧐 Evaluation:\n\n[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html" }
50de35d9d741-4
is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.\n\nFor more information on these concepts, please see our full documentation.\n\n💁 Contributing\n\nAs an open source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infra, or better documentation.\n\nFor detailed information on how to contribute, see here.", lookup_str='', metadata={'source': '../../../../README.md'}, lookup_index=0)]
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html" }
50de35d9d741-5
Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredMarkdownLoader("../../../../README.md", mode="elements") data = loader.load() data[0] Document(page_content='🦜️🔗 LangChain', lookup_str='', metadata={'source': '../../../../README.md', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0)
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/markdown.html" }
5e60c21cc732-0
iFixit# iFixit is the largest open repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY-NC-SA 3.0. This loader lets you download the text of repair guides, Q&As, and device wikis from iFixit using their open APIs, making it a useful source of context for technical documents and for answering questions about the devices in iFixit’s corpus. from langchain.document_loaders import IFixitLoader loader = IFixitLoader("https://www.ifixit.com/Teardown/Banana+Teardown/811")
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html" }
5e60c21cc732-1
data = loader.load() data [Document(page_content="# Banana Teardown\nIn this teardown, we open a banana to see what's inside. Yellow and delicious, but most importantly, yellow.\n\n\n###Tools Required:\n\n - Fingers\n\n - Teeth\n\n - Thumbs\n\n\n###Parts Required:\n\n - None\n\n\n## Step 1\nTake one banana from the bunch.\nDon't squeeze too hard!\n\n\n## Step 2\nHold the banana in your left hand and grip the stem between your right thumb and forefinger.\n\n\n## Step 3\nPull the stem downward until the peel splits.\n\n\n## Step 4\nInsert your thumbs into the split of the peel and pull the two sides apart.\nExpose the top of the banana. It may be slightly squished from pulling on the stem, but this will not affect the flavor.\n\n\n## Step 5\nPull open the peel, starting from your original split, and opening it along the length of the banana.\n\n\n## Step 6\nRemove fruit from peel.\n\n\n## Step 7\nEat and enjoy!\nThis is where you'll need your teeth.\nDo not choke on banana!\n", lookup_str='', metadata={'source': 'https://www.ifixit.com/Teardown/Banana+Teardown/811', 'title': 'Banana Teardown'}, lookup_index=0)] loader = IFixitLoader("https://www.ifixit.com/Answers/View/318583/My+iPhone+6+is+typing+and+opening+apps+by+itself") data = loader.load() data
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html" }
5e60c21cc732-2
[Document(page_content='# My iPhone 6 is typing and opening apps by itself\nmy iphone 6 is typing and opening apps by itself. How do i fix this. I just bought it last week.\nI restored as manufactures cleaned up the screen\nthe problem continues\n\n## 27 Answers\n\nFilter by: \n\nMost Helpful\nNewest\nOldest\n\n### Accepted Answer\nHi,\nWhere did you buy it? If you bought it from Apple or from an official retailer like Carphone warehouse etc. Then you\'ll have a year warranty and can get it replaced free.\nIf you bought it second hand, from a third part repair shop or online, then it may still have warranty, unless it is refurbished and has been repaired elsewhere.\nIf this is the case, it may be the screen that needs replacing to solve your issue.\nEither way, wherever you got it, it\'s best to return it and get a refund or a replacement device. :-)\n\n\n\n### Most Helpful Answer\nI had the same issues, screen freezing, opening apps by itself,
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html" }
5e60c21cc732-3
same issues, screen freezing, opening apps by itself, selecting the screens and typing on it\'s own. I first suspected aliens and then ghosts and then hackers.\niPhone 6 is weak physically and tend to bend on pressure. And my phone had no case or cover.\nI took the phone to apple stores and they said sensors need to be replaced and possibly screen replacement as well. My phone is just 17 months old.\nHere is what I did two days ago and since then it is working like a charm..\nHold the phone in portrait (as if watching a movie). Twist it very very gently. do it few times.Rest the phone for 10 mins (put it on a flat surface). You can now notice those self typing things gone and screen getting stabilized.\nThen, reset the hardware (hold the power and home button till the screen goes off and comes back with apple logo). release the buttons when you see this.\nThen, connect to your laptop and log in to iTunes and reset your phone completely. (please
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html" }
5e60c21cc732-4
to iTunes and reset your phone completely. (please take a back-up first).\nAnd your phone should be good to use again.\nWhat really happened here for me is that the sensors might have stuck to the screen and with mild twisting, they got disengaged/released.\nI posted this in Apple Community and the moderators deleted it, for the best reasons known to them.\nInstead of throwing away your phone (or selling cheaply), try this and you could be saving your phone.\nLet me know how it goes.\n\n\n\n### Other Answer\nIt was the charging cord! I bought a gas station braided cord and it was the culprit. Once I plugged my OEM cord into the phone the GHOSTS went away.\n\n\n\n### Other Answer\nI\'ve same issue that I just get resolved. I first tried to restore it from iCloud back, however it was not a software issue or any virus issue, so after restore same problem continues. Then I get my phone to local area iphone repairing lab, and they detected
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html" }
5e60c21cc732-5
to local area iphone repairing lab, and they detected that it is an LCD issue. LCD get out of order without any reason (It was neither hit or nor slipped, but LCD get out of order all and sudden, while using it) it started opening things at random. I get LCD replaced with new one, that cost me $80.00 in total ($70.00 LCD charges + $10.00 as labor charges to fix it). iPhone is back to perfect mode now. It was iphone 6s. Thanks.\n\n\n\n### Other Answer\nI was having the same issue with my 6 plus, I took it to a repair shop, they opened the phone, disconnected the three ribbons the screen has, blew up and cleaned the connectors and connected the screen again and it solved the issue… it’s hardware, not software.\n\n\n\n### Other Answer\nHey.\nJust had this problem now. As it turns out, you just need to plug in your phone. I use a case and when I took it off I noticed that there was a lot
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html" }
5e60c21cc732-6
took it off I noticed that there was a lot of dust and dirt around the areas that the case didn\'t cover. I shined a light in my ports and noticed they were filled with dust. Tomorrow I plan on using pressurized air to clean it out and the problem should be solved. If you plug in your phone and unplug it and it stops the issue, I recommend cleaning your phone thoroughly.\n\n\n\n### Other Answer\nI simply changed the power supply and problem was gone. The block that plugs in the wall not the sub cord. The cord was fine but not the block.\n\n\n\n### Other Answer\nSomeone ask! I purchased my iPhone 6s Plus for 1000 from at&t. Before I touched it, I purchased a otter defender case. I read where at&t said touch desease was due to dropping! Bullshit!! I am 56 I have never dropped it!! Looks brand new! Never dropped or abused any way! I have my original charger. I am going to
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html" }
5e60c21cc732-7
I have my original charger. I am going to clean it and try everyone’s advice. It really sucks! I had 40,000,000 on my heart of Vegas slots! I play every day. I would be spinning and my fingers were no where max buttons and it would light up and switch to max. It did it 3 times before I caught it light up by its self. It sucks. Hope I can fix it!!!!\n\n\n\n### Other Answer\nNo answer, but same problem with iPhone 6 plus--random, self-generated jumping amongst apps and typing on its own--plus freezing regularly (aha--maybe that\'s what the "plus" in "6 plus" refers to?). An Apple Genius recommended upgrading to iOS 11.3.1 from 11.2.2, to see if that fixed the trouble. If it didn\'t, Apple will sell me a new phone for $168! Of couese the OS upgrade didn\'t fix the problem. Thanks for helping me figure out that it\'s most likely a hardware problem--which the
{ "url": "https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/ifixit.html" }