Dataset columns: source (string, 26 to 381 characters), text (string, 53 characters to 1.64M characters)
https://www.databricks.com/dataaisummit?itm_data=events-hp-nav-dais23
Data and AI Summit 2023 - Databricks

SAN FRANCISCO, JUNE 26-29 | VIRTUAL, JUNE 28-29

Generation AI: Large Language Models (LLMs) are taking AI mainstream. Join the premier event for the global data community to understand their potential and shape the future of your industry with data and AI. San Francisco, Moscone Center, June 26-29, 2023.

Featured Speakers. Top experts, researchers and open source contributors from Databricks and across the data and AI community will speak at Data + AI Summit. Whether you're an engineering wizard, ML pro, SQL expert — or you want to learn how to build, train and deploy LLMs — you'll be in good company.
- Daniela Rus: Director, MIT CSAIL; Professor of EECS, MIT
- Percy Liang: Professor of Computer Science, Stanford
- Nat Friedman: Creator of Copilot; Former CEO, GitHub
- Michael Carbin: Co-founder, MosaicML; Professor of EECS, MIT
- Kasey Uhlenhuth: Staff Product Manager, Databricks
- Wassym Bensaid: Sr. Vice President, Software Development, Rivian
- Eric Schmidt: Co-Founder, Schmidt Futures; Former CEO and Chairman, Google
- Adi Polak: Data & AI Technologist, lakeFS
- Ali Ghodsi: Co-founder and CEO, Databricks
- Manu Sharma: CEO, Labelbox
- Matei Zaharia: Original Creator of Apache Spark™ and MLflow; Chief Technologist, Databricks
- Lin Qiao: Co-creator of PyTorch; Co-founder and CEO, Fireworks
- Sai Ravuru: Senior Manager of Data Science & Analytics, JetBlue
- Emad Mostaque: CEO, Stability AI
- Harrison Chase: Creator of LangChain
- Satya Nadella: Chairman and CEO, Microsoft (live virtual guest)
- Zaheera Valani: Senior Director of Engineering, Databricks
- Hannes Mühleisen: Creator of DuckDB
- Brooke Wenig: Machine Learning Practice Lead, Databricks
- Jitendra Malik: Computer Vision Pioneer; Former Head of Facebook AI Research
- Robin Sutara: Field CTO, Databricks
- Lior Gavish: CEO and Co-founder, Monte Carlo Data
- Dawn Song: Professor of EECS, UC Berkeley
- Reynold Xin: Co-founder and Chief Architect, Databricks

Why attend? Join thousands of data leaders, engineers, scientists and analysts to explore all things data, analytics and AI — and how these are unified on the lakehouse. Hear from the data teams who are transforming their industries. Learn how to build and apply LLMs to your business. Uplevel your skills with hands-on training and role-based certifications. Connect with data professionals from around the world and learn more about all Data + AI Summit has to offer.

Sessions: With more than 250 sessions, Data + AI Summit has something for everyone. Choose from technical deep dives, hands-on training, lightning talks, industry sessions, and more.

Technology: Explore the latest advances in leading open source projects and industry technologies like Apache Spark™, Delta Lake, MLflow, Dolly, PyTorch, dbt, Presto/Trino, DuckDB and much more. You'll also get a first look at new products and features in the Databricks Lakehouse Platform.

Networking: Connect with thousands of data + AI community peers and grow your professional network in social meetups, on the expo floor, or at our event party.

Choose your experience: attend all the sessions, training and special events live in San Francisco, or join virtually for the keynotes. The in-person event (recommended) includes the keynotes, 300+ breakout sessions, hands-on training courses for data engineering, machine learning and LLMs with many onsite certifications, "birds of a feather" meals, happy hours and special events, lightning talks, AMAs and meetups on topics such as Apache Spark™, Delta Lake, MLflow and Dolly, access to 100+ leading data and AI companies in the Dev Hub + Expo, and Industry Forums for Financial Services, Retail and Consumer Goods, Healthcare and Life Sciences, Communications, Media and Entertainment, Public Sector, and Manufacturing and Energy. The virtual event includes the keynotes and 10 breakout sessions.

Trusted by the data community: hear data practitioners from trusted companies all over the world. Don't miss this year's event. Register now.
https://www.databricks.com/dataaisummit/speaker/alon-gubkin
Alon Gubkin - Data + AI Summit 2023 | Databricks

Alon Gubkin, Co-Founder & CTO at Aporia. Writing code from the age of 7, Alon has a fairly healthy obsession with programming. He's also an avid fan of all things machine learning, software architecture, video games, and biology. At the age of 17, Alon created his first successful open source project, a video chat platform for mobile. Before joining Aporia, Alon served as an R&D Lead in the elite 81 intelligence unit of the Israel Defense Forces.
https://www.databricks.com/fr#yourprivacychoices
Data Lakehouse Architecture and AI – Databricks

The best data warehouse is a lakehouse. Unify your data, analytics and AI on a single platform.

Cut costs and accelerate innovation with the lakehouse platform:
- Unified: one platform for all your data, under consistent governance and available for every analytics and AI workload.
- Open: built on open standards and integrated with every cloud, so it fits cleanly into your modern data stack.
- Scalable: efficient scaling for every workload, from simple data pipelines to large-scale LLMs.

Data-driven organizations choose the lakehouse.

The lakehouse unifies your data teams:

Data management and data engineering: streamline data ingestion and management. With automated, reliable ETL, open and secure data sharing, and lightning-fast performance, Delta Lake brings all your structured, semi-structured and unstructured data into your data lake.

Data warehousing: get new insights from your most complete data. With immediate access to the freshest, most complete data and the power of Databricks SQL, whose price/performance is up to twelve times better than traditional cloud data warehouses, analysts and data scientists can deliver new insights quickly.

Data science and machine learning: accelerate ML across the entire lifecycle. The lakehouse is the foundation of Databricks Machine Learning, a collaborative, data-native solution for the whole machine learning lifecycle, from featurization to production. Combined with high-quality, high-performance data pipelines, the lakehouse accelerates machine learning and team productivity.

Data governance and sharing: bring governance and sharing to data, analytics and AI. With Databricks you get a common security and governance model for all of your data, analytics and AI assets in the lakehouse, on any cloud. Discover and share data across platforms, clouds and regions without replicating it or depending on a single vendor, and even distribute data products on an open marketplace.

The data warehouse belongs to the past. Discover why the lakehouse is the modern architecture for data and AI.

The session catalog is live: join us in San Francisco to explore the lakehouse ecosystem and the latest advances in open source technologies.

Also featured: why Gartner named Databricks a Leader for the second consecutive year; a new study of 600 CIOs across 14 industries and 18 countries showing that data strategy plays a crucial role in AI success; and an e-book for ML engineers and data scientists who want to sharpen their MLOps practice.
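Since the page leans on Databricks SQL for analyst workloads, here is a minimal sketch, not taken from the page, of querying a lakehouse table from Python with the open source databricks-sql-connector package; the hostname, HTTP path, access token and table name are placeholders you would replace with values from your own workspace.

# Minimal Databricks SQL sketch using the databricks-sql-connector package
# (pip install databricks-sql-connector). The hostname, HTTP path, token and
# table name below are placeholders for your own SQL warehouse and data.
from databricks import sql

with sql.connect(
    server_hostname="dbc-xxxxxxxx-xxxx.cloud.databricks.com",  # placeholder
    http_path="/sql/1.0/warehouses/xxxxxxxxxxxxxxxx",          # placeholder
    access_token="dapi-REPLACE-ME",                            # placeholder
) as connection:
    with connection.cursor() as cursor:
        cursor.execute(
            "SELECT order_date, SUM(amount) AS revenue "
            "FROM samples.demo_orders "   # placeholder table
            "GROUP BY order_date ORDER BY order_date LIMIT 10"
        )
        for row in cursor.fetchall():
            print(row)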
https://www.databricks.com/company/careers/open-positions?department=field%20engineering
Current job openings at Databricks | Databricks

Careers at Databricks: Overview | Culture | Benefits | Diversity | Students & new grads

Current job openings at Databricks, filterable by Department and Location.
https://www.databricks.com/dataaisummit/speaker/mike-del-balso/#
Mike Del Balso - Data + AI Summit 2023 | Databricks

Mike Del Balso, Co-founder and CEO at Tecton. Mike Del Balso is the co-founder of Tecton, where he is building next-generation data infrastructure for real-time ML. Before Tecton, Mike was the PM lead for Uber's Michelangelo ML platform. He was also a product manager at Google, where he managed the core ML systems that power Google's Search Ads business.
https://www.databricks.com/kr/solutions/industries/healthcare-and-life-sciences
Healthcare and Life Sciences Lakehouse - Databricks

Improve patient outcomes with the power of data and AI.

Four data challenges in healthcare and life sciences:
- Fragmented patient data: data silos and limited support for unstructured data make it hard to understand the patient care journey.
- Exploding health data: legacy on-premises architectures are complex to manage and expensive to scale for fast-growing, high-volume health data such as imaging and genomics.
- Real-time care and operations: data warehouses and scattered tools cannot deliver the real-time insights needed for critical care decisions or for safely producing and delivering critical therapeutics.
- Complex advanced health analytics: thin ML capabilities leave organizations struggling with next-generation patient care models, predictive analytics for drug R&D, and more.

The Healthcare and Life Sciences Lakehouse:
- Unified data and AI platform: a single platform that brings together all data and analytics workloads to drive innovation in patient care and healthcare R&D.
- Partner solutions: the world's leading healthcare and life sciences solution providers, including Deloitte, Accenture and ZS Associates, build on the lakehouse. Use pre-built services that accelerate data-driven innovation in drug R&D and patient care.
- Tools for acceleration: Databricks and its partners have developed a suite of Solution Accelerators that make it easy to ingest healthcare data such as HL7 messages and to tackle use cases like medical text analytics and drug safety monitoring.
- Industry collaboration: share data securely and openly across organizations in the healthcare ecosystem to accelerate life-saving research and improve how care is delivered.

Realizing the future of healthcare with the lakehouse: "The Databricks Healthcare and Life Sciences Lakehouse gives GE Healthcare a modern, open and collaborative platform for building a picture of the patient across the care journey. These capabilities let us retire costly legacy data silos and give our teams timely, accurate insights." – Joji George, Chief Technology Officer, LCS Digital, GE Healthcare

Why the Healthcare and Life Sciences Lakehouse? Shorten research timelines and improve patient outcomes on an open, collaborative platform for data and AI.
- 360° patient view: unify all structured and unstructured data across patients, R&D and operations on one platform for analytics and AI. A complete view of the patient journey enables more personalized care.
- Limitless scale for population-level research: analyze huge volumes of patient data quickly and reliably on a scalable cloud platform. Population-wide insights give a holistic view of health trends and ultimately support better treatments.
- Real-time analytics, real-time operations: rapidly ingest and process streaming data from anywhere to power real-time analytics, from bed-capacity management to manufacturing optimization and drug distribution.
- ML-powered drug discovery and patient care: use machine learning to better understand disease and anticipate care needs, with all data connecting seamlessly to your collaborative tools for advanced analytics.

Partners and solutions (data, analytics and templates for healthcare and life sciences): intelligent care management (improve patient experience and outcomes with personalized care journeys and better access to care); PrecisionView™ (extend capabilities and strengthen internal collaboration across healthcare and life sciences); health data interoperability (automatically ingest streaming FHIR bundles into the lakehouse for downstream patient analytics at scale); intelligent data management for biomedical research (turn scientific data into an enterprise asset across the full value chain).

Solution Accelerators:
- Data models and cohort building, OMOP and propensity score matching: easily ingest and standardize real-world data in the lakehouse and deliver observational analytics at scale.
- Interoperability, HL7v2 message ingestion: automatically ingest streaming HL7v2 messages into the lakehouse for real-time analytics.
- Interoperability, FHIR bundle ingestion: automatically ingest streaming FHIR bundles into the lakehouse for downstream patient analytics.
- Imaging, digital pathology classification: strengthen diagnostic workflows with deep learning by detecting metastases on digital pathology slides.
- R&D, drug target identification: analyze genetic associations at scale to help R&D teams identify new drug targets.
- Population health, disease risk prediction: build disease risk prediction models to improve care management programs.
- NLP, adverse drug event detection: improve drug safety monitoring with the joint NLP solution with John Snow Labs.
- NLP, real-world oncology data extraction: turn unstructured oncology notes into new patient insights with the joint NLP solution with John Snow Labs.
- NLP, automated PHI removal: automatically remove sensitive patient information from text data with the joint NLP solution with John Snow Labs.

Want to go deeper? Healthcare: deliver patient-centered care with the power of data and AI. Life sciences: bring new therapies to the patients who need them most with data analytics and AI.

Resources:
- eBooks: Improving Patient Outcomes with the Healthcare and Life Sciences Lakehouse; Uncovering New Patient Insights with NLP at Scale; Unlocking the Potential of Real-World Data with the Lakehouse; solution sheet: Introducing the Healthcare and Life Sciences Lakehouse.
- Webinars and workshops: FHIR-Based Real-Time Patient Analytics with the Lakehouse; Transforming Care for Patients with Chesapeake Regional Information System; Data Standardization with OMOP and Disease Risk Prediction with ML; Real-Time Data Extraction with NLP; Accelerating R&D with Real-World Data + AI.
- Blogs: Amgen Accelerates Drug Development and Delivery with the Healthcare and Life Sciences Lakehouse; Announcing the Healthcare and Life Sciences Lakehouse; Improving Drug Safety with NLP-Based Adverse Event Detection; Databricks' Open Source Genomics Toolkit Outperforms Leading Tools; Extracting Oncology Insights from Real-World Clinical Data with NLP.
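The interoperability accelerators above ingest streaming FHIR bundles and HL7 messages into the lakehouse. The accelerators themselves are not reproduced on this page; as a rough illustration of the underlying pattern, here is a sketch using Spark Structured Streaming with Databricks Auto Loader, where the landing path, schema and checkpoint locations and the table name are illustrative assumptions and `spark` is the session predefined in a Databricks notebook.

# Generic streaming-ingest sketch (not the actual Databricks solution
# accelerator): incrementally pick up newly arriving FHIR bundle JSON files
# with Auto Loader and append them to a Delta table for downstream analytics.
# Paths and the table name are illustrative placeholders; `spark` is the
# SparkSession that Databricks notebooks provide by default.
raw_fhir = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/hls/_schemas/fhir_bundles")
    .load("/mnt/hls/landing/fhir_bundles/")
)

(
    raw_fhir.writeStream
    .option("checkpointLocation", "/mnt/hls/_checkpoints/fhir_bundles")
    .trigger(availableNow=True)           # process what has arrived, then stop
    .toTable("hls_bronze.fhir_bundles")   # lands as a Delta table
)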
https://www.databricks.com/jp/professional-services
Databricks Professional Services | Databricks

Databricks Professional Services brings world-class data engineering, data science and project management expertise to every phase of your data and AI journey, helping your projects succeed.

Benefits:
- Accelerate data and AI adoption: from workspace onboarding to building DataOps and Center of Excellence (CoE) practices at enterprise scale, we provide the right services for your needs.
- De-risk your projects: whether you are migrating legacy workloads to Databricks, building new data products, or delivering data and AI pipelines and machine learning projects, we act as a trusted partner and advisor to minimize risk and maximize value.
- Deploy at scale: building a data pipeline PoC or a single-node model is relatively easy to manage; the real challenge is rolling out and scaling data and AI practices across the enterprise, and Databricks helps you solve it.

Service offerings:
- Jumpstart: get projects off the ground quickly with know-how on the Databricks platform and best practices for its key capabilities.
- Hadoop migration: maximize the value of your existing data and ML pipeline investments and ensure a smooth migration.
- Build the lakehouse: accelerate adoption of a single, simple platform that unifies data analytics, data science and machine learning, and lay the foundation of your lakehouse.
- Machine learning: make enterprise-scale machine learning initiatives easier to launch and adopt.
- Shared Services Accelerator: streamline enterprise-scale operations and promote best practices for data and AI.
- Custom engagements: we scope statements of work (SOWs) to your needs. Databricks Professional Services has a track record of delivering complex, specialized projects and supports the full project lifecycle.

Resident services, solution architects and data scientists: experienced engineers with strong leadership and consulting skills are available as resident resources, and the broad Databricks partner ecosystem provides services that best fit your needs.
- Databricks and Spark experts: 10+ years of hands-on experience, deep big data knowledge and practical deployment skills.
- Project planning and execution: design and architecture support, alignment between project and platform, and guidance on timelines and resource needs.
- Planning from deployment to production: solutions built with scalability in mind, prototype development support and DevOps integration.
- CoE enablement: help establishing a Center of Excellence, developing shared standards and frameworks, sharing resources across teams and connecting with other Databricks teams.

Resources: Success Credits program overview; redemption request form.

Why Databricks Academy: people are the key to your success. Databricks Academy offers training and certification for the engineers behind it, taught by the team that started the Spark research project at UC Berkeley. Use Databricks Academy to upskill effectively.
https://www.databricks.com/company/contact?contactReason=Have%20Sales%20Contact%20Me&message=I%20would%20need%20information%20about%20AWS%20pricing
Contact Us - Databricks

Need assistance with training or support? See these additional resources:
- Documentation: read technical documentation for Databricks on AWS, Azure or Google Cloud.
- Databricks Community: discuss, share and network with Databricks users and experts.
- Training: master the Databricks Lakehouse Platform with instructor-led and self-paced training, or become a certified developer.
- Support: already a customer? Click here if you are encountering a technical or payment issue.
- Our office locations: see all our office locations globally and get in touch.
- Knowledge base: find quick answers to the most frequently asked questions about Databricks products and services.
https://www.databricks.com/dataaisummit/speaker/ketan-ganatra/#
Ketan Ganatra - Data + AI Summit 2023 | Databricks

Ketan Ganatra, Solution Architect at Databricks. As a Solution Architect working with Databricks' SI partners, Ketan helps them build competencies and products around the Lakehouse platform and acts as an SME on ML topics. Prior to joining Databricks, he led development and production roll-out of the first-ever enterprise ML use case in the Department of Defense. Ketan has 20+ years of IT experience, a degree in electronics engineering and an MBA.
https://www.databricks.com/dataaisummit/speaker/young-bang/#
Young Bang - Data + AI Summit 2023 | Databricks

Young Bang, Principal Deputy Assistant Secretary of the Army for Acquisition, Logistics and Technology at U.S. Army.
https://www.databricks.com/training-services-schedule
Training Services Schedule | Databricks

Training Services Schedule

This Schedule sets forth terms related to the Training Services and is incorporated as part of the Master Cloud Services Agreement ("MCSA"). The MCSA and this Schedule, together with any other Schedules that reference or are otherwise incorporated into the MCSA, and any accompanying or future Order you enter into with Databricks issued under the MCSA, comprise the Agreement. This Schedule will co-terminate with the MCSA. Other Schedules do not apply to the services ordered under this Schedule unless expressly referenced as being applicable. Capitalized terms used but not defined in this Schedule have the meaning assigned to them in the MCSA.

Additional Definitions. "Course" means an instance of either Instructor-Led Training Services or Self-Paced Training Services. "Course Materials" means the training materials and other information and content provided by Databricks in conducting the Training Services.

Generally. Databricks may provide, as set forth in an Order, certain Training Services, delivered (i) by instructors ("Instructor-led Training Services"), either in person or online-only; or (ii) as a self-paced online training course ("Self-Paced Training Services").
If we have agreed to provide you with Training Services, we will provide qualified training personnel and/or suitable training materials. You will make available to Databricks any materials Databricks reasonably requires to perform the Training Services, but unless agreed between the parties in writing, you will not provide Databricks any Customer Content for use with the Training Services.Instructor-led Training Services. If we have agreed to provide you with Instructor-led Training Services, except as otherwise mutually agreed upon by the parties, you will, as reasonably applicable: (i) provide qualified personnel to assist in coordinating and implementing the Instructor-led Training Services; (ii) provide Databricks with access to your sites and facilities (or temporary off-site facilities) during normal business hours and as otherwise reasonably required by Databricks to perform the Instructor-led Training Services; (iii) provide Databricks with such working space and office support as Databricks may reasonably request; and (iv) perform such tasks as may be reasonably required to permit Databricks to perform the Instructor-led Training Services.Self-Paced Training Services. Databricks may make available certain Self-Paced Training Services. Unless otherwise set forth in an Order or when signing up for a Self-Paced Training Service, the Self-Paced Training Services will expire 12 months from the earlier of purchase or activation and are licensed on a per-user basis.Course Materials.Ownership of Course Materials. Databricks retains all Intellectual Property Rights and all other proprietary rights related to the Course Materials. You will not delete or alter the copyright, trademark, or other proprietary rights notices or markings appearing within the Course Materials as delivered to you. You agree that the Course Materials are provided on a non-exclusive basis and not sold. You further acknowledge and agree that portions of the Course Materials, including but not limited to the source code and the specific design and structure of individual modules or programs, constitute or contain trade secrets and other Intellectual Property Rights of Databricks and its licensors.Course Materials License. Databricks grants a limited, non-sublicensable, non-transferable, license to the Course Materials, solely for the internal educational use by individuals who attend a training course (“Students”). Unless indicated by a Databricks instructor, Students are permitted to retain any Course Materials provided during the Course for continued learning following the conclusion of the Course. For the avoidance of doubt, Course Materials may not be used other than by the Students who received them, and may not be shared with individuals who did not attend a Course.Expenses. Customer will reimburse Databricks for reasonable travel expenses. Customer is responsible for all training venue expenses.Compliance with your Onsite Access Policies. If, in the course of providing Databricks Services, Databricks personnel go onsite at your premises, Databricks will require such personnel to comply with your commercially reasonable onsite access policies that have been provided by you to such personnel reasonably in advance. For the avoidance of doubt, no such policies will be deemed to modify the terms of the Agreement.Use of Workspace during Performance of Training Services. You may be required to use a Workspace in order to receive the benefit of the Training Services. 
Your use of the Workspace constitutes acceptance of the Platform Services terms of the MCSA unless the Workspace is provided as part of a Databricks Powered Service. If the Workspace is created for you by Databricks, it may be terminated following the conclusion of the Training Services without prior notice or any liability to you unless you enter into an Order with Databricks for its continued use.

Last Updated August 12, 2022. For earlier versions, please send a request to [email protected] (with "TOS Request" in the subject).
https://www.databricks.com/dataaisummit/speaker/mostafa-mokhtar/#
Mostafa Mokhtar - Data + AI Summit 2023 | Databricks

Mostafa Mokhtar, Principal Software Engineer at Databricks.
https://www.databricks.com/kr/product/delta-lake-on-databricks
Delta Lake on Databricks - Book a demo today!

Delta Lake: reliability, security and performance for your data lake.

What is Delta Lake? Delta Lake is an open-format storage layer that delivers reliability, security and performance on your data lake, for both streaming and batch operations. By replacing data silos with a single home for structured, semi-structured and unstructured data, Delta Lake is the foundation of a cost-effective, highly scalable lakehouse.

High-quality, reliable data: Delta Lake provides a single source of truth for all of your data, including real-time streams, so data teams are always working with the most current data. Support for ACID transactions and schema enforcement brings the reliability that traditional data lakes lack, letting you scale trusted data insights across the organization and run analytics and other data projects directly on the data lake, cutting time to insight by as much as 50x.

Open, secure data sharing: Delta Sharing is the industry's first open protocol for secure data sharing, making it simple to share data with other organizations regardless of where the data lives. Native integration with Unity Catalog lets you centrally manage and audit shared data across the enterprise, so you can confidently share data assets with suppliers and partners while meeting security and compliance requirements, and work with shared data from the visualization, query and governance tools of your choice.

Blazing-fast performance: built on Apache Spark™, Delta Lake is highly scalable and fast, and because it is optimized with performance features such as indexing, customers have seen ETL workloads run up to 48% faster.

Open and agile: all data in Delta Lake is stored in the open Apache Parquet format, so any compatible reader can read it. The APIs are open and compatible with Apache Spark, and Delta Lake on Databricks gives you access to a broad open source ecosystem, avoiding lock-in to proprietary data formats.

Automated, reliable data engineering: Delta Live Tables simplifies data engineering, offering an easy way to build and manage data pipelines for fresh, high-quality data on Delta Lake. It helps data engineering teams simplify ETL development and management with declarative pipeline development, improved data reliability and cloud-scale production operations.

Security and governance at scale: Delta Lake reduces risk with data governance and fine-grained access controls, capabilities that ordinary data lakes typically cannot provide. You can quickly and accurately update data in your data lake to comply with regulations such as GDPR, and maintain better data governance through audit logging. These capabilities are natively built in and enhanced on Databricks as part of Unity Catalog, the first multicloud data catalog for the lakehouse.

Use cases:
- BI on your freshest data: make up-to-date, real-time data directly queryable by data analysts on the data lake for immediate business insight. With Delta Lake you can run a multicloud lakehouse architecture that delivers data warehousing performance at data lake economics, with up to 6x better price/performance for SQL workloads than traditional cloud data warehouses.
- Unify batch and streaming: run batch and streaming operations on one simplified architecture and avoid complex, redundant systems and operational headaches. In Delta Lake, a single table is both a batch table and a streaming source and sink; streaming data ingest, batch historic backfill and interactive queries all work out of the box and integrate directly with Spark Structured Streaming.
- Meet regulatory requirements: Delta Lake removes malformed-data ingestion problems, eases deleting data for compliance, and simplifies modifying data for change data capture. With ACID transaction support on the data lake, every operation either fully succeeds or fully aborts so it can be retried later, without building new data pipelines. And because Delta Lake records every historical transaction on the data lake, it is easy to access and use earlier versions of your data to reliably meet compliance standards such as GDPR and CCPA.
- Data ingestion network: native connectors make it easy to ingest data quickly and reliably into Delta Lake from applications, databases and file storage.

Customers:
"Databricks delivered the analytics and operational improvements we needed to meet new demands in healthcare, along with a faster time to market." – Peter James, Chief Architect, Healthdirect Australia
"With Databricks and Delta Lake we have already been able to democratize data at scale while cutting the cost of running production workloads by 60%, saving us millions of dollars." – Steve Pulec, Chief Technology Officer, YipitData
"Delta Lake provides ACID capabilities that simplify data pipeline operations, improving pipeline reliability and data consistency, while features such as caching and auto-indexing enable efficient, performant access to the data." – Lara Minor, Senior Enterprise Data Manager, Columbia Sportswear
"Delta Lake gives us a simplified approach to managing data pipelines, which has lowered our operational costs and sped up time-to-insight for downstream analytics and data science." – Parijat Dey, Assistant Vice President of Digital Transformation and Technology, Viacom18

Resources: eBooks (The Big Book of Data Engineering Use Cases; Data, Analytics and AI Governance; an introduction to data management fundamentals; Building the Data Lakehouse by Bill Inmon, the father of the data warehouse), tech talks and training (the Getting Started with Delta Lake tech talk series) and webinars (The Good, the Bad and the Ugly; Diving Into the Lakehouse with Delta Lake).
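To make the "one table for batch and streaming" behavior described above concrete, here is a minimal PySpark sketch, not taken from the page: it writes a small Delta table, reads it back in batch, and then reads the same table as a streaming source. The path and columns are illustrative, and the session setup assumes the open source delta-spark package rather than a Databricks cluster, where Delta support is already built in.

# Minimal Delta Lake sketch: the same table serves batch writes, batch reads
# and streaming reads. Path and columns are illustrative placeholders.
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder.appName("delta-sketch")
    # Open source setup (pip install delta-spark); unnecessary on Databricks.
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

events_path = "/tmp/delta/events"  # hypothetical location

# Batch write: an ACID transaction; the schema is enforced on later appends.
spark.createDataFrame(
    [(1, "admit"), (2, "discharge")], ["patient_id", "event"]
).write.format("delta").mode("append").save(events_path)

# Batch read of the table.
spark.read.format("delta").load(events_path).show()

# Streaming read of the same table via Spark Structured Streaming.
query = (
    spark.readStream.format("delta").load(events_path)
    .writeStream.format("console")
    .option("checkpointLocation", "/tmp/delta/_checkpoints/events")
    .start()
)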
https://www.databricks.com/dataaisummit/speaker/mike-tang
Mike Tang - Data + AI Summit 2023 | Databricks

Mike Tang, Associate Director, Responsible AI at Verizon. Xuning (Mike) Tang has more than a decade of academic and industrial experience in machine learning, natural language processing, and big data technologies. He is enthusiastic about applied research and solving complex business problems with cutting-edge technologies. He has managed large teams building advanced analytics solutions for major manufacturing, hospitality, and banking companies, as well as Am Law 100 law firms. Before joining Verizon, Mike led Berkeley Research Group (BRG)'s Artificial Intelligence & Machine Learning practice, where he initiated BRG's ethical AI market offering. Prior to that, he worked for Deloitte and Fannie Mae. Mike earned his Ph.D. from Drexel University's College of Computing and Informatics. He has filed multiple patents and published more than 40 peer-reviewed research papers in top computer science journals and international conferences, and he serves as an associate editor and reviewer for multiple flagship journals in artificial intelligence and machine learning.
https://www.databricks.com/dataaisummit/speaker/cyrielle-simeone/#
Cyrielle Simeone - Data + AI Summit 2023 | Databricks

Cyrielle Simeone, Principal Product Marketing Manager at Databricks.
https://www.databricks.com/dataaisummit/speaker/lin-qiao
Lin Qiao - Data + AI Summit 2023 | Databricks

Lin Qiao, Co-creator of PyTorch; Co-founder and CEO at Fireworks. Lin Qiao is the co-founder and CEO of Fireworks, which is on a mission to accelerate the transition to AI-powered business through interactive experimentation and a production platform centered around PyTorch technologies and state-of-the-art models built with PyTorch. For the past half-decade she led the development of PyTorch, AI compilers and on-device AI platforms at Meta, driving AI research into production across hardware acceleration, model exploration, large and complex model scaling, and production ecosystems and platforms for all of Meta's AI use cases. She received a Ph.D. in computer science, started her career as a researcher at the Almaden Research Lab, and later moved to industry as an engineer. Prior to Meta, she worked across a broad range of distributed and data processing domains: high-performance columnar databases, OLTP systems, stream processing systems, data warehouse systems, and logging and metrics platforms.
https://www.databricks.com/product/google-cloud
Databricks Google Cloud Platform (GCP) | Databricks

Databricks on Google Cloud: an open lakehouse platform meets an open cloud to unify data engineering, data science and analytics.

Databricks on Google Cloud is a jointly developed service that allows you to store all your data on a simple, open lakehouse platform that combines the best of data warehouses and data lakes to unify all your analytics and AI workloads. Tight integration with Google Cloud Storage, BigQuery and the Google Cloud AI Platform enables Databricks to work seamlessly across data and AI services on Google Cloud: reliable data engineering, SQL analytics on all your data, collaborative data science, and production machine learning.

Why Databricks on Google Cloud?
- Open: built on open standards, open APIs and open infrastructure so you can access, process and analyze data on your terms.
- Optimized: deploy Databricks on Google Kubernetes Engine, the first Kubernetes-based Databricks runtime on any cloud, to get insights faster.
- Integrated: get one-click access to Databricks from the Google Cloud Console, with integrated security, billing and management.

"Databricks on Google Cloud simplifies the process of driving any number of use cases on a scalable compute platform, reducing the planning cycles that are needed to deliver a solution for each business question or problem statement that we use." – Harish Kumar, Global Data Science Director at Reckitt

Streamlined integration with Google Cloud: Google Cloud Storage (seamless read/write access to data in GCS, with the Delta Lake open format adding reliability and performance within Databricks), Google Kubernetes Engine, BigQuery, Google Cloud Identity, Google Cloud AI Platform, Google Cloud Billing, Looker, and the partner ecosystem.

Resources: Virtual Workshop: The Open Data Lake; news: Databricks Partners With Google Cloud to Deliver Its Platform to Global Businesses; blogs and reports: Announcing the Launch of Databricks on Google Cloud; Introducing Databricks on Google Cloud – Now in Public Preview; Databricks on Google Cloud Datasheet; Databricks on Google Cloud Now Generally Available; Data Engineering, Data Science and Analytics With Databricks on Google Cloud.
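As a concrete illustration of the Google Cloud Storage integration described above, here is a minimal PySpark sketch, not taken from the page: it reads raw CSV files from a gs:// bucket and lands them as a Delta table for SQL analytics. The bucket, paths and table names are placeholders, and it assumes a Databricks on Google Cloud cluster whose service account already has read access to the bucket, with `spark` being the session predefined in a Databricks notebook.

# Read raw files from Google Cloud Storage and land them as a Delta table.
# Bucket, paths and table names are hypothetical; assumes the cluster's
# Google service account can read the bucket and that `spark` is the
# SparkSession provided in a Databricks notebook.
raw = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("gs://example-bucket/raw/orders/")        # placeholder bucket/path
)

(
    raw.write.format("delta")
    .mode("overwrite")
    .saveAsTable("analytics.orders_bronze")         # placeholder table
)

# Downstream SQL analytics can query the table directly.
spark.sql("SELECT COUNT(*) AS order_count FROM analytics.orders_bronze").show()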
https://www.databricks.com/dataaisummit/speaker/wassym-bensaid
Wassym Bensaid - Data + AI Summit 2023 | Databricks
Wassym Bensaid, Sr. Vice President, Software Development at Rivian.
https://www.databricks.com/de/product/delta-sharing
Delta Sharing | Databricks
Delta Sharing: an open standard for the secure sharing of data assets. Databricks Delta Sharing provides an open solution for securely sharing live data from your lakehouse to any computing platform.
Key benefits. Open cross-platform sharing: easily share existing data in the Delta Lake and Apache Parquet formats with any computing platform. Share live data without replication: share live data without replicating it or copying it to another system. Centralized governance: manage, control, audit and track usage of shared data centrally on one platform. Marketplace for data products: build and package data products, including datasets, ML models and notebooks, once and distribute them anywhere through a central marketplace. Privacy-safe data clean rooms: collaborate easily with your customers and partners on any cloud in a secure, hosted environment while safeguarding data privacy.
How it works. Native integration with the Databricks platform: native integration with Unity Catalog lets you centrally manage and audit shared data across organizations, so you can confidently share data assets with suppliers and partners to better coordinate business operations while reliably meeting security and compliance requirements. Manage shares easily: create and manage providers, recipients and shares with a user-friendly UI, SQL commands or REST APIs, with full CLI and Terraform support. Discover and access data products through an open marketplace: find, evaluate and get access to data products such as datasets, machine learning models, dashboards and notebooks from anywhere, without needing to be on the Databricks platform. Privacy-safe data clean rooms: collaborate with your customers and partners on any cloud in a privacy-safe environment, and securely share data from your data lakes without data replication. Work with stakeholders in their preferred cloud and give them the flexibility to run complex computations and workloads in any language, whether SQL, R, Scala, Java or Python. Guide collaborators through common use cases with predefined templates, notebooks and dashboards to shorten time to insight.
Use cases. Internal sharing across lines of business: build a data mesh with Delta Sharing to securely exchange data between business units and subsidiaries. B2B sharing. Data monetization.
Customers. “Delta Sharing helped us streamline our data delivery process for large data volumes, allowing our customers to use their own compute environment to read freshly curated data with little to no integration work, and letting us expand our catalog of unique, high-quality data products.” – William Dague, Head of Alternative Data. “As a data company, giving our customers access to our data sets is critical. The Databricks Lakehouse Platform with Delta Sharing really streamlines that process, allowing us to securely reach a much broader user base regardless of cloud or platform.” – Felix Cheung, VP of Engineering. “Leveraging the powerful capabilities of Delta Sharing from Databricks gives Pumpjack Dataworks a faster onboarding experience, eliminating the export, import and remodeling of data and bringing immediate value to our customers. Faster results mean greater business opportunities for our customers and their partners.” – Corey Zwart, Head of Engineering. “With Delta Sharing, our customers can access curated data sets almost instantly and integrate them with the analytics tools of their choice. The dialogue with our customers shifts from low-value, tedious back-and-forth about ingestion to a high-value analytical discussion in which we foster successful customer experiences. As our customer relationships evolve, Delta Sharing lets us seamlessly deliver new data sets and update existing ones to keep customers up to date on the most important trends in their industries.” – Anup Segu, Data Engineering Tech Lead.
An open ecosystem: access the latest published version directly from the provider in familiar SQL, Python or BI tools.
Resources. Keynote and webinar: Data Governance and Sharing on the Lakehouse at Data + AI Summit 2022 (keynote); Accelerate Business Value With Delta Sharing (on-demand webinar). Blogs: Announcing the General Availability of Delta Sharing; Delta Sharing: An Open Standard for Secure Data Sharing; The Top Three Use Cases for Data Sharing With Delta Sharing. Solution overview and e-book: Exploring the New Delta Sharing Solution (e-book); Delta Sharing: An Open Standard for Secure Data Sharing; Rise of the Data Lakehouse by Bill Inmon, the father of the data warehouse.
Ready to get started with Databricks? Try Databricks for free.
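Because the page describes consuming shared data from any platform in familiar Python tools, a small recipient-side example may help. This is a minimal sketch using the open source delta-sharing Python connector (pip install delta-sharing); the profile file name and the share, schema and table names are hypothetical placeholders for the credential file and assets a real provider would issue.

# Minimal sketch of the recipient side of Delta Sharing with the open source
# delta-sharing Python connector. "config.share" and the share/schema/table
# names are hypothetical; a real profile file is issued by the data provider.
import delta_sharing

profile = "config.share"  # credential file downloaded from the provider

# Discover everything the provider has shared with this recipient
client = delta_sharing.SharingClient(profile)
for table in client.list_all_tables():
    print(table)

# Load one shared table directly into a pandas DataFrame, without replicating
# the underlying data into another system
table_url = profile + "#example_share.example_schema.trips"
df = delta_sharing.load_as_pandas(table_url)
print(df.head())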
https://www.databricks.com/us-pub-sector-services
Public Sector Services Schedule | Databricks
Public Sector Services Schedule
This Schedule sets forth terms related to use by Public Sector Customers of Databricks Services and is incorporated as part of the Master Cloud Services Agreement (the “MCSA”). The MCSA and this Schedule, together with any other Schedules that reference or are otherwise incorporated into the MCSA, and any accompanying or future Order you enter into with Databricks issued under the MCSA, comprise the Agreement. This Schedule will co-terminate with the MCSA. Capitalized terms used but not defined in this Schedule have the meaning assigned to them in the MCSA.
Contracting Entity and Order of Precedence. If an Order identifies the Databricks contracting party as Databricks Federal, LLC (“DBF”), then any references in the Agreement to Databricks, including those contained in this Schedule, will be deemed to refer to DBF. Orders referencing DBF or any Order between Databricks and a Public Sector Customer will be deemed to incorporate this Schedule. In the event of a conflict between this Schedule and the MCSA or another Schedule thereto, this Schedule will be deemed to control solely for such DBF Order or other Public Sector Order.
Databricks Services.
This Schedule establishes the terms and conditions enabling Databricks to provide Databricks Services to Public Sector Customers. For the purposes of this Schedule, references to “Customer” in an Order, the MCSA or any Schedule thereto will be deemed to refer to Public Sector Customer or Customer Representative, as applicable. This Schedule does not grant you access to the Databricks Services without mutual agreement to an Order and applicable Databricks Services Schedule.Additional Definitions.“Customer Information” means Customer Content (if you are purchasing Platform Services) and / or Customer Materials (if you are purchasing Advisory Services).“Customer Representative” means an organization authorized on behalf of a Public Sector Customer to purchase Databricks Services on its behalf, as designated in an applicable Order.“Federal Customer” means any United States federal government branch or agency Customer of Databricks Services subject to this Schedule, including agencies and departments from the Executive Branch, the Congress, or the Military.“Public Sector Customer” means any Federal Customer or other United States state or local government, or entity, authority, agency, or body exercising executive, legislative, judicial, regulatory or administrative functions of any such government, who purchases Databricks Services subject to this Schedule. Public Sector Customer(s) may include public universities and hospitals.Modification. Notwithstanding anything contained in the MCSA or any other Schedule, you agree as follows: U.S. Government License Rights. If Customer is a Federal Customer, or the Agreement otherwise become subject to the Federal Acquisition Regulation (FAR), Customer acknowledges and agrees that the Platform Services (including PVC Services, as applicable), Support Services (as applicable), and accompanying documentation constitute “commercial computer software” and “commercial computer software documentation”, respectively, provided as “Commercial Items” as defined in FAR 2.101. The Platform Services (including PVC Services, as applicable), Support Services (as applicable), and accompanying documentation have been developed solely at private expense, and as set forth in FAR 12.212 and DFARS Section 227.7202, any use, modification, reproduction release, performance, display or disclosure thereof shall be solely in accordance with the terms of the MCSA as modified by this Schedule.Customer Representative Obligations. Customer Representative will have no rights to the Databricks Services if purchasing Databricks Services on behalf of a Public Sector Customer except to the extent that a Public Sector Customer provides access to Customer Representative as its Authorized User. Customer Representative agrees to bind Public Sector Customer to the terms of the Agreement, including this Schedule, and will ensure that Databricks will be a third party beneficiary to such agreement. Customer Representative acknowledges that submission of a purchase order or Order will be deemed to be a representation to Databricks that the applicable Public Sector Customer has assented to the terms of the Agreement. Customer Representative will use commercially reasonable efforts to enforce the terms of the Agreement in the event it becomes aware that the Public Sector Customer has breached any terms and conditions of the Agreement where such breach adversely affects Databricks’ rights or contractual protections. 
Customer Representative will notify Databricks promptly upon learning of any such breach.Acceptance. Any Advisory Services or Training Services provided under the MCSA shall be deemed accepted upon delivery.Confidentiality. Any provisions contained in the Agreement that require a Public Sector Customer to keep certain information confidential may be subject to the Freedom of Information Act, 5 U.S.C. §552 or similar applicable state or local state law, and any order by a United States Federal Court or court of appropriate jurisdiction.Equitable Relief. Orders entered into directly with Databricks by a Federal Customer subject to this Schedule will only allow equitable relief when explicitly provided by statute (e.g., Prompt Payment Act or Equal Access to Justice Act).Auto Renewal, Customer Responsibility, and Indemnification. If you are a Federal Customer subject to the Anti-Deficiency Act or similar limitations on fees or payment absent approved allocation of funding, any auto-renewal of Databricks Services, penalty fees or interest accrual associated with late payment, or Customer indemnification obligations in the Agreement will be deemed inapplicable. Federal Customer agrees to be responsible to Databricks for any claim (i) by a third party alleging that Customer Information or its use with the Databricks Services infringes or misappropriates such party’s Intellectual Property Rights; and / or (ii) arising from or related to your use of the Databricks Services allegedly in violation of any applicable law or the Agreement. Any clause in the Agreement requiring Databricks to defend or indemnify a Public Sector Customer is hereby amended solely to the extent that (a) the U.S. Department of Justice has the sole right to represent the Federal Customers in any such action in accordance with 28 U.S.C. 516, and (b) representation on behalf of Public Sector Customers may lie solely with the applicable state attorney general’s office if you are a state or local government entity.Governing Law. Orders entered into directly by a Federal Customer subject to this Schedule, the MCSA and any other applicable Schedules will be governed by and construed in accordance with the laws of the United States, and venue and jurisdiction of any dispute will be determined by applicable federal statute. If the federal laws of the United States are not dispositive, then to the extent permitted by federal law, the Agreement will be governed by the laws of the State of California, excluding its conflict of law principles. If you are a state or local government entity, the Agreement is governed by the laws of your state, excluding its conflict of laws principles. In the event the Uniform Computer Information Transactions Act (UCITA) or any similar laws or regulations are enacted, to the extent allowed by law, such law or regulation will not apply to the Agreement, and the governing law will remain as if such law or regulation had not been enacted. The Agreement does not affect statutory rights that cannot be waived or changed by contract.Disputes. Notwithstanding any other provision in the Agreement, any dispute between Databricks and any Federal Customer arising under or related to the Agreement will be resolved exclusively under the terms and procedures of the Contract Disputes Act (41 U.S.C. Chapter 71).Assignment. 
If you are a Federal Customer purchasing directly from Databricks, then except to the extent transfer may not be legally restricted, you may not assign the Agreement, any Order, or any right or obligation under the Agreement, or delegate any performance, without Databricks’ prior written consent, which consent will not be unreasonably withheld. Databricks may assign its right to receive payment in accordance with the Assignment of Claims Act (31 U.S.C. 3727) and FAR 52.212-4(b), and may assign the Agreement to the extent not prohibited by the Anti-Assignment Act (41 U.S.C. 15). Subject to the requirements of FAR 42.12 (Novation and Change-of-Name Agreements), you agree to recognize Databricks’ successor in interest following a transfer of assets or a change in name. Any attempted assignment or transfer in violation of the foregoing will be void. Subject to the foregoing, the Agreement will be binding upon and will inure to the benefit of the parties and their respective successors and assigns.
Taxes. References to taxes payable by Customer will not apply to your purchases of Databricks Services to the extent You are exempt from such taxes, provided that you agree to provide documentation reasonably acceptable to Databricks evidencing your tax-exempt status.
Last Updated August 12, 2022. For earlier versions, please send a request to [email protected] (with “TOS Request” in the subject).
https://www.databricks.com/dataaisummit/speaker/gidon-gershinsky/#
Gidon Gershinsky - Data + AI Summit 2023 | Databricks
Gidon Gershinsky, Lead Systems Architect at Apple. Gidon Gershinsky designs and builds data security solutions at Apple. He plays a leading role in the Apache Parquet community work on big data encryption and integrity verification technologies.
https://www.databricks.com/explore/de-data-warehousing/the-modern-cloud-data-platform-dummies
Modern Cloud Platform for Dummies
https://www.databricks.com/dataaisummit/worldtour/
World Tour 2022 Homepage - Data + AI Summit 2022 | Databricks
Destination Lakehouse. Data + AI World Tour brings the data lakehouse to the global data community. With content, customers and speakers tailored to each region, the tour showcases how and why the data lakehouse is quickly becoming the cloud data architecture of today’s most data-driven companies.
Featured speakers: Tony Chan, Senior Data Science Manager, Octopus Card; Michael Camarri, PhD, Head of Data Science for APAC, Cognizant; Raela Wang, Head of Field Engineering, Databricks; Scott Sun, Partner, Digital Analytics Service, Deloitte; Veronica Ho, Head of Data Analytics & Insights, Swire Properties; Olivier Kuziner, General Manager, Ekimetrics; Jia Woei Ling, Managing Director, North Asia, Databricks; Wang Yi, Field Engineering Manager, Databricks; German Chung, Data Engineering Lead, Li & Fung | LFX; Arnab Maulik, Associate Head of Beta Labs, Lane Crawford Joyce Group; Ed Lenta, SVP and GM, Asia Pacific & Japan, Databricks; Sun Yoke Kuan, Regional Director, ASEAN; Trirat Suwanprateeb, Chief Executive Officer, SCB Tech; Wanlapa Linlawan, Chief Operating Officer, DataX; Pedro Uria-Recio, Chief Analytics & AI Officer, True Digital; Paul Howe, Group Chief Information Officer, Makro; Kramol Pulkes, Executive Vice President, Data and Analytics Head, KASIKORNBANK; Chris D'Agostino, Global Field CTO, Databricks; Leo Liu, Chief Digital Officer, Li & Fung | LFX; Gary Lam, Chief Technology Officer, Livi Bank; Alex Wu, Chief Architect, Livi Bank; Nicholas Eayrs, Vice President, Field Engineering, Databricks; Thomas Qian, Wholesale Chief Data Science Architect & Analytical Platform Lead, HSBC; Jack Ng, Director of Technology, Lane Crawford Joyce Group.
Join the World Tour: Bangkok (30 Mar), Hong Kong (20 Apr), Seoul (25 Apr), Bangalore (5 May).
Past events: London (Nov 2, 2022), Amsterdam Virtual (Dec 13, 2022), Japan Virtual (Dec 6, 2022), Korea Virtual (Dec 7, 2022), APAC Virtual (Dec 1, 2022), EMEA Virtual (Nov 2, 2022), Sydney (Nov 18, 2022), New York (Dec 13, 2022), Munich (Nov 15, 2022), Paris (Nov 8, 2022).
https://www.databricks.com/dataaisummit/speaker/miranda-luna
Miranda Luna - Data + AI Summit 2023 | Databricks
Miranda Luna, Product Management at Databricks. Miranda focuses on making all aspects of the Databricks SQL experience delightful. She resides in Seattle where outside of work she can usually be found skiing or golfing.
https://www.databricks.com/dataaisummit/speaker/steve-mahoney
Steve Mahoney - Data + AI Summit 2023 | Databricks
Steve Mahoney, Sr. Manager, Product Management at Databricks. Steve has spent the past 18 years building platform-driven businesses in the data analytics and infrastructure space as a founding engineer and product leader. He brings his healthy obsession for analytics, visualization and data-driven storytelling as the product leader for Databricks' collaboration products, including Delta Sharing, Databricks Marketplace, Clean Rooms and OEMs.
https://www.databricks.com/jp/dataaisummit
Data and AI Summit 2023 - Databricks
Generation AI. Large Language Models (LLMs) are taking AI mainstream. Join the premier event for the global data community to understand their potential and shape the future of your industry with data and AI. San Francisco, Moscone Center, June 26-29, 2023.
Featured speakers. Top experts, researchers and open source contributors from Databricks and across the data and AI community will speak at Data + AI Summit. Whether you're an engineering wizard, ML pro, SQL expert, or you want to learn how to build, train and deploy LLMs, you'll be in good company. Speakers include: Daniela Rus (Director, MIT CSAIL; Professor of EECS, MIT). Percy Liang (Professor of Computer Science, Stanford). Nat Friedman (Creator of Copilot; Former CEO, GitHub). Michael Carbin (Co-founder, MosaicML; Professor of EECS, MIT). Kasey Uhlenhuth (Staff Product Manager, Databricks). Wassym Bensaid (Sr. Vice President, Software Development, Rivian). Eric Schmidt (Co-Founder, Schmidt Futures; Former CEO and Chairman, Google). Adi Polak (Data & AI Technologist, lakeFS). Ali Ghodsi (Co-founder and CEO, Databricks). Manu Sharma (CEO, Labelbox). Matei Zaharia (Original Creator of Apache Spark™ and MLflow; Chief Technologist, Databricks). Lin Qiao (Co-creator of PyTorch; Co-founder and CEO, Fireworks). Sai Ravuru (Senior Manager of Data Science & Analytics, JetBlue). Emad Mostaque (CEO, Stability.AI). Harrison Chase (Creator of LangChain). Satya Nadella (Chairman and CEO, Microsoft; live virtual guest). Zaheera Valani (Senior Director of Engineering, Databricks). Hannes Mühleisen (Creator of DuckDB). Brooke Wenig (Machine Learning Practice Lead, Databricks). Jitendra Malik (Computer Vision Pioneer; Former Head of Facebook AI Research). Robin Sutara (Field CTO, Databricks). Lior Gavish (CEO and Co-founder, Monte Carlo Data). Dawn Song (Professor of EECS, UC Berkeley). Reynold Xin (Co-founder and Chief Architect, Databricks).
Why attend? Join thousands of data leaders, engineers, scientists and analysts to explore all things data, analytics and AI, and how these are unified on the lakehouse. Hear from the data teams who are transforming their industries. Learn how to build and apply LLMs to your business. Uplevel your skills with hands-on training and role-based certifications. Connect with data professionals from around the world and learn more about all Data + AI Summit has to offer.
Sessions: With more than 250 sessions, Data + AI Summit has something for everyone. Choose from technical deep dives, hands-on training, lightning talks, industry sessions and more.
Technology: Explore the latest advances in leading open source projects and industry technologies like Apache Spark™, Delta Lake, MLflow, Dolly, PyTorch, dbt, Presto/Trino, DuckDB and much more. You'll also get a first look at new products and features in the Databricks Lakehouse Platform.
Networking: Connect with thousands of data + AI community peers and grow your professional network in social meetups, on the expo floor, or at our event party.
Choose your experience: Get access to all the sessions, training and special events live in San Francisco, or join us virtually for the keynotes. The in-person event includes keynotes, 300+ breakout sessions, hands-on training courses for data engineering, machine learning and LLMs, many onsite certifications, "birds of a feather" meals, happy hours and special events, lightning talks, AMAs and meetups on topics such as Apache Spark™, Delta Lake, MLflow and Dolly, access to 100+ leading data and AI companies in the Dev Hub + Expo, and industry forums for Financial Services, Retail and Consumer Goods, Healthcare and Life Sciences, Communications, Media and Entertainment, Public Sector, and Manufacturing and Energy. The virtual event includes the keynotes and 10 breakout sessions.
Trusted by the data community: hear data practitioners from trusted companies all over the world. Don't miss this year's event!
https://www.databricks.com/dataaisummit/speaker/praveen-vemulapalli/#
Praveen Vemulapalli - Data + AI Summit 2023 | Databricks
Praveen Vemulapalli, Director, Technology at AT&T. Praveen is the Director of Technology for the Chief Data Office at AT&T. He oversees and manages AT&T's Network Traffic Data and Artificial Intelligence platforms and is responsible for 5G Analytics/AI Research & Development (R&D). He also leads the on-premise-to-cloud transformation of the Core Network Usage platforms, heading a strong team of Data Engineers, Data Scientists, ML/AI Ops Engineers and Solution Architects.
https://www.databricks.com/dataaisummit/speaker/itai-yaffe/#
Itai Yaffe - Data + AI Summit 2023 | Databricks
Itai Yaffe, Senior Big Data Architect at Akamai. Prior to Akamai, Itai was a Senior Solutions Architect at Databricks, a Principal Solutions Architect at Imply, and a big data tech lead at Nielsen Identity, where he dealt with big data challenges using tools like Spark, Druid, Kafka and others. He is also part of the Israeli chapter's core team of Women in Big Data. Itai is keen on sharing his knowledge and has presented his real-life experience in various forums in the past.
https://www.databricks.com/jp/terms-of-use
Terms of Use | Databricks
Website Terms of Use
These terms of use (“Terms”) govern your access to and use of all Databricks-branded publicly available websites, including sites located on databricks.com (other than *.cloud.databricks.com and help.databricks.com), as well as spark-summit.org and spark-packages.org and any other pages that link to these Terms (collectively, the “Sites”). These Terms expressly do not govern your access to or use of the Databricks Platform Services (known as Databricks and Databricks Community Edition, each located at *.cloud.databricks.com, or the related website at help.databricks.com and platform support services, together the “Platform Services”), which are subject to the Databricks Terms of Service (or, with respect to the Community Edition, the Community Edition Terms of Service) or other written agreement in place between Databricks, Inc. (“Databricks”) and our subscribers (“Subscribers”) (any such agreement, a “Services Agreement”).
PLEASE READ CAREFULLY THESE TERMS AND THE DATABRICKS PRIVACY POLICY (“PRIVACY POLICY”) WHICH IS INCORPORATED BY REFERENCE INTO THESE TERMS.
BY ACCESSING OR USING ANY OF THE SITES, YOU REPRESENT THAT YOU ARE AT LEAST 18 YEARS OLD, YOU ACKNOWLEDGE AND AGREE THAT YOU HAVE READ AND UNDERSTOOD THESE TERMS, AND YOU AGREE TO BE LEGALLY BOUND BY ALL OF THESE TERMS. IF YOU DO NOT AGREE TO ALL OF THESE TERMS, DO NOT ACCESS OR USE THE SITES. WE SUGGEST YOU PRINT A COPY OF THESE TERMS FOR YOUR RECORDS.Throughout the Terms, “we,” “us,” “our” and “ours” refer to Databricks, and “you,” “your” or “yours” refer to you personally (i.e., the individual who reads and agrees to be bound by these Terms) and, if you access the Sites on behalf of a legal entity, to that entity. If you are using the Sites on behalf of any entity you represent and warrant that you are authorized to accept these Terms on such entity’s behalf and, by accepting these Terms, you are hereby binding such entity to the Terms.Subject to your compliance with these Terms, solely for so long as you are permitted by Databricks to access and use the Sites, and provided that you keep intact all copyright and other proprietary notices, you may view Content and you may download and print the materials that Databricks specifically makes available for downloading from the Sites (such as white papers or user documentation), in each case solely for informational purposes and solely for personal or internal business use.ACCEPTANCE OF TERMS Databricks provides the Sites to you conditioned upon your accepting all of the Terms, without modification. Your use of the Sites constitutes your agreement with such Terms. We reserve the right to change, modify, add to, or remove portions of these Terms in our sole discretion at any time and we will, at our sole discretion, either post the modification on https://www.databricks.com/terms-of-use or provide you with email notice of the modification. You should check these Terms periodically for changes and you can determine when these Terms were last revised by referring to the “Last Updated” reference at the top of these Terms. Any modification shall be effective immediately upon the uploading of modified Terms. You indicate your agreement to comply with, and be bound by, any such modification by continuing to use or access the Sites after modified Terms are posted. If the modified Terms are not acceptable to you, your sole recourse is to discontinue your use of the Sites.  If you have registered for and opened an account through the Sites (an “Account”), you are entirely responsible for maintaining the confidentiality of your Account information, including your password, and for any and all activity that occurs under your Account. You agree to notify Databricks immediately of any unauthorized use of your Account or password, or any other breach of security. However, you will remain responsible for losses incurred by Databricks or by any other party due to your knowingly or inadvertently permitting unauthorized use of your Account or your Account information. You may not use anyone else’s ID, password or account at any time unless we expressly pre-approve such use, or unless expressly permitted under a Services Agreement. Databricks cannot and will not be liable for any loss or damage arising from your failure to comply with these obligations. Registration for any account is void where the user lacks the requisite eligibility for registration or if such registration is otherwise prohibited.Software (“Software”) or the Platform Services may be made available to you through the Sites. 
Your rights to access and use the Platform Services, including any Software will be subject to your agreement to the applicable Services Agreement governing your use of the Platform Services and to any terms and conditions of any applicable third party software license agreement (“Software License”) identified in the Software or on the web page providing access to the Software. You may not use any Software unless you agree to be bound by all terms and conditions of any applicable Software License. If there is a conflict between any Services Agreement and any Software License, the conflicting term of the Software License shall control but only to the extent necessary to eliminate the conflict.LICENSE GRANT AND PROPRIETARY RIGHTS Provided that you fully comply at all times with these Terms and any other policies or restrictions posted on or transmitted through the Sites, Databricks grants you a limited, non-exclusive, non-transferable, revocable license to access and use the Sites. Except as otherwise specifically noted in these Terms or on the Sites, the Software, Submissions (as later defined), and all other information, content, user interfaces, graphics, registered or unregistered trademarks, logos, images, artwork, videos, and documents, and the design, structure, selection, coordination, expression, “look and feel” and arrangement of such materials, made available through the Sites (collectively, the “Content”), regardless of its source or creation, is owned, controlled or licensed by or to Databricks, and is protected by trade dress, copyright, patent and trademark laws, and various other intellectual property rights and unfair competition laws, and Databricks reserves and retains all rights in and to such Content. Any reproduction, redistribution or other use or exploitation of Software in violation of any applicable Software License or in violation of any license granted under these Terms or, if applicable, under a Services Agreement, is expressly prohibited by law, and may result in civil and criminal penalties.  “Apache” and “Spark” are trademarks of the Apache Software Foundation. Any other third party trademarks, service marks, logos, trade names or other proprietary designations, that are or may become present within the Sites, including within any Content, are the registered or unregistered trademarks of the respective parties.Except solely as necessary for you to access the Sites for the intended purpose pursuant to these Terms, you may not copy, collect, modify, create derivative works or uses of, translate, distribute, transmit, publish, re-publish, perform, display, post, download, upload, sublicense, transfer, dispose of, resell or sell the Content or any other part of the Services. Except as expressly set forth in these Terms, these Terms do not grant to you any license to any intellectual property rights or other proprietary rights, including any implied licenses or licenses granted by estoppel or otherwise.INFORMATION SUBMITTED THROUGH OR TO OUR SITES At our sole discretion, you may be permitted to provide Submissions (as defined in the next sentence) to the Sites (e.g., through our forums). 
“Submissions” are defined to include: any messages, emails, text, graphics, code, questions, suggestions, comments, feedback, ideas, plans, notes, drawings, sample data, sound, images, video, original or creative materials, and other items or materials that you may provide to discussion forums, blogs, or other interactive features or areas of the Services where you or other users can create, post, transmit or store Content. Unless otherwise specifically agreed to by you and Databricks, by uploading, e-mailing, posting, publishing or otherwise transmitting any Submission, you hereby acknowledge that such Submission is non-confidential and you automatically grant (or warrant that the owner of such rights has expressly granted) to Databricks a perpetual, irrevocable, worldwide, non-exclusive, sub-licensable, fully paid-up and royalty-free license to use, make, have made, offer for sale, sell, copy, distribute, perform, display (whether publicly or otherwise), modify, adapt, publish, transmit and otherwise exploit such Submission, by means of any form, medium, or technology now known or later developed, and to grant to others rights to do any of the foregoing. In addition, you warrant that all so-called moral rights in such Submission have been waived.  For each Submission, you represent and warrant that you have all rights necessary for you to grant the license granted in the prior paragraph, and that such Submission, and your provision thereof to and through the Sites, does not violate any privacy, publicity, contractual, intellectual property, or other right or rights of any person or entity or otherwise violate any applicable laws, rules or regulations. You acknowledge that Databricks may have ideas or materials already under consideration or development that are or may be similar to your Submissions and that you are not entitled to any form of compensation or reimbursement from Databricks in connection with your Submissions. You agree to be fully responsible for, and to pay any and all royalties, fees, damages, and any other monies owing any person or entity by reason of, any Submission you provide to the Sites. 
We reserve the right to terminate access to all or any part of the Sites for anyone we suspect to be an infringer of our or any third party’s intellectual property rights of any kind whatsoever. You agree that you will not, and will not allow or authorize any third party to, post Submissions containing:
Anything that is or may be (a) threatening, harassing, degrading, abusive or hateful; (b) an incitement to violence, terrorism or other wrongdoing; (c) defamatory or libelous; (d) invasive of privacy rights; (e) fraudulent, deceptive, impersonating of any person or entity, or misrepresentative of your affiliation with any person or entity; (f) obscene, pornographic, indecent, grotesque or otherwise objectionable; or (g) protected by copyright, trademark, confidentiality obligations, or other proprietary or privacy right without the express prior written consent of the owner of such right.
Any material, the posting or usage of which would give rise to criminal or civil liability, or cause violation of any rules or regulations, or that encourages conduct that constitutes a criminal offense.
Any virus, worm, Trojan horse or other computer code, file, data or program that is harmful, disruptive, corrupted, or invasive, or may be or is intended to damage or hijack the operation of any hardware or software.
Any information identifiable to a particular individual, including but not limited to addresses, phone numbers, email addresses, birthdates, Social Security numbers and other government-issued identification numbers, payment card and other financial account numbers or login credentials, and health information.
Any unsolicited or unauthorized advertising, promotional materials, junk mail, spam, chain letter, pyramid scheme, political campaign message, offering of an investment opportunity, or any other form of solicitation.
Any material with respect to which you do not have all rights, power and authority necessary for its collection, use and processing, or where your use and provision to the Sites would breach any agreement between you and any third party.
Databricks generally does not pre-screen or monitor Submissions (but reserves the right to do so) and does not control Submissions. Therefore, Databricks does not guarantee the accuracy, quality or appropriateness of Submissions and disclaims any responsibility for Submissions, including any liability for errors or omissions, or for any loss or damage of any kind incurred as a result of their use. However, Databricks reserves the right at its sole discretion to refuse, delete, screen or edit Submissions, provided that even if we do remove or alter any Submission, we shall have no obligation to stop our other uses of such Submission or any other Submission as permitted above. We have no obligation to store any of your Submissions. We have no responsibility or liability for the deletion or failure to store, transmit or receive your Submissions, nor do we have any responsibility for the security, privacy, storage or transmission of other communications originating with or involving your use of the Sites, except as may be expressly stated in these Terms or in the Privacy Policy. You are solely responsible for creating backup copies of and replacing any Submissions at your sole cost and expense. Our Privacy Policy governs your Submissions. By accepting these Terms, you agree to our collection, use, and disclosure of your information as described in the Privacy Policy.
No one under age 18 may register for an Account or provide any personal information to Databricks or to the Sites. If we learn that we have collected personal information from or about anyone under age 18, we will delete that information as quickly as possible. If you believe that we might have any information from or about a child under 18, please contact us at [email protected] with the subject “Child Data”. Databricks reserves the right to disclose any Submissions, and the circumstances surrounding their transmission, to any third party to operate the Sites, to protect Databricks or its suppliers or representatives, to protect users of the Sites, to comply with legal or regulatory obligations, to enforce these Terms, or for any other reason. Databricks is not responsible or liable for the conduct of, or your interactions with, any other users of the Sites (whether online or offline), or for any associated loss, damage, injury or harm. By using the Sites, you may be exposed to Submissions that are offensive, indecent or objectionable and you agree that Databricks bears no liability for such exposure. REQUIRED CONDUCT WHILE USING OUR SITES While using the Sites you will comply with all applicable laws, rules and regulations. In addition, Databricks expects users of the Sites to respect the rights and dignity of others. Your use of the Sites is conditioned on your compliance with the rules of conduct set forth in this Section; any failure to comply may also result in termination of your access to the Sites pursuant to Section 9 (Suspension or Termination of Access to Our Sites). In using the Sites, you agree that you will not, and will not allow or authorize any third party to:
Use the Sites or any Content for any purpose that is illegal, fraudulent, deceptive or unauthorized by these Terms, or would give rise to civil liability, or to solicit the performance of any illegal activity or other activity which infringes the rights of Databricks or others, or to encourage or promote any such activity.
Engage in or promote any conduct that is offensive, harassing, predatory, stalking, violent, threatening, discriminatory, racist, hateful, or otherwise harmful, against any individual or group.
Harvest or collect information about any third parties, including their email addresses or other personally identifiable information.
Send, by email or other means, any unsolicited or unauthorized advertising, promotional materials, junk mail, spam, chain letter, pyramid scheme, political campaign message, offering of an investment opportunity, or any other form of solicitation, or conceal or forge headers of emails or other messages, or otherwise misrepresent the identity of senders, for the purpose of sending spam or other unsolicited messages.
Impersonate or post on behalf of, or express or imply the endorsement of, any individual or entity, including Databricks or any of its representatives, or otherwise misrepresent your affiliation with a person or entity.
Use the Sites in any manner, whether deliberate or otherwise, including without limitation a denial of service attack, that could in any way (a) interfere with, damage, disable, overburden or impair the functioning of the Sites, or Databricks’ systems or networks, or any systems or networks connected to the Sites, or (b) violate any requirements, procedures, policies or regulations of such systems or networks.
Operate non-permissioned network services, including open proxies, mail relays or recursive domain name servers, or use any means to bypass user limitations relating to the Sites.
Use any robot, spider, crawler, scraper, deep-link, page-scrape, site search/retrieval application or other manual or automated device, program, algorithm or methodology or interface not provided by us to access, acquire, copy, retrieve, index, scrape, data mine, in any way reproduce or circumvent the navigational structure or presentation of the Sites or monitor any portion of the Sites or to extract data, or to sell, resell, frame, mirror or otherwise exploit for any commercial purpose, any portion of, use of, or access to the Sites (including any Content, Software and other materials available through the Sites), or attempt to circumvent any content filtering techniques we may employ.
Remove any copyright, trademark or other proprietary rights notice from the Sites or from Content or other materials contained on or originating from the Sites.
Create a database of any type by systematically downloading and storing any Content unless expressly permitted by Databricks to do so.
Attempt to gain unauthorized access to any portion or feature of the Sites, or any other systems or networks connected to the Sites or to any Databricks server, or to any of the services offered on or through the Sites, by hacking, password mining or any other illegitimate means.
Use or attempt to use any account you are not authorized to use.
Probe, scan, monitor or test the vulnerability of the Sites or any network connected to the Sites, or breach the security or authentication measures on the Sites or any network connected to the Sites.
Modify, adapt, create derivative works of, translate, reverse engineer, decompile or disassemble any portion of the Sites (including any Content or other materials available through the Sites), or do anything that might discover source code or bypass or circumvent measures employed to prevent or limit access to any area, Content or code within the Sites except as, and solely to the extent, expressly authorized under applicable law overriding any of these restrictions.
Develop any third-party applications that interact with the Sites or Content without our prior written consent.
Use or apply the Sites in any manner directly or indirectly competitive with any business of Databricks.
LINKS We may from time to time at our discretion host or provide links to services, products, web pages, websites or other content of third parties (“Third-Party Content”). The inclusion of any link to, or the hosting of, any Third-Party Content is provided solely as a convenience to our users, including you, and does not imply affiliation, endorsement, approval, control or adoption by us of the Third-Party Content. We make no claims or representations regarding, and accept no responsibility or liability for, Third-Party Content including without limitation its quality, accuracy, nature, ownership or reliability. Your use of Third-Party Content is at your own risk. When you leave the Sites to access Third-Party Content via a link, you should be aware that our policies, including the Privacy Policy, no longer govern. You should review the applicable terms and policies, including privacy and data gathering policies, of any website to which you navigate from the Sites. DISCLAIMER OF WARRANTIES YOU EXPRESSLY AGREE THAT YOUR USE OF THE SITES, INCLUDING ANY CONTENT, IS AT YOUR SOLE RISK.
ALL OF THE SITES AND CONTENT ARE PROVIDED TO YOU ON AN “AS IS” AND “AS AVAILABLE” BASIS, AND DATABRICKS MAKES NO RELATED REPRESENTATIONS, AND DISCLAIMS ALL POSSIBLE WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. WE DO NOT WARRANT THAT THE SITES OR CONTENT ARE ACCURATE, CONTINUOUSLY AVAILABLE, COMPLETE, RELIABLE, SECURE, CURRENT, ERROR-FREE, OR FREE OF VIRUSES OR OTHER HARMFUL COMPONENTS. DATABRICKS CANNOT AND DOES NOT GUARANTEE THAT ANY DEFECTS, ERRORS OR OMISSIONS WILL BE CORRECTED, REGARDLESS OF WHETHER DATABRICKS IS AWARE OF SUCH DEFECTS, ERRORS OR OMISSIONS.  TO THE EXTENT APPLICABLE STATE LAW DOES NOT ALLOW THE EXCLUSIONS AND DISCLAIMERS OF WARRANTIES AS SET FORTH IN THIS SECTION 6, SOME OR ALL OF THE ABOVE EXCLUSIONS AND DISCLAIMERS MAY NOT APPLY TO YOU, IN WHICH CASE SUCH EXCLUSIONS AND DISCLAIMERS WILL APPLY TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW. YOU ACKNOWLEDGE THAT THE DISCLAIMERS, LIMITATIONS, AND WAIVERS OF LIABILITY SET FORTH IN THIS SECTION 6 SHALL SURVIVE ANY EXPIRATION OR TERMINATION OF THESE TERMS OR YOUR USE OF THE SITES.LIMITATION OF LIABILITY YOU ACKNOWLEDGE AND AGREE THAT, TO THE MAXIMUM EXTENT PERMITTED BY LAW, THE ENTIRE RISK ARISING OUT OF YOUR ACCESS TO AND USE OF THE SITES AND CONTENT REMAINS WITH YOU. IN NO EVENT WILL DATABRICKS OR ANY OF ITS DIRECTORS, EMPLOYEES, AGENTS OR SUPPLIERS BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL, EXEMPLARY, CONSEQUENTIAL OR PUNITIVE DAMAGES OF ANY KIND (INCLUDING, BUT NOT LIMITED TO, LOSS OF USE, LOSS OF BUSINESS, LOSS OF PROFITS, LOSS OF DATA, LOSS OF GOODWILL, SERVICE INTERRUPTION, COMPUTER DAMAGE, SYSTEM FAILURE OR THE COST OF SUBSTITUTE PRODUCTS OR SERVICES) ARISING OUT OF OR IN CONNECTION WITH THE SITES, AND ANY CONTENT, SERVICES OR PRODUCTS INCLUDED ON OR OTHERWISE MADE AVAILABLE THROUGH THE SITES, REGARDLESS OF THE FORM OF ACTION (WHETHER IN CONTRACT, TORT, STRICT LIABILITY, EQUITY OR OTHERWISE) AND EVEN IF WE ARE AWARE OF OR HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.  IN NO EVENT WILL OUR TOTAL CUMULATIVE LIABILITY TO YOU ARISING OUT OF OR IN CONNECTION WITH THESE TERMS, OR FROM THE USE OF OR INABILITY TO USE THE SITES, INCLUDING ANY CONTENT, OR FROM THE USE OF OR EXPOSURE TO ANY SUBMISSIONS, EXCEED ONE HUNDRED DOLLARS ($100.00). MULTIPLE CLAIMS WILL NOT EXPAND THIS LIMITATION.THE FOREGOING LIMITATIONS AND EXCLUSIONS SHALL NOT APPLY WITH RESPECT TO ANY LIABILITY ARISING UNDER FRAUD, FRAUDULENT MISREPRESENTATION, GROSS NEGLIGENCE, OR ANY OTHER LIABILITY THAT CANNOT BE LIMITED OR EXCLUDED BY LAW. ADDITIONALLY, TO THE EXTENT APPLICABLE STATE OR OTHER LAW DOES NOT ALLOW THE EXCLUSIONS AND LIMITATIONS OF DAMAGES AS SET FORTH IN THIS SECTION 7, SOME OR ALL OF THE ABOVE EXCLUSIONS AND LIMITATIONS MAY NOT APPLY TO YOU, IN WHICH CASE DATABRICKS’ LIABILITY TO YOU WILL BE LIMITED BY THIS SECTION TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW.THIS SECTION WILL BE GIVEN FULL EFFECT EVEN IF ANY REMEDY SPECIFIED IN THESE TERMS IS DEEMED TO HAVE FAILED OF ITS ESSENTIAL PURPOSE. THESE LIMITATIONS OF LIABILITY FORM AN ESSENTIAL BASIS OF THE BARGAIN BETWEEN THE PARTIES. 
YOU ACKNOWLEDGE THAT THE LIMITATIONS OF LIABILITY SET FORTH IN THIS SECTION 7 SHALL SURVIVE ANY TERMINATION OR EXPIRATION OF THESE TERMS OR YOUR USE OF THE SITES. INDEMNIFICATION To the fullest extent permitted by law, you agree to indemnify, defend and hold harmless Databricks, its officers, directors, shareholders, successors in interest, employees, agents, subsidiaries and affiliates, from and against any and all actual or threatened third party claims (groundless or otherwise), demands, losses, damages, costs and liability, proceedings (at law or in equity) and expenses (including reasonable attorneys’ and expert fees and costs of investigation) arising out of or in connection with (a) your use of the Sites, including without limitation any of your Submissions, (b) your breach of these Terms, including your breach of any covenant, representation, warranty, term, or condition set forth herein, including, without limitation, the obligations set forth in Section 3 (Information Submitted Through Our Sites) and Section 4 (Required Conduct While Using Our Sites), (c) your violation of any law or regulation or of any third party rights, including infringement, libel, misappropriation, or other violation of any third party’s intellectual property or other legal rights or (d) the disclosure, solicitation or use of any personal information by you, whether with or without your knowledge or consent. Databricks reserves the right, however, to assume the exclusive defense and control of any matter otherwise subject to indemnification by you and, in such case, you agree to cooperate with Databricks’ defense of such claim, and in no event may you agree to any settlement affecting Databricks without Databricks’ prior written consent. SUSPENSION OR TERMINATION OF ACCESS TO OUR SITES Notwithstanding any provision to the contrary in these Terms, you agree that Databricks may, in its sole discretion and with or without prior notice, for any or no reason, suspend or terminate your access to any or all of the Sites and/or block your future access to any or all of the Sites, including without limitation for any of the following reasons: (a) if we determine that you have violated any provision, or the spirit, of these Terms, (b) in response to a request by a law enforcement or other government agency, (c) due to discontinuance or material modification of any of the Sites, or (d) due to unexpected technical issues or problems. Databricks shall not be liable to you or any third party for any termination of your access to any part of the Sites. The rights and obligations of these Terms which by their nature should survive, shall so survive any termination of your use of the Sites. CONTACT Questions or comments about the Terms or about the Sites may be directed to Databricks at the email address [email protected]. You may also email us at that address if you would like to report what you believe to be a violation of these Terms. However, please note that we do not accept any responsibility to maintain the confidentiality of any report of a violation you may submit to us, including your identity, nor do we commit to providing a personal reply to any report you submit, nor are we obligated to take action in response to your report. CLAIMS OF COPYRIGHT INFRINGEMENT Databricks respects the intellectual property rights of others and we request that the people who use the Sites do the same.
The Digital Millennium Copyright Act of 1998 (the “DMCA”) provides recourse for copyright owners who believe that material appearing on the Internet infringes their rights under U.S. copyright law. If you believe in good faith that materials available on the Sites infringe your copyright, you (or your agent) may send Databricks a notice requesting that we remove the material or block access to it. If you believe in good faith that someone has wrongly filed a notice of copyright infringement against you, you may send a counter-notice to Databricks under applicable provisions of the DMCA. Please note that substantial penalties under U.S. copyright law may be levied against any filer of a false counter-notice. Notices and counter-notices must meet the then-current statutory requirements imposed by the DMCA. See 17 U.S.C. § 512(c)(3), available at https://www.copyright.gov/title17/92chap5.html for details. Notices and counter-notices should be sent to:
Attn: Legal Department/DMCA Copyright Agent
Databricks, Inc.
160 Spear Street, Suite 1300
San Francisco, CA 94105
[email protected]
(866) 330-0121
You should note that if you knowingly misrepresent in your notification that the material or activity is infringing, you will be liable for any damages, including costs and attorneys’ fees, incurred by us or the alleged infringer as the result of our relying upon such misrepresentation in removing or disabling access to the material or activity claimed to be infringing. We encourage you to consult your legal advisor before filing a notice or counter-notice. In accordance with the DMCA and other applicable law, Databricks may at our discretion limit access to the Sites and/or terminate the accounts of any users who infringe any intellectual property rights of others, whether or not there is any repeat infringement. GENERAL The Terms and the relationship between each user and Databricks shall be governed by the laws of the State of California without regard to its conflict of law provisions and each party shall submit to the personal and exclusive jurisdiction of the courts located in San Francisco, California. The application of the United Nations Convention on Contracts for the International Sale of Goods is expressly excluded. Except to the extent a Services Agreement applies, these Terms, along with the Privacy Policy, constitute the entire agreement between you and Databricks with respect to your use of the Sites and supersede all prior or contemporaneous communications and proposals, whether electronic, oral or written, between you and Databricks with respect to the Sites. If any provision of the Terms is found by a court of competent jurisdiction to be invalid, the parties nevertheless agree that the court should endeavor to give effect to the parties’ intentions as reflected in the provision, and the other provisions of the Terms remain in full force and effect. A party may only waive its rights under these Terms by a written document executed by both parties. Databricks’ failure to insist on or enforce strict performance of these Terms shall not be construed as a waiver by Databricks of any provision or any right it has to enforce these Terms, nor shall any course of conduct between Databricks and you or any other party be deemed to modify any provision of these Terms. The headings of the sections of these Terms are for convenience of reference only and are not intended to restrict, affect or be of any weight in the interpretation or construction of the provisions of such sections.
None of your rights or duties under these Terms may be transferred, assigned or delegated by you without our prior written consent, and any attempted transfer, assignment or delegation without such consent will be void and without effect. We may freely transfer, assign or delegate any of our rights or duties under these Terms. Subject to the foregoing, these Terms will be binding upon and will inure to the benefit of the parties and their respective representatives, heirs, administrators, successors and permitted assigns. No provision of these Terms is intended for the benefit of any third party, and the parties do not intend that any provision should be enforceable by a third party. Our relationship is an independent contractor relationship, and neither these Terms nor any actions by either party may be interpreted as creating an agency or partnership relationship. Nothing in these Terms shall be construed to obligate Databricks to enter into or engage with you on any commercial transaction. If you are provided access to any Software, you acknowledge that such Software may be subject to regulation by local laws and United States government agencies which prohibit export or diversion of certain products or information about products to certain countries and certain persons. You represent and warrant that you will not export or re-export such Software in violation of these regulations. You acknowledge that your breach of any of the provisions of these Terms may cause immediate and irreparable harm to Databricks for which we may not have an adequate remedy in money damages. We will therefore be entitled to obtain an injunction against such breach from any court of competent jurisdiction immediately upon request and will be entitled to recover from you the costs incurred in seeking such an injunction. The availability or exercise of our right to obtain injunctive relief will not limit our right to seek or obtain any other remedy. You agree that we will not be liable for delays, failures, or inadequate performance of the Sites resulting from conditions outside of our reasonable control, including but not limited to natural disasters or other acts of God, failure of telecommunications networks or any other network or utility, threatened or actual acts of terrorism or war, riots, labor strikes, or governmental acts or orders.
Last Updated May 25, 2018
https://www.databricks.com/terms-of-use
Terms of Use | DatabricksPlatformThe Databricks Lakehouse PlatformDelta LakeData GovernanceData EngineeringData StreamingData WarehousingData SharingMachine LearningData SciencePricingMarketplaceOpen source techSecurity and Trust CenterWEBINAR May 18 / 8 AM PT Goodbye, Data Warehouse. Hello, Lakehouse. Attend to understand how a data lakehouse fits within your modern data stack. Register nowSolutionsSolutions by IndustryFinancial ServicesHealthcare and Life SciencesManufacturingCommunications, Media & EntertainmentPublic SectorRetailSee all IndustriesSolutions by Use CaseSolution AcceleratorsProfessional ServicesDigital Native BusinessesData Platform MigrationNew survey of biopharma executives reveals real-world success with real-world evidence. See survey resultsLearnDocumentationTraining & CertificationDemosResourcesOnline CommunityUniversity AllianceEventsData + AI SummitBlogLabsBeaconsJoin Generation AI in San Francisco June 26–29   Learn about LLMs like Dolly and open source Data and AI technologies such as Apache Spark™, Delta Lake, MLflow and Delta SharingExplore sessionsCustomersPartnersCloud PartnersAWSAzureGoogle CloudPartner ConnectTechnology and Data PartnersTechnology Partner ProgramData Partner ProgramBuilt on Databricks Partner ProgramConsulting & SI PartnersC&SI Partner ProgramPartner SolutionsConnect with validated partner solutions in just a few clicks.Learn moreCompanyCareers at DatabricksOur TeamBoard of DirectorsCompany BlogNewsroomDatabricks VenturesAwards and RecognitionContact UsSee why Gartner named Databricks a Leader for the second consecutive yearGet the reportTry DatabricksWatch DemosContact UsLoginJUNE 26-29REGISTER NOWLegalTermsDatabricks Master Cloud Services AgreementAdvisory ServicesTraining ServicesUS Public Sector ServicesExternal User TermsWebsite Terms of UseCommunity Edition Terms of ServiceAcceptable Use PolicyPrivacyPrivacy NoticeCookie NoticeApplicant Privacy NoticeDatabricks SubprocessorsPrivacy FAQsDatabricks Data Processing AddendumAmendment to Data Processing AddendumSecurityDatabricks SecuritySecurity AddendumLegal Compliance and EthicsLegal Compliance & EthicsCode of ConductThird Party Code of ConductModern Slavery StatementFrance Pay Equity ReportSubscribe to UpdatesTerms of UseWebsite Terms of UseThese terms of use (“Terms”) govern your access to and use of all Databricks-branded publicly available websites, including sites located on databricks.com (other than *.cloud.databricks.com and help.databricks.com), as well as spark-summit.org, and spark-packages.org and any other pages that link to these Terms (collectively, the “Sites”). These Terms expressly do not govern your access to or use of the Databricks Platform Services (known as Databricks and Databricks Community Edition, each located at *.cloud.databricks.com, or the related website at help.databricks.com and platform support services, together the “Platform Services”), which are subject to the Databricks Terms of Service (or with respect to the Community Edition, the Community Edition Terms of Service) or other written agreement in place between Databricks, Inc. (“Databricks”) and our subscribers (“Subscribers”) (any such agreement, a “Services Agreement”).PLEASE READ CAREFULLY THESE TERMS AND THE DATABRICKS PRIVACY POLICY (“PRIVACY POLICY”) WHICH IS INCORPORATED BY REFERENCE INTO THESE TERMS. 
BY ACCESSING OR USING ANY OF THE SITES, YOU REPRESENT THAT YOU ARE AT LEAST 18 YEARS OLD, YOU ACKNOWLEDGE AND AGREE THAT YOU HAVE READ AND UNDERSTOOD THESE TERMS, AND YOU AGREE TO BE LEGALLY BOUND BY ALL OF THESE TERMS. IF YOU DO NOT AGREE TO ALL OF THESE TERMS, DO NOT ACCESS OR USE THE SITES. WE SUGGEST YOU PRINT A COPY OF THESE TERMS FOR YOUR RECORDS.Throughout the Terms, “we,” “us,” “our” and “ours” refer to Databricks, and “you,” “your” or “yours” refer to you personally (i.e., the individual who reads and agrees to be bound by these Terms) and, if you access the Sites on behalf of a legal entity, to that entity. If you are using the Sites on behalf of any entity you represent and warrant that you are authorized to accept these Terms on such entity’s behalf and, by accepting these Terms, you are hereby binding such entity to the Terms.Subject to your compliance with these Terms, solely for so long as you are permitted by Databricks to access and use the Sites, and provided that you keep intact all copyright and other proprietary notices, you may view Content and you may download and print the materials that Databricks specifically makes available for downloading from the Sites (such as white papers or user documentation), in each case solely for informational purposes and solely for personal or internal business use.ACCEPTANCE OF TERMS Databricks provides the Sites to you conditioned upon your accepting all of the Terms, without modification. Your use of the Sites constitutes your agreement with such Terms. We reserve the right to change, modify, add to, or remove portions of these Terms in our sole discretion at any time and we will, at our sole discretion, either post the modification on https://www.databricks.com/terms-of-use or provide you with email notice of the modification. You should check these Terms periodically for changes and you can determine when these Terms were last revised by referring to the “Last Updated” reference at the top of these Terms. Any modification shall be effective immediately upon the uploading of modified Terms. You indicate your agreement to comply with, and be bound by, any such modification by continuing to use or access the Sites after modified Terms are posted. If the modified Terms are not acceptable to you, your sole recourse is to discontinue your use of the Sites.  If you have registered for and opened an account through the Sites (an “Account”), you are entirely responsible for maintaining the confidentiality of your Account information, including your password, and for any and all activity that occurs under your Account. You agree to notify Databricks immediately of any unauthorized use of your Account or password, or any other breach of security. However, you will remain responsible for losses incurred by Databricks or by any other party due to your knowingly or inadvertently permitting unauthorized use of your Account or your Account information. You may not use anyone else’s ID, password or account at any time unless we expressly pre-approve such use, or unless expressly permitted under a Services Agreement. Databricks cannot and will not be liable for any loss or damage arising from your failure to comply with these obligations. Registration for any account is void where the user lacks the requisite eligibility for registration or if such registration is otherwise prohibited.Software (“Software”) or the Platform Services may be made available to you through the Sites. 
Your rights to access and use the Platform Services, including any Software will be subject to your agreement to the applicable Services Agreement governing your use of the Platform Services and to any terms and conditions of any applicable third party software license agreement (“Software License”) identified in the Software or on the web page providing access to the Software. You may not use any Software unless you agree to be bound by all terms and conditions of any applicable Software License. If there is a conflict between any Services Agreement and any Software License, the conflicting term of the Software License shall control but only to the extent necessary to eliminate the conflict.LICENSE GRANT AND PROPRIETARY RIGHTS Provided that you fully comply at all times with these Terms and any other policies or restrictions posted on or transmitted through the Sites, Databricks grants you a limited, non-exclusive, non-transferable, revocable license to access and use the Sites. Except as otherwise specifically noted in these Terms or on the Sites, the Software, Submissions (as later defined), and all other information, content, user interfaces, graphics, registered or unregistered trademarks, logos, images, artwork, videos, and documents, and the design, structure, selection, coordination, expression, “look and feel” and arrangement of such materials, made available through the Sites (collectively, the “Content”), regardless of its source or creation, is owned, controlled or licensed by or to Databricks, and is protected by trade dress, copyright, patent and trademark laws, and various other intellectual property rights and unfair competition laws, and Databricks reserves and retains all rights in and to such Content. Any reproduction, redistribution or other use or exploitation of Software in violation of any applicable Software License or in violation of any license granted under these Terms or, if applicable, under a Services Agreement, is expressly prohibited by law, and may result in civil and criminal penalties.  “Apache” and “Spark” are trademarks of the Apache Software Foundation. Any other third party trademarks, service marks, logos, trade names or other proprietary designations, that are or may become present within the Sites, including within any Content, are the registered or unregistered trademarks of the respective parties.Except solely as necessary for you to access the Sites for the intended purpose pursuant to these Terms, you may not copy, collect, modify, create derivative works or uses of, translate, distribute, transmit, publish, re-publish, perform, display, post, download, upload, sublicense, transfer, dispose of, resell or sell the Content or any other part of the Services. Except as expressly set forth in these Terms, these Terms do not grant to you any license to any intellectual property rights or other proprietary rights, including any implied licenses or licenses granted by estoppel or otherwise.INFORMATION SUBMITTED THROUGH OR TO OUR SITES At our sole discretion, you may be permitted to provide Submissions (as defined in the next sentence) to the Sites (e.g., through our forums). 
“Submissions” are defined to include: any messages, emails, text, graphics, code, questions, suggestions, comments, feedback, ideas, plans, notes, drawings, sample data, sound, images, video, original or creative materials, and other items or materials that you may provide to discussion forums, blogs, or other interactive features or areas of the Services where you or other users can create, post, transmit or store Content. Unless otherwise specifically agreed to by you and Databricks, by uploading, e-mailing, posting, publishing or otherwise transmitting any Submission, you hereby acknowledge that such Submission is non-confidential and you automatically grant (or warrant that the owner of such rights has expressly granted) to Databricks a perpetual, irrevocable, worldwide, non-exclusive, sub-licensable, fully paid-up and royalty-free license to use, make, have made, offer for sale, sell, copy, distribute, perform, display (whether publicly or otherwise), modify, adapt, publish, transmit and otherwise exploit such Submission, by means of any form, medium, or technology now known or later developed, and to grant to others rights to do any of the foregoing. In addition, you warrant that all so-called moral rights in such Submission have been waived.  For each Submission, you represent and warrant that you have all rights necessary for you to grant the license granted in the prior paragraph, and that such Submission, and your provision thereof to and through the Sites, does not violate any privacy, publicity, contractual, intellectual property, or other right or rights of any person or entity or otherwise violate any applicable laws, rules or regulations. You acknowledge that Databricks may have ideas or materials already under consideration or development that are or may be similar to your Submissions and that you are not entitled to any form of compensation or reimbursement from Databricks in connection with your Submissions. You agree to be fully responsible for, and to pay any and all royalties, fees, damages, and any other monies owing any person or entity by reason of, any Submission you provide to the Sites. 
We reserve the right to terminate access to all or any part of the Sites for anyone we suspect to be an infringer of our or any third party’s intellectual property rights of any kind whatsoever.You agree that you will not, and will not allow or authorize any third party to, post Submissions containing:Anything that is or may be (a) threatening, harassing, degrading, abusive or hateful; (b) an incitement to violence, terrorism or other wrongdoing; (c) defamatory or libelous; (d) invasive of privacy rights; (e) fraudulent, deceptive, impersonating of any person or entity, or misrepresentative of your affiliation with any person or entity; (f) obscene, pornographic, indecent, grotesque or otherwise objectionable; or (g) protected by copyright, trademark, confidentiality obligations, or other proprietary or privacy right without the express prior written consent of the owner of such right.Any material, the posting or usage of which would give rise to criminal or civil liability, or cause violation of any rules or regulations, or that encourages conduct that constitutes a criminal offense.Any virus, worm, Trojan horse or other computer code, file, data or program that is harmful, disruptive, corrupted, or invasive, or may be or is intended to damage or hijack the operation of any hardware or software.Any information identifiable to a particular individual, including but not limited to addresses, phone numbers, email addresses, birthdates, Social Security numbers and other government-issued identification numbers, payment card and other financial account numbers or login credentials, and health information.Any unsolicited or unauthorized advertising, promotional materials, junk mail, spam, chain letter, pyramid scheme, political campaign message, offering of an investment opportunity, or any other form of solicitation.Any material with respect to which you do not have all rights, power and authority necessary for its collection, use and processing, or where your use and provision to the Sites would breach any agreement between you and any third party.Databricks generally does not pre-screen or monitor Submissions (but reserves the right to do so) and does not control Submissions. Therefore, Databricks does not guarantee the accuracy, quality or appropriateness of Submissions and disclaims any responsibility for Submissions, including any liability for errors or omissions, or for any loss or damage of any kind incurred as a result of their use. However, Databricks reserves the right at its sole discretion to refuse, delete, screen or edit Submissions, provided that even if we do remove or alter any Submission, we shall have no obligation to stop our other uses of such Submission or any other Submission as permitted above. We have no obligation to store any of your Submissions. We have no responsibility or liability for the deletion or failure to store, transmit or receive your Submissions, nor do we have any responsibility for the security, privacy, storage or transmission of other communications originating with or involving your use of the Sites, except as may be expressly stated in these Terms or in the Privacy Policy. You are solely responsible for creating backup copies of and replacing any Submissions at your sole cost and expense. Our Privacy Policy governs your Submissions.By accepting these Terms, you agree to our collection, use, and disclosure of your information as described in the Privacy Policy. 
No one under age 18 may register for an Account or provide any personal information to Databricks or to the Sites. If we learn that we have collected personal information from or about anyone under age 18, we will delete that information as quickly as possible. If you believe that we might have any information from or about a child under 18, please contact us at [email protected] with the subject “Child Data“.Databricks reserves the right to disclose any Submissions, and the circumstances surrounding their transmission, to any third party to operate the Sites, to protect Databricks or its suppliers or representatives, to protect users of the Sites, to comply with legal or regulatory obligations, to enforce these Terms, or for any other reason. Databricks is not responsible or liable for the conduct of, or your interactions with, any other users of the Sites (whether online or offline), or for any associated loss, damage, injury or harm. By using the Site, you may be exposed to Submissions that are offensive, indecent or objectionable and you agree that Databricks bears no liability for such exposure.REQUIRED CONDUCT WHILE USING OUR SITES While using the Sites you will comply with all applicable laws, rules and regulations. In addition, Databricks expects users of the Sites to respect the rights and dignity of others. Your use of the Sites is conditioned on your compliance with the rules of conduct set forth in this Section; any failure to comply may also result in termination of your access to the Sites pursuant to Section 9 (Suspension or Termination of Access to Our Sites). In using the Sites, you agree that you will not, and will not allow or authorize any third party to:  Use the Sites or any Content for any purpose that is illegal, fraudulent, deceptive or unauthorized by these Terms, or would give rise to civil liability, or to solicit the performance of any illegal activity or other activity which infringes the rights of Databricks or others, or to encourage or promote any such activity.Engage in or promote any conduct that is offensive, harassing, predatory, stalking, violent, threatening, discriminatory, racist, hateful, or otherwise harmful, against any individual or group.Harvest or collect information about any third parties, including their email addresses or other personally identifiable information.Send, by email or other means, any unsolicited or unauthorized advertising, promotional materials, junk mail, spam, chain letter, pyramid scheme, political campaign message, offering of an investment opportunity, or any other form of solicitation, or conceal or forge headers of emails or other messages, or otherwise misrepresent the identity of senders, for the purpose of sending spam or other unsolicited messages.Impersonate or post on behalf of, or express or imply the endorsement of, any individual or entity, including Databricks or any of its representatives, or otherwise misrepresent your affiliation with a person or entity.Use the Sites in any manner, whether deliberate or otherwise, including without limitation a denial of service attack, that could in any way (a) interfere with, damage, disable, overburden or impair the functioning of the Sites, or Databricks’ systems or networks, or any systems or networks connected to the Sites, or (b) violate any requirements, procedures, policies or regulations of such systems or networks.Operate non-permissioned network services, including open proxies, mail relays or recursive domain name servers, or use any means to bypass user 
limitations relating to the Sites.Use any robot, spider, crawler, scraper, deep-link, page-scrape, site search/retrieval application or other manual or automated device, program, algorithm or methodology or interface not provided by us to access, acquire, copy, retrieve, index, scrape, data mine, in any way reproduce or circumvent the navigational structure or presentation of the Sites or monitor any portion of the Sites or to extract data, or to sell, resell, frame, mirror or otherwise exploit for any commercial purpose, any portion of, use of, or access to the Sites (including any Content, Software and other materials available through the Sites), or attempt to circumvent any content filtering techniques we may employ.Remove any copyright, trademark or other proprietary rights notice from the Sites or from Content or other materials contained on or originating from the Sites.Create a database of any type by systematically downloading and storing any Content unless expressly permitted by Databricks to do so.Attempt to gain unauthorized access to any portion or feature of the Sites, or any other systems or networks connected to the Sites or to any Databricks server, or to any of the services offered on or through the Sites, by hacking, password mining or any other illegitimate means.Use or attempt to use any account you are not authorized to use.Probe, scan, monitor or test the vulnerability of the Sites or any network connected to the Sites, or breach the security or authentication measures on the Sites or any network connected to the Sites.Modify, adapt, create derivative works of, translate, reverse engineer, decompile or disassemble any portion of the Sites (including any Content or other materials available through the Sites), or do anything that might discover source code or bypass or circumvent measures employed to prevent or limit access to any area, Content or code within the Sites except as, and solely to the extent, expressly authorized under applicable law overriding any of these restrictions.Develop any third-party applications that interact with the Sites or Content without our prior written consent.Use or apply the Sites in any manner directly or indirectly competitive with any business of Databricks. LINKS We may from time-to-time at our discretion host or provide links to services, products, web pages, websites or other content of third parties (“Third-Party Content”). The inclusion of any link to, or the hosting of, any Third Party Content is provided solely as a convenience to our users, including you, and does not imply affiliation, endorsement, approval, control or adoption by us of the Third-Party Content. We make no claims or representations regarding, and accept no responsibility or liability for, Third-Party Content including without limitation its quality, accuracy, nature, ownership or reliability. Your use of Third-Party Content is at your own risk. When you leave the Sites to access Third Party Content via a link, you should be aware that our policies, including the Privacy Policy, no longer govern. You should review the applicable terms and policies, including privacy and data gathering policies, of any website to which you navigate from the Sites.DISCLAIMER OF WARRANTIES YOU EXPRESSLY AGREE THAT YOUR USE OF THE SITES, INCLUDING ANY CONTENT, IS AT YOUR SOLE RISK. 
ALL OF THE SITES AND CONTENT ARE PROVIDED TO YOU ON AN “AS IS” AND “AS AVAILABLE” BASIS, AND DATABRICKS MAKES NO RELATED REPRESENTATIONS, AND DISCLAIMS ALL POSSIBLE WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. WE DO NOT WARRANT THAT THE SITES OR CONTENT ARE ACCURATE, CONTINUOUSLY AVAILABLE, COMPLETE, RELIABLE, SECURE, CURRENT, ERROR-FREE, OR FREE OF VIRUSES OR OTHER HARMFUL COMPONENTS. DATABRICKS CANNOT AND DOES NOT GUARANTEE THAT ANY DEFECTS, ERRORS OR OMISSIONS WILL BE CORRECTED, REGARDLESS OF WHETHER DATABRICKS IS AWARE OF SUCH DEFECTS, ERRORS OR OMISSIONS.  TO THE EXTENT APPLICABLE STATE LAW DOES NOT ALLOW THE EXCLUSIONS AND DISCLAIMERS OF WARRANTIES AS SET FORTH IN THIS SECTION 6, SOME OR ALL OF THE ABOVE EXCLUSIONS AND DISCLAIMERS MAY NOT APPLY TO YOU, IN WHICH CASE SUCH EXCLUSIONS AND DISCLAIMERS WILL APPLY TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW. YOU ACKNOWLEDGE THAT THE DISCLAIMERS, LIMITATIONS, AND WAIVERS OF LIABILITY SET FORTH IN THIS SECTION 6 SHALL SURVIVE ANY EXPIRATION OR TERMINATION OF THESE TERMS OR YOUR USE OF THE SITES.LIMITATION OF LIABILITY YOU ACKNOWLEDGE AND AGREE THAT, TO THE MAXIMUM EXTENT PERMITTED BY LAW, THE ENTIRE RISK ARISING OUT OF YOUR ACCESS TO AND USE OF THE SITES AND CONTENT REMAINS WITH YOU. IN NO EVENT WILL DATABRICKS OR ANY OF ITS DIRECTORS, EMPLOYEES, AGENTS OR SUPPLIERS BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL, EXEMPLARY, CONSEQUENTIAL OR PUNITIVE DAMAGES OF ANY KIND (INCLUDING, BUT NOT LIMITED TO, LOSS OF USE, LOSS OF BUSINESS, LOSS OF PROFITS, LOSS OF DATA, LOSS OF GOODWILL, SERVICE INTERRUPTION, COMPUTER DAMAGE, SYSTEM FAILURE OR THE COST OF SUBSTITUTE PRODUCTS OR SERVICES) ARISING OUT OF OR IN CONNECTION WITH THE SITES, AND ANY CONTENT, SERVICES OR PRODUCTS INCLUDED ON OR OTHERWISE MADE AVAILABLE THROUGH THE SITES, REGARDLESS OF THE FORM OF ACTION (WHETHER IN CONTRACT, TORT, STRICT LIABILITY, EQUITY OR OTHERWISE) AND EVEN IF WE ARE AWARE OF OR HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.  IN NO EVENT WILL OUR TOTAL CUMULATIVE LIABILITY TO YOU ARISING OUT OF OR IN CONNECTION WITH THESE TERMS, OR FROM THE USE OF OR INABILITY TO USE THE SITES, INCLUDING ANY CONTENT, OR FROM THE USE OF OR EXPOSURE TO ANY SUBMISSIONS, EXCEED ONE HUNDRED DOLLARS ($100.00). MULTIPLE CLAIMS WILL NOT EXPAND THIS LIMITATION.THE FOREGOING LIMITATIONS AND EXCLUSIONS SHALL NOT APPLY WITH RESPECT TO ANY LIABILITY ARISING UNDER FRAUD, FRAUDULENT MISREPRESENTATION, GROSS NEGLIGENCE, OR ANY OTHER LIABILITY THAT CANNOT BE LIMITED OR EXCLUDED BY LAW. ADDITIONALLY, TO THE EXTENT APPLICABLE STATE OR OTHER LAW DOES NOT ALLOW THE EXCLUSIONS AND LIMITATIONS OF DAMAGES AS SET FORTH IN THIS SECTION 7, SOME OR ALL OF THE ABOVE EXCLUSIONS AND LIMITATIONS MAY NOT APPLY TO YOU, IN WHICH CASE DATABRICKS’ LIABILITY TO YOU WILL BE LIMITED BY THIS SECTION TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW.THIS SECTION WILL BE GIVEN FULL EFFECT EVEN IF ANY REMEDY SPECIFIED IN THESE TERMS IS DEEMED TO HAVE FAILED OF ITS ESSENTIAL PURPOSE. THESE LIMITATIONS OF LIABILITY FORM AN ESSENTIAL BASIS OF THE BARGAIN BETWEEN THE PARTIES. 
YOU ACKNOWLEDGE THAT THE LIMITATIONS OF LIABILITY SET FORTH IN THIS SECTION 7 SHALL SURVIVE ANY TERMINATION OR EXPIRATION OF THESE TERMS OR YOUR USE OF THE SITES.INDEMNIFICATION To the fullest extent permitted by law, you agree to indemnify, defend and hold harmless Databricks, its officers, directors, shareholders, successors in interest, employees, agents, subsidiaries and affiliates, from and against any and all actual or threatened third party claims (groundless or otherwise), demands, losses, damages, costs and liability, proceedings (at law or in equity) and expenses (including reasonable attorneys’ and expert fees and costs of investigation) arising out of or in connection with (a) your use of the Sites, including without limitation any of your Submissions, (b) your breach of these Terms, including your breach of any covenant, representation, warranty, term, or condition set forth herein, including, without limitation, the obligations set forth in Section 3 (Information Submitted Through Our Sites) and Section 4 (Required Conduct While Using Our Sites), (c) your violation of any law or regulation or of any third party rights, including infringement, libel, misappropriation, or other violation of any third party’s intellectual property or other legal rights or (d) the disclosure, solicitation or use of any personal information by you, whether with or without your knowledge or consent. Databricks reserves the right, however, to assume the exclusive defense and control of any matter otherwise subject to indemnification by you and, in such case, you agree to cooperate with Databricks’ defense of such claim, and in no event may you agree to any settlement affecting Databricks without Databricks’ prior written consent.SUSPENSION OR TERMINATION OF ACCESS TO OUR SITES Notwithstanding any provision to the contrary in these Terms, you agree that Databricks may, in its sole discretion and with or without prior notice, for any or no reason, suspend or terminate your access to any or all of the Sites and/or block your future access to any or all of the Sites, including without limitation for any of the following reasons: (a) if we determine that you have violated any provision, or the spirit, of these Terms, (b) in response to a request by a law enforcement or other government agency, (c) due to discontinuance or material modification of any of the Sites, or (d) due to unexpected technical issues or problems. Databricks shall not be liable to you or any third party for any termination of your access to any part of the Sites. The rights and obligations of these Terms which by their nature should survive, shall so survive any termination of your use of the Sites.CONTACT Questions or comments about the Terms or about the Sites may be directed to Databricks at the email address [email protected] You may also email us at that address if you would like to report what you believe to be a violation of these Terms. However, please note that we do not accept any responsibility to maintain the confidentiality of any report of a violation you may submit to us, including your identity, nor do we commit to providing a personal reply to any report you submit, nor are we obligated to take action in response to your report.CLAIMS OF COPYRIGHT INFRINGEMENT Databricks respects the intellectual property rights of others and we request that the people who use the Sites do the same. 
The Digital Millennium Copyright Act of 1998 (the “DMCA”) provides recourse for copyright owners who believe that material appearing on the Internet infringes their rights under U.S. copyright law. If you believe in good faith that materials available on the Sites infringe your copyright, you (or your agent) may send Databricks a notice requesting that we remove the material or block access to it. If you believe in good faith that someone has wrongly filed a notice of copyright infringement against you, you may send a counter-notice to Databricks under applicable provisions of the DMCA. Please note that substantial penalties under U.S. copyright law may be levied against any filer of a false counter-notice. Notices and counter-notices must meet the then-current statutory requirements imposed by the DMCA. See 17 U.S.C. § 512(c)(3), available at https://www.copyright.gov/title17/92chap5.html for details. Notices and counter-notices should be sent to:  Attn: Legal Department/DMCA Copyright Agent Databricks, Inc. 160 Spear Street, Suite 1300 San Francisco, CA 94105 [email protected] (866) 330-0121You should note that if you knowingly misrepresent in your notification that the material or activity is infringing, you will be liable for any damages, including costs and attorneys’ fees, incurred by us or the alleged infringer as the result of our relying upon such misrepresentation in removing or disabling access to the material or activity claimed to be infringing. We encourage you to consult your legal advisor before filing a notice or counter-notice.In accordance with the DMCA and other applicable law, Databricks may at our discretion limit access to the Sites and/or terminate the accounts of any users who infringe any intellectual property rights of others, whether or not there is any repeat infringement.GENERAL The Terms and the relationship between each user and Databricks shall be governed by the laws of the State of California without regard to its conflict of law provisions and each party shall submit to the personal and exclusive jurisdiction of the courts located in San Francisco, California. The application of the United Nations Convention on Contracts for the International Sale of Goods is expressly excluded. Except to the extent a Services Agreement applies, these Terms, along with the Privacy Policy, constitute the entire agreement between you and Databricks with respect to your use of the Sites and supersede all prior or contemporaneous communications and proposals, whether electronic, oral or written, between you and Databricks with respect to the Sites. If any provision of the Terms is found by a court of competent jurisdiction to be invalid, the parties nevertheless agree that the court should endeavor to give effect to the parties’ intentions as reflected in the provision, and the other provisions of the Terms remain in full force and effect. A party may only waive its rights under these Terms by a written document executed by both parties. Databricks’ failure to insist on or enforce strict performance of these Terms shall not be construed as a waiver by Databricks of any provision or any right it has to enforce these Terms, nor shall any course of conduct between Databricks and you or any other party be deemed to modify any provision of these Terms. The headings of the sections of these Terms are for convenience of reference only and are not intended to restrict, affect or be of any weight in the interpretation or construction of the provisions of such sections.  
None of your rights or duties under these Terms may be transferred, assigned or delegated by you without our prior written consent, and any attempted transfer, assignment or delegation without such consent will be void and without effect. We may freely transfer, assign or delegate any of our rights or duties under these Terms. Subject to the foregoing, these Terms will be binding upon and will inure to the benefit of the parties and their respective representatives, heirs, administrators, successors and permitted assigns. No provision of these Terms is intended for the benefit of any third party, and the parties do not intend that any provision should be enforceable by a third party. Our relationship is an independent contractor relationship, and neither these Terms nor any actions by either party may be interpreted as creating an agency or partnership relationship. Nothing in these Terms shall be construed to obligate Databricks to enter into or engage with you on any commercial transaction.

If you are provided access to any Software, you acknowledge that such Software may be subject to regulation by local laws and United States government agencies which prohibit export or diversion of certain products or information about products to certain countries and certain persons. You represent and warrant that you will not export or re-export such Software in violation of these regulations.

You acknowledge that your breach of any of the provisions of these Terms may cause immediate and irreparable harm to Databricks for which we may not have an adequate remedy in money damages. We will therefore be entitled to obtain an injunction against such breach from any court of competent jurisdiction immediately upon request and will be entitled to recover from you the costs incurred in seeking such an injunction. The availability or exercise of our right to obtain injunctive relief will not limit our right to seek or obtain any other remedy.

You agree that we will not be liable for delays, failures, or inadequate performance of the Sites resulting from conditions outside of our reasonable control, including but not limited to natural disasters or other acts of God, failure of telecommunications networks or any other network or utility, threatened or actual acts of terrorism or war, riots, labor strikes, or governmental acts or orders.

Last Updated May 25, 2018
https://www.databricks.com/dataaisummit/speaker/erika-ehrli
Erika Ehrli - Data + AI Summit 2023 | Databricks
Erika Ehrli, Senior Director of Product Marketing at Databricks
https://www.databricks.com/dataaisummit/speaker/jacob-renn/#
Jacob Renn - Data + AI Summit 2023 | Databricks
Jacob Renn, Chief Technologist at AI Squared, Inc
Dr. Jacob Renn is co-founder and Chief Technologist of AI Squared, a seed-stage startup located in the Washington, DC area. At AI Squared, Jacob leads the company’s R&D efforts. Jacob is the lead developer of DLite, a family of large language models developed by AI Squared, and he is also the creator of the BeyondML project. Jacob also serves as adjunct faculty at Capitol Technology University, where he completed his PhD in Technology with a focus in Explainable Artificial Intelligence.
https://www.databricks.com/dataaisummit/speaker/jackie-brosamer
Jackie Brosamer - Data + AI Summit 2023 | Databricks
Jackie Brosamer, Director of Software Engineering at Block
https://www.databricks.com/blog/2022/01/17/improving-drug-safety-with-adverse-event-detection-using-nlp.html
Improving Drug Safety With Adverse Event Detection Using NLP
by Amir Kermany, Michael Ortega, Moritz Steller, David Talby and Michael Sanky
January 17, 2022 in Engineering Blog

Don't miss our upcoming virtual workshop with John Snow Labs, Improve Drug Safety with NLP, to learn more about our joint NLP solution accelerator for adverse drug event detection.

The World Health Organization defines pharmacovigilance as "the science and activities relating to the detection, assessment, understanding and prevention of adverse effects or any other medicine/vaccine-related problem." In other words, drug safety.

Pharmacovigilance: drug safety monitoring in the real world

While all medicines and vaccines undergo rigorous testing for safety and efficacy in clinical trials, certain side effects may only emerge once these products are used by a larger and more diverse patient population, including people with other concurrent diseases.

To support ongoing drug safety, biopharmaceutical manufacturers must report adverse drug events (ADEs) to regulatory agencies, such as the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) in the EU. Adverse drug reactions or events are medical problems that occur during treatment with a drug or therapy. Of note, ADEs do not necessarily have a causal relationship with the treatment.
But in aggregate, the proactive reporting of adverse events is a key part of the signal detection system used to ensure drug safety.

Adverse event detection requires the right data foundation

Monitoring patient safety is becoming more complex as more data is collected. In fact, less than 5% of ADEs are reported via official channels and the vast majority are captured in free-text channels: emails and phone calls to patient support centers, social media posts, sales conversations between clinicians and pharma sales reps, online patient forums, and so on.

Robust drug safety monitoring requires manufacturers, pharmaceutical companies and drug safety groups to monitor and analyze unstructured medical text from a variety of jargons, formats, channels and languages. To do this effectively, organizations need a modern, scalable data and AI platform that can provide scientifically rigorous, near real-time insights.

The path forward begins with the Databricks Lakehouse, a modern data platform that combines the best elements of a data warehouse with the low cost, flexibility and scale of a cloud data lake. This new, simplified architecture enables healthcare providers and life sciences organizations to bring together all their data — structured (like diagnoses and procedure codes found in EMRs), semi-structured (like clinical notes) and unstructured (like images) — into a single, high-performance platform for both traditional analytics and data science.

Building on these capabilities, Databricks has partnered with John Snow Labs, the leader in healthcare natural language processing (NLP), to provide a robust set of NLP tools tailored for healthcare text. This is critical, as much of the data used for adverse event detection is text-based. You can learn more about our partnership with John Snow Labs in our previous blog, Applying Natural Language Processing to Health Text at Scale.

Solution accelerator for adverse drug event detection

To help organizations monitor drug safety issues, Databricks and John Snow Labs built a solution accelerator notebook for ADE using NLP. As demonstrated in our previous blog, by leveraging the Databricks Lakehouse Platform, we can use pre-trained NLP models to extract highly specialized structures from unstructured text and build powerful analytics and dashboards for different personas. In this solution accelerator, we show how to use pre-trained models to process conversational text, extract adverse events and drug information and build a Lakehouse for pharmacovigilance that powers various downstream use cases.

The solution accelerator follows 4 basic steps:
1. Ingest unstructured medical text at scale.
2. Use pre-trained NLP models to extract useful information such as adverse events (e.g., renal damage), drug names and timing of the events in near real-time.
3. Correlate adverse events with drug entities to establish a relationship.
4. Measure frequency of events to determine significance.

Below is a brief summary of the workflow contained within the notebook.

Overview of the adverse drug event detection workflow

Starting with raw text data, we use a corpus of 20,000 texts with known ADE status (4,200 texts containing ADE) and apply a pre-trained biobert model to detect ADE status and assess the specificity and sensitivity of the model based on the ground truth and the confidence level in accuracy of the assignment.
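As an illustration of that evaluation step, here is a minimal sketch of how sensitivity and specificity can be computed once the classifier's predictions are joined with the ground-truth labels. The DataFrame and column names are illustrative assumptions, not the accelerator's actual code.

```python
# Minimal evaluation sketch. The toy DataFrame stands in for the scored corpus:
# ade_true is the ground-truth label, ade_pred is the classifier output (1 = ADE present).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

scored = spark.createDataFrame(
    [(1, 1), (1, 0), (0, 0), (0, 0), (0, 1)],
    ["ade_true", "ade_pred"],
)

counts = scored.select(
    F.sum(((F.col("ade_true") == 1) & (F.col("ade_pred") == 1)).cast("int")).alias("tp"),
    F.sum(((F.col("ade_true") == 1) & (F.col("ade_pred") == 0)).cast("int")).alias("fn"),
    F.sum(((F.col("ade_true") == 0) & (F.col("ade_pred") == 0)).cast("int")).alias("tn"),
    F.sum(((F.col("ade_true") == 0) & (F.col("ade_pred") == 1)).cast("int")).alias("fp"),
).first()

sensitivity = counts["tp"] / (counts["tp"] + counts["fn"])  # true positive rate
specificity = counts["tn"] / (counts["tn"] + counts["fp"])  # true negative rate
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```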
In addition, we extract ADE status and drug entities from the conversational texts by using a combination of ner_ade_clinical and ner_posology models (see the pipeline sketch at the end of this post). By simply adding a stage in the pipeline, we can detect the assertion status of the ADE (present, absent, occurred in the past, etc.).

To infer the relationship status of an ADE with a clinical entity, we use a pre-trained model (re_ade_clinical), which detects the relationships between a clinical entity (in this case drug) and the inferred ADE. The sparknlp_display library has the ability to show relations on the raw text and their linguistic relationships and dependencies.

After the ADE and drug entity data has been processed and correlated, we can build powerful dashboards to monitor the frequency of ADE and drug entity pairs in real time.

Get started analyzing adverse drug events with NLP on Databricks

With this solution accelerator, Databricks and John Snow Labs make it easy to analyze large volumes of text data to help with real-time drug signal detection and safety monitoring. To use this solution accelerator, you can preview the notebooks online and import them directly into your Databricks account. The notebooks include guidance for installing the related John Snow Labs NLP libraries and license keys.

You can also visit our industry pages to learn more about our Healthcare and Life Sciences solutions.
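For reference, here is a minimal sketch of the entity-extraction stage described above, built with Spark NLP and the licensed Spark NLP for Healthcare library from John Snow Labs. The ner_ade_clinical model name comes from this post; the clinical embeddings model, import paths and column wiring are assumptions based on the library's usual pipeline structure, so refer to the accelerator notebook for the exact code.

```python
# Sketch only: requires Spark NLP and the licensed Spark NLP for Healthcare
# (sparknlp_jsl) library. Model names other than ner_ade_clinical, and the column
# wiring, are illustrative assumptions.
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, WordEmbeddingsModel, NerConverter
from sparknlp_jsl.annotator import MedicalNerModel

document = DocumentAssembler().setInputCol("text").setOutputCol("document")
sentence = SentenceDetector().setInputCols(["document"]).setOutputCol("sentence")
token = Tokenizer().setInputCols(["sentence"]).setOutputCol("token")

# Clinical word embeddings shared by the NER stage (assumed pretrained model name).
embeddings = (
    WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")
    .setInputCols(["sentence", "token"])
    .setOutputCol("embeddings")
)

# Pre-trained ADE named-entity recognizer mentioned in the post; ner_posology
# could be added as a parallel stage in the same way to capture drug details.
ade_ner = (
    MedicalNerModel.pretrained("ner_ade_clinical", "en", "clinical/models")
    .setInputCols(["sentence", "token", "embeddings"])
    .setOutputCol("ade_ner")
)
ade_chunk = (
    NerConverter()
    .setInputCols(["sentence", "token", "ade_ner"])
    .setOutputCol("ade_chunk")
)

pipeline = Pipeline(stages=[document, sentence, token, embeddings, ade_ner, ade_chunk])

# conversational_texts is assumed to be a Spark DataFrame with a `text` column.
model = pipeline.fit(conversational_texts)
result = model.transform(conversational_texts)
```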
https://www.databricks.com/explore/de-data-warehousing/rise-of-the-data-lakehouse#page=1?itm_data=deltalake-link-pf-riselakehousebook
Rise Of The Data Lakehouse by Bill Inmon
https://www.databricks.com/product/machine-learning-runtime
ML Runtime – Databricks

Machine Learning Runtime
Ready-to-use and optimized machine learning environment

The Machine Learning Runtime (MLR) provides data scientists and ML practitioners with scalable clusters that include popular frameworks, built-in AutoML and optimizations for unmatched performance.

Benefits

Frameworks of Choice: ML frameworks are evolving at a frenetic pace and practitioners need to manage on average 8 libraries. The ML Runtime provides one-click access to a reliable and performant distribution of the most popular ML frameworks, and custom ML environments via pre-built containers.

Augmented Machine Learning: Accelerate machine learning from data prep to inference with built-in AutoML capabilities including hyperparameter tuning and model search using Hyperopt and MLflow.

Simplified Scaling: Go from small to big data effortlessly with an auto-managed and scalable cluster infrastructure.
The Machine Learning Runtime also includes unique performance improvements for the most popular algorithms as well as HorovodRunner, a simple API for distributed deep learning.

Features

Frameworks of Choice
- ML Frameworks: The most popular ML libraries and frameworks are provided out of the box, including TensorFlow, Keras, PyTorch, MLflow, Horovod, GraphFrames, scikit-learn, XGBoost, NumPy, MLeap, and pandas.

Augmented ML
- Automated Experiments Tracking: Track, compare, and visualize hundreds of thousands of experiments using open source or Managed MLflow and the parallel coordinates plot feature.
- Automated Model Search (for Single-node ML): Optimized and distributed conditional hyperparameter search across multiple model architectures with enhanced Hyperopt and automated tracking to MLflow.
- Automated Hyperparameter Tuning for Single-node Machine Learning: Optimized and distributed hyperparameter search with enhanced Hyperopt and automated tracking to MLflow (see the sketch after this section).
- Automated Hyperparameter Tuning for Distributed Machine Learning: Deep integration with PySpark MLlib’s Cross Validation to automatically track MLlib experiments in MLflow.

Optimized for simplified scaling
- Optimized TensorFlow: Benefit from the CUDA-optimized version of TensorFlow on GPU clusters for maximum performance.
- HorovodRunner: Quickly migrate your single-node deep learning training code to run on a Databricks cluster with HorovodRunner, a simple API that abstracts complications faced when using Horovod for distributed training.
- Optimized MLlib Logistic Regression and Tree Classifiers: The most popular estimators have been optimized as part of the Databricks Runtime for ML to provide up to a 40% speed-up compared to Apache Spark 2.4.0.
- Optimized GraphFrames: Run GraphFrames 2-4 times faster and benefit from up to 100 times speed-up for graph queries, depending on the workloads and data skew.
- Optimized Storage for Deep Learning Workloads: Leverage high-performance solutions on Azure, AWS, and GCP for data loading and model checkpointing, both of which are critical to deep learning training workloads.

How it works

The Machine Learning Runtime is built on top of, and updated with, every Databricks Runtime release.
It is generally available across all Databricks product offerings including: Azure Databricks, AWS cloud, GPU clusters and CPU clusters. To use the ML Runtime, simply select the ML version of the runtime when you create your cluster.

Resources

Blogs
- Productionizing Machine Learning: From Deployment to Drift Detection
- Detecting Bias with SHAP
- Hyperparameter Tuning with MLflow, Apache Spark MLlib and Hyperopt
- Introducing HorovodRunner for Distributed Deep Learning Training
- Loan Risk Analysis with XGBoost and Databricks Runtime for Machine Learning

eBooks
- The Big Book of Machine Learning Use Cases

Webinars
- AutoML: Rapid, simplified machine learning for everyone
- MLOps Virtual Event - Standardizing MLOps at Scale
- Automating the ML Lifecycle With Databricks Machine Learning
- MLOps Virtual Event
- Automated Hyperparameter Tuning, Scaling and Tracking on Databricks
- Simple Steps to Distributed Deep Learning
- Accelerating Machine Learning on Databricks

Notebooks
- Use Keras with TensorFlow on a single node on Databricks
- From single node to distributed training with PyTorch on Databricks
- Simple Steps to Distributed Deep Learning Demo Notebook

Documentation
- Databricks Runtime for Machine Learning
- Apache Spark MLlib
- AutoML
- Deep Learning
- HorovodRunner: Distributed Deep Learning with Horovod
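To make the Hyperopt and MLflow features described above concrete, here is a minimal sketch of a distributed hyperparameter search on an ML Runtime cluster. The scikit-learn model, search space and parallelism value are illustrative assumptions, not anything prescribed by the product page.

```python
# Minimal sketch: distributed hyperparameter search with Hyperopt's SparkTrials,
# with the search wrapped in an MLflow run. The model and search space are arbitrary.
import mlflow
from hyperopt import SparkTrials, STATUS_OK, fmin, hp, tpe
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)

def objective(params):
    model = RandomForestClassifier(
        n_estimators=int(params["n_estimators"]),
        max_depth=int(params["max_depth"]),
        random_state=0,
    )
    accuracy = cross_val_score(model, X, y, cv=3).mean()
    # Hyperopt minimizes the loss, so return the negative accuracy.
    return {"loss": -accuracy, "status": STATUS_OK}

search_space = {
    "n_estimators": hp.quniform("n_estimators", 50, 300, 25),
    "max_depth": hp.quniform("max_depth", 3, 12, 1),
}

# SparkTrials fans the trials out across the cluster's workers; on Databricks the
# individual trials are also tracked in the active MLflow experiment.
with mlflow.start_run(run_name="rf_hyperopt_search"):
    best = fmin(
        fn=objective,
        space=search_space,
        algo=tpe.suggest,
        max_evals=32,
        trials=SparkTrials(parallelism=4),
    )
    mlflow.log_params(best)

print(best)
```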
https://www.databricks.com/solutions/accelerators?filters=retail
Databricks Solution Accelerators - Deliver Data & AI Value Faster - Databricks

Industry Solutions
Deliver the data and AI-driven outcomes that matter most — faster

Databricks Solution Accelerators
Save hours of discovery, design, development and testing with Databricks Solution Accelerators. Our purpose-built guides — fully functional notebooks and best practices — speed up results across your most common and high-impact use cases. Go from idea to proof of concept (PoC) in as little as two weeks. Start using Solution Accelerators with your free Databricks trial or your existing account.
Browse Accelerators (all accelerators below are by Databricks unless otherwise noted)

- Toxicity Detection in Gaming (featured)
- On-Shelf Availability (featured)
- Fine-Grained Demand Forecasting (featured, 🔥)
- Automated PHI Removal (featured, new)
- Abstracting Real-World Data for Oncology
- Anti-Money Laundering
- Cohort Building with Knowledge Graphs
- Computer Vision Foundations
- Customer Entity Resolution
- Customer Lifetime Value (🔥)
- Customer Segmentation
- Cyber Analytics (Splunk Connector) (Splunk)
- Detecting Adverse Drug Events (new)
- Digital Pathology Image Analysis
- Digital Twins
- ESG Performance Analytics (🔥)
- FHIR Interoperability with dbignite
- Fuzzy Item Matching
- Genome-Wide Association Studies
- Geospatial Analytics to Identify Fraud
- HL7v2 Interoperability With Smolder
- Media Mix Modeling (MMM)
- Medicare Risk Adjustment
- Merchant Classification
- Modern Investment Platform
- Multi-Touch Attribution
- Optimized Order Picking
- Overall Equipment Effectiveness
- Predictive Maintenance (IoT)
- Price Transparency (new)
- Product Quality Inspection
- Propensity Scoring (new)
- R&D Optimization with Knowledge Graphs (new)
- Real-Time Bidding Optimization
- Real-Time Financial Fraud Prevention
- Real-Time Point-of-Sale Analytics (new)
- Real-world Evidence
- Recommendation Engine
- Regulatory Reporting
- Reputation Risk
- Retention Management
- Risk Management
- Safety Stock
- Sales Forecasting & Ad Attribution (🔥)
- Scalable Route Generation
- Social Determinants of Health
- Subscriber Churn Prediction
- Supply Chain Distribution Optimization
- Survival Analysis & Lifetime Value
- Threat Detection with DNS
- Video Quality of Experience

FAQ

How much do Solution Accelerators cost?
Solution Accelerators are available to any Databricks customer free of charge.

Do I need to be a Databricks customer to use a Solution Accelerator?
You can implement Solution Accelerators with a free Databricks trial or with your existing Databricks account.

What can I expect from a Solution Accelerator?
Solution Accelerators are designed to help you save hours of discovery, design, development and testing. Our goal is to jump-start your data and AI use cases by providing the right resources (notebooks, proven patterns and best practices). You can expect to go from idea to proof of concept (PoC) in as little as two weeks.
https://www.databricks.com/solutions/accelerators/threat-detection
Threat Detection at Scale With DNS Data and AI | Databricks

Solution Accelerator: Threat Detection at Scale With DNS Data and AI
Pre-built code, sample data and step-by-step instructions ready to go in a Databricks notebook

Detect cybercriminals using DNS data, threat intelligence feeds and ML
Leverage the Databricks Solution Accelerator for DNS analytics to accelerate time to detection and response across petabytes of data. Tap into DNS traffic logs, enrich streaming threat intelligence, and apply advanced analytics to detect DNS abnormalities and prevent malicious attacks.
- Enrich petabytes of DNS data for analytics
- Uncover unknown threat patterns
- Scale security operations efficiently
https://www.databricks.com/blog/2020/11/13/how-to-evaluate-data-pipelines-for-cost-to-performance.html
How to Evaluate Data Pipelines for Cost to Performance
by Hector Leano
November 13, 2020 in Company Blog

Learn best practices for designing and evaluating cost-to-performance benchmarks from Germany’s #1 weather portal.

While we certainly conduct several benchmarks, we know the best benchmark is your queries running on your data. But what are you benchmarking against in your evaluation? The answer seems obvious - cost and integration with your cloud architecture roadmap. We are finding, however, that many enterprises are only measuring the costs of individual services within a workflow, rather than the entire cost of the workflow. When comparing different architectures, running a complete workflow will demonstrate the total resources consumed (data engine + compute + ancillary support functions). Without knowing the duration, job failure rate of each architecture, and manual effort required to support a job, comparing list prices of the individual components in two architectures will be misleading at best.

wetter.com case study

wetter.com is the DACH region’s #1 B2C weather portal with up to 20 million monthly unique users along with full cross-media production. To leverage and monetize its data, wetter.com created a new business unit called METEONOMIQS.
With METEONOMIQS, the company could now generate new revenue streams out of their data by developing and selling data products to business customers. METEONOMIQS provides weather and geo-based data science services to decode the interrelation between weather, consumer behaviour and many other factors used by clients in retail, FMCG, e-commerce, tourism, food and advertising.

METEONOMIQS’ challenge

METEONOMIQS had chosen Amazon EMR for processing their data from raw ingestion through to cleansed and aggregated to serve downstream API users. Originally EMR had been the obvious choice as a best-in-class cloud-based Spark engine that fit into their AWS stack. However, this architecture soon hit its limits. The data pipeline required substantial manual effort to update rows and clean tables, required high DevOps effort to maintain, and limited the potential to use ML due to prolonged development cycles. The poor notebook experience and risk of errors when handing over ML models from data science to data engineering made it harder to support multiple models at a time. The greatest risk to the business, however, was the inability to implement an automated GDPR-compliant workflow by, for example, easily deleting individual customers. Instead METEONOMIQS had to manually clean the data, leading to days of downtime. With GDPR penalties reaching up to 4% of the parent company’s global revenue, this presented a large risk for parent company ProSiebenSat.1.

Building the test

METEONOMIQS turned to Databricks to see if there was a better way to architect their data ingest, processing, and management on Amazon S3. Working with Databricks, they set up a test to see how running this pipeline on Databricks compared across the following vectors and required capabilities:

- Setup: Ability to set up IAM access roles by users; ability to integrate into their existing AWS Glue data catalogue as a metastore.
- Pipeline migration: Ability to migrate code from the existing pipeline directly to Databricks without major re-engineering. Note: they did not tackle code optimization in this test.
- GDPR compliance: Ability to build a table with (test) customer/app IDs which could be removed to fulfill the GDPR requirements (right to be forgotten); ability to set up an automated deletion job removing the IDs from all intermediate and results tables and validate the outcome.
- Clean up / Update: Ability to reconstruct an example of a previously updated / cleaned-up procedure; build a clean-up procedure based on the above example and do an update on the affected records.
- Ease of use: Ease of building visualisations within the Databricks notebooks by using the built-in functionalities and external plotting libraries (like matplotlib); ability to work on multiple projects/streams by attaching two notebooks to a cluster.
- ML model management: Select an existing model from the current environment and migrate the code for the training procedure to Databricks; conduct training run(s) and use the MLflow tracking server to track all parameters, metrics and artifacts; optionally, store the artifacts in the currently used proprietary format; register the (best) model in the MLflow Model Registry, set it into “production” state and demonstrate the approval process; demonstrate the handover from the data domain (model building) to the systems of engagement domain (model production) via the MLflow Model Registry.
- Total cost: Use the generated data from the PoC and additional information (further pipelines/size of the data/number of users/ …) to project infrastructure costs, inclusive of Databricks, compute, and storage.

Benchmark results (EMR-based architecture vs. Databricks-based architecture)

- Setup: EMR ✔ | Databricks ✔
- Pipeline migration: EMR — | Databricks ✔
- GDPR compliance: EMR ✘ (GDPR deletes in hours/days with downtime) | Databricks ✔ (GDPR deletes in minutes without downtime)
- Clean up / Update: EMR ✘ (requires days of downtime) | Databricks ✔ (data corrections/enhancements without downtime)
- Ease of use: EMR ✘ | Databricks ✔
- ML model management: EMR ✘ | Databricks ✔ (improved collaboration between data scientists and data engineers / dev team)
- Total cost: On EMR, 80% of costs were from dedicated dev and analytics clusters leading to unpredictable compute costs, and DataOps required substantial developer resources to maintain; through cluster sharing on Databricks, METEONOMIQS could use cloud resources much more efficiently.

But more importantly, they can now do new use cases like automated GDPR compliance (sketched below) and scale their ML in ways not possible before. For METEONOMIQS the main gains from the Databricks architecture were:

- Adding use cases (e.g., automated data corrections and enhancements) that hadn’t been deployed on EMR due to the high level of development costs
- Massively decreasing the amount of manual maintenance required for the pipeline
- Simplifying and automating GDPR compliance of the pipeline so that it could now be done in minutes without downtime compared to hours/days with downtime previously

Additionally, the team had high AWS resource consumption in the EMR architecture since shared environments were not possible on EMR. As a result, team members had to use dedicated clusters. Databricks’ shared environment for all developers, plus the ability to work on shared projects (i.e., notebooks), resulted in a more efficient use of human and infrastructure resources. Handover of ML models from data scientists to the data engineering team was complicated and led the ML code to diverge. With MLflow the team now has a comfortable way to hand over models and track changes over time. Further, as Databricks notebooks are much easier to use, METEONOMIQS could enable access to the data lake to a broader audience like, for example, the mobile app team.
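To illustrate the kind of automated GDPR deletion job validated in the test above, here is a minimal sketch on Databricks. It assumes the pipeline's intermediate and results tables are stored as Delta tables and that the IDs to forget sit in a small request table; all table and column names are placeholders, not METEONOMIQS' actual schema.

```python
# Hypothetical right-to-be-forgotten job: removes a set of customer/app IDs from
# every Delta table in the pipeline while the pipeline stays online. Table and
# column names are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# IDs requested for deletion, collected in a small request table.
ids_to_forget = [row.app_id for row in spark.table("gdpr.deletion_requests").collect()]

pipeline_tables = [
    "pipeline.raw_events",
    "pipeline.cleansed_events",
    "pipeline.aggregated_results",
]

for table_name in pipeline_tables:
    # Delta's transactional DELETE rewrites only the affected files, so downstream
    # readers keep seeing a consistent snapshot while the deletion runs.
    DeltaTable.forName(spark, table_name).delete(F.col("app_id").isin(ids_to_forget))
```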
As one of their next steps, METEONOMIQS will look to optimize their code for further infrastructure savings and performance gains as well as look at other pipelines to transition to the Databricks architecture.

Takeaways

The keys to the team’s successful benchmark were:

- Knowing what they were measuring for: Often clients will only compare list prices of individual services (e.g., compare the cost of one Spark engine versus another) when evaluating different architectures. What we try to advise clients is not to look at individual services but rather the total job cost (data engine + compute + team productivity) against the business value delivered. In this case, wetter.com’s data engineering team aligned their test with the overall business goal - ensuring their data pipelines could support business and regulatory requirements while decreasing infrastructure and developer overhead.
- Choosing critical workloads: Instead of trying to migrate all pipelines at once, the team narrowed the scope to their most pressing business case. Through this project they were able to validate that Databricks could handle data engineering, machine learning, and even basic business analytics at scale, on budget, and in a timely manner.
- Delivering value quickly: Critical for this team was to move from discussions to PoCs to production as quickly as possible to start driving cost savings. Discussions stretching months or longer were not an option nor a good use of their team’s time. Working with Databricks, they were able to stand up the first benchmark PoCs in less than three weeks.

Ready to run your own evaluation?

If you are looking to run your own tests to compare costs and performance of different cloud data pipelines, drop us a line at [email protected]. We can provide a custom assessment based on your complete job flow and help qualify you for any available promotions. Included in the assessment are:

- Tech validation: understand data sources, downstream data use, and resources currently required to run the pipeline job
- Business value analysis: identify the company’s strategic priorities, to understand how the technical use case (e.g., ETL) drives business use cases (e.g., personalization, supply chain efficiency, quality of experience). This ensures our SAs are designing a solution that fits not just today’s needs but the ongoing evolution of your business.

Below is an outline of our general approach based on best practices for designing and evaluating your benchmark test for data pipelines.

Designing the test

Given data pipelines within the same enterprise can vary widely depending on the data’s sources and end uses - and large enterprises can have thousands of data pipelines spanning supply chain, marketing, product, and operations - how do you test an architecture to ensure it can work across a range of scenarios, end-user personas, and use cases? More importantly, how can you do it within a limited time? What you want is to be able to go from test, to validation, to scaling across as many pipelines as possible as quickly as possible to reduce costs as well as the support burden on your data engineers.

One approach we have seen is to select pipelines that are architecturally representative of most of an enterprise’s pipelines. While this is a good consideration, we find selecting pipelines based primarily on architectural considerations does not necessarily lead to the biggest overall impact.
For example, your most common data pipeline architecture might be for smaller pipelines that aren’t necessarily the ones driving your infrastructure costs or requiring the most troubleshooting support from your data engineers. Instead, we recommend clients limit the scope of their benchmark tests to 3-5 data pipelines based on just two considerations:

- Test first on business critical data workloads: Often the first reflex is to start with less important workloads and then move up the stack as the architecture proves itself. However, we recommend running the test on strategic, business critical pipelines first because it is better to know earlier rather than later if an architecture can deliver on the necessary business SLAs. Once you prove you can deliver on the important jobs, then it becomes easier to move less critical pipelines over to a new architecture. But the reverse (moving from less critical to more critical) will require validating twice - first on the initial test and then once again for important workloads.
- Select pipelines based on the major stressors affecting performance: What’s causing long lead times, job delays, or job failures? When selecting test pipelines, make sure you know what the stressors are to your current architecture, and select representative pipelines generating long delays, high fail rates, and/or requiring constant support from your data engineering teams. For example, if you’re a manufacturer trying to get a real-time view of your supply chain, from parts vendors to assembly to shipping, but your IoT pipelines take hours to process large volumes of small files in batches, that is an ideal test candidate.

Evaluating the results

Once you have selected the data pipelines to test, the key metrics to evaluate are:

- Total cost to run a job: What are the total resources required to run a job? This means looking not just at the data engine costs for ingest and processing, but also total compute and support function costs (like data validation) to complete the data query. In addition, what is your pipeline’s failure rate? Frequent job failures mean reprocessing the data several times, significantly increasing infrastructure costs.
- Amount of time to run a job: How long does it take to run a job once you add cluster spin-up and data processing along with the amount of time it takes to identify and remediate any job failures? The longer this period, the higher the infrastructure costs; but also, the longer it will take for your data to drive real business value/insights. Enterprises rely on data to make important business decisions, and rigid pipelines with long lead times prevent businesses from iterating quickly.
- Productivity: How often are your jobs failing and how long does it take your data engineers to go through the logs to find the root cause, troubleshoot, and resolve? This loss of productivity is a real cost in terms of increased headcount plus the opportunity cost of having your data engineers focused on basic data reliability issues instead of solving higher-level business problems. Even if your jobs run correctly, are your downstream users working with the most up-to-date information? Are they forced to deduplicate and clean data before use in reports, analytics, and data science?
Particularly with streaming data where you can have out-of-order files, how can you ensure you have consistent data across users?
- Extensibility: Will adding new use cases or data sources require full re-engineering of your data pipelines, or do you have a schema that can evolve with your data needs?

Additionally, as enterprises look to create a more future-proof architecture, they should look to:

- Implementation complexity: How big of a migration will this be? How complex is the re-engineering required? How much and for how long will it take data engineering resources to stand up a new data pipeline? How quickly can your architecture conform to security requirements? When UK-based food box company Gousto rebuilt their ETL pipelines to Delta Lake on Databricks, they noted, “the whole implementation, from the first contact with Databricks to have the job running in production took about two months — which was surprisingly fast given the size of Gousto tech and the governance processes in place.”
- Portability: As more enterprises look to multi-cloud, how portable is their architecture across clouds? Is data being saved in proprietary formats resulting in vendor lock-in (i.e., will it require substantial costs to switch in the future)?
https://www.databricks.com/glossary/tensorflow-guide
Everything You Wanted To Know About TensorFlow

TensorFlow

In November of 2015, Google released its open-source framework for machine learning and named it TensorFlow. It supports deep learning, neural networks, and general numerical computations on CPUs, GPUs, and clusters of GPUs. One of the biggest advantages of TensorFlow is its open-source community of developers, data scientists, and data engineers who contribute to its repository. The current version of TensorFlow can be found on GitHub along with release notes. TensorFlow is by far the most popular AI engine being used today.

What is TensorFlow?

TensorFlow is an open-source library for numerical computation, large-scale machine learning, deep learning, and other statistical and predictive analytics workloads. This type of technology makes it faster and easier for developers to implement machine learning models, as it assists the process of acquiring data, serving predictions at scale, and refining future results.

Okay, so exactly what does TensorFlow do? It can train and run deep neural networks for things like handwritten digit classification, image recognition, word embeddings, and natural language processing (NLP). The code contained in its software libraries can be added to any application to help it learn these tasks.

TensorFlow applications can run on either conventional CPUs (central processing units) or GPUs (higher-performance graphics processing units). Because TensorFlow was developed by Google, it also operates on the company’s own tensor processing units (TPUs), which are specifically designed to speed up TensorFlow jobs.

You might also be wondering: what language is TensorFlow written in? Although it uses Python as a front-end API for building applications with the framework, it actually has wrappers in several languages including C++ and Java.
This means you can train and deploy your machine learning model quickly, regardless of the language or platform you use.

TensorFlow history

Google first released TensorFlow in 2015 under the Apache 2.0 license. Its predecessor was a closed-source Google framework called DistBelief, which provided a testbed for deep learning implementation.

Google's first TPUs were detailed publicly in 2016, and used internally in conjunction with TensorFlow to power some of the company’s applications and online services. This included Google’s RankBrain search algorithm and the Street View mapping technology.

In early 2017, Google TensorFlow reached Release 1.0.0 status. A year later, Google made the second generation of TPUs available to Google Cloud Platform users for training and running their own machine learning models.

Google released the most recent version, TensorFlow 2.0, in October 2019. Google had taken user feedback into account in order to make various improvements to the framework and make it easier to work with — for instance, it now uses the relatively simple Keras API for model training.

Who created TensorFlow?

As you now know, Google developed TensorFlow, and continues to own and maintain the framework. It was created by the Google Brain team of researchers, who carry out fundamental research in order to advance key areas of machine intelligence and promote a better theoretical understanding of deep learning.

The Google Brain team designed TensorFlow to be able to work independently from Google’s own computing infrastructure, but it gains many advantages from being backed by a commercial giant. As well as funding the project’s rapid development, over the years Google has also improved TensorFlow to ensure that it is easy to deploy and use.

Google chose to make TensorFlow an open source framework with the aim of accelerating the development of AI. As a community-based project, all users can help to improve the technology — and everyone shares the benefits.

How does TensorFlow work?

TensorFlow combines various machine learning and deep learning (or neural networking) models and algorithms, and makes them useful by way of a common interface. It enables developers to create dataflow graphs with computational nodes representing mathematical operations. Each connection between nodes represents multidimensional vectors or matrices, creating what are known as tensors.

While Python provides the front-end API for TensorFlow, the actual math operations are not performed in Python. Instead, high-performance C++ binaries perform these operations behind the scenes. Python directs traffic between the pieces and hooks them together via high-level programming abstractions.

TensorFlow applications can be run on almost any target that’s convenient, including iOS and Android devices, local machines, or a cluster in the cloud — as well as CPUs or GPUs (or Google’s custom TPUs if you’re using Google Cloud).

TensorFlow includes sets of both high-level and low-level APIs. Google recommends the high-level APIs for simplifying data pipeline development and application programming, but the low-level APIs (called TensorFlow Core) are useful for debugging applications and experimentation.

What is TensorFlow used for?
What is TensorFlow used for?

TensorFlow is designed to streamline the process of developing and executing advanced analytics applications for users such as data scientists, statisticians, and predictive modelers.

Businesses of varying types and sizes widely use the framework to automate processes and develop new systems, and it's particularly useful for very large-scale parallel processing applications such as neural networks. It has also been used in experiments and tests for self-driving vehicles.

As you'd expect, TensorFlow's parent company Google also uses it for in-house operations, such as improving the information retrieval capabilities of its search engine and powering applications for automatic email response generation, image classification, and optical character recognition.

One of the advantages of TensorFlow is that it provides abstraction, which means developers can focus on the overall logic of the application while the framework takes care of the fine details. It's also convenient for developers who need to debug and gain introspection into TensorFlow apps.

The TensorBoard visualization suite has an interactive, web-based dashboard that lets you inspect and profile the way graphs run. There's also an eager execution mode that allows you to evaluate and modify each graph operation separately and transparently, rather than creating the entire graph as a single opaque object and evaluating it all at once.

Databricks Runtime for Machine Learning includes TensorFlow and TensorBoard, so you can use these libraries without installing any packages.

Now let's take a look at how to use TensorFlow.

How to install TensorFlow

Full instructions and tutorials are available on tensorflow.org, but here are the basics.

System requirements:
- Python 3.7+
- pip 19.0 or later (requires manylinux2010 support; TensorFlow 2 requires a newer version of pip)
- Ubuntu 16.04 or later (64-bit)
- macOS 10.12.6 (Sierra) or later (64-bit, no GPU support)
- Windows 7 or later (64-bit)

Hardware requirements:
- Starting with TensorFlow 1.6, binaries use AVX instructions, which may not run on older CPUs.
- GPU support requires a CUDA®-enabled card (Ubuntu and Windows).

#1. Install the Python development environment on your system

Check whether your Python environment is already configured:

python3 --version
pip3 --version

If these packages are already installed, skip to the next step. Otherwise, install Python, the pip package manager, and venv. If you are not working in a virtual environment, use python3 -m pip for the commands below; this ensures that you upgrade and use the Python pip instead of the system pip.

#2. Create a virtual environment (recommended)

Python virtual environments are used to isolate package installation from the system.

#3. Install the TensorFlow pip package

Choose one of the following TensorFlow packages to install from PyPI:
- tensorflow: the latest stable release, with CPU and GPU support (Ubuntu and Windows)
- tf-nightly: a preview build (unstable); the Ubuntu and Windows builds include GPU support
- tensorflow==1.15: the final version of TensorFlow 1.x

Verify the installation. If a tensor is returned, you've installed TensorFlow successfully.

Note: A few installation mechanisms require the URL of the TensorFlow Python package. The value you specify depends on your Python version.
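As a quick check that the installation works, a minimal snippet like the following, assuming TensorFlow 2.x is installed in the active environment, prints the version and evaluates a small random tensor; if a tensor value comes back, the install is functional.

```python
import tensorflow as tf

print(tf.__version__)                                # e.g. a 2.x release
print(tf.reduce_sum(tf.random.normal([100, 100])))   # prints a scalar tensor if TF works
print(tf.config.list_physical_devices("GPU"))        # an empty list means a CPU-only setup
```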
How to update TensorFlow

The pip package manager offers a simple method to upgrade TensorFlow, regardless of the environment.

Prerequisites:
- Python 3.6-3.9 installed and configured (check the Python version before starting)
- TensorFlow 2 installed
- The pip package manager, version 19.0 or greater (check the pip version and upgrade if necessary)
- Access to the command line/terminal or a notebook environment

To upgrade TensorFlow to a newer version:

#1. Open the terminal (CTRL+ALT+T).

#2. Check the currently installed TensorFlow version:

pip3 show tensorflow

The command shows information about the package, including the version.

#3. Upgrade TensorFlow to a newer version with:

pip3 install --upgrade tensorflow==<version>

Make sure you select a version compatible with your Python release; otherwise the version will not install. For a notebook environment, use the following command and restart the kernel after completion:

!pip install --upgrade tensorflow==<version>

This automatically removes the old version along with its dependencies and installs the newer upgrade.

#4. Check the upgraded version by running:

pip3 show tensorflow

What is TensorFlow Lite?

In 2017, Google introduced a new version of TensorFlow called TensorFlow Lite, which is optimized for use on embedded and mobile devices. It's an open-source, production-ready, cross-platform deep learning framework that converts a pre-trained TensorFlow model to a special format that can be optimized for speed or storage.

To ensure you're using the right version for any given scenario, you need to know when to use TensorFlow and when to use TensorFlow Lite. For example, if you need to deploy a high-performance deep learning model in an area without a good network connection, you'd use TensorFlow Lite to reduce the file size.

If you're developing a model for edge devices, it needs to be lightweight so that it uses minimal space and downloads quickly on lower-bandwidth networks. To achieve this, you need optimization to reduce the size of the model or improve the latency, which TensorFlow Lite does via quantization and weight pruning.

The resulting models are lightweight enough to be deployed for low-latency inference on edge devices such as Android or iOS phones, or Linux-based embedded devices like Raspberry Pi boards or microcontrollers. TensorFlow Lite can also use several hardware accelerators for speed, accuracy, and optimized power consumption, which are important for running inference at the edge.
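As a rough illustration of the conversion step described above, the sketch below uses the TensorFlow Lite converter with default optimizations, which enables post-training quantization to shrink the file. The Keras model, layer sizes, and output file name are hypothetical placeholders; in practice you would convert your own trained model.

```python
import tensorflow as tf

# Hypothetical pre-trained Keras model; in practice you would load or train your own.
model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(784,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # enables post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:                  # output file name is illustrative
    f.write(tflite_model)
```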
What is a dense layer in TensorFlow?

Dense layers are used in the creation of both shallow and deep neural networks. Artificial neural networks are brain-like architectures made up of a system of neurons, and they're able to learn through examples instead of being programmed with specific rules.

In deep learning, multiple layers are used to extract higher-level features from the raw input; when the network is composed of several layers, it's called a stacked neural network. Each of these layers is made of nodes, which combine input from the data with a set of coefficients called weights that either amplify or dampen the input.

For its 2.0 release, TensorFlow adopted a deep learning API called Keras, which runs on top of TensorFlow and provides a number of pre-built layers for different neural network architectures and purposes. A dense layer is one of these: it is fully connected, meaning that each neuron receives input from all the neurons of the previous layer. Dense layers are typically used for changing the dimensions, rotation, scaling, and translation of the vectors they process, and they are able to learn features from all the combined features of the previous layer.

What's the difference between TensorFlow and Python?

TensorFlow is an open-source machine learning framework, and Python is a popular computer programming language. Python is the recommended language for TensorFlow and the one most commonly used with it, although the framework also uses C++ and JavaScript.

Python was developed to help programmers write clear, logical code for both small and large projects. It's often used to build websites and software, automate tasks, and carry out data analysis, and this familiarity makes it relatively simple for beginners to learn TensorFlow.

A useful question to ask is: which versions of Python does TensorFlow support? Certain TensorFlow releases are only compatible with specific versions of Python; for example, recent TensorFlow 2 releases require Python 3.7 to 3.10. Make sure you check the requirements before installing TensorFlow.

What is PyTorch and TensorFlow?

TensorFlow is not the only machine learning framework in existence. There are a number of other choices, such as PyTorch, which have similarities and cover many of the same needs. So what is the actual difference between TensorFlow and PyTorch?

PyTorch and TensorFlow are two of the frameworks developed by large technology companies for the Python deep learning environment, helping developers build systems that solve real-world problems. The key difference between PyTorch and TensorFlow is the way they execute code; PyTorch is more tightly integrated with the Python language.

As we've seen, TensorFlow has robust visualization capabilities, production-ready deployment options, and support for mobile platforms. PyTorch isn't as established but is still popular for its simplicity and ease of use, as well as its dynamic computational graphs and efficient memory usage.

As for the question of which is better, TensorFlow or PyTorch: it really depends on what you want to achieve. If your aim is to build AI-related products, TensorFlow will be a good fit, whereas PyTorch is more suited to research-oriented developers. PyTorch is a good fit for getting projects up and running in short order, but TensorFlow has more robust capabilities for larger projects and more complex workflows.

Companies Using TensorFlow

According to the TensorFlow website, a number of other big-name companies use the framework as well as Google. These include Airbnb, Coca-Cola, eBay, Intel, Qualcomm, SAP, Twitter, Uber, Snapchat developer Snap Inc., and sports consulting company STATS LLC.
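Before turning to the alternatives below, here is a minimal sketch of a small stacked network built from the Keras Dense layers described earlier, where each layer is fully connected to the one before it. The layer sizes, input dimension, and loss are arbitrary, illustrative choices rather than a recommended architecture.

```python
import tensorflow as tf

# A small "stacked" network: each Dense layer is fully connected to the one before it.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),  # 20 input features
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                                         # single output unit
])
model.compile(optimizer="adam", loss="mse")
model.summary()   # prints the layer stack and parameter counts
```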
Top five TensorFlow alternatives

1. DataRobot

DataRobot is a cloud-based machine learning framework designed to help businesses extend their data science capabilities by deploying machine learning models and creating advanced AI applications. The framework enables you to use and optimize the most valuable open-source modeling techniques from the likes of R, Python, Spark, H2O, VW, and XGBoost.

By automating predictive analytics, DataRobot helps data scientists and analysts produce more accurate predictive models. There's a growing library of the best features, algorithms, and parameter values for building each model, and with automated ensembling, users can easily find and combine multiple algorithms and pre-built prototypes for feature extraction and data preparation, with no need for trial-and-error guesswork.

2. PyTorch

Developed by the team at Facebook and open sourced on GitHub in 2017, PyTorch is one of the newer deep learning frameworks. As mentioned earlier, it has several similarities with TensorFlow, including hardware-accelerated components and a highly interactive, design-as-you-go development model.

PyTorch also optimizes performance by taking advantage of native support for asynchronous execution in Python. Benefits include built-in dynamic graphs and an active, fast-growing community.

However, PyTorch doesn't provide a framework to deploy trained models directly online, and an API server is needed for production. It also requires a third-party tool, Visdom, for visualization, and its features are rather limited.

3. Keras

Keras is a high-level open-source neural network library designed to be user-friendly, modular, and easy to extend. It's written in Python and supports multiple back-end neural network computation engines, although its primary (and default) back end is TensorFlow and its primary supporter is Google.

We already mentioned the TensorFlow Keras high-level API; Keras also runs on top of Theano. It has a number of standalone modules that you can combine, including neural layers, cost functions, optimizers, initialization schemes, activation functions, and regularization schemes.

Keras provides support for a wide range of production deployment options, and strong support for multiple GPUs and distributed training. However, community support for the standalone library is more limited, and it is typically used for smaller datasets.

4. MXNet

Apache MXNet is an open-source deep learning software framework used to define, train, and deploy deep neural networks on a wide array of devices. It has the honor of having been adopted by Amazon as the premier deep learning framework on AWS.

It can scale almost linearly across multiple GPUs and multiple machines, allowing for fast model training, and it supports a flexible programming model that lets users mix symbolic and imperative programming for maximum efficiency and productivity.

MXNet also supports multiple programming language APIs, including Python, C++, Scala, R, JavaScript, Julia, Perl, and Go, although its native APIs aren't as pleasant to use as TensorFlow's.

5. CNTK

CNTK, also known as the Microsoft Cognitive Toolkit, is a unified deep learning toolkit that uses a graph structure to describe data flow as a series of computational steps, much like TensorFlow, although it isn't as easy to learn or deploy.

It focuses largely on creating deep learning neural networks and can handle these tasks rapidly. CNTK allows users to easily realize and combine popular model types such as feed-forward DNNs, convolutional networks (CNNs), and recurrent networks (RNNs/LSTMs).

CNTK has a broad set of APIs (Python, C++, C#, Java) and can be included as a library in your Python, C#, or C++ programs, or used as a standalone machine learning tool through its own model description language (BrainScript).
It supports 64-bit Linux and 64-bit Windows operating systems. Note that version 2.7 was the last main release of CNTK, and there are no plans for new feature development.

Should I use TensorFlow?

TensorFlow has plenty of advantages. The open source machine learning framework provides excellent architectural support, which allows for the easy deployment of computational frameworks across a variety of platforms. It benefits from Google's reputation, and several big names have adopted TensorFlow to carry out artificial intelligence tasks.

On the flip side, some details of TensorFlow's implementation make it difficult to obtain fully deterministic model-training results for some training jobs, although the team is considering more controls to affect determinism in a workflow.

Getting started is simple, especially with TensorFlow on Databricks, an out-of-the-box integration via Databricks Runtime for Machine Learning. You can get clusters up and running in seconds and benefit from a range of low-level and high-level APIs.
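As a minimal sketch of what "out of the box" means here, a notebook cell on a cluster running Databricks Runtime for Machine Learning, where TensorFlow and TensorBoard come preinstalled as noted above, can import the library and check for attached GPUs without any package installation; the cluster setup itself is assumed.

```python
import tensorflow as tf

# Databricks Runtime for Machine Learning ships with TensorFlow and TensorBoard preinstalled,
# so no %pip install step is needed in the notebook.
print("TensorFlow version:", tf.__version__)
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices("GPU"))
```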
https://www.databricks.com/de/solutions/data-pipelines
Data Engineering | Databricks

Data Engineering
Millions of production workloads run on Databricks every day.

Easily ingest and transform batch and streaming data on the Databricks Lakehouse Platform. Orchestrate reliable production workflows while Databricks automatically manages your infrastructure at scale.
Boost your teams' productivity with built-in data quality testing and support for software engineering best practices.

Unify batch and streaming
Break down data silos and bring your data together on one platform with a single, unified API to ingest, transform, and incrementally process batch and streaming data at scale.

Focus on getting value from your data
Databricks automatically manages your infrastructure and the operational components of your production workflows, so you can focus on creating value rather than on tooling.

Connect the tools of your choice
An open lakehouse platform for connecting and using your preferred data engineering tools for ingestion, ETL/ELT, and orchestration.

Built on the Lakehouse Platform
The Lakehouse Platform provides the best foundation for building and sharing trusted data assets that are centrally governed, reliable, and lightning fast.

"For us, Databricks is becoming the general-purpose tool for all of our ETL work. The more we work with the Lakehouse Platform, the easier it is for both users and platform administrators."
– Hillevi Crognale, Engineering Manager, YipitData

How does it work?
Simplified ingestion
Automated ETL processing
Reliable workflow orchestration
End-to-end observability and monitoring
Next-generation data processing engine
A foundation of governance, reliability, and performance

Simplified ingestion
Ingest data into your Lakehouse Platform and run your analytics, AI, and streaming applications from one place. Auto Loader incrementally and automatically processes files landing in cloud storage, in scheduled or continuous jobs, with no state information to manage. It efficiently tracks new files without having to list them in a directory and scales to billions of files, and it can automatically infer the schema from the source data and evolve it as the data changes over time. The COPY INTO command makes batch file ingestion into Delta Lake easy for analysts via SQL.

"We've seen a 40% productivity gain in data engineering: the time needed to develop new ideas has dropped from days to minutes, and the availability and accuracy of our data has increased."
– Shaun Pearce, Chief Technology Officer, Gousto
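A minimal sketch of the Auto Loader ingestion pattern described above, assuming it runs in a Databricks notebook where the ambient `spark` session is available; the storage path, schema and checkpoint locations, and target table name are hypothetical placeholders.

```python
# Incrementally ingest new files from cloud storage into a Delta table with Auto Loader.
df = (spark.readStream
      .format("cloudFiles")                                        # Auto Loader source
      .option("cloudFiles.format", "json")                         # format of the landing files
      .option("cloudFiles.schemaLocation", "/tmp/schemas/orders")  # schema inference and evolution
      .load("s3://example-bucket/raw/orders/"))                    # hypothetical landing path

(df.writeStream
   .option("checkpointLocation", "/tmp/checkpoints/orders")
   .trigger(availableNow=True)                                     # process available files, then stop
   .toTable("bronze_orders"))                                      # hypothetical target table
```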
Automated ETL processing
Once ingested, raw data needs to be transformed to make it ready for analytics and AI. Databricks provides powerful ETL capabilities for data engineers, data scientists, and analysts with Delta Live Tables (DLT). DLT is the first framework that uses a simple declarative approach to build ETL and ML pipelines on batch or streaming data, while automating operational complexities such as infrastructure management, task orchestration, error handling and recovery, and performance optimization. With DLT, engineers can also treat their data as code and apply software engineering best practices such as testing, monitoring, and documentation to deploy reliable pipelines at scale.
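A minimal sketch of the declarative Delta Live Tables approach described above, assuming it runs as part of a DLT pipeline in a Databricks workspace where `spark` and the `dlt` module are available; the source path, table names, and data quality rule are hypothetical.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested incrementally from cloud storage")
def orders_raw():
    # Hypothetical landing path; Auto Loader handles incremental file discovery.
    return (spark.readStream
            .format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("s3://example-bucket/raw/orders/"))

@dlt.table(comment="Cleaned orders ready for analytics")
@dlt.expect_or_drop("valid_amount", "amount > 0")   # declarative data quality expectation
def orders_clean():
    return dlt.read_stream("orders_raw").withColumn("ingested_at", F.current_timestamp())
```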
Reliable workflow orchestration
Databricks Workflows is the fully managed orchestration service for all your data, analytics, and AI, built natively into the Lakehouse Platform. Orchestrate diverse workloads across the full lifecycle, including Delta Live Tables and jobs for SQL, Spark, notebooks, dbt, ML models, and more. Deep integration with the underlying Lakehouse Platform ensures you can create and run reliable production workloads on any cloud, with comprehensive, centralized monitoring that remains simple for end users.

"Our mission is to transform the way we power the planet. Our customers in the energy sector need data, consulting services, and research to achieve that transformation. Databricks Workflows gives us the speed and flexibility to deliver the insights our customers need."
— Yanyan Wu, Vice President of Data, Wood Mackenzie

End-to-end observability and monitoring
The Lakehouse Platform gives you visibility across the entire data and AI lifecycle, so data engineers and operations teams can see the health of their production workflows in real time, manage data quality, and understand historical trends. In Databricks Workflows you can access dataflow graphs and dashboards that track the health and performance of your production jobs and Delta Live Tables pipelines. Event logs are also exposed as Delta Lake tables, so you can monitor and visualize performance, data quality, and reliability metrics from any angle.

Next-generation data processing engine
Databricks data engineering is built on Photon, the next-generation engine compatible with Apache Spark APIs that delivers record-setting price/performance while automatically scaling to thousands of nodes. Spark Structured Streaming provides a single, unified API for batch and stream processing, making it easy to adopt streaming in the lakehouse without changing code or learning new skills.

State-of-the-art data governance, reliability, and performance
Data engineering on Databricks means you benefit from the foundational components of the Lakehouse Platform: Unity Catalog and Delta Lake. Your raw data is optimized with Delta Lake, an open source storage format that provides reliability through ACID transactions and scalable metadata handling with lightning-fast performance. Combined with Unity Catalog, you get fine-grained governance for all your data and AI assets, simplifying how you enforce governance with a single model to discover, access, and share data across clouds. Unity Catalog also provides native support for Delta Sharing, the industry's first open protocol for simple and secure data sharing with other organizations.

Migrate to Databricks
Tired of data silos, slow performance, and the high costs of legacy systems like Hadoop and enterprise data warehouses? Get a single, modern platform for all your data, analytics, and AI use cases.

Integrations
Give your data teams maximum flexibility: use Partner Connect and an ecosystem of technology partners to seamlessly integrate popular data engineering tools. For example, ingest business-critical data with Fivetran, transform it with dbt, and orchestrate your pipelines with Apache Airflow. Ingestion and ETL work with these partners plus any other Apache Spark™-compatible client.

Explore more: Delta Lake, Workflows, Delta Live Tables, Delta Sharing

Related content
All the resources you need, all in one place. Explore the resource library for e-books and videos on the benefits of data engineering on Databricks.
E-books: "Building the Data Lakehouse" by Bill Inmon, the father of the data warehouse; Data, Analytics and AI Governance; Data Management Fundamentals; The Big Book of Data Engineering; Exploring the New Delta Sharing Solution; Migrating From a Data Warehouse to a Data Lakehouse for Dummies
Events: Data Transformation Made Easy With Delta Live Tables; Hassle-Free Data Ingestion webinar series; Data + AI Summit 2022: Modernize Your Data Warehouse
Blogs: Announcing the General Availability of Databricks Delta Live Tables (DLT); Introducing Databricks Workflows; A Look at the New Structured Streaming Features Built for Databricks and Apache Spark in 2021; 10 Powerful Features to Simplify Semi-Structured Data Management in the Databricks Lakehouse
https://www.databricks.com/blog/2020/09/15/building-a-modern-risk-management-platform-in-financial-services.html
How to Build a Modern Financial Services Risk Management Platform - The Databricks Blog

Building a Modern Risk Management Platform in Financial Services
by Antoine Amend and Dael Williamson
September 15, 2020 in Company Blog

This blog was collaboratively written with Databricks partner Avanade. A special thanks to Dael Williamson, Avanade CTO, for his contributions.

Financial institutions today are still struggling to keep up with the emerging risks and threats facing their business. Managing risk, especially within the banking sector, has increased in complexity over the past several years.

First, new frameworks (such as FRTB) are being introduced that potentially require tremendous computing power and an ability to analyze years of historical data. Second, regulators are demanding more transparency and explainability from the banks they oversee. Finally, the introduction of new technologies and business models means that the need for sound risk governance is at an all-time high. However, meeting these demands effectively has not been an easy undertaking for the banking industry.

Agile approach to risk management

Traditional banks relying on on-premises infrastructure can no longer effectively manage risk. Banks must abandon the computational inefficiencies of legacy technologies and build an agile, modern risk management practice capable of rapidly responding to market and economic volatility using data and advanced analytics.
Our work with clients shows that as new threats, such as the last decade's financial crisis, emerge, historical data and aggregated risk models quickly lose their predictive value. Fortunately, modernization is now possible thanks to open source technologies powered by cloud-native big data infrastructure, which bring an agile and forward-looking approach to financial risk analysis and management.

Traditional datasets limit transparency and reliability

Risk analysts must augment traditional data with alternative datasets to explore new ways of identifying and quantifying the risk factors facing their business, both at scale and in real time. Risk management teams must be able to efficiently scale their simulations from tens of thousands up to millions by leveraging both the flexibility of cloud compute and the robustness of open source computing frameworks like Apache Spark™. They must also accelerate the model development lifecycle by bringing together the transparency of their experiments and the reliability of their data, bridging the gap between science and engineering and enabling banks to take a more robust approach to risk management.

Data organization is critical to understanding and mitigating risk

How data is organized and collected is critical to creating highly reliable, flexible, and accurate data models. This is particularly important when it comes to creating financial risk models for areas such as wealth management and investment banking. In the financial world, risk management is the process of identifying, analyzing, and accepting or mitigating uncertainty in investment decisions.

When data is organized and designed to flow within an independent pipeline, separate from massive dependencies and sequential tools, the time needed to run financial risk models is significantly reduced. Data becomes more flexible and easier to slice and dice, so institutions can apply their risk portfolio at a global and regional level as well as firmwide.

Plagued by the limitations of on-premises infrastructure and legacy technologies, banks in particular have until recently lacked the tools to effectively build a modern risk management practice. A modern risk management framework enables intraday views, aggregations on demand, and the ability to future-proof and scale risk assessment and management.

Replace historical returns with highly accurate predictive models

Financial risk modeling should include multiple data sources to create more predictive financial and credit risk models. A modern risk and portfolio management practice should not be based solely on historical returns but must also embrace the variety of information available today. For example, a white paper from Atkins et al. describes how financial news can be used to predict stock market volatility better than close prices. As indicated in the white paper, the use of alternative data can dramatically augment the intelligence available to risk analysts, giving them a more descriptive lens on the modern economy and enabling them to better understand and react to exogenous shocks in real time.

A modern risk management model in the cloud

Avanade and Databricks have demonstrated how Apache Spark, Delta Lake, and MLflow can be used in the real world to organize and rapidly deploy data into a value-at-risk (VaR) data model.
This enables financial institutions to modernize their risk management practices in the cloud and adopt a unified approach to data analytics with Databricks.

Using the flexibility and scale of cloud compute and the level of interactivity in an organization's data, clients can better understand the risks facing their business and quickly develop accurate financial market risk calculations. With Avanade and Databricks, businesses can identify how much risk can be decreased and then accurately pinpoint where and how they can quickly apply risk measures to reduce their exposure.

Join us at the Modern Data Engineering with Azure Databricks Virtual Event on October 8th to hear Avanade present on how Avanade and Databricks can help you manage risk through our Financial Services Risk Management model. Sign up here today.
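To make the simulation-scaling point concrete, here is a minimal, illustrative sketch of a Monte Carlo value-at-risk calculation distributed with PySpark. It is not the Avanade/Databricks solution described in the post; the portfolio value, return distribution, and trial counts are hypothetical placeholders.

```python
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mc-var-sketch").getOrCreate()

N_TRIALS = 1_000_000          # hypothetical number of simulated one-day scenarios
BATCH = 10_000                # scenarios simulated per task
PORTFOLIO_VALUE = 1e8         # hypothetical portfolio value
MU, SIGMA = 0.0002, 0.01      # hypothetical daily return mean and volatility

def simulate(seeds):
    # Each task draws a batch of daily returns and converts them to profit-and-loss values.
    for seed in seeds:
        rng = np.random.default_rng(seed)
        yield from PORTFOLIO_VALUE * rng.normal(MU, SIGMA, size=BATCH)

seeds = spark.sparkContext.parallelize(range(N_TRIALS // BATCH), numSlices=100)
pnl = np.array(seeds.mapPartitions(simulate).collect())

# 99% one-day VaR: the loss exceeded in only 1% of simulated scenarios.
print(f"Simulated 99% one-day VaR: {-np.percentile(pnl, 1):,.0f}")
```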
https://www.databricks.com/explore/de-data-warehousing/rise-of-the-data-lakehouse#page=1?itm_data=dataengineering-link-pf-riselakehousebook
Rise Of The Data Lakehouse by Bill Inmon (PDF e-book)
https://www.databricks.com/dataaisummit/speaker/anil-puliyeril/#
Anil Puliyeril - Data + AI Summit 2023 | Databricks

Anil Puliyeril
Senior Architect at Habu

Anil Puliyeril, Senior Architect at Habu, has 10+ years of experience building innovative, highly scalable, best-in-class software systems using open source technologies. He is a specialist in microservices and cloud services, and a self-proclaimed API enthusiast. Before Habu, Anil was a Principal Software Engineer at Salesforce, where he was part of the Data Management Platform (DMP) engineering team that provided cloud-based data management services to organizations across the world. Prior to that, Anil held numerous engineering roles at Saba, where he led the design and development of an API management framework and data integration platform. Anil has a Master of Science in Computer Software Engineering from the Birla Institute of Technology and Science, Pilani, and a Bachelor of Engineering in Computer Science from the University of Mumbai.
https://www.databricks.com/dataaisummit/speaker/xuefu-wang
Xuefu Wang - Data + AI Summit 2023 | Databricks

Xuefu Wang
Sr. Data Scientist at The Trade Desk

Xuefu Wang is a Sr. Data Scientist at The Trade Desk, the world's largest demand-side platform for accessing premium advertising inventory across multiple channels. He has a PhD in statistics and previously worked in data science at JPMorgan Chase.
https://www.databricks.com/p/webinar/unify-bi-and-ai-on-a-single-platform?itm_data=gcp-promo-gcplaunch
Resources - Databricks
https://www.databricks.com/fr/company/partners/consulting-and-si/candsi-partner-program
Partner page: Consulting and SI Partners | Databricks

C&SI Partner Program
The Databricks C&SI (Consulting and System Integration) Partner Program is value-driven and optimizes our collaboration to deliver great outcomes.

Join a global ecosystem of data and AI service providers: together, we help our joint customers become data-driven enterprises.
As a partner, you play a strategic role:

Business impact: Engage with our field teams to sell the lakehouse vision and expand the Databricks footprint.
Customer value: Train your team on the platform to grow its capabilities and skills.
Innovation: Demonstrate your expertise by building repeatable solutions for the use cases with the greatest impact for your customers.

Access engagement and enablement benefits to grow your practice and strengthen your partnership:
Databricks technical training
Access to the Databricks Lakehouse Platform
Technical and sales support
Deal registration and referral fees
Customer investment funds
Go-to-market enablement and resources

Ready to get started?
https://www.databricks.com/fr/solutions/migration
Migrate to Databricks | Databricks

Migrate to Databricks
Modernize your data platform by adopting the Databricks Lakehouse.

Reduce costs, innovate faster, and simplify your data platform by migrating the data from your enterprise data warehouse or data lake to the Databricks Lakehouse. You can now run all of your data, analytics, and AI workloads on a modern unified platform built on open standards and a common governance approach for greater security.

Why migrate to Databricks?

Simplify your data platform
Centralize your data, analytics, and AI use cases on a single modern platform. Unify governance and the user experience for all data teams and across all of your clouds.

Scale at low cost
Stop managing servers. Scale as needed with serverless. Take data warehousing to the next level with up to 12x better price/performance.

Accelerate innovation
Quickly build AI, ML, and real-time analytics capabilities with collaborative, self-service tools. Build on open source technologies such as MLflow and Apache Spark™.
Migrating from Hadoop
Migrating from a data warehouse

Migrate with confidence
We have helped hundreds of customers migrate their data from legacy platforms. Our comprehensive, phased migration process offers a predictable model for understanding costs during and after the migration. And our lakehouse-first approach, which migrates all workloads, ensures that both current and new use cases are supported. The result: lower risk, faster time to value, and better ROI.

The five phases of data migration
Phase 1: Discovery. Use profilers to automate discovery. Get insights into legacy platform workloads and estimate Databricks platform consumption costs.
Phase 2: Assessment. Use analyzers to assess code complexity in detail and estimate the cost of the migration project.
Phase 3: Strategy. With guidance from Databricks experts, finalize your technology mapping and build optimal migration paths from each source platform.
Phase 4: Production pilot. Run a pilot to test your use cases and use code converters to make your existing code compatible with Databricks. Build a migration plan and roadmap.
Phase 5: Execution. Repeat for all workloads. Get help executing the migration, backed by certified partners or Databricks Professional Services.

Customers who have successfully migrated to Databricks
Customer story: Personalizing pharmacy care to improve patient outcomes
Customer blog: Moving entirely off on-premises solutions, cutting costs and accelerating success
Customer story: Cloud migration becomes essential in a new era of data-driven retail

Brickbuilder solutions
Build on Brickbuilder solutions from leading consulting partners, designed for migrating to the Databricks Lakehouse.
Cloud Data Migration by Accenture: reduce uncertainty and maximize value
Legacy System Migration by Avanade: move your data to unlock its full value
Migrate to the Cloud and Databricks by Capgemini: streamline data migration to the Databricks Lakehouse Platform
Migrate to Databricks by Celebal Technologies: faster, cheaper migration of on-premises systems
LeapLogic Migration Solution by Impetus: automatic transformation of ETL, data warehouse, analytics, and Hadoop workloads to Databricks
Hadoop/EDW Data Migration Assistant by Infosys: move your data to Databricks with confidence
Snowflake to Databricks Migration by Lovelytics: a fast, reliable migration process
SAS Migration Accelerator by Tensile AI: a migration solution built by Tensile AI on the Databricks Lakehouse Platform
Wipro Data Intelligence Suite: migrate to Databricks with confidence

Adopting a data lakehouse strategy is the smart choice
A data lakehouse approach can help you overcome the common limitations of data warehouses and data lakes.
Resources
E-books: Migrating From Hadoop to a Data Lakehouse in a Nutshell; Migrating From a Data Warehouse to a Data Lakehouse in a Nutshell
Events: A Step-by-Step Guide to Hadoop Migration; Modernize Your Data Warehouse
Blogs: Five Key Steps for a Successful Migration From Hadoop to the Lakehouse Architecture; 7 Reasons to Migrate From Cloud Hadoop to the Databricks Lakehouse Platform

Get started
Switching platforms doesn't have to be a headache. Contact us today and we'll discuss how you could proceed.
https://www.databricks.com/fr/legal
Master Cloud Services Agreement | Databricks

Master Cloud Services Agreement

This Master Cloud Services Agreement (the "MCSA") is entered into as of the Effective Date between Databricks, Inc. ("Databricks" or "we") and Customer (as defined below) and governs Customer's use of the Databricks Services, including the right to access and use the Databricks data processing platform services (the "Platform Services"), on each cloud service where Databricks directly provides customers with access to such Platform Services. For the avoidance of doubt, this Agreement does not govern the use of Databricks Powered Services. Unless otherwise indicated, capitalized terms have the meaning assigned to them in this MCSA or in an incorporated Schedule.

If you are entering into this MCSA on behalf of a company (such as your employer) or other legal entity, you represent and warrant that You are authorized to bind that entity to this MCSA, in which case "Customer," "you," or "your" will refer to that entity (otherwise, such terms refer to you as an individual).
If you do not have authority to bind Your entity or do not agree with any provision of this MCSA, you must not accept this MCSA and may not use the Databricks Services. By accepting this MCSA, either by executing this MCSA, an Order, or another agreement that explicitly incorporates this MCSA by reference, Customer enters into the MCSA and the following Schedules, each of which is incorporated into the MCSA and applies to the provision of the applicable Databricks Services upon your ordering such service: Advisory Services; Training Services; U.S. Public Sector Services. Your Order may include one or more of the following: (a) the Platform Services, (b) support services (“Support Services”), (c) training services (“Training Services”), or (d) advisory services (“Advisory Services,” and together with any other services provided by Databricks, (a), (b), (c), and (d) shall be defined as the “Databricks Services”). You acknowledge that no term in any Order entered into via a reseller will be deemed to modify the Agreement unless pre-authorized in writing by Databricks. Definitions. Defined terms are set out below. Capitalized terms used but not defined in a Schedule or an Order will have the meaning assigned to them, if any, within this MCSA. “Acceptable Use Policy” means the acceptable use policy governing the Platform Services located at databricks.com/legal/aup. “Affiliate” of a party means an entity that controls, is actually or in effect controlled by, or is under common control with such party. “Agreement” means this MCSA, the referenced Schedules, and any accompanying or future Order you enter into under this MCSA. “Authorized User” means employees or agents of Customer or its Affiliates selected by Customer to access and use the Platform Services. “Beta Service” means any feature of the Databricks Services that is clearly designated as “beta”, “experimental”, “preview” or similar, that is provided prior to general commercial release, and that Databricks at its sole discretion offers to Customer, and Customer at its sole discretion elects to use. “Cloud Environment” means a cloud or other compute or storage infrastructure controlled by a party or by an external user (as may be defined where appropriate by schedule or amendment hereto) according to context and utilized under the Agreement. “Cloud Service Provider” means a cloud service provider on whose platform Databricks directly provides the Platform Services. For clarity, the Databricks Powered Services are not directly provided by Databricks and are not considered Platform Services under this Agreement. “Customer Content” means all data input into or made available by Customer for processing within the Platform Services or Support Services or generated from the Platform or Support Services. “Customer Data” means the data, other than Customer Instructional Input, made available by Customer and its Authorized Users for processing within the Platform Services or Support Services.
“Customer Instructional Input” means information other than Customer Data that Customer inputs into the Platform Services to direct how the Platform Services process Customer Data, including without limitation the code and any libraries (including third party libraries) Customer utilizes within the Platform Services. “Customer Results” means any output Customer or its Authorized Users generate from their use of the Platform Services. “Databricks Global Code of Conduct” means the Databricks Global Code of Conduct located at databricks.com/global-code-of-conduct. “Databricks Powered Service” means any third-party software or service powered by Databricks, including those at https://www.databricks.com/legal/cloud-provider-directory, that is provided to you under contractual terms between you and a third party. This Agreement does not amend any term of such contract; the Databricks Powered Services are not considered Databricks Services (and, for the avoidance of doubt, are not considered Platform Services) under the Agreement and Databricks shall have no liability to you relating to your use of the Databricks Powered Services. “Documentation” means the documentation related to the Platform Services located at databricks.com/documentation. “DPA” means the Data Processing Addendum located at databricks.com/legal/dpa. “Effective Date” means the earliest of: the effective date of the initial Order that references this MCSA, the date of last signature of the MCSA, or the date you first access or use any Databricks Services. “Fees” means all amounts payable for Databricks Services. “HIPAA” means the Health Insurance Portability and Accountability Act of 1996, as amended and supplemented from time to time. “Intellectual Property Rights” means all worldwide intellectual property rights available under applicable laws including without limitation rights with respect to patents, copyrights, moral rights, trademarks, trade secrets, know-how, and databases. “Order” means an order form (“Order Form”), online order (including the provisioning of any Databricks Services) or similar agreement for the provision of Databricks Services, entered into by the parties or any of their Affiliates, incorporated by reference into, and governed by, the Agreement. By entering into an Order Form hereunder, an Affiliate agrees to be bound by the terms of this Agreement as if it were an original party hereto. “PCI-DSS” means the Payment Card Industry Data Security Standard. “PHI” means health information regulated by HIPAA or by any similar privacy law governing the use of or access to health information. “Security Addendum” means the Platform Security Addendum located at databricks.com/legal/security-addendum. “Schedule” means any of the schedules referenced herein or otherwise set forth in an Order. “Shared Data” means (i) Customer Content that you elect to share with third parties or (ii) data you elect to receive from third parties, under an applicable configuration of the Platform Services. “Support Policy” means the available Support Services plans located at databricks.com/support. “System” means any application, computing or storage device, or network. “Usage Data” means usage data and telemetry collected by Databricks relating to Customer's use of the Platform Services. Usage Data may contain queries entered by an Authorized User but not the results of those queries. “Workspace” means a Platform Services environment. Confidentiality. Confidential Information.
“Confidential Information” means any business or technical information disclosed by either party to the other that is designated as confidential at the time of disclosure or that, under the circumstances, a person exercising reasonable business judgment would understand to be confidential or proprietary. Without limiting the foregoing, all non-public elements of the Databricks Services are Databricks’ Confidential Information, Customer Content is Customer’s Confidential Information, and the terms of the Agreement and any information that either party conveys to the other party concerning data security measures, incidents, or findings constitute Confidential Information of both parties. Confidential Information will not include information that the receiving party can demonstrate (a) is or becomes publicly known through no fault of the receiving party, (b) is, when it is supplied, already known to whoever it is disclosed to in circumstances in which they are not prevented from disclosing it to others, (c) is independently obtained by whoever it is disclosed to in circumstances in which they are not prevented from disclosing it to others or (d) was independently developed by the receiving party without use of or reference to the Confidential Information.Confidentiality. A receiving party will not use the disclosing party’s Confidential Information except as permitted under the Agreement or to enforce its rights under the Agreement and will not disclose such Confidential Information to any third party except to those of its employees and/or subcontractors who have a bona fide need to know such Confidential Information for the performance or enforcement of the Agreement; provided that each such employee and/or subcontractor is bound by a written agreement that contains use and disclosure restrictions consistent with the terms set forth in this Section 2.2. Each receiving party will protect the disclosing party’s Confidential Information from unauthorized use and disclosure using efforts equivalent to those that the receiving party ordinarily uses with respect to its own Confidential Information of similar nature and in no event using less than a reasonable standard of care; provided, however, that a party may disclose such Confidential Information as required by applicable laws, subject to the party required to make such disclosure giving reasonable notice to the other party to enable it to contest such order or requirement or limit the scope of such request. The provisions of this Section 2.2 will supersede any non-disclosure agreement by and between the parties (whether entered into before, on or after the Effective Date) that would purport to address the confidentiality and security of Customer Content and such agreement will have no further force or effect with respect to Customer Content.Equitable Relief. Each party acknowledges and agrees that the other party may be irreparably harmed in the event that such party breaches Section 2.2 (Confidentiality), and that monetary damages alone cannot fully compensate the non-breaching party for such harm. Accordingly, each party hereto hereby agrees that the non-breaching party will be entitled to seek injunctive relief to prevent or stop such breach, and to obtain specific enforcement thereof. Any such equitable remedies obtained will be in addition to, and not foreclose, any other remedies that may be available.Intellectual Property. Ownership of the Databricks Services. 
Except for the limited licenses expressly set forth in the Agreement, Databricks retains all Intellectual Property Rights and all other proprietary rights related to the Databricks Services. You will not delete or alter the copyright, trademark, or other proprietary rights notices or markings appearing within the Databricks Services as delivered to you. You agree that the Databricks Services are provided on a non-exclusive basis and that no transfer of ownership of Intellectual Property Rights will occur. You further acknowledge and agree that portions of the Databricks Services, including but not limited to the source code and the specific design and structure of individual modules or programs, constitute or contain trade secrets and other Intellectual Property Rights of Databricks and its licensors.Ownership of Customer Content. As between you and Databricks, you retain all ownership or license rights in Customer Content.Usage Data. Notwithstanding anything to the contrary in the Agreement, Databricks may collect and use Usage Data to develop, improve, operate, and support its products and services. Databricks will not share any Usage Data that includes Customer Confidential Information except either (a) to the extent that such Usage Data is anonymized and aggregated such that it does not identify Customer or Customer Confidential Information; or (b) in accordance with Section 2 (Confidentiality) of this Agreement.Feedback. You are under no duty to provide any suggestions, enhancement requests, or other feedback regarding the Databricks Services (“Feedback”). If you choose to offer Feedback to Databricks, you hereby grant Databricks a perpetual, irrevocable, non-exclusive, worldwide, fully-paid, sub-licensable, assignable license to incorporate into the Databricks Services or otherwise use any Feedback Databricks receives from you solely to improve Databricks products and services, provided that such Feedback is used in a manner that is not attributable to you. You also irrevocably waive in favor of Databricks any moral rights which you may have in such Feedback pursuant to applicable copyright law. Databricks acknowledges that any Feedback is provided on an “as-is” basis with no warranties of any kind.Use of the Platform Services. Access. Databricks will make the Platform Services available to Customer and its Authorized Users in accordance with the terms and conditions of this Agreement, the Documentation, and an applicable Order.Databricks Responsibilities. Services. Databricks is responsible for (a) the operation of the Databricks Cloud Environment; and (b) the Databricks software used to operate the computing resources. Security Measures. Databricks shall implement reasonable administrative, physical, and technical safeguards to protect the security of the Platform Services and the Customer Content as set forth in the Security Addendum (“Security Measures”); and shall, without limiting the foregoing, maintain certification to ISO/IEC 27001:2013 or equivalent/greater standards during the term of this Agreement. Additionally, while it is your responsibility to back up Customer Content, Databricks will, at your reasonable request, provide commercially reasonable assistance with recovery efforts. While Databricks may update the Security Measures, it shall not materially diminish the effectiveness of the Security Measures.Customer Responsibilities. General Responsibilities.  
You acknowledge and agree that you are responsible for: ensuring that each Authorized User has their own credentials, protecting those credentials, and not permitting any sharing of credentials; securing any Customer Cloud Environment, and any Customer System; backing up Customer Content; configuring the Platform Services in an appropriate way taking into account the sensitivity of the Customer Content that you choose to process using the Platform Services, including Shared Data; using commercially reasonable efforts to ensure that your Authorized Users review the portions of Documentation relevant to your use of the Platform Services and any security information published by Databricks and referenced therein that is designed to assist you in securing Customer Content; risks associated with all use of the Platform Services by an Authorized User under an Authorized User’s account (including for the payment of Fees related to such use), whether such action was taken by an Authorized User or by another party, and whether or not such action was authorized by an Authorized User, provided that such action was not (1) taken by Databricks or by a party acting under the direction of Databricks, or (2) an action by a third party that Databricks should reasonably have prevented. Use Limits. You will not, and will not permit your Authorized Users to: violate the Acceptable Use Policy or use the Platform Services other than in accordance with the Documentation; copy, modify, disassemble, decompile, reverse engineer, or attempt to view or discover the source code of the Platform Services, in whole or in part, or permit or authorize a third party to do so, except to the extent such activities are expressly permitted by the Agreement or by law notwithstanding this prohibition; sell, resell, license, sublicense, distribute, rent, lease, or otherwise provide access to the Platform Services to any third party except to the extent explicitly authorized in writing by Databricks; use the Platform Services to develop or offer a service made available to any third party that could reasonably be seen to serve as a substitute for such third party’s possible purchase of any Databricks product or service; transfer or assign any of your rights hereunder except as permitted under Section 12.5 (Assignment); or during any free trial period granted by Databricks, including during the use of any Beta Service, use the Databricks Services for any purpose other than to evaluate whether to purchase the Databricks Services. Shared Responsibilities. Customer acknowledges that the Platform Services may be implemented in a manner that divides the Platform Services between the Customer Cloud Environment and the Databricks Cloud Environment, and that accordingly each party must undertake certain technical and organizational measures in order to protect the Platform Services and the Customer Content. Permitted Benchmarking. You may perform benchmarks or comparative tests or evaluations (each, a “Benchmark”) of the Platform Services and may disclose the results of the Benchmark other than for Beta Services.
If you perform or disclose, or direct or permit any third party to perform or disclose, any Benchmark of any of the Platform Services, you (i) will include in any disclosure, and will disclose to us, all information necessary to replicate such Benchmark, and (ii) agree that we may perform and disclose the results of Benchmarks of your products or services, irrespective of any restrictions on Benchmarks in the terms governing your products or services.Customer Content.Limits on What Customer Content May Contain. You agree that you will not include in Customer Content, or generate any Customer Results that include, any data for which you do not have all rights, power and authority necessary for its collection, use and processing as contemplated by the Agreement.PHI Data. You shall not include in Customer Content any PHI unless (a) you have entered into an Order permitting you to process PHI, and then only with respect to the Workspace(s) or account (if applicable) (together the “PHI Permitted Workspaces”) identified on such Order; and (b) you have entered into a Business Associate Agreement (“BAA”) with Databricks. If you have not entered into a BAA with Databricks or if you provide PHI to Databricks other than through the PHI Permitted Workspaces, Databricks will have no liability under the Agreement relating to PHI notwithstanding anything in the Agreement or in HIPAA or any similar laws to the contrary.Cardholder Data. You shall not include in Customer Content any cardholder data as defined under PCI-DSS (“Cardholder Data”) unless (1) you are processing the Cardholder Data in a PCI Permitted Workspace and configure and operate such Workspace in accordance with the Documentation; and (2) you have entered into an Order that (a) specifies Databricks then-current certification status under PCI-DSS; and (b) explicitly permits you to process Cardholder Data within the Platform Services (including specifying the types and quantities of such data) and then only with respect to the Workspace(s) identified in such Order (the “PCI Permitted Workspaces”). Databricks will have no liability under the Agreement relating to Cardholder Data that is not processed in accordance with the terms of this section notwithstanding anything in the Agreement or in PCI-DSS or any similar regulations to the contrary.Architectures and Services Updates. Databricks provides the Platform Services according to different architectural models (e.g. models where computing resources are deployed into Customer Cloud Environment and models where computing resources are deployed into Databricks Cloud Environments) depending on the specific feature being used by Customer, as further described in the Documentation. Accordingly, Customer acknowledges and agrees that different portions of the Platform Services are and may in the future be subject to changes reflected in the Documentation or terms and conditions that provide for different rights and responsibilities of the parties for their use.  Databricks Container ServicesAs part of Databricks Container Services, Databricks may provide a sample stub container file (a “Sample Container”) that you may use to create a custom container file (a “Modified Sample Container”). Databricks grants you a limited, non-exclusive right and license to use and modify the Sample Container to create a Modified Sample Container to use with Databricks Container Services. The Sample Container may contain libraries that are subject to open source licenses. 
It is your obligation to review and comply with any such licenses prior to your creation of the Modified Sample Container.You may not: include in a Custom Container any code: (i) for which you do not have the necessary right or license; or (ii) that contains any code that could subject Databricks to any condition that Databricks make any of its source code available or which may impose any other obligation or restriction with respect to Databricks’ Intellectual Property Rights; orattempt to disable or interfere with any technical limitations contained within Databricks Container Services.You grant Databricks a worldwide, non-exclusive royalty free right and license to use, reproduce and make derivative works of the Custom Container solely as necessary to provide Databricks Container Services to Customer.Data Protection. Except with respect to a free trial, the terms of the DPA are hereby incorporated by reference and shall apply to the processing of personal data as described in the DPA.Suspension and Termination of Platform Services. Suspension. Databricks may temporarily suspend any or all Workspaces at any time: (i) immediately without notice if Databricks reasonably suspects that you have violated your obligations under Section 4.3 (Customer Responsibilities), Section 4.6 (Customer Content), or Section 11 (Compliance with Laws) in a manner that may cause material harm or material risk of harm to Databricks or to any other party; (ii) or if you (or any third party responsible for making payment on your behalf) fail to pay undisputed Fees after receiving notice that you are delinquent in payment.Termination; Workspace Cancellation. Databricks may terminate your use of the Platform Services and any Workspaces and any applicable Order for material breach, including without limitation your breach of Section 4.10(a), that in each case is either not cured within thirty (30) days of notice of such breach or that by its nature is incapable of cure. If the Agreement or any applicable Order is terminated for any reason or upon your written request, Databricks may cancel your Workspaces. Upon termination of the Agreement for any reason you will delete all stored elements of the Platform Services from your Systems.Deletion of Customer Content upon Workspace Cancellation. Databricks will automatically delete all Customer Content contained within a Workspace within thirty (30) days following the cancellation of such Workspace. Monthly Pay-As-You-Go (PAYG) Services. Notwithstanding anything in the Agreement to the contrary, Databricks may suspend or terminate any Platform Services provided on a month-to-month basis with payment based only on Customer’s usage of the Platform Services during the billing month and delete any Customer Content relating to such Workspace that may be stored within the Platform Services or other Databricks’ Systems, upon thirty (30) days’ prior written notice (email sufficient) if Databricks reasonably determines the account is inactive as set forth in the Acceptable Use Policy.Notice. Notwithstanding Section 12.6 (Notice), notice under this Section 4.10 (Suspension; Termination) may be provided by email sent to a person the party providing notice reasonably believes to have responsibility for the other party’s activities under the Agreement.Support Services. Databricks will provide you with the level of Support Services specified in an Order in accordance with the Support Policy. 
If Support Services are not specified in an Order, your support shall be limited to public Documentation and forums.Warranties; Remedy.Warranties. Each party warrants that it is validly entering into the Agreement and has the legal authority to do so.  In addition to the warranties provided by the parties as set forth in any applicable Schedule, Databricks warrants that, during the term of any Order for Platform Services: (a) the Platform Services will function substantially in accordance with the Documentation; and (b) Databricks will employ commercially reasonable efforts in accordance with industry standards to prevent the transmission of malware or malicious code via the Platform Services.Disclaimer. THE WARRANTIES PROVIDED BY DATABRICKS IN SECTION 6.1 (WARRANTIES) ARE EXCLUSIVE AND IN LIEU OF ALL OTHER WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, REGARDING DATABRICKS AND DATABRICKS’ SERVICES PROVIDED HEREUNDER. DATABRICKS AND ITS LICENSORS SPECIFICALLY DISCLAIM ALL IMPLIED WARRANTIES, CONDITIONS AND OTHER TERMS, INCLUDING, WITHOUT LIMITATION, IMPLIED WARRANTIES OF MERCHANTABILITY, SATISFACTORY QUALITY OR FITNESS FOR A PARTICULAR PURPOSE. NOTWITHSTANDING ANYTHING TO THE CONTRARY HEREIN: (a) ANY SERVICES PROVIDED UNDER ANY FREE TRIAL PERIOD ARE PROVIDED “AS-IS” AND WITHOUT WARRANTY OF ANY KIND; (b) WITHOUT LIMITATION, DATABRICKS DOES NOT MAKE ANY WARRANTY OF ACCURACY, COMPLETENESS, TIMELINESS, OR UNINTERRUPTABILITY, OF THE PLATFORM SERVICES; (c), DATABRICKS IS NOT RESPONSIBLE FOR RESULTS OBTAINED FROM THE USE OF THE DATABRICKS SERVICES OR FOR CONCLUSIONS DRAWN FROM SUCH USE; AND (d) EXCEPT AS OTHERWISE STATED IN SECTION 4 (USE OF THE PLATFORM SERVICES), DATABRICKS’ REASONABLE EFFORTS TO RESTORE LOST OR CORRUPTED CUSTOMER INSTRUCTIONAL INPUT DESCRIBED THEREIN SHALL BE DATABRICKS’ SOLE LIABILITY AND YOUR SOLE AND EXCLUSIVE REMEDY IN THE EVENT OF ANY LOSS OR CORRUPTION OF CUSTOMER CONTENT IN CONNECTION WITH THE DATABRICKS SERVICES.Platform Services Warranty Remedy. FOR ANY BREACH OF THE WARRANTIES RELATED TO THE PLATFORM SERVICES PROVIDED BY DATABRICKS IN SECTION 6.1 (WARRANTIES), YOUR EXCLUSIVE REMEDY AND DATABRICKS’ ENTIRE LIABILITY WILL BE THE MATERIAL CORRECTION OF THE DEFICIENT SERVICES THAT CAUSED THE BREACH OF WARRANTY, OR, IF WE CANNOT SUBSTANTIALLY CORRECT THE DEFICIENCY IN A COMMERCIALLY REASONABLE MANNER, DATABRICKS WILL END THE DEFICIENT SERVICES AND REFUND TO YOU THE PORTION OF ANY PREPAID FEES PAID BY YOU TO DATABRICKS APPLICABLE TO THE PERIOD FOLLOWING THE EFFECTIVE DATE OF TERMINATION.Indemnification.  Indemnification by Databricks. Subject to Section 7.5 (Conditions of Indemnification), Databricks will defend Customer against any claim, demand, suit or proceeding made or brought against Customer by a third party (a “Claim Against Customer”)alleging that the Databricks Services as provided to Customer by Databricks or Customer’s use of the Databricks Services in accordance with the Documentation and the Agreement infringes or misappropriates such party’s Intellectual Property Rights (an “IP Claim”), and will indemnify Customer from and against any damages, attorney fees and costs finally awarded against Customer as a result of, or for amounts paid by Customer under a settlement approved by Databricks in writing of, a Claim Against Customer. 
Notwithstanding the foregoing, Databricks will have no liability for any infringement or misappropriation claim of any kind if such claim arises from: (a) the public open source version of Apache Spark (located at github.com/apache/spark) if the claim of infringement or misappropriation does not allege specifically that the infringement or misappropriation arises from the Platform Services (as opposed to Apache Spark itself); (b) the combination, operation or use of the Databricks Services with equipment, devices, software or data (including without limitation your Confidential Information) not supplied by Databricks if a claim would not have occurred but for such combination, operation or use; or (c) your or an Authorized User’s use of the Databricks Services other than in accordance with the Documentation and the Agreement.Other Remedies. If Databricks receives information about an infringement or misappropriation claim related to a Databricks Service or otherwise becomes aware of a claim that the provision of any of the Databricks Services is unlawful in a particular territory, then Databricks may at its sole option and expense: (a) replace or modify the applicable Databricks Services to make them non-infringing and of substantially equivalent functionality; (b) procure for you the right to continue using the Databricks Services under the terms of the Agreement; or (c) if Databricks is unable to accomplish either (a) or (b) despite using its reasonable efforts, terminate your rights and Databricks’ obligations under the Agreement with respect to such Databricks Services and refund to you any Fees prepaid by you to Databricks for Databricks Services not yet provided.Indemnification by Customer. Subject to Section 7.5 (Conditions of Indemnification), Customer will defend Databricks against any claim, demand, suit or proceeding made or brought against  Databricks by a third party (a “Claim Against Databricks”) (a) arising from or related to Customer’s use of the Databricks Services in violation of any applicable laws, the rights of a third party, or the Agreement, or (b) arising from or related to Customer Content or its use with the Databricks Services, (c) alleging that any information and / or  materials you provide to Databricks for Databricks to perform Advisory Services as defined in an Advisory Services Schedule (if applicable) (“Customer Materials”) or the use of Customer Materials with the Databricks Services infringes or misappropriates such party’s Intellectual Property Rights, and / or (d) arising from any instructions provided by Customer to Databricks in the creation by Databricks of the Deliverables (as defined in the Advisory Services Schedule (if applicable)), and will indemnify Databricks from and against any damages, attorney fees and costs finally awarded against Databricks as a result of a Claim Against Databricks, or for amounts paid by Databricks under a settlement approved by Customer in writing.Sole Remedy. SUBJECT TO SECTION 7.5 (CONDITIONS OF INDEMNIFICATION) BELOW, THE FOREGOING SECTIONS 7.1 (INDEMNIFICATION BY DATABRICKS) AND 7.2 (OTHER REMEDIES) STATE THE ENTIRE OBLIGATION OF DATABRICKS AND ITS LICENSORS WITH RESPECT TO ANY ALLEGED OR ACTUAL INFRINGEMENT OR MISAPPROPRIATION OF INTELLECTUAL PROPERTY RIGHTS BY THE DATABRICKS SERVICES.Conditions of Indemnification. 
As a condition to an indemnifying party’s (each, an “Indemnitor”) obligations under this Section 7 (Indemnification), a party seeking indemnification (each, an ”Indemnitee”) will: (a) promptly notify the Indemnitor of the claim for which the Indemnitee is seeking indemnification (but late notice will only relieve Indemnitor of its obligation to indemnify to the extent that it has been prejudiced by the delay); (b) grant the Indemnitor sole control of the defense (including selection of counsel) and settlement of the claim; (c) provide the Indemnitor, at the Indemnitor’s expense, with all assistance, information and authority reasonably required for the defense and settlement of the claim; and (d) preserve and will not waive legal, professional or any other privilege attaching to any of the records, documents, or other information in relation to such claim without prior notification of consent by the Indemnitor. The Indemnitor will not settle any claim in a manner that does not fully discharge the claim against an Indemnitee or that imposes any obligation on, or restricts any right of, an Indemnitee without the Indemnitee’s prior written consent, which may not be unreasonably withheld or delayed. An Indemnitee has the right to retain counsel, at the Indemnitee’s expense, to participate in the defense or settlement of any claim. The Indemnitor will not be liable for any settlement or compromise that an Indemnitee enters into without the Indemnitor’s prior written consent.Limitation of Liability. EXCEPT WITH RESPECT TO (I) LIABILITY THAT CANNOT BE EXCLUDED OR LIMITED BY APPLICABLE LAWS, (II) LIABILITY ARISING OUT OF FRAUD OR FRAUDULENT MISREPRESENTATION, OR (III) CUSTOMER’S INDEMNIFICATION OBLIGATIONS, NEITHER PARTY WILL HAVE ANY LIABILITY FOR: (A) INDIRECT, INCIDENTAL, SPECIAL, PUNITIVE, OR CONSEQUENTIAL LOSS OR DAMAGES; (B) LOST PROFITS OR REVENUE; (C) LOSS OF GOODWILL; (D) LOSS OF DATA; OR (E) LOSS ARISING FROM INACCURATE OR UNEXPECTED RESULTS ARISING FROM THE USE OF THE DATABRICKS SERVICES, REGARDLESS OF WHETHER SUCH PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH LOSSES OR DAMAGES ARISING.SUBJECT TO SECTIONS 8.1, 8.3, 8.4 AND 8.5, EXCEPT WITH RESPECT TO LIABILITY ARISING OUT OF: (I) PERSONAL INJURY OR DEATH CAUSED BY THE NEGLIGENCE OF A PARTY, ITS EMPLOYEES, OR AGENTS; (II) DATABRICKS’ INDEMNIFICATION OBLIGATIONS FOR AN IP CLAIM; OR (III) CUSTOMER’S INDEMNIFICATION OBLIGATIONS, IN NO EVENT WILL THE AGGREGATE LIABILITY OF EACH PARTY TOGETHER WITH ALL OF ITS AFFILIATES ARISING OUT OF OR RELATED TO THE AGREEMENT EXCEED THE TOTAL AMOUNT PAID BY CUSTOMER AND ITS AFFILIATES FOR THE DATABRICKS SERVICES GIVING RISE TO THE LIABILITY IN THE TWELVE (12) MONTHS PRECEDING THE FIRST INCIDENT OUT OF WHICH THE LIABILITY AROSE (THE “GENERAL CAP”). 
THE FOREGOING LIMITATION WILL APPLY WHETHER AN ACTION IS IN CONTRACT OR TORT AND REGARDLESS OF THE THEORY OF LIABILITY, BUT WILL NOT LIMIT CUSTOMER’S AND ITS AFFILIATES’ PAYMENT OBLIGATIONS UNDER SECTION 10 (PAYMENT).SUBJECT TO SECTIONS 8.1, 8.4 AND 8.5, DATABRICKS’ AGGREGATE LIABILITY FOR ANY CLAIMS OR DAMAGES, DIRECT OR OTHERWISE, ARISING OUT OF OR IN CONNECTION WITH DATABRICKS’ BREACH OF ITS CONFIDENTIALITY OBLIGATIONS (SECTION 2.2) OR, WITH RESPECT TO THE PROVISION BY DATABRICKS OF THE PLATFORM SERVICES (IF APPLICABLE), THE DATA PROTECTION AND SECURITY OBLIGATIONS SET FORTH IN THIS AGREEMENT AND THE DPA, WHERE SUCH BREACH RESULTS IN UNAUTHORIZED DISCLOSURE OF CUSTOMER CONTENT, EXCEPT TO THE EXTENT SUCH CLAIMS OR DAMAGES ARE CAUSED BY DATABRICKS’ GROSS NEGLIGENCE OR WILLFUL MISCONDUCT, SHALL BE LIMITED TO TWO (2) TIMES THE TOTAL AMOUNT PAID BY CUSTOMER AND ITS AFFILIATES FOR THE DATABRICKS SERVICES GIVING RISE TO THE LIABILITY IN THE TWELVE (12) MONTHS PRECEDING THE FIRST INCIDENT OUT OF WHICH THE LIABILITY AROSE (“SUPERCAP”).IN NO EVENT SHALL DATABRICKS BE LIABLE FOR THE SAME EVENT UNDER BOTH THE GENERAL CAP AND THE SUPERCAP. SIMILARLY, THOSE CAPS SHALL NOT BE CUMULATIVE; IF THERE ARE ONE OR MORE CLAIMS SUBJECT TO EACH OF THOSE CAPS, THE MAXIMUM TOTAL LIABILITY FOR ALL CLAIMS IN THE AGGREGATE SHALL NOT EXCEED THE SUPERCAP.NOTWITHSTANDING ANYTHING CONTAINED ABOVE, DATABRICKS' LIABILITY RELATING TO BETA SERVICES OR ANY DATABRICKS SERVICES PROVIDED FREE OF CHARGE, INCLUDING ANY DATABRICKS SERVICES PROVIDED DURING A FREE TRIAL PERIOD, WILL BE LIMITED TO FIVE THOUSAND US DOLLARS (USD $5,000).TermTerm of Agreement. The Agreement will become effective on the Effective Date and will continue in full force and effect until terminated by either party pursuant to this Section 9 (Term). The Agreement may be terminated (i) by either party on thirty (30) days’ prior written notice if (a) there are no operative Orders outstanding or (b) the other party is in material breach of the Agreement and the breaching party fails to cure the breach prior to the end of the notice period; or (ii) by Databricks upon thirty (30) days’ prior written notice following your receipt of a notice that you are delinquent in the payment of undisputed Fees. If the Agreement terminates pursuant to the prior sentence due to Databricks’ material breach, Databricks will refund to you that portion of any prepayments made to Databricks related to Databricks Services not yet provided. Either party can immediately terminate the Agreement if the other becomes insolvent, makes an assignment for the benefit of its creditors, has a receiver, examiner, or administrator of its undertaking of the whole or a substantial part of its assets appointed, or an order is made, or an effective resolution is passed, for its administration, examinership, receivership, liquidation, winding-up or other similar process, or has any distress, execution or other process levied or enforced against the whole or a substantial part of its assets (which is not discharged, paid out, withdrawn or removed within 30 days), or is subject to any proceedings which are equivalent or substantially similar to any of the foregoing under any applicable jurisdiction, or ceases to conduct business or threatens to do so.Term of Orders. The Term of an Order will be as specified in the Order.Survival. All provisions of the Agreement that by their nature should survive termination will so survive.Payment. 
Unless your usage of the Databricks Services is being paid for by a third party under contract with Databricks, you will pay all Fees specified in the applicable Order. With respect to a direct Order, except as otherwise specified therein: (a) all Fees owed to Databricks will be paid in U.S. Dollars; (b) invoiced payments will be due within 30 days of the date of your receipt of each invoice; (c) Fees for all prepaid committed Databricks Services will be invoiced in full upon execution of the applicable Order; and (d) all excess usage will be invoiced monthly in arrears. With respect to an Order entered into with a reseller, payment terms will be specified on such Order, provided that should you fail to pay Fees when due to a Databricks-authorized reseller, Databricks may seek payment directly from you. All past due payments, except to the extent reasonably disputed, will accrue interest at the highest rate allowed under applicable laws but in no event more than one and one-half percent (1.5%) per month. You will be solely responsible for payment of any applicable sales, value added or use taxes, or similar government fees or taxes. Compliance with Laws. By Databricks Generally. Databricks will provide the Databricks Services in accordance with its obligations under laws and government regulations applicable to Databricks’ provision of the Databricks Services to its customers generally, including, without limitation, those related to data protection and data privacy, irrespective of Customer’s particular use of the services. By Customer Generally. You represent and warrant to Databricks that your use of Databricks Services will comply with all applicable laws and government regulations, including without limitation those related to data protection and data privacy. Export Controls; Trade Sanctions. The Databricks Services may be subject to export controls and trade sanctions laws of the United States and other jurisdictions. Customer acknowledges and agrees that it will comply with all applicable export controls and trade sanctions laws, regulations and/or any other relevant restrictions in Customer’s use of the Databricks Services, including that you will not permit access to or use of any Databricks Services in any country where such access or use is subject to a trade embargo or prohibition, and that you will not use Databricks Services in support of any controlled technology, industry, or goods or services, or any other restricted use, without having a valid governmental license, authority, or permission to engage in such conduct. Each party further represents that it (and with respect to Customer, each Authorized User and/or Affiliate accessing the Databricks Services) is not named on any governmental or quasi-governmental denied party or debarment list relevant to this Agreement, and is not owned directly or indirectly by persons whose aggregated interest in such party is 50% or more and who are named on any such list(s). Business Practices; Code of Conduct. Databricks maintains a set of business practice principles and policies in the Databricks Global Code of Conduct, which employees are required to follow.
Databricks will abide by these principles and policies in the conduct of all business for Customer and expects your use of any Databricks Services to be conducted utilizing principles of business ethics and social responsibility and, with respect to any Platform Services, in accordance with Databricks’ Acceptable Use Policy and the applicable Platform Services terms set forth in the Agreement. General. Governing Law and Venue. The governing law and exclusive venue applicable to any lawsuit or other dispute arising in connection with the Agreement will be determined by the location of Customer’s principal place of business (“Domicile”), as follows:
Customer’s Domicile | Governing Law | Venue (courts with exclusive jurisdiction)
California | California | San Francisco (state and U.S. federal courts)
Americas (except California and Canada); Middle East; Africa | Delaware | Delaware (state and U.S. federal courts)
Canada | Ontario | Toronto
United Kingdom | England & Wales | London
Europe (including Turkey) | Ireland | Dublin
Pacific & Asia | Singapore | Singapore
Australia and New Zealand | Australia | Victoria
The parties hereby irrevocably consent to the personal jurisdiction and venue of the courts in the venues shown above. Unless prohibited by governing law or venue, each party irrevocably agrees to waive jury trial. In all cases, the application of law will be without regard to, or application of, conflict of law rules or principles, and the United Nations Convention on Contracts for the International Sale of Goods will not apply. Insurance Coverage. Databricks will maintain commercially appropriate insurance coverage given the nature of the Databricks Services and Databricks’ obligations under the Agreement. Such insurance will be in an industry standard form with licensed insurance carriers with A.M. Best ratings of A-IX or better, and will include commercially appropriate cyber liability insurance coverage. Upon request, Databricks will provide Customer with certificates of insurance evidencing such coverage. Entire Agreement, Construction, Amendment and Execution. The Agreement is the complete and exclusive understanding and agreement between the parties regarding its subject matter, provided that to the extent Customer uses any Databricks Services subject to Schedules not included in the Agreement, the relevant Schedule in effect at the time of first use at databricks.com/legal/mcsa shall be deemed to govern use of such Databricks Services unless the parties agree otherwise in writing and any reference to a term in such Schedule shall be interpreted accordingly. Databricks may change and update the Platform Services, in which case Databricks may update the Documentation. To the extent any provision in an Order clearly conflicts with a provision of this MCSA or a provision of an earlier Order, the provision in the new Order will be binding and the conflicting provision in this MCSA or in the earlier Order will be deemed modified solely to the extent reasonably necessary to eliminate the conflict and solely with respect to the new Order (unless expressly intended to permanently amend the Agreement including any Schedule). Customer’s Affiliates may receive the Databricks Services under this Agreement as Authorized Users, however in the event that a Customer Affiliate wishes to execute its own Order subject to the terms of this Agreement then Customer agrees to remain jointly and severally liable for such use.
If any provision of the Agreement is held to be unenforceable or invalid, that provision will be enforced to the maximum extent possible and the other provisions will remain in full force and effect. The headings in the Agreement are solely for convenience and will not be taken into consideration in interpretation of the Agreement. Any translation of the Agreement or an Order that is provided as a courtesy shall not be legally binding and the English language version will always prevail. Each party acknowledges and agrees that it has adequate sophistication, including legal representation, fully to review and understand the Agreement; therefore, in interpretation of the Agreement with respect to any drafting ambiguities that may be identified or alleged, no presumption will be given in favor of the non-drafting party. The Agreement may not be modified or amended except by mutual written agreement of the parties. Without limiting the foregoing, no Customer purchase order will be deemed to modify an Order or the Agreement unless expressly pre-authorized in writing by Databricks. The Agreement may be executed in two or more counterparts, each of which will be deemed an original and all of which, taken together, will constitute one and the same instrument. A party’s electronic signature or transmission of any document by electronic means will be deemed to bind such party as if signed and transmitted in physical form.Publicity. Customer consents to Databricks’ use of Customer's name and logo for public identification as a customer, along with general descriptions of any non-confidential matters Databricks has handled for Customer in promotional marketing materials and press releases. In addition, upon request, Customer consents to participating in a case study regarding its experiences with the Databricks Services ("Case Study"), and inclusion of the Case Study in promotional marketing materials and press releases.Assignment. No assignment, novation or transfer of a party’s rights and obligations under the Agreement (“Assignment”) is permitted except with the prior written approval of the other party, which will not be unreasonably withheld.  Notwithstanding the foregoing, either party may freely make an Assignment to a successor in interest upon a change of control; if such Assignment is to a direct competitor of the other party or would cause the other party to become in violation of applicable laws that is not reasonably addressable, such other party may terminate the Agreement upon written notice.Notice. Any required notice under the Agreement will be deemed given when received by letter delivered by nationally recognized overnight delivery service or recorded prepaid mail. Unless notified in writing of a change of address, you will send any required notice to Databricks, Inc., 160 Spear Street, Suite 1300, San Francisco, CA 94105, USA, attention: Legal Department, or to the alternative Databricks Affiliate (if any) identified in an applicable Order, and Databricks will send any required notice to you directed to the most recent address you have provided to Databricks for such notice.Force Majeure. 
Neither party will be liable or responsible to the other party nor be deemed to have defaulted under or breached the Agreement for any failure or delay in fulfilling or performing any term of the Agreement (except for any obligations to make payments to the other party), when and to the extent such failure or delay is caused by or results from acts beyond the impacted party’s (“Impacted Party”) reasonable control, including without limitation the following force majeure events (“Force Majeure Event(s)”): (a) acts of God, (b) acts of government, including any changes in law or regulations, (c) acts or omissions of third parties, (d) flood, fire, earthquakes, civil unrest, wars, acts of terror, pandemics, or strikes or other actions taken by labor organizations, (e) computer, telecommunications, the Internet, Internet service provider or hosting facility failures or delays involving hardware, software or power systems not within the Impacted Party’s possession or reasonable control, (f) network intrusions or denial of service attacks, or (g) any other cause, whether similar or dissimilar to any of the foregoing, that is beyond the Impacted Party’s reasonable control. Last Updated December 1, 2022. For earlier versions, please send a request to [email protected] (with “TOS Request” in the subject). MCSA_OnlineStandard_ENG_v.3.0_20221201
https://www.databricks.com/dataaisummit/speaker/frank-munz/#
Frank Munz - Data + AI Summit 2023 | Databricks. Frank Munz, Principal Technical Marketing Engineer at Databricks. Dr. Frank Munz works as a Principal Technical Marketing Engineer at Databricks after kickstarting DevRel EMEA. Previously, Frank built up technical evangelism for Amazon Web Services in Germany, Austria and Switzerland. Frank's focus is on cloud strategies, machine learning, cloud-native/containers, big, fast and non-relational data, and high-performance computing. With over 22 years of professional experience in distributed systems, Java Enterprise, microservices, SOA, and cloud computing, Frank has published 17 scientific articles in computer science and brain research as well as three computer science textbooks, some of which are used by lecturers at US universities. He has trained keynote speakers for events with up to 3,500 attendees from many verticals, including clinical research, pharmacy and genomics. Frank still presents regularly at conferences all over the world, such as Devoxx, JavaOne, JConf, Voxxed Days, Code One, and KubeCon. For his prolific global and cross-industry experience in IT he received the Technologist of the Year Award for Cloud Computing and was nominated as an independently working Oracle ACE Director. In 2017 he was nominated as one of the first Developer Champions worldwide. Frank holds a Ph.D. summa cum laude (1.0) in computer science from the Technische Universität München, where he spent four years working in supercomputing and distributed systems. He developed distributed functional algorithms for human brain research, cardiology and oncology. Frank sold his first computer program (a kind of mini e-business suite) at the age of 16. After working as a furniture mover in Paris, he was hired as a data scientist at the German Cancer Research Center on the team of Prof. zur Hausen. The study Frank worked on won the Nobel Prize in Medicine for linking a human virus (HPV) to the development of cancer. The vaccine in use today saves over 30,000 people in the US from developing cancer. Just recently, Frank achieved his goal of presenting at major conferences on every continent except Antarctica (because it is so cold there). He enjoys writing about himself in the third person.
https://www.databricks.com/dataaisummit/speaker/sachin-balgonda-patil/#
Sachin Balgonda Patil - Data + AI Summit 2023 | Databricks. Sachin Balgonda Patil, Solutions Architect at Databricks. Sachin is a Solutions Architect at Databricks, based in London, UK. He has spent around 20 years architecting, designing and implementing complex production-grade applications for various customers across the globe. In his prior role, he implemented streaming applications for financial services and has a deep interest in real-time streaming workloads. Before joining Databricks, he worked for a global systems integration company.
https://www.databricks.com/dataaisummit/speaker/rong-ma
Rong Ma - Data + AI Summit 2023 | Databricks. Rong Ma, Software Engineer at Intel. Rong Ma is a software engineer on Intel's data analytics team. She has two years of experience in big data and cloud system optimization, focusing on performance analysis and optimization of the computation, storage, and network software stacks. She has participated in development work including Spark SQL and Spark shuffle optimization, cache implementation, and more.
https://www.databricks.com/de/company/partners/technology-partner-program
Databricks Technology Partner Program | Databricks. Technology Partner Program. Reach thousands of Databricks customers who are just waiting to use your product. Apply now. Databricks integrates with and promotes the best data and AI products on the market.
We give our partners the technical and go-to-market support they need to win new customers and fuel their business. Benefits for technology partners. Sales incentives: incentives for Databricks representatives to support the sale of your product. Access to customers: win new customers immediately with Databricks Partner Connect. Learn more. Marketing support: leverage marketing investments to increase your reach. Access to product and R&D teams: get access to Databricks product, engineering and support staff. Free sandbox environment: build and test in a free sandbox environment. Joint marketing programs: join joint marketing programs with Databricks. Ready to get in touch? Apply now. Find a partner.
https://www.databricks.com/dataaisummit/speaker/morgan-hsu/#
Morgan Hsu - Data + AI Summit 2023 | Databricks
Morgan Hsu, Director, Data and ML Engineering at FanDuel. Morgan manages the Insights and ML platform teams at FanDuel. His teams provide ML infrastructure and services used to support use cases including responsible gaming, fraud detection, and marketing optimization.
https://www.databricks.com/dataaisummit/speaker/bin-mu
Bin Mu - Data + AI Summit 2023 | Databricks
Bin Mu, Vice President and Head of Data & Analytics at Adobe.
https://www.databricks.com/de/company/partners/data-partner-program
Data Partner Program | Databricks
Partner program for data providers: Get access to a broad and open ecosystem of data consumers with Databricks. Apply now.
Databricks helps our partner data providers monetize data assets through a single platform within a broad, open ecosystem of data consumers. Our partners can use the Databricks Lakehouse Platform to reach more customers, lower costs and deliver a best-in-class experience for all their data sharing needs.
Benefits for partner data providers: Reach more consumers: greater reach to every data consumer through an open and secure platform. Better customer experience: faster setup and activation for data consumers. Marketing support: increased visibility through marketing support from Databricks. Technology for data products: use the market-leading lakehouse platform for data, analytics and AI. Access to product and R&D teams: access to Databricks product, engineering and support staff. Industry solutions: work with our industry teams to build industry-specific solutions designed for our customers' use cases.
Delta Sharing for data providers: Databricks is natively integrated with Delta Sharing, the world's first open protocol for secure, real-time data sharing between organizations, regardless of the platform on which the data is stored. Delta Sharing is supported by a broad ecosystem: open source clients, commercial clients, business intelligence, analytics, governance and data providers.
Want to get started? Apply now. Find a partner.
https://www.databricks.com/p/webinar/delta-lake-the-foundation-of-your-lakehouse
Delta Lake: The Foundation of Your Lakehouse | DatabricksVirtual Event + Live Q&ADelta Lake: The Foundation of Your LakehouseBring reliability, performance and security to your data lakeAvailable on-demandAs an open format storage layer, Delta Lake delivers reliability, security and performance to data lakes. Customers have seen 48x faster data processing, leading to 50% faster time to insight, after implementing Delta Lake.Watch a live demo and learn how Delta Lake:Solves the challenges of traditional data lakes — giving you better data reliability, support for advanced analytics and lower total cost of ownershipProvides the perfect foundation for a cost-effective, highly scalable lakehouse architectureOffers auditing and governance features to streamline GDPR complianceHas dramatically simplified data engineering for our customersSpeakersHimanshu RajaProduct ManagementDATABRICKSSam SteinyProduct MarketingDATABRICKSBrenner HeintzProduct MarketingDATABRICKSBarbara EckmanSoftware ArchitectCOMCASTTranscript Sam Steiny: Hi, and welcome to the Databricks event, Delta Lake, the foundation of your lakehouse. My name is Sam Steiny and I work in product marketing at Databricks, focusing specifically on data engineering and on Delta Lake. I'm excited to be here today. I get to be the MC for today's event, and I will be guiding you through today's sessions. More and more, we've seen the term lakehouse referenced in news at events in tech blogs, and thought leadership. And beyond our work at Databricks, organizations across industries have really increasingly turned to this idea of a lakehouse as the future for unified analytics, data science and machine learning.Sam Steiny: In today's event, we'll see an overview of Delta Lake, which is the secure data storage and management layer for your data lake that really forms the foundation of a lakehouse. We'll see a demo of Delta Lake in action, and we'll actually hear how Comcast has leveraged Delta Lake to bring reliability, performance, and security to their data. We'll finish today's event with a live Q and A. So, come prepared with your questions and we'll do our best to answer as many as possible. So, before we start just some quick housekeeping, today's session is being recorded. So, it'll be available on demand to anyone who has registered.Sam Steiny: And then also, if you have any questions throughout the event, please feel free to add them to the Q and A box. We'll do our best to actually answer them in real time there. But we'll also answer the leftover questions as well as any additional ones in the live Q and A at the end of the session. So, now before we get to our speakers, I wanted to share a quick overview of Delta Lake in a video we recently launched. This'll give you a high level understanding of what Delta Lake is, before Himanshu who is the Delta Lake product manager, will go into more detail about Delta Lake and how it forms the foundation of a lakehouse.Speaker 3: Businesses today have the ability to collect more data than ever before. And that data contains valuable insights into your business and your customers, if you can unlock it. As most organizations have discovered, it's no simple task to turn data into insights. Today's data comes in a variety of formats, video, audio, and text. Data lakes have become the defacto solution because they can store these different formats at a low cost and don't lock businesses into a particular vendor like a data warehouse does. 
But traditional data lakes have challenges, as data lakes accumulate data in different format, maintaining reliable data is challenging and can often lead to inaccurate query results.Speaker 3: The growing data volume also impacts performance, slowing down analysis and decision-making, and with few auditing and governance features data lakes are very hard to properly secure and govern. With all of these challenges, as much as 73% of company data goes unused for analytics and decision-making and value in it is never realized. Delta Lake solves these challenges. Delta Lake is a data storage and management layer for your data lake that enables you to scale insights throughout your organization with a reliable single source of truth for all data workloads, both batch and streaming, increase productivity by optimizing for speed at scale with performance features like advanced indexing and schema enforcement.Speaker 3: Operate with flexibility in an open source environment stored in Apache parquet format and reduce risk by quickly and accurately updating data in your data lake for compliance and maintain better data governance through audit logging. By unlocking your data with Delta Lake, you can do things like dramatically simplified data engineering by performing ETL processes directly on the data lake. Make new real-time data instantly available for data analysis, data science and machine learning, gain confidence in your ability to reliably meet compliance standards like GDPR and CCPA.Speaker 3: Delta Lake on Databricks brings reliability, performance and security to your data all in an open format, making it the perfect foundation for a cost-effective highly scalable lakehouse architecture. Delta Lake, the open, reliable, performant and secure foundation of your lakehouse.Sam Steiny: Great. So, with that high level view, now you have an understanding of Delta Lake and I'm going to now pass it over to Himanshu Raja, who's the product manager for Delta Lake at Databricks. He's going to do a deeper dive into Delta Lake and explain how it really enables a lakehouse for our customers. Over to you, Himanshu.Himanshu Raja: Thank you, Sam. I'm super excited to be here and talk to you about Delta Lake and why it is the right foundation for lakehouse. In today's session, I will cover the challenges of building data analytics stack while lakehouse is the only future proof solution. What is Delta Lake? And why it is the best foundation for your lakehouse? Brenner, will then jump into the most exciting part of the session and do a demo. After the session, you will have enough context, links to the supporting material to get started and build your first data lake.Himanshu Raja: Every company is feeling the pull to become a data company, because when large amounts of data are applied to even simple models, the improvements on use cases is exponential. And here at Databricks, our entire focus is on helping customers apply data to their toughest problems. I'll dig examples of two such customers, Comcast and Nationwide. Comcast is a great example of a media company that has successfully adopted data and machine learning to create new experiences for their viewers that help improve satisfaction and retention.Himanshu Raja: They have built a voice-activated remote control that allows you to speak into the remote, ask it a question, and it will provide some really relevant results, leveraging things like natural language processing and deep learning. And they've built all of this on top of Databricks platform. 
Nationwide is one of the largest insurance providers in the U.S. nationwide saw that the explosive growth in data availability and increasing market competition was challenging them to provide better pricing to their customers. With hundreds of millions of insurance records to analyze for downstream ML nationwide realized that their legacy batch analysis process was slow and inaccurate, providing limited insights to predict the frequency and severity of the claims.Himanshu Raja: With Databricks, they have been able to employ deep learning models at scale to provide, more accurate pricing predictions resulting in more revenue from claims. Because of this potential, it's not surprising that, 83% of CEOs say AI is a strategic priority. According to a report published by MIT Sloan management review, or that Gartner predicts AI will generate almost trillion dollars in business value in only a couple of years. But it is very hard to get right. Gartner says 85% of the big data projects will fail. Venture Beat published a report that said 87% of data science projects never make it into production. So, while some companies are having success most still struggle.Himanshu Raja: So, the story starts with data warehouses, which it is hard to believe. Will soon celebrate its 40th birthday. Data warehouses came around in the 80s and were purpose-built for BI and reporting. Overtime they have become essential and today every enterprise on the planet has many of them. However, they weren't built for modern data use cases. They have no support for data like video or audio or text. Datasets that are crucial for modern use cases. It had to be very structured data queriable only with SQL. As a result, there is no viable support for data science or machine learning. In addition, there is no support for real-time streaming. They are great for batch processing, but either do not support streaming or can be cost prohibitive.Himanshu Raja: And because they are closed and proprietary systems, they force you to lock your data in, so you cannot easily move data around. So, today the result of all of that is that most organizations will first store all of their data in data lakes and block stores, and then move subsets of it into the data warehouse. So, then the thinking was that potentially data lakes could be the answer to all our problems. Data lakes came around about 10 years ago and they were great because they could indeed handle all your data. And they were there for good for data science and machine learning use cases. And data lakes serves as a great starting point for a lot of enterprises.Himanshu Raja: However, they aren't able to support that data warehousing or BI use cases. Data lakes are actually more complex to set up than a data warehouse. Our warehouse has a lot of familiar support semantics like asset transactions. With the data lakes, you are just dealing with files. So, those abstractions are not provided, you really have to build them yourself. And they're very complex to set up. And even after you do all of that, the performance is not great. You're just dealing with files, the end. In most cases, customers end up with a lot of small files and even the simplest queries will require you to list all those files. That takes time.Himanshu Raja: And then lastly, when it comes to reliability, they are not that great either. We actually have lot more data in that data lakes, then the warehouse, but is the data reliable? Can I actually guarantee that the schema is going to stay the same? 
How easy is it for an analyst to merge a bunch of different schemas together? As a result of all of these problems, data lakes have sort of turned into these unreliable data swamps where you have all the data, but it's very difficult to make any sense of it. So, understandably, in the absence of a better alternative, what we are seeing with most organizations is a strategy of coexistence.
Himanshu Raja: So, this is what a data swamp looks like. There are tons of different tools to power each architecture required by a business unit or the organization. It's a whole slew of different open source tools that you have to connect. In the data warehousing stack, on the left side, you are often dealing with proprietary data formats. And if you want to enable advanced use cases, you have to move the data across to other stacks. It ends up being expensive and resource intensive to manage. And what does it result in? Because the systems are siloed, the teams become siloed too. Communication slows down, hindering innovation and speed.
Himanshu Raja: Different teams often end up with different versions of the truth. The result is multiple copies of data, no consistent security and governance model, closed systems, and disconnected, less productive data teams. So, how do we get the best of both worlds? We want some things from the data warehouse, and we want some things from the data lake. We want the performance and reliability of the data warehouse, and we want the flexibility and scalability of the data lake. This is what we call the lakehouse paradigm. And the idea here is that the data is in the data lake, but now we are going to add some components so that we can do all the BI and reporting of the warehouse and all the data science and machine learning of the data lake, and also support streaming analytics. So, let's build a lakehouse. What are the things we need to build a lakehouse?
Himanshu Raja: We said that we want all our data to be in a really scalable storage layer, and we want a unified platform where we can achieve multiple use cases. So, we need some kind of transactional layer on top of that data storage layer. What you really need is something like ACID compliance, so that when you write data, it either fully succeeds or fully fails and things are consistent. That structured transactional layer is what Delta Lake is. And then the other requirement we talked about was performance. To support the different types of use cases, we need it to be really fast. We have a lot of data that we want to work with. So, there is the Delta engine, which is a high-performance query engine that Databricks has created in order to support different types of use cases, whether it is SQL, data science, ETL, BI reporting or streaming, all of that on top of the engine to make it really, really fast.
Himanshu Raja: So, let's do a deep dive on what Delta Lake is. Delta Lake is an open, reliable, performant and secure data storage and management layer for your data lake that enables you to create a true single source of truth. Since it's built upon Apache Spark, you are able to build high-performance data pipelines to clean your data from raw ingestion to business-level aggregates. And given the open format, it allows you to avoid unnecessary replication and proprietary lock-in. Ultimately, Delta Lake provides the reliability, performance and security you need to solve your downstream data use cases. Next, I'm going to talk about each of those benefits of Delta Lake.
The first and foremost benefit that you get with Delta Lake is high-quality, reliable data in your analytics stack.
Himanshu Raja: Let me just talk about three key things here. The first is ACID transactions. The second is schema enforcement and schema evolution. And the third is unified batch and streaming. So, on ACID transactions, Delta employs an all-or-nothing ACID transaction approach to guarantee that any operation you do on your data lake either fully succeeds or gets aborted so that it can be rerun. On schema enforcement, Delta Lake uses schema validation on write, which means that all new writes to a table are checked for compatibility with the target table's schema at write time. If the schema is not compatible, Delta Lake cancels the transaction altogether, writes no data, and raises an exception to let the user know about the mismatch.
Himanshu Raja: We have very recently introduced capabilities to also do schema evolution, where we can evolve the schema on the fly as the data is coming in, especially in the cases where the data is semi-structured or unstructured and you may not know what the data types are, or even, in a lot of cases, what the incoming columns are. The third thing I would like to talk about is unified batch and streaming. Delta is able to handle both batch and streaming data, including the ability to concurrently write batch and streaming data to the same table. Delta Lake directly integrates with Spark Structured Streaming for low-latency updates.
Himanshu Raja: Not only does this result in a simpler system architecture, because you no longer need to build a Lambda architecture; it also results in a shorter time from data ingest to query results. The second key advantage of Delta Lake is performance, lightning-fast performance. There are two aspects to performance in a data analytics stack. One is how the data is stored, and the other is performance during query, at run time. So, let's talk about how the data is stored and how Delta optimizes the data storage format itself. Delta comes with out-of-the-box capabilities to store the data optimally for querying. Capabilities such as Z-ordering, where the data is automatically structured along multiple dimensions for fast query performance, is one. Delta also has data skipping, where Delta maintains file statistics so that only the data subsets relevant to a query are read instead of the entire table.
Himanshu Raja: We don't have to go and read all the files; files can be skipped based on the statistics. And then there is auto-optimize. Optimize is a set of features that automatically compacts small files into fewer, larger files so that query performance is great out of the box. You pay a small cost during writes in exchange for a really great benefit for those tables during querying. So, that's the part about how the data is stored. Now, let's talk about the Delta engine, which comes into play when you actually query that data. The Delta engine has three key components to provide super-fast performance: Photon, the query optimizer and caching. Photon is a native vectorized engine, fully compatible with Apache Spark, built to accelerate all structured and semi-structured workloads by more than 20x compared to Spark 2.4.
Himanshu Raja: The second key component of the Delta engine is the query optimizer.
The query optimizer extends Spark's cost-based optimizer and adaptive query execution with advanced statistics to provide up to 18x faster query performance for data warehousing workloads than Spark 3.0. And then the third key component of the Delta engine is caching. The Delta engine automatically caches I/O data and transcodes it into a more CPU-efficient format to take advantage of NVMe SSDs, providing up to 5x faster performance for table scans than Spark 3.0. It also includes a second cache for query results, to instantly provide results for any subsequent runs. This improves performance for repeated queries like dashboards, where the underlying tables are not changing frequently.
Himanshu Raja: So, let me talk about the third main benefit of Delta Lake, which is to provide security and compliance at scale. Delta Lake reduces risk by enabling you to quickly and accurately update data in your data lake to comply with regulations like GDPR, and to maintain better data governance through audit logging. Let me talk about two specific features: time travel, and table- and role-based access controls. With time travel, Delta automatically versions the big data that you store in your data lake and enables you to access any historical version of that data. This temporal data management simplifies your data pipeline by making it easy to audit, to roll back data in case of accidental bad writes or deletes, and to reproduce experiments and reports.
Himanshu Raja: Your organization can finally standardize on a clean, centralized, versioned big data repository in your own cloud storage for your analytics. The second feature I would love to talk about is table- and role-based access controls. With Delta Lake, you can programmatically grant and revoke access to your data based on a specific workspace or role, to ensure that your users can only access the data that you want them to. And with Databricks' extensive ecosystem of partners, customers can enable a variety of security and governance functionality based on their individual needs.
Himanshu Raja: Lastly, one of the most important benefits of Delta Lake is that it's open and agile. Delta Lake is an open format that works with other open source technologies, avoiding vendor lock-in and opening up an entire community and ecosystem of tools. All the data in Delta Lake is stored in the open Apache Parquet format, allowing data to be read by any compatible reader. Developers can use Delta Lake with their existing data pipelines with minimal changes, as it is fully compatible with Spark, the most commonly used big data processing engine. Delta Lake also supports SQL DML out of the box, to enable customers to migrate SQL workloads to Delta simply and easily.
Himanshu Raja: So, let's talk about how we have seen customers leverage Delta Lake for a number of use cases. Primary among them are improving data pipelines and doing ETL at scale; unifying batch and streaming, with direct integration with Apache Spark Structured Streaming to run both batch and streaming workloads instead of building a Lambda architecture; doing BI on your data lake, with the Delta engine's super-fast query performance, so you don't need to choose between a data lake and a data warehouse, because as we talked about, with the lakehouse you can do BI directly on your data lake; and then meeting regulatory needs, with standards like GDPR, by keeping a record of historical data changes. And who are these users?
Himanshu Raja: Delta Lake is being used by some of the largest Fortune 100 companies in the world.
We have customers like Comcast, Wirecomm, Conde Nast, McAfee, Edmonds. In fact, here at Databricks all of our data analytics is done using Delta Lake. So, I would love to dive deep and talk about the Starbucks use case, just to give you an idea as to how our customers have used Delta Lake in their ecosystem. Starbucks today does demand forecasting and personalizes the experiences of their customers on their app. Their architecture was actually struggling to handle petabytes of data ingested for downstream ML and analytics, and they needed a scalable platform to support multiple use cases across the organization.
Himanshu Raja: With Azure Databricks and Delta Lake, their data engineers are able to build pipelines that support batch and real-time workloads on the same platform. They have enabled their data science teams to blend various datasets to create new models that improve the customer experience. And most importantly, data processing performance has improved dramatically, allowing them to deploy environments and deliver insights in minutes. So, let me wrap up by summarizing what Delta Lake can do for you and why it is the right foundation for your lakehouse. As we discovered, with Delta Lake you can improve analytics, data science and machine learning throughout your organization by enabling teams to collaborate and ensuring that they are working on reliable data, to improve the speed with which they make decisions.
Himanshu Raja: You can simplify data engineering, reduce infrastructure and maintenance costs with the best price/performance, and you can enable a multi-cloud, secure infrastructure platform with Delta Lake. So, how do you get started with Delta Lake? It's actually really easy. If you already have a Databricks deployment on Azure or AWS, and now GCP, and you deploy a cluster with DBR, the Databricks Runtime, release version 8.0 or higher, you do not need to do anything: Delta is now the default format for all created tables and the DataFrame APIs. But we also have plenty of resources for you to try out the product and learn.
Himanshu Raja: It's actually a lot of fun to deploy your first Delta Lake and just build a really cool dashboard using a notebook. If you have not tried Databricks before, you can sign up for a free trial account and then follow our getting started guide. And Brenner will do a demo very shortly to showcase the capabilities that we talked about. So, with that, over to you, Sam.
Sam Steiny: Awesome. Thank you, Himanshu. That was great. Now I'm going to pass the stage over to Brenner Heintz, and Brenner is going to take us through a demo that really brings Delta Lake to life. Now that you've heard what it is and how powerful it can be, let's see it in action. So, over to you, Brenner.
Brenner Heintz: My name is Brenner Heintz. I am a technical PMM at Databricks, and today I'm going to show you how Delta Lake provides the perfect foundation for your lakehouse architecture. We're going to do a demo, and I'm going to show you how it works from a practitioner's perspective. Before we do so, I want to highlight the Delta Lake cheat sheet. I've worked on this with several of my colleagues, and the idea here is to provide a resource for practitioners like yourself, so you can quickly and easily get up to speed with Delta Lake and be productive with it very, very quickly. We've provided most, if not all, of the commands in this notebook; it's part of the cheat sheet.
So, I highly encourage you to download this notebook and you can click directly on this image, it'll take you directly to the cheat sheet, provide a one pager for Delta Lake with Python and a one pager for Delta Lake with Spark SQL.Brenner Heintz: So, first in order to use Delta Lake, you need to be able to convert your data to Delta Lake format. And the way that we're able to do that is instead of saying parquet as part of your create table or your Spark data frame writer command, all you have to do is place that with the word Delta, to be able to start using Delta Lake right away. So, here's a look at what that looks like. With Python, we can use Spark to read in our data in parquet format. You could also read in your data in CSV or other formats for example. Spark is very flexible in that way. And then we simply write it out in Delta format by indicating Delta here.Brenner Heintz: And we're going to save our data in the loans Delta table. We can do the same thing with SQL. We can use a create table command using Delta to then save our table in Delta format. And finally, the convert to Delta command makes it really easy to convert our data to Delta Lake format in place. So, now that we have shown you how to convert your data to Delta like format, let's take a look at a Delta Lake table and what that looks like. So, I've run the cell already. We have 14,705 batch records in our loans Delta table. Today, we're working with some data from the lending club, and you can see the columns that are currently part of our table here.Brenner Heintz: So, I went ahead and kicked off a couple of right streams to our table. And the idea here was to show you that Delta Lake tables are able to handle batch and streaming data, and they're able to integrate those straight out of the box without any additional configuration or anything else that's needed. You don't need to build a Lambda architecture, for example, to integrate both batch in real-time data. Delta Lake tables can easily manage both at once. So, as you can see, we're writing about 500 records per second, into our existing Delta Lake table. And we're doing so with two different writers, just to show you that you can concurrently both read and write from Delta Lake tables consistently with asset transactions, ensuring that you never deal with a pipeline breakage that corrupts the state of your table, for example.Brenner Heintz: Everything in Delta Lake is a transaction. And so this allows us to create isolation between different readers and writers. And that's really powerful, it saves us a lot of headache and a lot of time undoing mistakes that we may have made if we didn't have acid transactions. So, as I promised as well, those two streaming writes have been coupled. I've also created two streaming reads to show you what's happening in the table in near real time. So, we had those initial 14,705 batch records here. But since then we have about 124,000 streaming records that have entered our table since that time.Brenner Heintz: This is essentially the same chart, but showing you what's happening over each 10-second-window, each of these bars represents a 10-second-window, over which as you can see, since our streams began, we have about 5,000 records per stream that are written to our table at any time. So, all of this is just to say that Delta Lake is a very powerful tool that allows you to easily integrate batch and streaming data straight out of the box. It's very easy to use, and you can get started right away. 
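As a rough sketch of the conversion commands just described (not the demo's actual notebook): the file paths and the loans_delta table name are illustrative assumptions, and the built-in rate source stands in for the demo's write streams.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# 1) Read Parquet (or CSV, JSON, ...) and write it back out in Delta format.
df = spark.read.format("parquet").load("/data/loans_parquet")
df.write.format("delta").mode("overwrite").save("/data/loans_delta")
spark.sql("CREATE TABLE IF NOT EXISTS loans_delta USING DELTA LOCATION '/data/loans_delta'")

# 2) Or create a Delta table directly with SQL.
spark.sql("""
    CREATE TABLE IF NOT EXISTS loans_delta_ctas
    USING DELTA
    AS SELECT * FROM parquet.`/data/loans_parquet`
""")

# 3) Or convert a Parquet directory to Delta in place, without rewriting the data files.
spark.sql("CONVERT TO DELTA parquet.`/data/loans_parquet`")

# Streaming works the same way: a write stream appends to a Delta table continuously,
# and any batch read of that table sees a consistent snapshot, because every append
# is an ACID commit in the Delta transaction log.
stream = (spark.readStream.format("rate").option("rowsPerSecond", 500).load()
          .writeStream.format("delta")
          .option("checkpointLocation", "/data/_checkpoints/rate_events")
          .outputMode("append")
          .start("/data/rate_events_delta"))
print(spark.read.format("delta").load("/data/rate_events_delta").count())
stream.stop()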
To put the cherry on top, we added a batch query just for good measure, and we plotted it using Databricks built-in visualization tools, which are very easy and allow you to visualize things very quickly.Brenner Heintz: So, now, that we've showed you how easy it is to integrate batch and streaming data with Delta Lake, let's talk about data quality. You need tools like schema enforcement and schema evolution in order to enforce the quality in your tables. And the reason for that is that what you don't want are upstream data sources, adding additional columns, removing columns, or otherwise changing your schema without you knowing about it. Because what that can cause is a pipeline breakage that then affects all of your downstream data tables. So, to avoid that, we can use schema enforcement first and foremost. So, here I've created this new data, data frame that contains a new column, the credit score column, which is not present in our current table.Brenner Heintz: So, because Delta Lake offers schema enforcement when we run this command, we get an exception because the schema mismatch has been detected by Delta Lake. And that's a good thing. We don't want our data to successfully write to our Delta Lake table because it doesn't match what we expect. However, as long as we're aware and we want to intentionally migrate our schema, we can do so by adding a single command to our write command, we include the merge schema option. And now, that extra column is successfully written to our table, and we're also able to evolve our schema. So, now, when we try and select the records that were in our table, in our new data table, you can see that those records were in fact successfully written to the table and that new credit score column is now present in the schema of our table as well.Brenner Heintz: So, these tools give you, they're very powerful and they allow you to enforce your data quality the way that you need to in order to transition your data from raw unstructured data to high quality structured data, that's ready for downstream apps and users overtime. So, now, that we've talked about schema enforcement and scheme evolution, I want to move on to Delta Lake time travel. Time travel is a really powerful feature of Delta Lake. And because everything in Delta Lake as a transaction, and we're tracking all of the transactions that are made to our Delta Lake tables over time in the transaction log, that allows us to go back in time and recreate the state of our Delta Lake table at any point in time.Brenner Heintz: First, let's look at what that looks like. So, at any point, we can access the transaction log by running this describe history command. And as you can see, each of these versions of our table represent some sort of transaction, some sort of change that was made to our tables. So, our most recent change was that we upended those brand new records with a new column to our Delta Lake table. So, you can see that transaction here, before that we had some streaming updates. All of those rights that were occurring to our table were added as transactions. And basically this allows you to then go back and use the version number or timestamp, and then query historical versions of your Delta Lake tables at any point. That's really powerful because you can even do creative things like compare your current version of a table to a previous version to see what has changed since then, and do other sorts of things along those lines.Brenner Heintz: So, let's go ahead and do that. 
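A sketch of the steps being described here, assuming hypothetical column names (user_id, credit_score) and an illustrative table path rather than the demo's exact schema:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# A new batch that carries an extra credit_score column not present in loans_delta.
new_data = spark.createDataFrame([(4420, 670), (999999, 720)], ["user_id", "credit_score"])

# Plain append: the schema is validated on write, the unexpected column is rejected
# with an AnalysisException, and nothing is committed to the table.
try:
    new_data.write.format("delta").mode("append").save("/data/loans_delta")
except Exception as err:
    print(err)

# Intentional schema evolution: the same append succeeds once mergeSchema is set,
# and credit_score becomes part of the table schema (null for the older rows).
(new_data.write.format("delta").mode("append")
    .option("mergeSchema", "true").save("/data/loans_delta"))

# Every change is a commit in the transaction log, which you can inspect directly.
spark.sql("DESCRIBE HISTORY delta.`/data/loans_delta`") \
    .select("version", "timestamp", "operation").show()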
Let's look, we'll use time travel to view the original version of our table, which was version zero. And this should include just those 14,705 records that we started with because at that point version zero of our table, we hadn't streamed any new records into our table at all. And as you can see, the original version, those 14,705 records are the only records that are present in version as of zero. And there is no credit score column either, because of course, back in version zero, we had not yet evolved Delta Lake table schema.Brenner Heintz: So, compare that 14,705 records to the current number of records in our table, which is over 326,000. Finally, another thing you can do with Delta Lake time-travel is restore a previous version of your tables at any given point in time. So, this is really powerful, if you accidentally delete a column you didn't mean to, or delete some records you didn't mean to, you can always go back and use the restore command to then have the current version of your table restored exactly the way that your data was at that given timestamp or version number. So, as you can see, when we run this command to restore our table to its original state version as of zero, we have been able to do so successfully. Now, when we query it, we only get those 14,705 records as part of the table.Brenner Heintz: Next, one of the features that I think developers, data engineers and other data practitioners are really looking for when they're building their lakehouse is the ability to run simple DML command with just one or two lines of code, be able to do operations like deletes, updates, merges, inserts, et cetera. On a traditional data lake, those simply aren't possible. With Delta Lake, you can run those commands and they simply work and they do so transactionally. And they're very, very simple. So, managing change data becomes much, much easier when you have these simple commands at your disposal.Brenner Heintz: So, let's take a look, we'll choose user ID 4420 as our test case here, we'll use sort of modify their data specifically to show you what Delta Lake can do. As you can see, they are currently present in our table, but if we run this delete command and we specify that specific user, when we run the command and then we select all from our table, we now have no results. The delete has occurred successfully. Next, when we look at the described history command, the transaction log, so you can see the delete that we just carried out is now present in our table. And you can also see the restore that we did to jump back to the original version of our table version zero is also present. We can also do things like insert records directly back into our table if we want to do so.Brenner Heintz: Here, we're going to use time-travel to look at version as of zero, the original version of our table before this user was deleted and then insert that user's data back in. So, now when we run the select all command, the user is again, present in our table. The insert into command works great. Next, there's the update command. Updates are really useful, if you have row level changes that you need to make. Here, we're going to change this users funded amount to 22,000. Actually let's make it 25,000, it looks like it was already 22,000 before.Brenner Heintz: So, we'll update that number and then when we query our table, now, in fact, the user's funded amount has been updated successfully. Finally, in Delta Lake you have the ability to do really, really powerful merges. 
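Sketched below are the time travel and DML statements being narrated; the user_id and funded_amnt column names are assumptions based on the demo, and the VERSION AS OF syntax assumes a recent Delta Lake or Databricks runtime.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Query the table as it looked at version 0, before any streaming or schema changes.
spark.sql("SELECT COUNT(*) FROM loans_delta VERSION AS OF 0").show()

# Simple DML, each statement its own transaction.
spark.sql("DELETE FROM loans_delta WHERE user_id = 4420")

# Re-insert the deleted rows from the snapshot taken just before the DELETE,
# which is guaranteed to have the same schema as the current table.
prev = spark.sql("DESCRIBE HISTORY loans_delta") \
            .selectExpr("max(version) - 1 AS v").first()["v"]
spark.sql(f"INSERT INTO loans_delta SELECT * FROM loans_delta VERSION AS OF {prev} WHERE user_id = 4420")

spark.sql("UPDATE loans_delta SET funded_amnt = 25000 WHERE user_id = 4420")

# Or roll the whole table back to an earlier state in one command.
spark.sql("RESTORE TABLE loans_delta TO VERSION AS OF 0")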
You can have a table full of change data that for example represents inserts and updates to your Delta Lake table. And with Delta Lake, you can do upsert. In just one single step you can... for each row in your data frame that you want to write to your Delta Lake table, if that row is already present in your table, you can simply update the values that are in that row. Whereas if that row is not present in your table, you can insert it.Brenner Heintz: So, that's what's known as an upsert and those are completely possible and they're very, very easy in Delta Lake. They make managing your Delta Lake very, very simple. So, first we create a quick data frame with just two records in it, we want to add user 4420's data back into our table. And then we also created a user whose user ID rather is one under 1 million. So, it's 999,999. And this user is not currently present in our table. We want to insert them. So, this is what our little data frame looks like. And as you can see, we have these as an update or an insert. And when we run our merge into command, Delta Lake is able to identify the rows that already exist, like user 4420, and those that don't already exist. And when they don't exist, we simply insert them.Brenner Heintz: So, as you can see, these updates, and inserts occurred successfully and Delta Lake has no problem with upserts. Finally, the last thing I want to point out are some specific performance enhancements that are offered as part of Delta Lake. But also as part of Databricks, Delta Lake only. We have a couple of commands that are Databricks, Delta Lake only at the moment. First there's the vacuum command. The vacuum command takes a look at the files that are currently a part of your table, and it removes any files that aren't currently part of your table that have been around for a retention period that you specify. So, this allows you to clean up the old versions of your table that are older than a specific retention period, and sort of save on cloud costs that way.Brenner Heintz: Another thing you can do on Databricks Delta Lake is you can cache the results of specific commands in memory. So, if you have a specific table that your downstream analysts tend to always group by a specific dimension, you can cache that SQL command, and it will always appear much quicker than it, and that way it's able to avoid doing a full read of your data, for example. You also had the ability to use the Z order optimized command, which is really powerful. Z order optimize essentially looks at the layout of your data tables and it figures out the perfect way to locate your data in different files. It essentially lays out your files in an optimized fashion, and that allows you to save on cloud storage costs because the way that it lays them out is typically much more compact than would be when you start. And it also then optimizes those tables for a read and write throughput.Brenner Heintz: So, it's very powerful. It speeds up the results of your queries and saves you on storage and compute costs ultimately. So, that's the demo. I hope you've enjoyed this demo. Again, take a look at the Delta Lake cheat sheet that we will post as part of the description or in the chat that is part of the presentation below. So, thanks so much. I hope you've enjoyed this demonstration. Check out Delta Lake and join us on GitHub, Slack, or as part of our mailing list. Thanks so much.Sam Steiny: Awesome. Thanks, Brenner. That was really, really great. I'm so excited now to be joined by Barbara Eckman. 
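To round out the demo before the Comcast story, here is a compact sketch of the upsert and maintenance commands just described, against the same assumed loans_delta table; the change-set construction is illustrative, and OPTIMIZE ZORDER and CACHE SELECT assume Databricks, as the demo notes.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Build a small change set with the same schema as loans_delta: one row for a user that
# already exists (to be updated) and one with a brand-new user id (to be inserted).
src = spark.read.format("delta").load("/data/loans_delta").where("user_id = 4420")
uid_type = dict(src.dtypes)["user_id"]            # cast to whatever type user_id really is
changes = src.union(src.withColumn("user_id", F.lit(999999).cast(uid_type)))
changes.createOrReplaceTempView("updates")

# Upsert in a single transaction: matched rows are updated, everything else is inserted.
spark.sql("""
    MERGE INTO loans_delta AS t
    USING updates AS s
    ON t.user_id = s.user_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")

# Maintenance commands from the demo.
spark.sql("OPTIMIZE loans_delta ZORDER BY (user_id)")  # compact small files, co-locate related data
spark.sql("VACUUM loans_delta RETAIN 168 HOURS")       # drop unreferenced files older than 7 days
spark.sql("CACHE SELECT * FROM loans_delta")           # warm the Databricks Delta cache for repeat reads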
Barbara is a senior principal software architect at Comcast, and she's going to be sharing her experience with Delta Lake and how working with Databricks has really made an impact on her day-to-day and on the Comcast business. So, thanks so much for being here, Barbara. We're super excited to have you.Barbara Eckman: Hi, everybody. Really glad to be here. Hope you're all doing well. I'm here to talk about hybrid cloud access control in a self-service computer environment here at Comcast. I want to just real briefly mentioned that Comcast takes very seriously its commitment to our customers to protect their data. I'm part of the Comcast, what we call data experience big data group. And big data in this case means not only public cloud, but also on-prem data. So, we have a heterogeneous data set, which offers some challenges and challenges are fun, right? Our vision is that data is treated as an enterprise asset. This is not a new idea, but it's an important one.Barbara Eckman: And our mission is to power Comcast enterprise through self-service platforms, data discovery lineage, stewardship governance, engineering services, all those important things that enable people to really use the data in important ways. And we know as many do that powerful business insights, the most powerful insights come from models that integrate data, that span silos. Insights for improving the customer experience as well as business value. So, what this means for the business there are some examples. Basically, this is based on the tons of telemetry data that we capture from sensors and Comcast's network. We capture things like latency, traffic, signal to noise ratio, downstream and upstream, error rates and other things that I don't even know what they mean.Barbara Eckman: But this enables us to do things that improve customer experience like plan the network topology to help if there's a region that has a ton of traffic, we might change the policy to support that. Minimizing truck rolls, truck rolls are what we call it when the Comcast guy cable guy comes or cable female comes to your house. And in this COVID times, we really would like to minimize that even more. And if we can analyze the data ahead of time, we can perhaps make any adjustments or suggest adjustments that the user can make to minimize the need for people to come to their house.Barbara Eckman: We can monitor, predict problems and remedy them often before the user even knows because of this data and this involves both the telemetry data and integrating it with other kinds of data across the enterprise. And then optimizing network performance for region or for the whole household. So, now this is really important stuff and it really helps the customers. And we're working to make this even more prevalent. So, what makes your life hard? This is a professional statement. If you want to talk about personally, what makes your life hard? We can do that later, but what makes your life harder as a data professional?Barbara Eckman: People usually say, "I need to find the data. So if I'm going to be integrating data across silos, I need to find it. I know where it is in myself silo, but maybe." And the way we do that is a metadata search and discovery, which we do through Elasticsearch. Then once I find the data that might be of interest to me, I need to understand what it means. 
So, what someone calls an account ID might not be the same account ID that you are used to calling an account ID; it might be a billing ID or a back-office account ID. You need to know what it means in order to be able to join it so the result makes sense, as opposed to Franken-data, monster data that isn't really appropriately joined. We need to know who produced it. Did it come from a set-top box? Did it come from a third party? Who touched it while it was journeying through Comcast, through Tenet, through Kafka or Kinesis? Someone aggregated it, and then maybe somebody else enriched it with other data.
Barbara Eckman: And then it landed in a data lake. The user of the data in the data lake wants to know where the data came from and who added what piece. And you can see this both ways: the publisher looks at the data in the data lake and says, "This looks screwy, what's wrong with this? Who messed up my data?" Or they could say, "Wow, this is enriched really great. I want to thank that person." And also, someone who's just using the data wants to know who to ask questions: What did you enrich this with? Where did that data come from? That kind of thing. All of that really is helpful when you're doing this integration. That's data governance and lineage, which we do in Apache Atlas.
Barbara Eckman: That's our metadata and lineage repository. Then, once you've found data and understood it, you have to be able to access it. And we do that through Apache Ranger and its extension that's provided by Privacera. Once you have access to it, you need to be able to integrate it and analyze it across the enterprise. So, finally, now we get to the good stuff, being able to actually get our hands on the data. And we do that with self-service compute using Databricks. And Databricks is a really powerful tool for that. And finally, we find that we really do need ACID compliance for important operations. And we do that with Delta Lake. I can talk about this in more detail as this talk goes on, or in the question session.
Barbara Eckman: I'm an architect, so I have to have box-and-line diagrams. This is a high-level view of our hybrid cloud solution. In Comcast, in our data centers, we have a Hadoop data lake that involves Hadoop Ranger and Apache Atlas working together. We are, as many companies are, kind of phasing that out, but not immediately; it takes a while. We have a Teradata enterprise data warehouse. Similarly, we are thinking of moving that, not necessarily to the cloud entirely, but maybe to another on-prem source, like an object store. We use MinIO, and basically that makes this object store look like S3, so the Spark jobs that we like to use on S3 can also run on our on-prem data store.
Barbara Eckman: And that's a big plus, of course. And for that, we have a Ranger data service that helps with access control there. Up in the cloud, we use AWS, though Azure also has a big footprint in Comcast. And Databricks compute is kind of the center here. We use it to access Kinesis. Redshift, we're just starting with that.
We use Delta Lake and the S3 object store, and we have a Ranger plugin that the Databricks folks worked carefully with Privacera to create, so that our self-service Databricks environment can have all the init scripts and the configurations that it needs to run the access control that Privacera provides.
Barbara Eckman: We also use Presto for our federated query capability, and it also has a Ranger plugin. All the tags that are applied to metadata, on which policies are built, are housed in Apache Atlas, and Ranger and Atlas sync together. That's how Ranger knows what policies to apply to what data. And in the question session, if you want to dig deeper into any of this, I'd be very happy to do it. So, this is very exciting to me. We're just rolling this out, and it's so elegant, and I didn't create it, so I can say that. Ranger and Atlas together provide declarative, policy-based access control. And as I said, Privacera extends Ranger, which originally only worked in Hadoop, to AWS through plugins and proxies. And one of the key ones that we use, of course, is Databricks on all three of these environments. And basically, what I like about this is we really have one Ranger to rule them all, and Atlas is his little buddy, because he, or she, provides the tags that really power our access control.
Barbara Eckman: So, here's again a diagram. We have a portal that we built for our self-service applications, and the user tags the metadata with tags like "this is PII" or "this is the video domain", that kind of stuff. That goes into Atlas; the tags and the metadata associations are synced with Ranger, and the policies are based on that. So, who gets to see PII? Who gets to see video domain data? Those are synced and cached in the Ranger plugins. And then when a user calls an application, whether it's a cloud application in Databricks or even an on-prem application, the application asks Ranger, "Does this user have the access to do what they're asking to do on this data?" And it's very fast, because these are plugins. If the answer is yes, they get access.
Barbara Eckman: If no, then they get an error message. We can also do masking and still show the data: if someone has access to many columns, but not all columns, in, say, a Glue table, we can mask out the ones that they don't have access to and still give them the data they are allowed to see. Recently we've really needed ACID compliance. Traditionally, big data lakes are write once, read many. We have things streaming in from set-top boxes in the cable world; those aren't transactional, that's not transactional data. That's what we're used to. But now, increasingly, we are finding that we need to delete specific records from our Parquet files or whatever. We can do this in Spark, but it is not terribly performant. It certainly can be done, but it turns out Delta Lake does it much better.
Barbara Eckman: The deletes are much more performant, and you get to view snapshots of past data lake states, which is really pretty awesome. So, we're really moving toward, I love this word, a lakehouse: being able to do write once, read many, and ACID, all in one place. And that is largely thanks to Delta Lake. So, this is me. Please reach out to me by email if you wish, and I'll be happy to answer questions in the live session if you have any. So, thank you very much for listening.
Sam Steiny: Thank you for joining this event, Barbara. That was so awesome. It's great to hear the Comcast story. So, with that, let's get to some questions.
We're going to move over to live Q&A. So, please add your questions to that Q&A.
https://www.databricks.com/customers/yipitdata
Yipit – Databricks

CUSTOMER STORY: YipitData transforms financial market information overload into insight

60% lower database costs | $2.5 million annual savings | 90% reduction in data processing time

INDUSTRY: Financial services
SOLUTION: Data-driven ESG, risk management, transaction enrichment
PLATFORM USE CASE: Delta Lake, data science, machine learning, ETL
CLOUD: AWS

“With Databricks, we’re innovating faster than ever before across our data engineering and analyst functions, and paying less in database expenses every year.” — Steve Pulec, CTO at YipitData

YipitData provides data-driven research to empower investors by combining alternative data sources with web data for comprehensive coverage. By leveraging Databricks, YipitData’s data team has been able to reduce data processing time by up to 90 percent, increasing their analysts’ ability to deliver impactful, reliable insights to their clients. Additionally, by moving to Databricks on AWS, YipitData has reduced database expenses by almost 60%.

Unlocking hidden insights in alternative data

Making sound investment choices requires information. The more actionable information you have, the better the odds that financial services institutions (FSIs) will gain a deeper understanding of their customers, markets, and businesses. YipitData is in the business of providing data-driven insights to the world’s largest hedge funds and corporations to help them gain a real competitive edge and provide better service to their customers. Specifically, they leverage alternative data sources and web scrapes to help banking institutions and asset managers alike make better decisions by revealing valuable information about consumer behavior (e.g.
utility payment history, transaction information) that extends across a variety of use cases, including trade analyses, credit risk, and ESG risk.

“Alternative data is a critical key to the success of our financial services customers,” explained Anup Segu, Senior Software Engineer at YipitData. “However, most organizations don’t have the means to leverage alternative data to the greatest extent possible. That’s where we come in to help.”

The challenge the YipitData team faced, however, was not only the sheer volume and variety of alternative data (each month they make billions of requests collecting data from hundreds of websites); they were also limited by siloed teams and the inability to scale their data processing and analytics. Running queries and scaling their previous data warehouse proved challenging and time-consuming.

“We were constantly running into performance bottlenecks,” explained Segu. “Very large queries could take up to six hours, which slowed our ability to answer questions.”

Collaboration across teams was also an issue, as they struggled to share learnings and code. “We struggled with siloes of tribal knowledge, which hampered our ability to scale and operate with speed,” explained Bill Mensch, data analyst at YipitData.

Democratizing data analytics across the organization

With Databricks, the team at YipitData is now able to manage the entire data analytics workflow from data ingestion to downstream analytics. Integrated cluster management with features like autoscaling has greatly simplified infrastructure management while lowering operational costs. “Since we can manage compute and storage independently, Databricks has allowed us to optimize our cluster management and AWS spend,” explained Segu.

Databricks has empowered YipitData’s 40+ data analysts to evolve their roles into hybrid data engineers and analysts, enabling them to independently create data ingestion systems, manage ETL workflows, and produce meaningful financial research for their clients.

“Databricks has given our analysts flexibility so that they can be in control,” explained Steve Pulec, CTO at YipitData. “As a result, data engineering doesn’t even need to be involved and can focus on higher-value tasks.”

Now they can rapidly construct and deploy robust ETL workflows within Databricks notebooks, leveraging their programming language of choice (Python or SQL) to explore, visualize, and analyze their data.

Behind the story: The Data Team Effect. Meet the data team that’s behind YipitData.

Faster processing and simplified operations help to lower costs

The biggest gains from using Databricks have been the sheer processing power at scale, the improved cost efficiency of a cloud platform, and the democratization of data. With scalable cloud infrastructure at their fingertips, they’ve been able to accelerate their data pipelines by up to 90% on average. And in some cases, very large queries that used to take up to six hours can now be completed in roughly seven seconds.

“Databricks allows us to effortlessly trade scale for speed, which was not possible before,” said Andrew Gross, Staff Engineer from YipitData. “Now we are able to answer more questions with the same resources.”

Not only are they processing more data faster, but they are also doing so more efficiently, which has helped drive the business forward. “COVID has created tons of questions in the market, and we have gone into overdrive in terms of analyzing data to uncover answers,” said Pulec.
“All of that additional work has had a huge impact on our top line and has really helped our business to be able to answer those questions for investors in a timely manner. It probably would not have been possible in the old world.”

Although the scale of analyses and reporting to customers has increased by 4-5x, Pulec estimates that overall operational spending has decreased significantly. “Databricks has reduced our operations costs by almost 60%,” said Pulec. And overall, with the help of Databricks and some savvy cost-cutting techniques, they were able to cut their annual AWS bill by 50%, or $2.5 million.

With Databricks serving as the foundation of their data analytics workflow, YipitData is looking to expand adoption of Databricks across the company, promoting greater transparency and cross-team collaboration. Looking ahead, YipitData is well-positioned to take full advantage of the explosion of alternative data, unlocking new insights that help FSIs and corporations make smarter business decisions.

Related Content
Blog: YipitData Example Highlights Benefits of Databricks integration with AWS Glue
Session: Technical Talk at Spark + AI Summit 2020
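The story notes that YipitData's analysts build ETL workflows directly in Databricks notebooks using Python or SQL. As a rough illustration only (the paths, table name and columns below are hypothetical, not YipitData's actual pipeline), a notebook-style PySpark ETL step might look like this:

```python
# Hypothetical notebook-style ETL step: ingest raw web-scraped records,
# clean them, and publish a Delta table for downstream analysis.
# Assumes a Databricks notebook where `spark` is the active SparkSession;
# all paths, table names and column names are illustrative.
from pyspark.sql import functions as F

raw = (
    spark.read
    .option("header", "true")
    .csv("/mnt/raw/web_scrapes/2023-05/")   # hypothetical landing zone
)

cleaned = (
    raw.dropDuplicates(["record_id"])
       .withColumn("scraped_at", F.to_timestamp("scraped_at"))
       .filter(F.col("merchant").isNotNull())
)

# Write an analyst-facing Delta table that SQL and BI users can query.
(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("research.cleaned_web_scrapes")
)
```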
https://www.databricks.com/kr/company/careers
Careers at Databricks | Databricks

Attention: Databricks applicants
Due to reports of phishing, all Databricks applicants should apply on our official Careers page (good news — you are here). All official communication from Databricks will come from email addresses ending with @databricks.com, @us-greenhouse-mail.io or @goodtime.io.

Careers at Databricks
Databricks' mission is to help data teams solve the world's toughest problems. Will you join us? See open positions

Overview | Culture | Benefits | Diversity | Students & new grads

Take your career further, toward a more advanced future. Databricks is leading the data and AI revolution. We created the lakehouse category, and thousands of companies now use the lakehouse to tackle problems such as climate change, fraud and customer churn. Looking for an opportunity that will truly define your career? It's right here. Why the lakehouse is the future

Why choose Databricks
Databricks is growing fast and recruiting the world's best talent. Our employees, known as "Bricksters," are a special group of smart, curious, agile thinkers. Ask a Brickster what's best about Databricks, and most will say the culture. See our culture principles

Find your team
Administration, Business Development, Customer Success, Engineering, Field Engineering, Finance, IT, Legal, Marketing, Operations, People & HR, Product, Professional Services, Recruiting, Sales, Security, Internships & Early Career

Benefits, perks and hybrid work
Employee health and well-being are key to doing your best work. That's why Databricks offers strong benefits and perks, including flexible work arrangements, all aimed at giving employees and teams what works best for them. Explore benefits

Diversity, equity and inclusion
Databricks believes that diversity of background, perspective and skill drives success, so we strive to create an inclusive and supportive environment. Learn how we are closing pay gaps, removing unconscious bias from hiring, and more. See our inclusion efforts

Locations
Databricks is headquartered in San Francisco, California, and now has more than 20 offices in 12 countries. With more than 4,500 employees worldwide and an ambitious growth strategy, Databricks is one of the fastest-growing enterprise software cloud companies. Explore our offices around the world.
Americas: San Francisco, Seattle, Washington DC, New York, and more
Europe: London, Amsterdam, Berlin, Munich, and more
Asia-Pacific: Tokyo, Singapore, Seoul, Hangzhou, and more

Opportunities for students and new grads
Databricks is committed to developing its next generation of leaders, which is why interns and new graduates play an important role in building the platform. The Databricks University Program is designed to help you get the most out of your experience, from engineering hackathons and intern olympics to board game nights and happy hours. Explore internships and opportunities

Health plan transparency in coverage (US only)
https://www.databricks.com/dataaisummit/speaker/uri-may
Uri May - Data + AI Summit 2023 | Databricks
Uri May, Hunters
https://www.databricks.com/dataaisummit/speaker/liping-huang/#
Liping Huang - Data + AI Summit 2023 | Databricks
Liping Huang, Senior Solutions Architect at Databricks
Databricks Senior Solutions Architect, ex-Microsoft, specializing in Big Data Analytics, Enterprise Data Warehouse, and Business Intelligence, passionate about solving real world problems using data and technology.
https://www.databricks.com/jp/company/careers
Careers | Databricks

Attention: Databricks applicants
Due to reports of phishing, all Databricks applicants should apply on our official Careers page (good news — you are here). All official communication from Databricks will come from email addresses ending with @databricks.com, @us-greenhouse-mail.io or @goodtime.io.

Careers
Databricks' mission is to support organizations that take on hard problems through data. Will you join us? See open positions

Overview | Culture | Benefits | Diversity | Students & new grads

Advance your career for the future. Databricks is a leader in data and AI. We created the lakehouse category, and thousands of customers now use the lakehouse to solve problems such as climate change, fraud and customer churn. If you are serious about your career, Databricks is the place to be. Why the lakehouse is the future

Why choose Databricks
We are growing rapidly and attracting outstanding talent from around the world. Our employees, the Bricksters, are a special group of smart, curious, quick-thinking people. Ask a Brickster what makes working at Databricks attractive, and they will most likely talk about our culture. See our culture principles

Find your team
Administration, Business Development, Customer Success, Engineering, Field Engineering, Finance, IT, Legal, Marketing, Operations, People & HR, Product, Professional Services, Recruiting, Sales, Security, Internships & Early Career

Benefits, perks and hybrid work
Employee health and well-being are key to doing your best work. Databricks aims to build the best possible workplace by offering comprehensive benefits and perks, including flexible ways of working. See our benefits

Diversity, equity and inclusion
We believe that diverse backgrounds, perspectives and skills drive success. That is why we work to foster an inclusive and supportive environment. See our efforts, from correcting pay gaps to eliminating unconscious bias in hiring. Learn more about inclusion

Locations
Databricks is headquartered in San Francisco, California, with more than 20 offices in 12 countries. With over 4,500 employees worldwide and an ambitious growth strategy, it is one of the fastest-growing enterprise software cloud companies. Office locations worldwide:
Americas: San Francisco (US), Seattle (US), Washington D.C. (US), New York (US), and more
Europe: London (UK), Amsterdam (Netherlands), Berlin (Germany), Munich (Germany), and more
Asia-Pacific and Japan: Tokyo (Japan), Singapore, Seoul (South Korea), Hangzhou (China), and more

Students and new graduates
We are fully committed to developing the next generation of Databricks leaders. That is why interns and new graduates play an important role in developing our platform. The Databricks University Program is designed to help you make the most of the experience, with events such as engineering hackathons, intern olympics, game tournaments and happy hours. See internship openings

Health plan transparency in coverage (US only)
https://www.databricks.com/dataaisummit/speaker/akira-ajisaka/#
Akira Ajisaka - Data + AI Summit 2023 | Databricks
Akira Ajisaka, Senior Software Development Engineer at Amazon Web Services
Akira Ajisaka is a Senior Software Development Engineer on the AWS Glue team at Amazon Web Services. He likes open source software and distributed systems. He is an Apache Hadoop committer and PMC member.
https://www.databricks.com/dataaisummit/speaker/avinash-sooriyarachchi/#
Avinash Sooriyarachchi - Data + AI Summit 2023 | Databricks
Avinash Sooriyarachchi, Senior Solutions Architect at Databricks
Avinash Sooriyarachchi is a Solutions Architect at Databricks. His current work involves working with large Retail and Consumer Packaged Goods organizations across the United States and enabling them to build Machine Learning based systems. His specific interests include streaming machine learning systems and building applications leveraging foundation models. Avi holds a Master’s degree in Mechanical Engineering and Applied Mechanics from the University of Pennsylvania.
https://www.databricks.com/dataaisummit/speaker/rafael-barcelos/#
Rafael Barcelos - Data + AI Summit 2023 | Databricks
Rafael Barcelos, Software Architect at Microsoft Corporation
Rafael has 15+ years of experience as a full-stack software engineer, working on everything from user-facing features to backend services and platform features that support other engineers' goals. Since 2017, he has worked as an architect driving the vision for a Data Mesh based platform being built by 100+ engineers distributed across the globe to support data engineering and data science efforts on Office 365. He enjoys mentoring junior and senior engineers in their technical careers and working in fast-paced teams focused on delivering solutions for hard problems, especially large-scale ones.
https://www.databricks.com/kr/product/data-streaming
Data Streaming | Databricks

Data Streaming
Simple real-time analytics, ML and applications. Get started. Watch demos.

The Databricks Lakehouse Platform dramatically simplifies data streaming to deliver real-time analytics, machine learning and applications on one platform.

Enable your data teams to build streaming data workloads with the languages and tools they already know. Simplify development and operations by automating the production aspects of building and maintaining real-time data workloads. Eliminate data silos with a single platform for streaming and batch data.

Build streaming pipelines and applications faster
Use the languages and tools you already know, with unified batch and streaming APIs in SQL and Python. Bring real-time analytics, ML and applications to your entire organization.

Simplify operations with automated tooling
Easily deploy and manage real-time pipelines and applications in production. Automated tooling simplifies job orchestration, fault tolerance and recovery, automatic checkpointing, performance optimization, and autoscaling.

Unify governance for all your real-time data in the cloud
Unity Catalog provides one consistent governance model for all streaming and batch data in the lakehouse, simplifying how you discover, access and share real-time data.

How does it work?
Streaming data ingestion and transformation; real-time analytics, ML and applications; automated operational tooling; a next-generation stream processing engine; unified governance and storage.

Streaming data ingestion and transformation
Simplify data ingestion and ETL for streaming data pipelines with Delta Live Tables. Its simple, declarative approach to data engineering lets your teams work in the languages and tools they already know, such as SQL and Python. Build and run batch and streaming pipelines in one place, with controllable automated refresh settings, saving time and reducing operational complexity. Wherever you need to send your data, building streaming data pipelines on the Databricks Lakehouse Platform means you never waste time cleaning up raw data.

"More and more business units are using the platform in a self-service way that wasn't possible before. I can't say enough about the positive impact Databricks has had on Columbia." (Lara Minor, Senior Enterprise Data Manager, Columbia Sportswear)

Real-time analytics, ML and applications
With streaming data, you can immediately improve the accuracy and actionability of your analytics and AI. The business gains real-time insight as a downstream effect of streaming data pipelines. Whether you are running SQL analytics and BI reporting, training ML models, or building real-time operational applications, feeding the business the freshest data delivers real-time insights, more accurate predictions and faster decisions that keep you ahead of the competition.

"We always have to give our business partners the most accurate, up-to-date data, or we lose their trust in the insights. With the Databricks Lakehouse, what used to be impossible is now very easy." (Guillermo Roldán, Head of Architecture, LaLiga Tech)

Automated operational tooling
As you build and deploy streaming data pipelines, Databricks automates many of the complex operational tasks required for production, including autoscaling the underlying infrastructure, orchestrating pipeline dependencies, error handling and recovery, and performance optimization. Enhanced Autoscaling optimizes cluster utilization by automatically allocating compute resources to each workload. Combined with automated data quality testing and exception management, these capabilities let you spend less time building and maintaining operational tooling and more time getting value from your data.

Next-generation stream processing engine
Spark Structured Streaming is the core technology that powers data streaming on the Databricks Lakehouse Platform, providing a unified API for batch and stream processing. The Databricks Lakehouse Platform is the best place to run Apache Spark workloads, as a managed service with a proven 99.95% uptime. Spark workloads are accelerated by Photon, the next-generation lakehouse engine compatible with Apache Spark APIs, which autoscales to thousands of nodes and delivers leading price/performance.

Unified governance and storage
Data streaming on Databricks lets you build on the foundational components of the Lakehouse Platform: Unity Catalog and Delta Lake.
Your raw data is optimized with Delta Lake, the only open source storage framework designed from the ground up for both streaming and batch data. Unity Catalog provides unified, fine-grained governance for all your data and AI assets under one consistent model, so you can discover, access and share data across clouds. Unity Catalog also supports Delta Sharing, the industry's first open protocol for simple and secure data sharing with other organizations.

Integrations
Give your data teams maximum flexibility: use Partner Connect and our ecosystem of technology partners to integrate seamlessly with the data streaming tools you already use.

Data streaming customer stories
"We use Databricks for high-speed data processing. It really helps us innovate at the speed needed to respond to patient needs in stores and online. We have dozens of initiatives under way, and every one of them is served by data from Databricks." (Sashi Venkatesan, Product Engineering Director, Walgreens)
"With real-time detection, we can now outpace fraudsters and stay ahead of them in areas like fraud workarounds, illegal intrusions, robocalls and robotexts, and identity theft." (Kate Hopkins, Vice President, AT&T)

Learn more: Delta Live Tables, Databricks Workflows, Unity Catalog, Delta Lake, Spark Structured Streaming

Related content
eBooks and demos: The Data Lakehouse, built by Bill Inmon, the father of the data warehouse; Data, Analytics and AI Governance; A Data Team's Guide to the Databricks Lakehouse Platform; Delta Live Tables streaming demo
Events: Processing data transformations with Delta Live Tables; Easy data ingestion webinar series; Data + AI Summit 2022; Data + AI World Tour 2022
Blog: Low-latency streaming data pipelines with Delta Live Tables and Apache Kafka; Simplifying streaming data ingestion into Delta Lake; Real-time insights: the top three reasons customers love data streaming with Databricks; Project Lightspeed: faster and simpler stream processing with Apache Spark; An overview of all the new Structured Streaming features built for Databricks and Apache Spark in 2021

Ready to get started? Try the free trial. Join the community.
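Since the page centers on Spark Structured Streaming with Delta Lake as the storage layer, a small illustrative sketch may help. It is not taken from the page itself; the paths, checkpoint location and the `event_type` column are assumptions for the example.

```python
# Minimal Structured Streaming sketch on Databricks: read a Delta table as a
# stream, apply a light transformation, and continuously append the results
# to another Delta table. Assumes `spark` is the active SparkSession in a
# Databricks notebook; all paths and column names are illustrative.
from pyspark.sql import functions as F

events = (
    spark.readStream
    .format("delta")
    .load("/mnt/raw/events")          # hypothetical raw landing table
)

cleaned = events.filter(F.col("event_type").isNotNull())

query = (
    cleaned.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/mnt/checkpoints/cleaned_events")
    .start("/mnt/clean/events")       # downstream table for analytics and ML
)
```

The checkpoint location is what gives the stream fault tolerance and exactly-once delivery into the target Delta table, which is the "automatic checkpointing" the page refers to.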
https://www.databricks.com/fr/p/whitepaper/mit-cio-vision-2025?itm_data=home-promocard2-mit-cio-vision-2025
The CIO vision for 2025: bridging the gap between BI and AI – Databricks

Report
The CIO Vision for 2025: Bridging the Gap Between BI and AI
A global survey of CIOs on business-value-driven AI adoption

Discover how leading companies and organizations are tackling their hardest data challenges in order to master AI. In this new research report from MIT Technology Review, you will find perspectives from more than 600 CIOs across 18 countries and 14 industries. How are CIOs prioritizing AI adoption? How are they investing to improve their data strategy? Hear insights from leaders at Procter & Gamble, Johnson & Johnson, Cummins, Walgreens, S&P Global, Marks & Spencer and other major companies.

Key findings from the CIO interviews:
72% say data is the biggest challenge for AI, and 68% believe it is crucial to unify their data platform for analytics and AI.
94% of respondents already use AI in business functions, and more than half expect it to be widespread by 2025.
Multicloud is essential for 72% of respondents, many of whom rely on open standards to preserve strategic flexibility.

Get the report
https://www.databricks.com/dataaisummit/speaker/ellie-hajarian/#
Ellie Hajarian - Data + AI Summit 2023 | Databricks
Ellie Hajarian, Snr IT Manager at TD Bank
A driven information technologist with a strategic mindset for delivering superior solutions. A passionate problem solver who has thrived in challenging, fast-paced IT organizations for over 20 years, demonstrating solid knowledge in delivery excellence, risk management and emerging technologies. An advocate for diversity and inclusion, leading the 'Women In Leadership' committee and a member of the 'Girls in STEM' committee at my current workplace.
https://www.databricks.com/dataaisummit/speaker/sreekanth-ratakonda/#
Sreekanth Ratakonda - Data + AI Summit 2023 | Databricks
Sreekanth Ratakonda, Principal Solutions Architect at Labcorp
Sreekanth Ratakonda is a principal solutions architect at Labcorp, where he is responsible for building robust data and analytics platforms and data products. He has over 15 years of experience in solving complex business challenges in various functional domains utilizing several technologies and frameworks.
https://www.databricks.com/kr/company/partners/consulting-and-si
Databricks Consulting Partners - Databricks

Databricks consulting partners
Connect with Databricks partners around the world for help building, deploying, or migrating to the Databricks Lakehouse Platform.

Databricks consulting partners are experts uniquely positioned to help you implement and scale data engineering, collaborative data science, full-lifecycle machine learning and business analytics initiatives. Drawing on their technology, industry and use-case expertise, they help customers get the most out of the Databricks Lakehouse Platform, with services such as designing the data transformation strategy that best fits your business, data modernization and migration, and data management and governance.
https://www.databricks.com/p/webinar/officehours?utm_source=databricks&utm_medium=Website&utm_campaign=7013f000000LkdAAAS&utm_content=learn-page
Databricks Office Hours | Databricks

Webinar
Databricks Office Hours: Providing answers to your questions

Databricks customer-exclusive Office Hours connect you directly with experts through a LIVE Q&A where you can ask all your Databricks questions.

Register for one of our upcoming events with the form on the right, and join us to:
Learn the best strategies to apply Databricks to your use case
Troubleshoot your technical questions
Master tips and tricks to maximize your usage of our platform

Questions are answered on a rolling basis, so come and go as you please, or stay for the whole event to hear what questions other users have.

Who can attend: This event is exclusive to Databricks customers seeking platform-related support. For account-related questions, please contact your Databricks representative.

Register now
https://www.databricks.com/dataaisummit/speaker/satish-garla/#
Satish Garla - Data + AI Summit 2023 | Databricks
Satish Garla, Sr Solutions Architect at Databricks
Satish has a distinguished background in cloud modernization, data management, data science and financial risk management. He started his career implementing enterprise risk solutions using SAS. Satish leveraged open source tools and dotData technology to implement automated Feature Engineering and AutoML. Currently Satish works as a Sr Solutions Architect at Databricks helping enterprises with cloud and Lakehouse adoption using open source technologies such as Apache Spark, Delta and MLFlow.
https://www.databricks.com/product/startups
Databricks for Startups | Databricks

Build your startup on Databricks
The most powerful platform for data and AI

Build on the Databricks Lakehouse to address all your data, analytics and AI on one platform. Accelerate speed to product while Databricks manages your data infrastructure. Prepare your product for growth with cost-efficient scalability and performance. Maintain flexibility with open source and multicloud options.

Databricks for Startups helps you get up and running quickly. Build data-driven applications on the lakehouse — the data platform that scales to all your data needs, from zero to IPO and beyond.

Free credits: Get easy access to the Databricks Lakehouse Platform with up to $21K in free credits
Expert advice: Receive advice from experts and the community to help build your product
Go to market: Reach more customers with access to Databricks marketing, events and customers

Built on Databricks
Ready to build?
If you’re a startup building data-driven applications and have raised VC funding, we want to hear from you. Apply now
https://www.databricks.com/dataaisummit/speaker/rekha-bachwani
Rekha Bachwani - Data + AI Summit 2023 | Databricks
Rekha Bachwani, Engineering Director, ML Engineering at The Walt Disney Company
Rekha Bachwani is an Engineering Director at Disney Streaming. She leads the ML Engineering team that drives the strategy for ML infrastructure, platform and applications for the services and engineering organization. She has a PhD in Computer Science from Rutgers University, and worked at Netflix and Intel Labs prior to joining Disney.
https://www.databricks.com/it/company/partners/cloud-partners
Databricks Cloud Service Provider Partner | Databricks

Databricks cloud partners
The Databricks Lakehouse runs on every major public cloud, tightly integrated with the security, compute, storage, analytics and AI services natively offered by the cloud providers. This gives companies the agility to use their cloud service provider how and when they want.

Azure Databricks is a service developed jointly by Databricks and Microsoft.
Get started with one click in the Azure portal, with native integration with Azure security and data services. Learn more.

Databricks on AWS lets you store and manage all your S3 data on a simple, open lakehouse platform that combines the best of data warehouses and data lakes to unify all your analytics and AI workloads. Learn more.

Databricks on Google Cloud brings the open lakehouse platform to the open cloud, unifying data engineering, data science and analytics, with tight integration with Google Cloud Storage, BigQuery and Google Cloud's AI Platform. Learn more.

Cloud regions: Azure, AWS, Google, Alibaba
https://www.databricks.com/dataaisummit/speaker/kasey-uhlenhuth/#
Kasey Uhlenhuth - Data + AI Summit 2023 | Databricks
Kasey Uhlenhuth, Staff Product Manager at Databricks
Kasey Uhlenhuth is a product manager on the machine learning platform team at Databricks. Before Databricks, she worked on the Visual Studio and C# team at Microsoft building developer productivity tools. Kasey holds an MBA from the Stanford Graduate School of Business and a BA in Computer Science from Harvard University.
https://www.databricks.com/dataaisummit/speaker/macgregor-winegard
MacGregor Winegard - Data + AI Summit 2023 | Databricks
MacGregor Winegard, Associate Data Engineer at KPMG
MacGregor Winegard works alongside clients to develop cloud applications that extract, load and transform data for business use cases. He has experience using Databricks in fields such as ESG reporting, IoT data collection, and energy data processing. MacGregor holds a Bachelor of Science in Mathematics and Economics from St. John Fisher University. As a student, he pursued research in several areas, including image processing, baseball statistics, and planar discrete dynamical systems. MacGregor is currently an Associate Data Engineer at KPMG and a Databricks Certified Data Engineer Associate.
https://www.databricks.com/dataaisummit/speaker/matthew-karasick/#
Matthew Karasick - Data + AI Summit 2023 | Databricks
Matthew Karasick, Chief Product Officer at Habu
Matt is passionate about finding ways to use data to achieve the wins that data-powered technology can create between companies and consumers. He has spent his career helping companies do more with data. He has held product leadership positions at DoubleClick, Trilogy, Acerno, Akamai, and most recently at Indeed. After working closely together with Matt Kilmartin at Akamai, Matt (Karasick) worked with the Krux team as a consultant, where he helped create Krux for Marketers. Matt believes that, when done correctly and with sustainable mutual value as the measuring stick, interests between consumers and companies are always aligned.
https://www.databricks.com/dataaisummit/speaker/marcel-kramer/#
Marcel Kramer - Data + AI Summit 2023 | Databricks
Marcel Kramer, Head of Data Engineering at ABN AMRO Bank N.V.
Marcel is accountable for ~50 DevOps teams with 500+ IT engineers delivering all bank-wide data capabilities (DQ monitoring, data distribution, data privacy and security, data governance, advanced analytics, process mining and MLE platforms), on a broad tech stack ranging from traditional to the latest cutting-edge technologies. Together with the CDO, he defined a rigorous data strategy to make ABN AMRO a data-driven company. He is also actively involved with AA Ventures in data technology investments.
https://www.databricks.com/discover/data-brew
Data Brew by Databricks

Data Brew: Let’s talk data
Welcome to Data Brew by Databricks with Denny and Brooke! In this series, we explore various topics in the data and AI community and interview experts in data engineering and data science. So join us with your morning brew in hand and get ready to dive deep into data and AI. See episodes

Meet the hosts
Watch or listen on your favorite platform

About the hosts

Brooke Wenig, Director of the Machine Learning Practice, Databricks
Brooke Wenig is a Director of the Machine Learning Practice at Databricks. She leads a team of data scientists who develop large-scale machine learning pipelines for customers, as well as teach courses on distributed machine learning best practices. Previously, she was a Principal Data Science Consultant at Databricks. She received an M.S. in Computer Science from UCLA with a focus on distributed machine learning. She speaks Mandarin Chinese fluently and enjoys cycling.

Denny Lee, Developer Advocate, Databricks
Denny Lee is a Developer Advocate at Databricks. He is a hands-on distributed systems and data sciences engineer with extensive experience developing internet-scale infrastructure, data platforms, and predictive analytics systems for both on-premises and cloud environments. He has a Master’s of Biomedical Informatics from Oregon Health and Sciences University and has architected and implemented powerful data solutions for enterprise healthcare customers. His current technical focuses include distributed systems, Apache Spark, deep learning, machine learning and genomics.
Brooke and Denny are two of the co-authors of Learning Spark, 2nd edition. Contact the Data Brew team on Twitter (@databrew_db) or on LinkedIn.
https://www.databricks.com/dataaisummit/speaker/david-tempelman/#
David Tempelman - Data + AI Summit 2023 | Databricks
David Tempelman, Resident Solutions Architect at Databricks
David is a Resident Solutions Architect at Databricks and helps customers get the most value out of their data. He has several years of experience in the big data and machine learning domain across different industries, including manufacturing, retail and finance.
https://www.databricks.com/it/solutions/accelerators
Databricks Solution Accelerators - Deliver value from data and AI faster - Databricks (Italian locale page)

Industry-specific solutions: get to the outcomes that matter most with data and AI, faster.

Databricks Solution Accelerators save hours of discovery, design, development and testing. Our purpose-built guides (fully functional notebooks and best practices) speed up results for the most common and highest-impact use cases. Go from idea to proof of concept (PoC) in as little as two weeks. Get started with the Accelerators through a free Databricks trial or your existing account.

Browse the Accelerators (titles translated from Italian): Automated PHI (Protected Health Information) Removal; Fine-Grained Demand Forecasting; Toxicity Detection in Gaming; On-Shelf Availability; Bulk Item Matching; Medicare Risk Adjustment; Real-Time Point-of-Sale Analytics; ESG Performance Analytics; Survival Analysis and Lifetime Value; Geospatial Analytics for Fraud Detection; Digital Pathology Image Analysis; Anti-Money Laundering; Real-World Data Abstraction for Oncology; Multi-Touch Attribution; Merchant Classification; Cohort Building with Knowledge Graphs; Cyber Analytics (Splunk connector, by Splunk); Social Determinants of Health; Overall Equipment Effectiveness (OEE); Real-World Evidence; Computer Vision Fundamentals; Digital Twins; Scalable Route Generation; Risk Management; Retention Management; Adverse Drug Event Detection; FHIR Interoperability with dbignite; HL7v2 Interoperability with Smolder; Predictive Maintenance (IoT); Recommendation Engine; Real-Time Bid Optimization; Modern Investment Platform; Optimized Order Picking; Real-Time Financial Fraud Prevention; Subscriber Churn Prediction; Sales Forecasting and Advertising Attribution; Propensity Scoring; Video Quality of Experience; R&D Optimization with Knowledge Graphs; Regulatory Reporting; Threat Detection with DNS; Reputational Risk; Customer Entity Resolution; Safety Stock; Customer Segmentation; Genomic Association Studies; Price Transparency; Customer Lifetime Value.

Frequently asked questions
How much do Databricks Solution Accelerators cost? Databricks Solution Accelerators are available to all Databricks customers at no additional cost.
Do I need to be a Databricks customer to use an Accelerator? Accelerators can be deployed by starting a free Databricks trial or by using an existing account.
What can I do with an Accelerator? Databricks Solution Accelerators are designed to save hours of discovery, design, development and testing. Our goal is to get data and AI use cases up and running quickly by providing the right resources (notebooks, proven patterns and best practices). Go from idea to proof of concept in as little as two weeks.

See how real customers use Databricks: read the customer stories.
https://www.databricks.com/jp/dataaisummit/?itm_data=menu-learn-dais23
Data and AI Summit 2023 - Databricks (Japanese locale page, served in English)

Why attend? Join thousands of data leaders, engineers, scientists and analysts to explore all things data, analytics and AI — and how these are unified on the lakehouse. Hear from the data teams who are transforming their industries. Learn how to build and apply LLMs to your business. Uplevel your skills with hands-on training and role-based certifications. Connect with data professionals from around the world and learn more about all Data + AI Summit has to offer.

Sessions: With more than 250 sessions, Data + AI Summit has something for everyone. Choose from technical deep dives, hands-on training, lightning talks, industry sessions, and more.

Technology: Explore the latest advances in leading open source projects and industry technologies like Apache Spark™, Delta Lake, MLflow, Dolly, PyTorch, dbt, Presto/Trino, DuckDB and much more. You'll also get a first look at new products and features in the Databricks Lakehouse Platform.

Networking: Connect with thousands of data + AI community peers and grow your professional network in social meetups, on the expo floor, or at our event party.

Choose your experience: Get access to all the sessions, training and special events live in San Francisco, or join us virtually for the keynotes. The in-person event includes the keynotes, 300+ breakout sessions, hands-on training courses for data engineering, machine learning and LLMs with many onsite certifications, "birds of a feather" meals, happy hours and special events, lightning talks, AMAs and meetups on topics such as Apache Spark™, Delta Lake, MLflow and Dolly, access to 100+ leading data and AI companies in the Dev Hub + Expo, and Industry Forums for Financial Services, Retail and Consumer Goods, Healthcare and Life Sciences, Communications, Media and Entertainment, Public Sector, and Manufacturing and Energy. The virtual event includes the keynotes and 10 breakout sessions.
https://www.databricks.com/jp/learn/labs
Databricks Labs Projects | Databricks

Databricks Labs are projects created by the field team to help customers get their use cases into production faster.

DBX: This tool simplifies the job launch and deployment process across multiple environments. It also helps to package your project and deliver it to your Databricks environment in a versioned fashion. Designed in a CLI-first manner, it is built to be actively used both inside CI/CD pipelines and as part of local tooling for fast prototyping.

Tempo: The purpose of this project is to provide an API for manipulating time series on top of Apache Spark™. Functionality includes featurization using lagged time values, rolling statistics (mean, avg, sum, count, etc.), AS OF joins, and downsampling and interpolation. It has been tested on terabyte-scale historical data.

Mosaic: Mosaic is a tool that simplifies the implementation of scalable geospatial data pipelines by binding together common open source geospatial libraries and Apache Spark™. Mosaic also provides a set of examples and best practices for common geospatial use cases. It provides APIs for ST_ expressions and GRID_ expressions, supporting grid index systems such as H3 and British National Grid.

Other projects:
Overwatch: Analyze all of your jobs and clusters across all of your workspaces to quickly identify where you can make the biggest adjustments for performance gains and cost savings.
Splunk Integration: An add-on for Splunk that allows Splunk Enterprise and Splunk Cloud users to run queries and execute actions, such as running notebooks and jobs, in Databricks.
Smolder: Smolder provides an Apache Spark™ SQL data source for loading EHR data from HL7v2 message formats. Additionally, Smolder provides helper functions that can be used on a Spark SQL DataFrame to parse HL7 message text, and to extract segments, fields, and subfields from a message.
Geoscan: An Apache Spark ML Estimator for density-based spatial clustering based on hexagonal hierarchical spatial indices.
Migrate: A tool to help customers migrate artifacts between Databricks workspaces. It allows customers to export configurations and code artifacts as a backup or as part of a migration between workspaces.
Data Generator: Generate relevant data quickly for your projects. The Databricks data generator can be used to generate large simulated/synthetic data sets for tests, POCs, and other uses.
DeltaOMS: Centralized Delta transaction log collection for metadata and operational metrics analysis on your lakehouse.
DLT-META: This framework makes it easy to ingest data using Delta Live Tables and metadata. With DLT-META, a single data engineer can easily manage thousands of tables. Several Databricks customers have DLT-META in production to process 1,000+ tables.

Please note that all projects in the https://github.com/databrickslabs account are provided for your exploration only and are not formally supported by Databricks with service level agreements (SLAs). They are provided AS IS, and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects. Any issues discovered through the use of a project should be filed as GitHub Issues on the repo. They will be reviewed as time permits, but there are no formal SLAs for support.
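As a rough illustration of the AS OF join functionality that Tempo provides, here is a minimal sketch of the underlying pattern written in plain PySpark. This is not Tempo's own API; the trade/quote data and column names are assumptions made up for the example.

```python
# Sketch of an AS OF join in plain PySpark: enrich each trade with the most
# recent quote observed at or before the trade time. Not Tempo's API; all
# table and column names here are illustrative assumptions.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("asof-join-sketch").getOrCreate()

trades = spark.createDataFrame(
    [("AAPL", "2023-06-01 10:00:05", 100), ("AAPL", "2023-06-01 10:00:12", 50)],
    ["symbol", "trade_ts", "qty"],
).withColumn("trade_ts", F.to_timestamp("trade_ts"))

quotes = spark.createDataFrame(
    [("AAPL", "2023-06-01 10:00:00", 181.2), ("AAPL", "2023-06-01 10:00:10", 181.4)],
    ["symbol", "quote_ts", "bid"],
).withColumn("quote_ts", F.to_timestamp("quote_ts"))

# Union both event streams on a common schema, then carry the latest quote
# forward so each trade row sees the last bid at or before its timestamp.
events = trades.select(
    "symbol", F.col("trade_ts").alias("ts"), "qty", F.lit(None).cast("double").alias("bid")
).unionByName(
    quotes.select("symbol", F.col("quote_ts").alias("ts"), F.lit(None).cast("long").alias("qty"), "bid")
)

w = Window.partitionBy("symbol").orderBy("ts").rowsBetween(Window.unboundedPreceding, Window.currentRow)
asof = (
    events.withColumn("bid_asof", F.last("bid", ignorenulls=True).over(w))
    .where(F.col("qty").isNotNull())  # keep only the trade rows
    .select("symbol", F.col("ts").alias("trade_ts"), "qty", "bid_asof")
)
asof.show()
```

Tempo packages this pattern (plus rolling statistics, resampling and interpolation) behind a time-series API so you do not have to hand-roll the window logic for every dataset.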
https://www.databricks.com/it
The Data Lakehouse and AI Company - Databricks (Italian locale page)

The best data warehouse is a lakehouse. Unify all your data, analytics and AI on a single platform, and reduce costs while accelerating innovation on the lakehouse platform.
Unified: one platform for all your data, governed consistently and available for every analytics and AI workload.
Open: built on open standards and integrated with every cloud, so it works directly within your modern data stack.
Scalable: efficient scaling for any workload, from simple data pipelines to large language models (LLMs).

The lakehouse unifies data teams.
Data engineering and management: streamline data ingestion and management. With automated, reliable ETL, open and secure data sharing, and very fast performance, Delta Lake turns the data lake into the final destination for all structured, semi-structured and unstructured data.
Data warehousing: gain new insights from the most complete data. With immediate access to the freshest, most complete data and the power of Databricks SQL (up to 12x better price/performance than traditional cloud data warehouses), analysts and data scientists can uncover new, detailed insights.
Data science and machine learning: accelerate ML across the entire lifecycle. The lakehouse is the foundation of Databricks Machine Learning, a data-native, collaborative solution that spans the full machine learning lifecycle, from feature engineering to production. Combined with high-quality, high-performance data pipelines, the lakehouse accelerates machine learning and team productivity.
Data governance and sharing: unify governance and sharing of data, analytics and AI. Databricks provides a common security and governance model for all data, analytics and AI assets in the lakehouse on any cloud, so you can discover and share data across platforms, clouds and regions without duplication or lock-in, and distribute data products through an open marketplace.

Also highlighted on the page: why the lakehouse has become the new architecture for data and AI, the Data + AI Summit session catalog, the Gartner report naming Databricks a Leader for the second consecutive year, a study of 600 CIOs across 14 industries and 18 countries confirming that data strategy is critical to AI success, and an eBook for machine learning engineers and data scientists looking for better MLOps tooling.
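To make the Delta Lake claim above concrete (reliable tables on top of an open format), here is a minimal sketch using open-source Apache Spark with the delta-spark package; the local path and schema are placeholders, not anything prescribed by Databricks.

```python
# Minimal Delta Lake sketch with open-source Spark (pip install delta-spark pyspark).
# The path and schema below are placeholders for the example.
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder.appName("delta-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Write a small DataFrame as a Delta table, then read it back; the table gets
# ACID transactions and schema enforcement on top of plain Parquet files.
df = spark.createDataFrame([(1, "sensor-a", 21.5), (2, "sensor-b", 19.8)], ["id", "device", "temp"])
df.write.format("delta").mode("overwrite").save("/tmp/events_delta")
spark.read.format("delta").load("/tmp/events_delta").show()
```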
https://www.databricks.com/kr/product/delta-sharing
Delta Sharing | Databricks (Korean locale page)

Delta Sharing: an open standard for securely sharing data assets. Databricks Delta Sharing provides an open solution for securely sharing live data from the lakehouse to any computing platform.

Key benefits:
Open, cross-platform sharing: avoid vendor lock-in and easily share existing data in the Delta Lake and Apache Parquet formats with any data platform.
Share live data without replication: share live data across data platforms, clouds and regions without replicating or copying it to another system.
Centralized governance: manage, govern, audit and track shared data on a single platform.
A marketplace for data products: build and package data products such as datasets, ML models and notebooks once, and distribute them anywhere through a central marketplace.
Privacy-safe data clean rooms: collaborate with customers and partners on any cloud in a secure hosted environment while keeping data protected.

How it works:
Natively integrated with the Databricks platform: because Delta Sharing is natively integrated with Unity Catalog, you can centrally manage and audit shared data across the organization, sharing data assets with suppliers and partners with confidence while meeting security and compliance requirements.
Easily manage shares: create and manage providers, recipients and shares with an easy-to-use UI, SQL commands or REST APIs, with full CLI and Terraform support.
Discover and access data products through an open marketplace: easily discover, evaluate and access data products such as datasets, machine learning models, dashboards and notebooks from anywhere, even outside the Databricks platform.
Privacy-safe data clean rooms: collaborate with customers and partners on any cloud in a privacy-safe environment. Share data securely from your own data lake without replication, meet collaborators on their preferred cloud, and flexibly run complex computations and workloads in any language, including SQL, R, Scala, Java and Python. Guide collaborators through common use cases with predefined templates, notebooks and dashboards to reach insights faster.

Use cases: internal sharing across business units and subsidiaries (build a data mesh with Delta Sharing to securely share data across clouds and regions without copying or replicating it), B2B sharing, and data monetization.

Customers:
"Delta Sharing helped us streamline our data delivery process for large data sets. Our customers can read freshly curated data in their own compute environment with little or no integration work, and we can keep expanding our catalog of unique, high-quality data products." — William Dague, Head of Alternative Data
"As a data company, providing our customers with data sets is critical. The Databricks Lakehouse Platform with Delta Sharing simplifies that process, allowing us to securely reach a much broader user base regardless of cloud or platform." — Felix Cheung, VP of Engineering
"Leveraging the powerful capabilities of Delta Sharing from Databricks, Pumpjack Dataworks shortened the onboarding experience and removed the need to export, import and remodel data, delivering immediate value to our customers. Our customers and partners get results faster, and our sales opportunities have grown." — Corey Zwart, Head of Engineering
"With Delta Sharing, our customers get near-instant access to curated data sets and can integrate them with the analytics tools of their choice. Conversations with customers have shifted from low-value, repetitive technical ingestion work to high-value analytical discussions, driving successful customer experiences. As customer relationships have evolved, Delta Sharing has let us seamlessly deliver new data sets and refresh existing ones, helping our customers track key industry trends." — Anup Segu, Data Engineering Tech Lead

An open ecosystem: access the latest public releases directly from vendors in easy-to-use SQL, Python and BI tools.

Resources: keynotes and webinars (data governance and sharing on the lakehouse from Data + AI Summit 2022; an on-demand webinar on accelerating business value with Delta Sharing); blogs (announcing the general availability of Delta Sharing; introducing Delta Sharing, an open protocol for secure data sharing; the top three data sharing use cases with Delta Sharing); solution sheets and eBooks (the new Delta Sharing solution eBook; Delta Sharing: an open standard for secure data sharing; The Rise of the Data Lakehouse by Bill Inmon, the father of the data warehouse).
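To make the open protocol described above concrete, here is a minimal sketch that reads a shared table with the open-source delta-sharing Python connector. The profile file and the share, schema and table names are placeholders that a data provider would supply.

```python
# Minimal sketch of reading Delta Sharing data with the open-source Python
# connector (pip install delta-sharing). The profile file and the
# share/schema/table names below are placeholders, not real endpoints.
import delta_sharing

# A profile file (*.share) is a small JSON credential file issued by the provider.
profile_file = "config.share"

# List everything the provider has shared with us.
client = delta_sharing.SharingClient(profile_file)
for table in client.list_all_tables():
    print(table)

# Load one shared table as a pandas DataFrame: "<profile>#<share>.<schema>.<table>".
table_url = f"{profile_file}#my_share.my_schema.my_table"
df = delta_sharing.load_as_pandas(table_url)
print(df.head())

# On a Spark cluster with the connector installed, the same table can be
# loaded as a Spark DataFrame instead:
# spark_df = delta_sharing.load_as_spark(table_url)
```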
https://www.databricks.com/dataaisummit/speaker/sridhar-devarapalli
Sridhar Devarapalli - Data + AI Summit 2023 | Databricks
Sridhar Devarapalli, Senior Director, Product Management at Databricks.
https://www.databricks.com/dataaisummit/speaker/alain-briancon
Alain Briancon - Data + AI Summit 2023 | Databricks
Alain Briancon, VP of Data Science, Profiles by Kantar, at Kantar Group
Alain is Vice President of Data Science at Kantar's Profiles division. He is a serial entrepreneur and inventor with 77 patents. Over the last ten years, through various startups, Alain has applied data science to IoT, predicting appliance failures, political campaigns, food and diet management, customer engagement, upsell/cross-sell, and now surveys. Alain graduated from the Massachusetts Institute of Technology with a Ph.D. in Electrical Engineering and Computer Science. Outside work, he enjoys writing movie scripts, collecting fountain pens, and spending time with his family and dogs.
https://www.databricks.com/dataaisummit/speaker/scott-starbird/#
Scott Starbird - Data + AI Summit 2023 | Databricks
Scott Starbird, General Counsel, Public Affairs and Strategic Partnerships at Databricks
Scott Starbird heads the Public Affairs function at Databricks and has held various leadership positions within Databricks' Legal Department since early 2016. Scott is a member of the Board of Directors of BSA | The Software Alliance.
https://www.databricks.com/privacypolicy
Privacy Notice | Databricks

This Privacy Notice explains how Databricks, Inc. and its affiliates (“Databricks”, “we”, “our”, and “us”) collects, uses, shares and otherwise processes your personal information (also known as personal data) in connection with the use of Databricks websites and applications that link to this Privacy Notice (the “Sites”), our data processing platform products and services (the “Platform Services”) and in the usual course of business, such as in connection with our events, sales, and marketing activities (collectively, “Databricks Services”). It also contains information about your choices and privacy rights.

Our Services
We provide the Platform Services to our customers and users (collectively, “Customers”) under an agreement with them and solely for their benefit and the benefit of personnel authorized to use the Platform Services (“Authorized Users”). Our processing of such data is governed by our agreement with the relevant Customer.
This Privacy Notice does not apply to (i) the data that our Customers upload, submit or otherwise make available to the Platform Services and other data that we process on their behalf, as defined in our agreement with the Customer; (ii) any products, services, websites, or content that are offered by third parties or that have their own privacy notice; or (iii) personal information that we collect and process in connection with our recruitment activities, which is covered under our Applicant Privacy Notice.We recommend that you read this Privacy Notice in full to ensure that you are informed. However, if you only want to access a particular section of this Privacy Notice, you can click on the link below to go to that section.Information We Collect About YouHow We Use Your InformationHow We Share Your InformationInternational TransfersYour Choices and RightsAdditional Information for Certain JurisdictionsOther Important InformationChanges to this NoticeHow to Contact UsInformation We Collect About YouInformation that we collect from or about you includes information you provide, information we collect automatically, and information we receive from other sources.Information you provideWhen using our Databricks Services, we may collect certain information, such as your name, email address, phone number, postal address, job title, and company name. We may also collect other information that you provide through your interactions with us, for example if you request information about our Platform Services, interact with our sales team or contact customer support, complete a survey, provide feedback or post comments, register for an event, or take part in marketing activities. We may keep a record of your communications with us and other information you share during the course of the communications.When you create an account, for example, through our Sites or register to use our Platform Services, we may collect your personal information, such as your name and contact information. We may also collect credit card information if chosen by you as a payment method, which may be shared with our third party service providers, including for payment and billing purposes. Information we collect automatically We use standard automated data collection tools, such as cookies, web beacons, tracking pixels, tags, and similar tools, to collect information about how people use our Sites and interact with our emails.For example, when you visit our Sites we (or an authorized third party) may collect certain information from you or your device. This may include information about your computer or device (such as operating system, device identifier, browser language, and Internet Protocol (IP) address), and information about your activities on our Sites (such as how you came to our Sites, access times, the links you click on, and other statistical information). For example, your IP address may be used to derive general location information. We use this information to help us understand how you are using our Sites and how to better provide the Sites to you. We may also use web beacons and pixels in our emails. For example, we may place a pixel in our emails that notifies us when you click on a link in the email. We use these technologies to improve our communications. The types of data collection tools we use may change over time as technology evolves. You can learn more about our use of cookies and similar tools, as well as how to opt out of certain data collection, by visiting our Cookie Notice. 
When you use our Platform Services, we automatically collect information about how you are using the Platform Services (“Usage Data”). While most Usage Data is not personal information, it may include information about your account (such as User ID, email address, or Internet Protocol (IP) address) and information about your computer or device (such as browser type and operating system). It may also include information about your activities within the Platform Services, such as the pages or features you access or use, the time spent on those pages or features, search terms entered, commands executed, information about the types and size of files analyzed via the Platform Services, and other statistical information relating to your use of the Platform Services. We collect Usage Data to provide, support and operate the Platform Services, for network and information security, and to better understand how our Authorized Users and Customers are using the Platform Services to improve our products and services. We may also use the information we collect automatically (for example, IP address, and unique device identifiers) to identify the same unique person across Databricks Services to provide a more seamless and personalized experience to you. Information we receive from other sourcesWe may obtain information about you from third party sources, including resellers, distributors, business partners, event sponsors, security and fraud detection services, social media platforms, and publicly available sources. Examples of information that we receive from third parties include marketing and sales information (such as name, email address, phone number and similar contact information), and purchase, support and other information about your interactions with our Sites and Platform Services. We may combine such information with the information we receive and collect from you.How We Use Your InformationWe use your personal information to provide, maintain, improve and update our Databricks Services. 
Our purposes for collecting your personal information include:to provide, maintain, deliver and update the Databricks Services;to create and maintain your Databricks account;to measure your use and improve Databricks Services, and to develop new products and services;for billing, payment, or account management; for example, to identify your account and correctly identify your usage of our products and services;to provide you with customer service and support;to register and provide you with training and certification programs;to investigate security issues, prevent fraud, or combat the illegal or controlled uses of our products and services;for sales phone calls for training and coaching purposes, quality assurance and administration (in accordance with applicable laws), including to analyze sales calls using analytics tools to gain better insights into our interactions with customers; to send you notifications about the Databricks Services, including technical notices, updates, security alerts, administrative messages and invoices;to respond to your questions, comments, and requests, including to keep in contact with you regarding the products and services you use;to tailor and send you newsletters, emails and other content to promote our products and services (you can always unsubscribe from our marketing emails by clicking here) and to allow third party partners (like our event sponsors) to send you marketing communications about their services, in accordance with your preferences;to personalize your experience when using our Sites and Platform Services;for advertising purposes; for example, to display and measure advertising on third party websites;to contact you to conduct surveys and for market research purposes;to generate and analyze statistical information about how our Sites and Platform Services are used in the aggregate;for other legitimate interests or lawful business purposes; for example, customer surveys, collecting feedback, and conducting audits;to comply with our obligations under applicable law, legal process, or government regulation; andfor other purposes, where you have given consent.How We Share Your InformationWe may share your personal information with third parties as follows:with our affiliates and subsidiaries for the purposes described in this Privacy Notice;with our service providers who assist us in providing the Databricks Services, such as billing, payment card processing, customer support, sales and marketing, and data analysis, subject to confidentiality obligations and the requirement that those service providers do not sell your personal information;with our service providers who assist us with detecting and preventing fraud, security threats or other illegal or malicious behavior, for example Sift who provides fraud detection services where your personal information is processed by Sift in accordance with its Privacy Notice available at https://sift.com/service-privacy;with third party business partners, such as resellers, distributors, and/or referral partners, who are involved in providing content, products or services to our prospects or Customers. 
We may also engage with third party partners who are working with us to organize or sponsor an event to which you have registered to enable them to contact you about the event or their services (but only where we have a lawful basis to do so, such as your consent where required by applicable law);with marketing partners, such as advertising providers that tailor online ads to your interests based on information they collect about your online activity (known as interest-based advertising);with the organization that is sponsoring your training or certification program, for example to notify them of your registration and completion of the course;when authorized by law or we deem necessary to comply with a legal process;when required to protect and defend the rights or property of Databricks or our Customers, including the security of our Sites, products and services (including the Platform Services);when necessary to protect the personal safety, property or other rights of the public, Databricks or our Customers;where it has been de-identified, including through aggregation or anonymization;when you instruct us to do so;where you have consented to the sharing of your information with third parties; orin connection with a merger, sale, financing or reorganization of all or part of our business.International TransfersDatabricks may transfer your personal information to countries other than your country of residence. In particular, we may transfer your personal information to the United States and other countries where our affiliates, business partners and services providers are located. These countries may not have equivalent data protection laws to the country where you reside. Wherever we process your personal information, we take appropriate steps to ensure it is protected in accordance with this Privacy Notice and applicable data protection laws. These safeguards include implementing the European Commission’s Standard Contractual Clauses for transfers of personal information from the EEA or Switzerland between us and our business partners and service providers, and equivalent measures for transfers of personal information from the United Kingdom. Databricks also offers our Customers the ability to enter into a data processing addendum (DPA) that contains the Standard Contractual Clauses, for transfers between us and our Customers. We also make use of supplementary measures to ensure your information is adequately protected. Privacy Shield NoticeDatabricks adheres to the principles of the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks, although Databricks no longer relies on the EU-U.S. or Swiss-U.S. Privacy Shield Frameworks as a legal basis for transfers of personal information in light of the judgment of the Court of Justice of the European Union in Case C-311/18. To learn more, visit our Privacy Shield Notice.Your Choices and RightsWe offer you choices regarding the collection, use and sharing of your personal information and we will respect the choices you make in accordance with applicable law. Please note that if you decide not to provide us with certain personal information, you may not be able to access certain features of the Sites or use the Platform Services.Account informationIf you want to correct, update or delete your account information, please log on to your Databricks account and update your profile.Opt out of marketingWe may periodically send you marketing communications that promote our products and services consistent with your choices. 
You may opt out from receiving such communications, either by following the unsubscribe instructions in the communication you receive or by clicking here. Please note that we may still send you important service-related communications regarding our products or services, such as communications about your subscription or account, service announcements or security information.Your privacy rightsDepending upon your place of residence, you may have rights in relation to your personal information. Please review the jurisdiction specific sections below, including the disclosures for California residents. Depending on applicable data protection laws, those rights may include asking us to provide certain information about our collection and processing of your personal information, or requesting access, correction or deletion of your personal information. You also have the right to withdraw your consent, to the extent we rely on consent to process your personal information. If you wish to exercise any of your rights under applicable data protection laws, submit a request online by completing the request form here or emailing us at [email protected]. We will respond to requests that we receive in accordance with applicable laws. Databricks may take certain steps to verify your request using information available to us, such as your email address or other information associated with your Databricks account, and if needed we may ask you to provide additional information for the purposes of verifying your request. Any information you provide to us for verification purposes will only be used to process and maintain a record of your request.As described above, we may also process personal information that has been submitted by a Customer to our Platform Services. If your personal information has been submitted to the Platform Services by or on behalf of a Databricks Customer and you wish to exercise your privacy rights, please direct your request to the relevant Customer. For other inquiries, please contact us at [email protected].Additional Information for Certain JurisdictionsThis section provides additional information about our privacy practices for certain jurisdictions.CaliforniaIf you are a California resident, the California Consumer Privacy Act (“CCPA”) requires us to provide you with additional information regarding your rights with respect to your “personal information. This information is described in our Supplemental Privacy Notice to California Residents.  Other US StatesDepending on applicable laws in your state of residence, you may request to: (1) confirm whether or not we process your personal information; (2) access, correct, or delete personal information we maintain about you; (3) receive a portable copy of such personal information; and/or (4) restrict or opt out of certain processing of your personal information, such as targeted advertising, or profiling in furtherance of decisions that produce legal or similarly significant effects. If we refuse to take action on a request, we will provide instructions on how you may appeal the decision. We will respond to requests consistent with applicable law.European Economic Area, UK and SwitzerlandIf you are located in the European Economic Area, United Kingdom or Switzerland, the controller of your personal information is Databricks, Inc., 160 Spear Street, Suite 1300, San Francisco, CA 94105, United States. We only collect your personal information if we have a legal basis for doing so. 
The legal basis that we rely on depends on the personal information concerned and the specific context in which we collect it. Generally, we collect and process your personal information where:We need it to enter into or perform a contract with you, such as to provide you with the Platform Services, respond to your request, or provide you with customer support;We need to process your personal information to comply with a legal obligation (such as to comply with applicable legal, tax and accounting requirements) or to protect the vital interests of you or other individuals;You give us your consent, such as to receive certain marketing communications; orWhere we have a legitimate interest, such as to respond to your requests and inquiries, to ensure the security of the Sites and Platform Services, to detect and prevent fraud, to maintain, customize and improve the Sites and Platform Services, to promote Databricks and our Platform Services, and to defend our interests and rights.If you have consented to our use of your personal information for a specific purpose, you have the right to change your mind at any time but this will not affect our processing of your information that has already taken place. You also have the following rights with respect to your personal information:The right to access, correct, update, or request deletion of your personal information;The right to object to the processing of your personal information or ask that we restrict the processing of your personal information;The right to request portability of your personal information;The right to withdraw your personal information at any time, if we collected and processed your personal information with your consent; andThe right to lodge a complaint with your national data protection authority or equivalent regulatory body.If you wish to exercise any of your rights under data protection laws, please contact us as described under “Your Choices and Rights”.Other Important InformationNotice to Authorized UsersOur Platform Services are intended to be used by organizations. Where the Platform Services are made available to you through an organization (e.g., your employer), that organization is the administrator of the Platform Services and responsible for the accounts and/or services over which it has control. For example, administrators can access and change information in your account or restrict and terminate your access to the Platform Services. We are not responsible for the privacy or security practices of an administrator's organization, which may be different from this Privacy Notice. Please contact your organization or refer to your organization's policies for more information.Data RetentionDatabricks retains the personal information described in this Privacy Notice for as long as you use our Databricks Services, as may be required by law (for example, to comply with applicable legal tax or accounting requirements), as necessary for other legitimate business or commercial purposes described in this Privacy Notice (for example, to resolve disputes or enforce our agreements), or as otherwise communicated to you.SecurityWe are committed to protecting your information. We use a variety of technical, physical, and organizational security measures designed to protect against unauthorized access, alteration, disclosure, or destruction of information. However, no security measures are perfect or impenetrable. 
As such, we cannot guarantee the security of your information.

Third Party Services
Our Databricks Services may contain links to third party websites, applications, services, or social networks (including co-branded websites or products that are maintained by one of our business partners). We may also make available certain features that allow you to sign into our Sites using third party login credentials (such as LinkedIn, Facebook, Twitter and Google+) or access third party services from our Platform Services (such as GitHub). Any information that you choose to submit to third party services is not covered by this Privacy Notice. We encourage you to read the terms of use and privacy notices of such third party services before sharing your information with them, so you understand how your information may be collected and used.

Children's Data
The Sites and Platform Services are not directed to children under 18 years of age, and Databricks does not knowingly collect personal information from children under 18. If we learn that we have collected any personal information from children under 18, we will promptly take steps to delete such information. If you are aware that a child has submitted such information to us, please contact us using the details provided below.

Changes to this Notice
Databricks may change this Privacy Notice from time to time. We will post any changes on this page and, if we make material changes, provide a more prominent notice (for example, by adding a statement to the website landing page, providing notice through the Platform Services login screen, or by emailing you). You can see the date on which the latest version of this Privacy Notice was posted below. If you disagree with any changes to this Privacy Notice, you should stop using the Databricks Services and deactivate your Databricks account.

How to Contact Us
Please contact us at [email protected] if you have any questions about our privacy practices or this Privacy Notice. You can also write to us at Databricks Inc., 160 Spear Street, Suite 1300, San Francisco, CA 94105, Attn: Privacy. If you interact with Databricks through or on behalf of your organization, then your personal information may also be subject to your organization's privacy practices, and you should direct any questions to that organization.

Last updated: January 3, 2023
https://www.databricks.com/dataaisummit/speaker/christina-taylor-0
Christina Taylor - Data + AI Summit 2023 | Databricks

Christina Taylor
Data Engineer at Toptal

Christina is passionate about modern data platforms, multi-cloud architecture, scalable data pipelines, as well as the latest and greatest in the open source community. An intensely curious lifelong learner and effective team leader, she builds data lakes with a medallion structure that support advanced analytics, data science models, and customer-facing applications. She also has a keen interest in interdisciplinary areas such as Cloud FinOps, DevOps and MLOps.
https://www.databricks.com/kr/company/awards-and-recognition
Awards and Recognition | Databricks

Awards and Recognition
See how Databricks has been recognized by industry leaders.

Named a Leader in the 2022 Magic Quadrant for Cloud Database Management Systems
2022 Customer Choice Award for Cloud Database Management Systems
Named a Leader in the 2021 Magic Quadrant for Cloud Database Management Systems
Named a Leader in the 2021 Magic Quadrant for Data Science and Machine Learning
Lakehouse in the Hype Cycle for Data Management, 2022
Top companies to watch in 2023
Most innovative companies in data science
Cloud 100
AI 50
America's best startup employers
Best places to work in technology
Best places to work in the Bay Area
Best large workplaces for millennials
CNBC's 50 most innovative companies
Best places to work 2022

Ready to learn more?
We would like to learn more about your business goals and explore how our services team can help you succeed. Get started with Databricks for free.
https://www.databricks.com/p/ebook/data-lakehouse-is-your-next-data-warehouse
Why the Data Lakehouse Is Your Next Data Warehouse | Databricks

eBook: Why the Data Lakehouse Is Your Next Data Warehouse
Under the hood of Databricks SQL

Most organizations routinely operate their business with complex cloud data architectures that silo applications, users and data. As a result, there is no single source of truth for analytics data, and most analysis is performed on stale data. To solve these challenges, the lakehouse is quickly emerging as the new standard for data architecture, with the promise of unifying data, AI and analytics workloads in one place.

In this eBook, you will discover the inner workings of the Databricks Lakehouse Platform. Learn why the data lakehouse is your next cloud data warehouse, how Databricks SQL works under the hood, and how it allows you to operate a multicloud lakehouse architecture that delivers world-class performance and data lake economics, with up to 12x better price/performance than legacy cloud data warehouses.

Specifically, you'll learn:
How to ingest, store and govern business-critical data at scale to build a curated data lake
How to get started in seconds with instant, elastic SQL compute to process all query types with best-in-class performance
How to quickly find and share new insights with a built-in SQL editor, visualizations and dashboards, or your favorite BI tools
The differences between a data lake and a data warehouse

Get Your Copy
https://www.databricks.com/product/pricing
Databricks pricing | Databricks

Extended Time SQL Price Promotion: Save 40%+. Take advantage of our 15-month promotion on Serverless SQL and the brand-new SQL Pro.

Databricks pricing
One simple platform to unify all your data, analytics and AI workloads, across all your preferred clouds.

How does Databricks pricing work?
Pay as you go: Databricks offers a pay-as-you-go approach with no up-front costs. Pay only for the products you use, at per-second granularity.
Committed-use discounts: Databricks helps you lower your costs with discounts when you commit to certain levels of usage. The larger your usage commitment, the greater your discount compared to pay as you go, and you can use commitments flexibly across multiple clouds. Contact us for details.

Explore products
Workflows & Streaming, Jobs: starting at $0.07/DBU. Run data engineering pipelines to build data lakes and manage data at scale.
Workflows & Streaming, Delta Live Tables: starting at $0.20/DBU. Easily build high-quality streaming or batch ETL pipelines using Python or SQL with the DLT edition that is best for your workload.
Data Warehousing, Databricks SQL: starting at $0.22/DBU. Run SQL queries for BI reporting, analytics and visualization to get timely insights from data lakes. Available in both Classic and Serverless (managed) Compute.
Data Science & Machine Learning, All Purpose Compute for Interactive Workloads: starting at $0.40/DBU. Run interactive data science and machine learning workloads. Also good for data engineering, BI and data analytics.
Data Science & Machine Learning, Serverless Real-Time Inference: starting at $0.07/DBU. Make live predictions in your apps and websites.
Databricks Platform & Add-Ons: cross-platform capabilities that provide the right level of management, governance and security to run everything from basic to enterprise-critical workloads.

FAQ
What does the free trial include? The 14-day free trial gives you a collaborative environment for data teams to build solutions together, plus interactive notebooks to use Apache Spark™, SQL, Python, Scala, Delta Lake, MLflow, TensorFlow, Keras, scikit-learn and more. Please note that you will still be charged by your cloud provider for resources, such as compute instances, used within your account during the free trial.
What happens after the free trial? At the end of the trial, you are automatically subscribed to the plan that you have been on during the free trial. You can cancel your subscription at any time.
Is pricing based on usage or storage volume? Databricks pricing is based on your compute usage. Storage, networking and related costs will vary depending on the services you choose and your cloud service provider.
What is a DBU? A Databricks Unit (DBU) is a normalized unit of processing power on the Databricks Lakehouse Platform, used for measurement and pricing purposes. The number of DBUs a workload consumes is driven by processing metrics, which may include the compute resources used and the amount of data processed.
Is pricing the same in every region? Databricks prices and cloud infrastructure prices may vary based on geographic region and cloud service provider. For details, please see the Databricks pricing pages for individual products.
Is Databricks as cost-effective as other cloud services or open source solutions? Databricks products are priced to provide a compelling total cost of ownership (TCO) to customers for their workloads. When estimating your savings with Databricks, it is important to consider key aspects of alternative solutions, including job completion rate, duration, and the manual effort and resources required to support a job. To help you accurately estimate your savings, we recommend comparing side-by-side results as part of a proof-of-concept deployment. For example, see the speedup achieved by this customer and the results of this customer benchmark. Contact us to get started.
What is Databricks Community Edition? Databricks Community Edition is a free, limited-functionality platform designed for anyone who wants to learn Spark. Sign up here.
How will I be billed? By default, you will be billed monthly to your credit card, based on per-second usage. Contact us for more billing options, such as billing by invoice or an annual plan.
Do you provide technical support? We do offer technical support. You can also check the technical documentation. Contact us to learn more.
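To make the DBU-based, per-second billing model described above concrete, here is a minimal back-of-the-envelope sketch in Python. The $0.07/DBU figure simply reuses the Jobs starting price quoted on this page for illustration, while the cluster's 10 DBU-per-hour rating is a hypothetical placeholder; actual DBU rates and consumption depend on product, tier, cloud, region and instance types, and the estimate excludes the compute, storage and networking charges billed separately by your cloud provider.

# Back-of-the-envelope cost estimate under the pay-as-you-go model above:
# billed per second, priced per DBU. The DBU/hour rating and DBU price
# below are illustrative assumptions, not official Databricks figures.

def estimate_job_cost(runtime_seconds: float,
                      cluster_dbu_per_hour: float,
                      price_per_dbu: float) -> float:
    """Estimate the Databricks charge for one workload run.

    runtime_seconds      -- how long the compute ran (billing is per second)
    cluster_dbu_per_hour -- assumed combined DBU consumption rate of the cluster
    price_per_dbu        -- list price for the chosen product/tier, USD per DBU
    """
    dbus_consumed = cluster_dbu_per_hour * (runtime_seconds / 3600.0)
    return dbus_consumed * price_per_dbu


if __name__ == "__main__":
    # Hypothetical example: a 90-minute Jobs run on a cluster assumed to
    # consume 10 DBU/hour, at the $0.07/DBU starting price quoted above.
    cost = estimate_job_cost(runtime_seconds=90 * 60,
                             cluster_dbu_per_hour=10.0,
                             price_per_dbu=0.07)
    print(f"Estimated Databricks compute cost: ${cost:.2f}")  # -> $1.05

Because billing is per second, the runtime is used as-is rather than rounded up to whole hours, which is why short or bursty jobs only pay for the seconds they actually consume.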
https://www.databricks.com/dataaisummit/speaker/mitch-ertle/#
Mitch Ertle - Data + AI Summit 2023 | Databricks

Mitch Ertle
Partner Solutions Engineer at Sigma Computing

Mitch Ertle is a data practitioner with over a decade of experience in data analytics. Prior to joining Sigma, Mitch spent three years leading data teams on Databricks.
https://www.databricks.com/company/careers/open-positions?department=professional%20services
Current job openings at Databricks | Databricks

Current job openings at Databricks, filterable by department and location.
https://www.databricks.com/dataaisummit/speaker/ben-wilson
Ben Wilson - Data + AI Summit 2023 | Databricks

Ben Wilson
Principal Specialist Solutions Architect at Databricks

Ben works on the team supporting MLflow and other notable open source ML products at Databricks.
https://www.databricks.com/wp-content/uploads/2022/02/Data-to-anchor-a-new-age.pdf
�r�����塐���k�2��0���i�uMK��`o�(-�}��a��Bnl�Ŋf�� �>�ļ�$�`���l��]�NA�y�DBZ�d|� ��C���Μ8n��Fbr�RQX��j";Rk�cx����Ƌ%&�qZw|#��d:K�/�R.k��>j�J���x8s�&KI�L�mr1�++���/�cJ��Y&nif ��a�D�Il�؂���,Z��1��̕eƗ�pz��{M�� �-��1AAM(X�C Uޙ���5��55ŘG+ւ�n}ai�*���M�Գ7�|Kt�Þe]CPNm�3CC�j���Ju`A&��VC��:���N���6BR;)rob"�ھ+����V���������W�e�,�Q�����tj2!)�ܩC��wJdk� �h�T��@-HŔ�F�?wF��z��+��*����^��t�;�d������jzpBh��I=pV��T_��{lcT���_�ZFM�+W�FsF&x0��X.�v1��:��Q��2��}�Qe��l�!��YK,"Ȝ�p�g:��ae�1��3}]Y,�$�1�ԭ���Э���N&ͪ� ��� (28�&���m�4�6�������� �tX'e����I�h�a)T�5�z*��a�Z7�&'�t('a��.�PM��_� ��&�qV{ye~�>��bA��^��Z�%�����#��.�Y���]0!���gtذW@�&1Q�_���Z��4;O7����d�X�+MD�������mG�\S��Q2lQSx��H����`�nٺ�<�y��b� ZTj�j��߈Z4�I�ҝ-j�6aUN���: S�,��Lh��5��#S�4� �����U#*#SP��7���'ш�y��$���l$�k�g)��i1[<��V]�h��*��!��c��si�Cw#�f�DZ+��z��s��gavX�j![؁�Z��Z R��v��Q2WQ�G����f�Ȫ�o1H��#e �!i��Gw��jec?+=%�TqR��v�W��>����f�$���ǒuB���T�fW?+�}yoK�j���Tp���'� v+�9U�Oe���I�57��� �ғ�YNn�%"�e�/�:a�<��g�LY�?���dQN�����M+�������J��UC�J���3b�n5���(��jRzB��r��dYk��5���j��n0N\���t��<TQ��<Unz�'bB[�(_M�޺�MIlɛ����i#��U�#sW�WC�"$h3erdh3�����b 8t�y��0{Fɗ�d�[[��v�;bD�6dQw �m2���D�JW����Px�,u�Dbֵ%}5�l g'_��`�-Ͱ�lD�u�Z�M�l3tcTGG��弦�l���HcE]P|�2Cy����J�o�g�Q�%25�蘶�����բN�z�&�(f��wct�@��k�b�ճJ������eve��d�%"�Ӡ�e٪A%4����|<=�Q��xr-+Q7�7CW�fR����+���tU��K���d�^��x�g�t Ji��Q�b��C]ɔP���i��%?`�9/vm����O�U�V)̎���Fzg������"ӽ��WD=ȴ���V*�ICd�l#5��&#���0�y�L�'�=�,UO\p�� C��]�z~ò�t���tQ�I{f�K�SҪ5)�E�) �u�̹�z �ڤ�o$iД�8�M��� �K���X,!KmGk�)-�\g`R+�,,��m��*3)���l���=������8��U��G)��k�S���u��vo߸d���5]\��d�Q� ��إl���t��צ��2.��2��4>r�}o�� "�5W�5� ��׿���SG t����;Ãq�#�����-��U�Xl�"׮ֈ�w٨��C��������Q�f��� ��fD{�bvQn��@�� 2u�.[&��yz=�,����<[� V��77�k��?19��Da���3�.���;����S�dž��Y�݃������+�}�G�tsѩW�]Ѩ��nm�u�,�1������zv7��;^�F/��4n�t��x ��nun���" mJu�������u%ا.�e��χ.Kd�Li�@���KIw򶑁� ���<���!Yݡƥ������|d�X�m��\�_�F]�V_�&㌽%����j�?��E e�P�W�^ǟ8M��F`����cj�z�Ŝ+�³4�" D��B���|I�qǣ F���Ii��Q���y�e���SÇP�Ŗ���X��]���,S`}�SB|��4�R�9�����/�� ƚ踱e����/�=(Z��� ��7�o����Կ�@nc������7���t%��l�������O�L�c_��7�o������'B�@d.n��F����7�&�,�6�o�������A��&��.������F�����Ka�� ;���F����?�7]Jg46�o����������7���������� ����V�n�������O�K�������7�o��$ؿV�n�������O��*ߍ��7�o����Y��������F������*ߍ��7�o����Y��������F������*ߍ��7�o����Y��������� ����*ߜ"#���� ��7�����Z����� ��? �/U��7�o��ؿ���`�Z����� ��? 
�_�|7�o������gA���w��F����7��_�|7�o������gA���w��F����7��_�|7�o������gA���w��� ��7��_�|q1�[�$�!$��]���9�y���W,V�u�fZFx�`nvlps��)����)����)�'�ߝ�}����������߉���|���)����)����w"��S�O����O����O��{�����S�?��S�?��S��^��?����O����O�������O�>��S�?��S�?���E��S�O����O����O��{����S�?��S�?��S����`%L�'���C��M�b���pK�w�������8�vJ�'���1��Gs��78��9���UZ���� b,W��WR1i�(�J�/���2΅0��R���4"���V^�ݰ*z��������\B)���R��Qo�d�r_��FUCi��K$ C)��(@�)Ad�t �^�v��� G/ց�a�Z�K'O�+��]>���#��"r$�6�5��%0r禤�w7j�Do�%�V�d���t�U�d,�+��B��MހH�PI�zG-���EL�NW�Om !R|�T(>zi���k� �@��P��tl�*3� Q�̉S��54��c&ۂ��y��rs��R%Z�S��MEVr5V� 4� �&��0��ꡈ��DFd�~�섊��/`p�\�^�?B-��>ɭK���(AI�k�p�FE�+��\��1E���DL.�A�:â׫C%`:,J�&$|"v� ~�n�XI�dnb@�&��^;�G4����!r#\�� �g�T8@,�5P���GIJ>�VC/{&Z���:z0r�a��y�+* h��Or� sZǤj_���\ �*�=M�%I��PX1��֏� ��x�^��}s���o��w�m��6c��kPn�k �İ 7an���)Ka���� _ ��-�[���Q؆c �V��)m6k�f�W#{�����VX[dn1ǹ�l����v�ڤ{� �ݼw����,X"�� ai'N��9ϔFKHM�)��@kuI9h�!����������?9EꔳN�n�����h��S��ꄥ*��@^� Ms������r� ��PkS� pjť,�E�b��v�`S�C-��ni)p:S�9/]�����Q9���Ÿ�T�@�����J�&�Ŭ��Z��y�����h����I~D`>U�>7� ��dƞ);�B?洐� ��9��Av�s��ZX�k�~U ��ʥ� P0�ŃW FЀp��*�#�B��@�v�?dϭ�X�p�5˻P<�s и,=�~��V��=��T�R�Gx2}i��ל0-]z0���pR S&_Z�%0`��M�.ђ�a/�j>�@;�;�'��!�v;�ƕ����x�]�l�� B9z��Q�|����"�G��8$�4ϼ�G� �4mK� �,@Pᝥ:�0�U���4�1�����m��������,U���L��Ћ�BL5Rho%����*5�m��"!�53�B_�0+��Pi6o�bL�²b~k=�D�Xhd 0|H*n��@b������d:1x�W���&��Bn��F��챼� ����y��H������D�P��Pj:�+�A�p�jĜ�� Ƥ��]�JfC��6}������ax�y���`IA���� ��p��8+��m: �������{�+8L�BX� �0b�0��:�,*�Kq�v"�r�:nd�%>ѣ�z�c��Fٸ�5 ����@J� ��R����̊��ܘ2 0}�C*��n�h�ȃ���0)73V1���+=��i���� �8`TX��0\�`���ٸ�鴛3��G���a73+ҽ� !�1&�ʍ^/h)����x�ڥa����.�.�����-��^��}f�||fwi�������(����k>������ap���x{��S�{͜����=�w��V��e��߼�u=9sr�V��[��_\��<��d����.�Z~���8v�V����ԡN�/��T��%[�K ��4��PN�ou�5��S�9;��S�K�.Ŏ���uz}��և.��C|)���na��0�����}���}w�f��_ܜ���7gtC�=�x����P~��w nkq��[��3u���x�q���Ž���o���kG�]�v��Mk��}�H�ڍk��M�����{���%�lp�bɐ%Y����K9��EՒ^K�-1�w����K�%5�,���i�XKoѼ��6ڶh��N�o5�t��K�lݍ׷�Y:�饭���Z k뺭��Nt�r�O�T�ӵ[/���5#�^�o'�W�Ӊ|e{�=��cÎF� l�PfÛ yڪooȴ�b{Z��N���m��D{ z��`qH*�\�{���q�����]�w+o��lvPz����V��|A����_p� �mlqsX۬Xֆa�e�͕eÜ�Ͱw���ӭ�O;j�p��J�m87���=����?T�����mkn?��f�b=���w&�r�{}�fgJo����O�T��џf�p�{��<���D؞���0��9<ˍ�}��c�A�|&N�o���r���m����^y��YN�En���L�&�v�y�a��j������z�/�h�6;}i����n��ѻw�����s^���ƃs�{o�v1z��pFz��;.�3s98O��p�z��;No�t�n��r�&�s�����=����mF�:>�����=��!�MV�>=���><�#��-f{�OQ���#�o��~��]�8 �,x���Ex�W��@e��ݕ-`P�A���"*'e��[#��/��[�����b��u�V21��������iƳ0���,_#�g�"0��mĴ�S�.��Q'A�\�P=0���`�;X� �S���,��Y <�A`Am�l��bU��b�]s/����‚����� �W�(_������]�)���q3f��@7��"i���LХ݄-�>�0±�Q���&AM1ɑ�h�^lԼ1��0��xa��)�020��}�Ӑ\l}���H�1�b��mCvl ��ͬ�NuE"�C����͋ߋ�H򲤙�I2�)�n��5O~�~����W9����Kc�jZ�b�*3��;����2u ������VFp+Y[��aX�h���v*�����2��3�f�v��/ G�|�Ø�È6 ���)ʉ����uӻ�Q�� L}���nQ�"b�D����:�2�.l1��IO�Qo��� @TY�0�\�.�hT=A���R� �R9�{���$/%$��AC�ERk�rY�2r�ve�;E�-��W�fM:���5�#�f�[M���7��dZ^����0�b@S �"F[pm���"t�Cb�%&���C$`�Vh,[b�ّ*-t���*�B@%�3/�|���rI�ey�����S`6a��-��F�����N_T�¤��E횎 o���ÿi=A�X�� ��U�¯ء��t�a˶m�mA��{��S,zW:��M���M��Y�o R�$�d��Ť���I ��5� ����@CŴ�����`�_Ų�U3��D�٤?8�+ƭ�����F�`�fyػ�B� ��Q� v���?�@iQ�[�ߺi���������<�;�-��r�ܴ:�D!�6��E����U�QI���h6L���l?�e�0C�� ��I���O�f:��8"���H����%*�?��I}x4[R�r� �7A �E�Yx�cV�6��L��,� Q]�V�v�}�=:l1��E|�"� �3�& ����D?Y�q��r�i���^K��\8���/.��Hn] W ��2Y����%�] �0@n�X�����L�=0|q� (� �-9��������Y��LA�P�E�@�+�@��s�D�� �� c�� ����Dn����R(����f �B ^���D+BEN1�S�g���*g��~;���~�A�q���[h�AQd/^��JX���� ��� n�`��Ü���S����,/@ޘ�TX��� -Su ��FRZy�_-,��d;�k_��G�S�ð���h�-Z�Ҡ��S�b�u��K� 
|:��b�Nd����P�`<4E��K�_�*d���'8h�G@c1����J%��~o�Q����=.�"ó\��f�Q�1YS��Z�nE����%��4OP�O(Ա�lF/P�(�0�Z6 �����aa;���(��p *h��5�Ȁ�7�0&9! Ze� ��+ j�]�B�B�7%�/� Iuӽ���3P1>ATó�B7�)�����P+�ah�pOd�]��ݡ?�Wb@�ā�aF�1 k�Q��y tw��~���Y���W�h�I|����z��c6htm;\�1��v�7����L��̊��f��m�EZ�P��0ݢDĝ+Ә1�ٛ��0B�a9��8����@\���(��8Z���b�<Ao�6ri_^��5,ٓ���3��!-�ll�����1�F]����-e��m��c�;W��m��^T0J�X������v-{��^V�Vyvm���Y����+'=6W{�� � ���ΘH�����r��s�-��v=1ε�������A˷'k@����.���4�@�d6i���+̱��?ӡİ�dߢ�U���х"��H��(��E���[��Ŋ�|���X��\������6^֦�����n��� #���T� 剅�҂����dJ�!�L�(EҔ>�{��i�)����~;�FT���D����ceKIhzz��%PM�c(�K��%����e �[���ٖT/��P�㸆i��z��B R�>j|4\[�t���#im*!���1 㷾����I�u���x�z02rR�N�M��/��\��8�������>��h/[_C�c�q�j��6�7^�28P@d��{8�@EBc�����% L B� �&X�ڄ�+�㚪ӡ�p��@�� �l���Xe��+.o�b���'� �� ��UXN\��W �8H1�y �n��Y���������[����l� , 1�@݆����6r�9@��;(C�5B��05m� �G�~s)���q:=�A�}Cznnn�h��S4�{ �a&y[`�8�BG��(�l�A�.�w����,/#�0 ��?�Pf�(��k�+�Z6��B��V�9��w��avQs�`~_�}�F"���Ӓ�Ȯ����m>��}�f�q(��Ɵ�IZa��L�|I���7r��]�' ��U��,�d ������!�Z�ɹ�<��a@�0 9K�aT�MO#Y���W �;0Y��7ޮ��f�f�o�v������8�^:����b��;�Ǟ&����(~��e���ͬ�n�z����,ٞ��IÉ�=Ǚ� H`1��'�i)�����ޜ�-������v����}g~� /[��q������� >�|�:Q0��%:���^�l�CA���E2�¤U�o }8[5}lNN�E��l��e����Ș��S/W��u�`���z�p�)ڧ�7��o�x�1�t�x�2���k�顎��Z��~j���nNN7ywZs���ǡ`G��e����O���ӥoiK�O���P��N�0� }B �� D���x����nčl∏�3�v��3�ѣ�` �DV;�P��8���<'�[!!�w����j �Ė;.�a�`M��|�k"ۙ>0bc�!�Ehy򷀛��.6�� �-���}ч0�� �͹7���H+�ȜgO���<�q��˜�DQ�a�<�5SF�t�G�����BrA�iL���mw�P�6�\����v�x[}K����a]a璗!YG�y����9\���Ʃ |{;v�ȷ����4v�woE� �6�}�Y�C�#��k�p'a�CBP�^��8�7ƙ��/�T/�f����X֥�j�t��Al&��t�4�q����S�C��Nf�$8LI�W�iԱ�Ҕ��M3UEy}Wh�y��t�\^����X�&H���٫2<�r9{�b�؛L>�i1�nH"���t� :�E� s�R5_�ẹ�c�P3���:oe \�'�N�t����3&�Ac7��q�L��� �K`&���'`�o���q���Ս�Q���?gݶ�2�)��P]�x�<��_�DcQ&����@ Ќ����_ŕJ ���h�Z ����ˋdp�$06(s�F� ��*��CuW�!0��������`���r�\��\T��Y��b�����0܈�tK�( i4��.zc��,��e�h���%Q��!{��¿k�����X���.B�6p��Oh��Q�N|��e����$����9D���]����=C�U��I�8ΣF��H�ߔy�����L�]��xy���a �.�(��)��g�r ��R�j���M��&�JLq�Lf��x ٛ��?������`���b�6����x63�� X�c����x�ع���%� �3�>��h��w��7E0�<���!X�n���`�f�@�W��|�&�N��|S�,�iI�����f�e��V �:>z�9����O��-+2u��{\t��>k"�9��hf�L{��B�}��1��v��Xŵ ���o�X錷���:֖y�<�6�Į�#�qpL�P��W��cX6�ŇJ���L&�, t�A_e&/���ʌ�m�7B�����MSh6���^���8v[1ӵk.�mˤ� x'� N�`RP�˂L�הO��͎�m�L�făi �0|��&L��+3��1: [HD�) a6�.)�Cn��c�6Q��z�C�B㱘���N���M���{��Sh݋��Vʕ�y����Om����x�.��)�Q�fh9�$8E�]^%&z�b�]��V%]^�rz٨^�^�N� �W�v�?�=g�H�w "3���oG��@�B�r0 ����Ǹic�{��v3-�.g�'�t*�g�@����kx�①�Z}�@��K��J�X\8M& #"m�5� ���E��GБ� ض>O�p�n�8�Á`�2 �0�v���t������ׂ�͗j}Mʘ :��5e�YZ��᫔B�⨉7��m�(ċ���c6LTY3�t���d���*U z-��S����D����wFM���I�T��e�n��L2�Ⱥ=�,���4�w2*��%��X��$�?��_y��A�}�N�w�3*���`�&mO��XD׹5��ja�����mk���G�� b�\S�LyQ��;}��%Z�Hߴ�P'R*�9���2�_�؇L�� v�b]�/��Ō) �I �@���K��,�ҷ0��>�� A��ȗyه@�4ի���a�S���&��ҭ�K��p��N�/5.�)���@ ���[C�e�]��_j/�]-���� �NL�|�4 @EK�K�ފ�,\�����Y1`���A�1]�W� Sc�k�8H]����D�lo��(������D~���T����h'���#Z��Ș����J�z���-��(KI�3���v"ٴq�P��ԍ ^3��#K^CƦ� &�Y/���(C���FF-����Ic��. 
����� Wi_E�A������G��#kh�u.7Uo���ɳ����Y���.��)��^6���U uJ_�]��n����\ R�I�h��^O�Gl3X^��D/m��9E-�C(�`Y^ ,�B��DSo�'�1�R��b$�o�O%�_x�t�z;��0mI ��,�mtF%���:��9�� Yzv�fe� ���3��R��a�5EgG�Q�#��&��tа�I�H��HJ��"n��nn��v|x{v�[be{0����cO����i��=����o�,�%��|�w���q����',�x��6�mT��62 �aJi3��D���u8�r?{��ؙ��#�*���2|f��j�����~��I���m����{~���[6Lo�!D̯(�����?�-���҉���d��q��ٜ$��)2�జ�[��k)KO�l9e�t�yH�������G��CSL�rkN�8��V>�Q����j/����[�9g�W�˱ޜ>�>�����7ݳ4�t��N��0@�[�ϐ~��`����D4;����'�(}�-�'f�X��n�cf8$�wV��h�Yrp(j�dZrh��o<0s����\\ha=C���j�#���M9��Y(�A/�_wx�`�X��V�b�&t�2�#F70�8zj� ���П��;H�@�(�e�+;{�A2l�y@��5,+���&ӑ�E2��VJn/#L��rfH����:�ij������M6��|[�bo�)����&�go>�@�ah��vM�"��u��Y(x�#NA���Sg�|o��3�w�~�ُb����L(�� h�d�|aOd^WBgT�� ���_b�!fu��*m�����pԸC�)�I~rT1ryys$zu�o_x��=��=�o��� $*o��k.f��K$��t &TB�M�����c���w��/[~���a�H�;Fi��,���wně��6�̧��z�S-ݮ���Z�0dM�-���'��Mͼњ��`�~�ytJ���AI%r����}1'�I�eL���+6nU~+����3]�� R���9���6�eb˸�Gc�W�5^���<���^ N�B�t��M����i(B::��+*i �%Pj�f���=?9�M$��_8����w��$�Y���‘�Z��f&P��[�6?�Z����A�J�b�E����Zx{���=,X,Ƿt!L��Z�����>�� �{�(��:�rϴɲ��O��M�4K��m2~�껦d���S} ��J�v�C�����v[�-�� M�Z��Ȅ�m���&��H��L��є�G��ҭ���z��eJ^������7�?О��9��#���;ԏ���LP"P�pLQ2ǿ��/�����������X(C�g���or��-Zmh70�;���� ���A�=T�N*/��Jc�@XRO�=� ���$�}�4��t�t>��%��tD�yG�]m:kY�A=�� �����J�kY·�RU�RP���HDj��aD����`0"���/�fG��0fMUa�h%��/2��Z���!9ҝ�jF�Rq��M���_�UW*�V�� �U)�����ܴ>�)V��$��I���fj'i �'۔���ݳ'u:}%�J��vk1փȆ��KK9�7�vfg��}�x8� �*u%�AP��Z�e4ɶ��@?P~���3�tY�=f$�/�ks���H5Q ��Hv%򯉩<{���Jm#�e��$(*�� ��-��2���+ ��=�VEcoK ek��/(!V�Nd<�s}� ��UL��ݴ-�'�E�q�]"i��CLT2|��;�H'{1q� �_ff[ׁ�LT���/�a� �TV��k�_�u:�u�h0��^�3�(���.���Rmh�Vq�z�L~��`�����d�R�v�k�Qr"=F��f��W�� ;��`�[Sz�V�u��(3n�C7�o��&�{�*_z]N��hDc㛙L[f pLW����DM�exi7 ތ:,"X$F2�WT��v��ˣ���G�+�SeK�Wp7�v��_"`� Ǫ��PL`���_��� �>��:.!a8~go k&��gݛ[G��!��k^oP�6���_� ���y �y2����"Ln�3���$7&7�25��U���F�+�xk`vƛx����"2Y�&[�W��M,��ZA��M���!�M�]}��)Y��K*̾��h��_����ɜE;k⩬��#��ZeTe�R7/��b�O�th�,������,�z@K+1K�wkY���N�t���)Q�h���,��j��ޚhsi� <���������,.�����.O���t ��,���7һI�R �d�j�?����'+�g���F7m~�����s5,Bi?�{͕&q7������J@g��:�p9�]D�����~ W���:�NȐO�&��`�б�&��Y����~�m>� f6`���W�J��L��� �+G��x����MI5r9���سWޘD'M���4�&��}�糬�d�;��E�������4�oW��rS�Ǵ�.��y�L/�&�9����{|���i+�]q�2�LF&O]7���ޭ}��=��AQ�M�uY���ꉬL�C�~��D�g7�ܐ2�5xa�+ޣ�E r�Ų����]��2y���U!��D�-������Y��e�sH;Vói�2�iL�W@Γ#��y����j���L%��2��m�� ٺm� |az�� ����(�+n������j�-E:A�{@�eB@O��'�f�?�Tf���Y��J�� �$�h�qj���U� 3�!2�uƳLt䷃\ SF�ZW#*���7�蓣/���a����{G7�� C�������#?��_��JJLs�����Y��8i���� �7�蓚*����~ݚG.���Y��}[�'Ա�I+�|���}��4{>��c:'�]��R^���|��'�R���������$�;��:���1c�f�& ө|�'���K�4"��'d����B�X4�Z_��9�DS��c�k��dJ����'��$��+g럐]#�G���=��k}�Ϗ^Yʻ�?!�F����Aּ8>���G3���h�3�k�{��*�Y:��}����k���F���\랍�1�k�1�d#��F�m�F����5� �d#��F�u�F����5� �d�����v�9����'䓍��8��&�U��`#G�����|����g#׺g#Gv ��xL=������k۳�#{~q�|B>������kݳ�#{~q�|B>�h��`����F����5� �`�t�� y( 0�9�~�dS��pSf������~ض>y6V5ò8���d�ݘԷ�Zk_��5� y�q~���X�=���Ě��m?��&�O2������E>?I����g럐]#�G���;h�ɑU�Y��l����Y��~��|~4��fyn��OȮ�� 9e��O���+�W�<LuM��Y^z.<�&��n&,7�:��O��'�m��m}�m�1�k�1�d[��c[߶c[G�����|����g[׺g[G�����|��ޱ�oݱ�#{�t�|B>�vz�x���N�e��ƫ�?�O~��ȮS�=�DY%�j_�~յ�J;��'�>������t����c�%�v<��e�Q9�5�pa)�{��>�͈�������=�?������0�_U�*��ұ���nI!�EV�E��X��&�H��:Πub���x*\T.(�*�5��l ��}�Y�Q!�����p�Ol��i+�4��xf~i<���s:��Zt�>� xڍ�>lR���_I�F] �:˼R}���^�T�'�y�s�AZ�*�ӘԚ��W�X�4v���E,�rw��<�⤰��}4��x�Z?�5�:�q4c����']�� ��־u�UL`�8��.x��^=b�2��j��i��E^�b�BSUr]��4�l��A�5ŷ����K�®���a��=XI�$%���W�KxJ]������A�4�S��j#R��E�I�0����� ?!f�֡bӡ�x 
C�X�uT���4Vh2��|��c�"��,������Y}eU�>8B�)@U��[�Ag�.%���8���u^uǬTN��G�T��������PU���kƧ�kg;�|�qMWTX��\v�k�m�˝�=�Ϟ���=������$���}(70��3�5}鼪�$]mS?��x�X��$~W��U�����{�MM�DZ�C1֥��8��R�Ӥ-�kw��� /҂��:T�4`����W�WN �l) ��n� nm�~��,��k?�!Ǘ����Y3G}�q��o��}�'���0�_�.�����£�#���E&`��ٻ���c��O��M���8�S��&�g�9�q�?7N�-eh�^�_0D־�!:��T}�����&7ڟ���?[9v�{ku|2�����B���њ�Z������'��v�/�����������cXX�I����O����[�S�a�an����Y`�[�ߺe�ş��~N�� }������S�f���ί\�t ��������Η��C�ڟ]MV�-��*)�8fMW�W[��3}ْ��U܇��^m�u{��,i#�6���p�x��5�x�d��7J��$�\�m�k���� �i,��a�$�k�fd� �'�@��2i�7�j5���G�?|�)k�����������{���ɷ���x�Q�+�Z�]���E�����o'5��OŪ] �����Y�u �z1sXh��U�2��>]p!��u�����/|c��ʹ����Q|#7�B�����)�|ǦhJ� �ō��{������q ��ŗ�֜j�[� |���3a �k�7Qd{���K�X�T�[�������������=��ͮ]C��w��e ܍~{!�Sb7/q�����@�Yw�:��Im ���ZxL]-��B��P Ξ=�Y�:�܉﹧�>�S����*��\g�t�� V��GS<��a����s��M�7Pλ_u��a�f�e��i�>V�Ԯw yU;U��ڙ�1<h��j����kx�Wx���2��S緩䩚��Sn����5������d�;ݠ�-�60�|�{ �:8�t&y��6J_�u��s2 �C~彛-�Ɯ������ �o�<,ٓu�A�;!m��ӯ?� ����l�q,N|�wz0��N����y�xޭ�k�:Ή�S?����؈�Xr6K�~?6��C�*y����+?>߰����u*� j����HC���_^?u(�|(*?y�|k��ζF{�Ц_��T������\>� ?���t�]�,@4F��u*�����2�l��rN:������ ���7�٬���jy�͆cF����LPZ~<��o8 <���ـ��:ऀ]�Y�X 6�P�^� �����w�g��k�!�Ry��m�a� ���Է���1��C�z����}N��Y��\�ߋb��O��n������b\��g^��Wwtg1p�D�漩���lYBS�mٞ��鈶L�{�����������y�� QM�ؤ{�X.���"��#/ t6� y5�k�DV�d"~;{_�œ���?Ȼ������}�;�k^n�}�cy����"����KX�9���.�Te}�n�ǫ�ݭ�L �1/� *Ѽ��P������ ϗ'��S�Z� K��_>�lk����5- �+�G�%����s �/#�N��q���q���!�xS�3=����Ŧ'B���#�h&ENs��ן&�"}c �骩mc��g�N�`�u{J�/ �!�>:�KE:�b�˒ 8���.�{��Z�;Ǡ+�����A��j�^��o2��q{��D��~����ؒ.����\��'�)���`�ο4=^#��Yr�,���>)�-xR+��c�Nr������[R6��ir��.7յ��Ό��>�0�����p���yn��j�8-���Z���pM��#��N?�Ǻ��,~�����vп7��j�t씦F֭`U���E};� *-<��~]iX�Q���������H�瞺��[xL}�󿗗�SD]�rY�"��=Y�O6����9���������[�7�𺞽O�^6�>��Í��%�C���!�3"���%���̿eѲ�a�' ��8�0f������?�/²�~p�şѼK�{��gH��"9��}RI�;�O����)��Q�jZ߷�[��;Yj� ����,���z��o�1�C��^�n�yG�R� �~�b�y��w���l�W��sο׭������o�c�=�N9U�.$�i��8���{۱��<�ѕ��>����g� �6��8����,𯞩+ �˫8¯S=���Y^s endstream endobj 75 0 obj <> stream x�UQAr!�� >04���I�r���� 5[ iA��`b2�˖���ӗ4���DF�0x\��3m�]�t�%�N���FW�J`/�Ė�.[��ɡx��6�����S)3Kٺ��pA�=��].�K�x��0 \�n��ε����>, ��@������%5��r��>>R�l%�7���>��V�˶��96� $��;�奔��O�nos�Tm endstream endobj 76 0 obj <> stream x�]TI�! ��+�@u ���������W+���� h��z�֛h���M��/�����{iNz���(<�������!� 2}��8���,?�\��"ϤE�J/m�Kg9d�[�|��$����`d��ȥ���l��W���se��Di���"Y����7:v��X�F� !����]z&�;�������j��P����2�B�6�S���@M�{��Z�#�:���#��h/*�=+-)��85�Fra/F�q�(�Q��$���8L9�D�6F˔��:�? 
e�*x뵸�.%)c~�������ޡ���w:���mXj�9G��IJ���-�Q(GMQ$�LȰ��a���R�ԓe����B���f�$"��{��Py���a���X1'����2�#�X�6�-l%�8(&b+y�W��)�|�]��o���?n��s>=�a����Ag(n���*�{�{�h����/A0��7j/�f �h��j�tޓ <�\>4N�cOf6陉|1t� �}��'\�><�k�b���w^�]�be���Z𛙓}�v�����tt��Q����u��������/�L� endstream endobj 77 0 obj <> stream x�UQKr� �s _���y��t��['ᕍ5�’4@`��]�M��� ;m�[H�Ґf"��$N����B������m�Ʉ*3Y�c���̶J�R7P �*m�*\%\~ �v��"�U��0�y��~36$ِ�r���4��q�HP�<��ʄ>؞�����޳zs�O~�X�氐MZ�zwZ�5>�:��V�����b��8�n) �[��>`�gyBi�L�)����g�)���&g� endstream endobj 78 0 obj <> endobj 79 0 obj <> endobj 80 0 obj <> endobj 81 0 obj <> stream x�]S9r�0 �� }����l&�b��6<$�Y7�d @��SNE�ف$�$*�4G�P/�ǎ��U�i�\�`�5���*�u����%Bz3 �Ő4�Us*��I�W�1�F�5�3T�����Q����3_SjM��-w�58�h^�����m�M;�*q��c�yvBdJ�"�4^#F�q��E���1����ݤ�}ܔ�ѓ��]��,zL�G�E� �V�*�$'�.���BZ6��Ka�%�~C_���<�6��J�����1��u��i7��3.���2pv�HF����2|Z��u��N6�;��QG�ѳ K'��Ƹ4(�2��*p,yZl"�e� ����d���/3�|Ϫ���rf?V Q vv�X!�����E�#�dz"w7������f ���1�ύ����N� �>����,���� X>����[H�0��R��[�����F endstream endobj 82 0 obj <> stream x�3�4W0P03P�55�T04�Pе4SH1� ��*�r�$s���HL�|\eW� ��ddhS��J��ԁ��q|*4 endstream endobj 83 0 obj <>/CharProcs<>>> endobj 84 0 obj <> stream x362T0P0�T�561R027R�523TH1䂋+X*�r�q�?� endstream endobj 85 0 obj <> stream x�U�1�0w^�"�TU���%�:����&\��%�<�K�]t�I*�C�h ܯ��4I�s͠�*:85��qk�Nz���� endstream endobj 86 0 obj <>>> endobj 87 0 obj <> stream x�MQIr�0��|`\,b{�S���_�g]h���D3����/$R�wٗM�k�a�f3D~H?+_���92�LN�8��sa�C��2�8�]A�p��d�%��+j�9�n�M�ڻ�֡i���koν�l�R�WHAK���5�#ɧ�d+� Avذ��)��:��O�{�!ɂ����*����� �{ U�u��[���>�u_�?�{�M�VB endstream endobj 88 0 obj <> stream x�=Q9�! �y�.|b��S[��?]�4C@ɇl!,���F7���Ԑ������ہ"v9�� Y�BG8袪��9C.�wS_��(RnWR�5J<#A8F�1۝��T�r� �4[�������E&]EVݓ�hK�[k�kOex�Z N�M�������y�ۓO�m,��e� Zz^� �х� ��`����Zn s�(89�6S�f����e@ �OH�m{�%�Q Cg�q�d�l/���|�H�K�δH���F^|�ױ��� ^j���=?��q" endstream endobj 89 0 obj <> stream x�UUI�[1����s1 I�I*�M� ��� $h<  ���&ɖ�<~�\��"�o�����V�,�Ɛ�@:������h8��'�{,�*M�toHw�&�zo��4�� ��4�A-K�&S7�E"��ЩKw?�0n3M��̚��*��1z:��ug>c��j��w �Ό8��U�@T�d����'�IY�1hD��ZˤS��x&y��W�s�qL;V`s�p�U����^e\�Ƴ,j�]����EL)�k�U\=Q�j"<��%�d��%�hu0 ,U�`��K#�#�.��U��S:~Ml��Z,m��¼Vt_��Bt5V���sv}2oФ��w�`�C�3�ֈ��E��tw�$/�\�������|�K&t�����Y���|�E�wY�= p�dS P1Zb/WE�ɓ���΄�3�=g��-$�;�|�.���g�0βي�����7vЃ���N����U+�t����|�njџ�IvC����V��#�։;�X�X���x9@����t70���0�=�U�E�L��ȝ9��q�=�`���傹H�' ?"��n�Z�^�Ze��4�V���(y='���=5�)^�$qlU�p���|Y(!�(�$� � %3�~+Gﯝʍ�7.5dkP�.78՛���)Ƴ��μ�^;Ý�i2D�$�.�m�Z�͹�VK� +��"K���;uz�c�9-�^��� �uw��]�fs��8��߉�ĭ���x��,r�� �[17�҆�������\��Vw�p�KčY�U>�;#ٍ�7���?�]�� endstream endobj 90 0 obj <> stream x�]RKr�0���@2�|l�'�N���0����! Q�R Q9yH�e�/8���z���(�请rz!R��F��ѽ��r"�))2���Sj�yq9g�n��n�F��ş���i���+qd���Ȥ�#�de�x15�0mVJ��uq_�P�rw� ��>��F0��h|���(@EyƔ$ o��0mv��ܟf�6Zi>}V�L�o��i��K��ق��5 �g�&���r?����!>&��p{u����^�U�v�T�cݭup6hs:��! 
�v��P�[�s����꒗d�6�e�<{����THjx��R�@=4Ƣ��r�ў���#NZh�f{��@3$��A��C��@oG�o�����h�_�%^�M޹��D�ښ1����P&�۵]�� �ր$K,<* ����T endstream endobj 91 0 obj <> stream x�5�� �0{��,����>Q��ٿ &r� �Ba�r�Rc�LB(^1'�[Ô(��mc�����ѷe����_6�c����v�[�*�Ny��� endstream endobj 92 0 obj <>/Font<>>>/MediaBox[0 750 1800 5461.9199]/Rotate 0/StructParents 0/Annots[158 0 R 149 0 R 142 0 R 86 0 R 69 0 R 51 0 R 148 0 R 139 0 R 35 0 R 112 0 R 48 0 R]/Contents[74 0 R]>> endobj 93 0 obj <> stream x�]��j� ��>�]N�&-L��Ȣ?4��I���1��}Ց�P���\�^�kgt���1������K�'mHU��2�)�r��h�%�ܙ���gT��78�����w��k3����ǹ_���&#M Ǹ�U�71#�l;v*�:l���#�6�P繺��V��D/̄�3�oۆ�Q���F�#<�/�H2V�̖ם)�Nm���2�X�R {r�zC�rڔS��t�%W:��z� endstream endobj 94 0 obj <> stream x�URI�#1��+��+ش��sp��ڀ���� I.�.� ^�wPU ��XɫߋƸ���HU�."�A�D�/"x_�r�w)��@�&^Z�F_S����r�%i�ۆ5x%c�'*5iD�+�hiM�L���`���&��=�B���}q��Qu^��:�D���t-T<��Gil�*eos����/݄�*͗4�v���l2� 31�\y��˒\����5�kt˚ Ǻg��kq{�~OX �Lj�+xέm���܎�"8�:m������PE��s�3 +�x~Vɺ�HP*{�Z��d&LJ���ڃ�ֲ�.J�W��T�C�[Ƴ�F�`+���9ʘD_�̿�y8�&�x��2�/�q���t�>�LF�l7jJ��ɱ9�.��]l~� 4�1���w�7� endstream endobj 95 0 obj <> stream x�UTK�! ��)������T* ��۴�_\���>-5&��f����tF#n����D/fyI�rI����O�����/"͕�����;���66_�?ņŊȝk�V�id�Sh֭�l�h_w�����c����8qy��Q�y8�#,%�VI��=�\~gc��c��%�I� 3 RN8@�V8�;5u�D�a�.���;3���Q��%��`��T��� �ͬ�W)���w���\R0� ����.���Jdd;����Bu>��5Ƿ*k9/ �/��߉ő�Sk0�f��U�J���G%��q:�K�\���\�R�J�e��� P�c�5�Yc�:uUʁ�0�f����u�P�����k}�s���Oʥ��qU맭���k���s��@�2�:=�d#;F��8t&���f>���X�8o�*Y]N�5X���uP��N`ޛ�,ނ�B��?�l��)��)O lH_5�����<��k)� hJX3dՅ`�ڲ�zq��C0ͭ|�d�46б �d��9�x630�� N�g<&O�����_ �[|'n�$�O?HL��F����{����l�u�����K=��3��3��^���u��ݠ�#GW4�2���$�L�*y��q�B^*�kRY���T���$6:�&��4��7������ ? endstream endobj 96 0 obj <> stream x�URA��0��|�68�Igg���&iڜd,d ��S#��2���E�� O�,>�x�ƀ��k��}TY�(��6Q�J����枨S�})N�j�GqI5����D_eɛ0��`���,a�F�'P��("=�E`�+��J�����]nD=}��HW�{N"l���1r��ؾ�=�F2O>���7�־��ն�kb0|��U$�9�� 0v�Z�J�y;���皈��)�(�K2e�b��O���7�l endstream endobj 97 0 obj <> stream x�]��n� ��<��� hk�&�d�v���>�hI*ăo_@�Mz��&3��Ц��Zy�oΈ= JK��Y�@�qT�d9H%�^�[L���:{�Z=RU�=tg�V8�����W'�)=���B�-�~��#u ����� �h�[�ʯǠ���X-B��l��0g�:�G$c5T�kMP��bS���T�m���$΋佻d����e4-�M��On�����[R�8B�ͦt1��x[�56���Ʒ� endstream endobj 98 0 obj <> stream x3�0R0��C.�� endstream endobj 99 0 obj <> endobj 100 0 obj <> stream x32�T0��C.�� endstream endobj 101 0 obj <> stream x�MRKr!��)����#�y�Je����sc�4��$m��j=�����I�R޿�:Q���e��d��^���!��䅷�^�G��#F��8�9Fq�I"*�'S9W:��T�"d��L}�VRj���v0�PaC����G�9��٬���w'���8&̕b�8�� Ȟ¸�( p����z�� W���#�~ ���E�2f�_yCN72ZM�$ 2����ma���R�:]�z\�=47�i�����ř_�ڲ?� �����t_Y��j7Ge����ȫ�e�]�X�l3�)C���x�A��Wޙ����6iDSݫg��\�r7A\f��f�ңV�YQ���-�rL^1�8pS]����z(� endstream endobj 102 0 obj <> stream x�]TI�! ��+�@u ���������W+���� h��z�֛h���M��/�����{iNz���(<�������!� 2}��8���,?�\��"ϤE�J/m�Kg9d�[�|��$����`d��ȥ���l��W���se��Di���"Y����7:v��X�F� !����]z&�;�������j��P����2�B�6�S���@M�{��Z�#�:���#��h/*�=+-)��85�Fra/F�q�(�Q��$���8L9�D�6F˔��:�? 
e�*x뵸�.%)c~�������ޡ���w:���mXj�9G��IJ���-�Q(GMQ$�LȰ��a���R�ԓe����B���f�$"��{��Py���a���X1'����2�#�X�6�-l%�8(&b+y�W��)�|�]��o���?n��s>=�a����Ag(n���*�{�{�h����/A0��7j/�f �h��j�tޓ <�\>4N�cOf6陉|1t� �}��'\�><�k�b���w^�]�be���Z𛙓}�v�����tt��Q����u��������/�L� endstream endobj 103 0 obj <> endobj 104 0 obj <> endobj 105 0 obj <> stream x�MRKr!��)����#�y�Je����sc�4��$m��j=�����I�R޿�:Q���e��d��^���!��䅷�^�G��#F��8�9Fq�I"*�'S9W:��T�"d��L}�VRj���v0�PaC����G�9��٬���w'���8&̕b�8�� Ȟ¸�( p����z�� W���#�~ ���E�2f�_yCN72ZM�$ 2����ma���R�:]�z\�=47�i�����ř_�ڲ?� �����t_Y��j7Ge����ȫ�e�]�X�l3�)C���x�A��Wޙ����6iDSݫg��\�r7A\f��f�ңV�YQ���-�rL^1�8pS]����z(� endstream endobj 106 0 obj <> endobj 107 0 obj <> endobj 108 0 obj <> stream x�]��n�0E�� /�E�1�D���H,�Pi?��C�T�eȂ��Ǧ��� 3���qTէZ�3���(�i�kea�V�­�$�T�r^#��CkH���2�0Ժ��F�.;�v�����D�W����F7�U���n�7 �g�HYR�szn�K;��l[+���e�4��}�����ɴl�o@c%�KI@��8�k'�ZKD|q����8�'�y�8�9rk�P��gU`w��S�C>E�Hc��y�x�>\9>������s���S�g����䷯u��� t��9��R��r�!a�WP��p���Ȼ�n��%��{ �;bF�*|~-;�[ endstream endobj 109 0 obj <> stream x�=�� �0{�`K����DQ g�6��T�� ������4�� L �a��8'61 �R��B���Jbض��e���'����Ɣ��p� %/p endstream endobj 110 0 obj <> stream x��W}T\����۷�����6�>���7,�I ��ѐ@�F>�!@ �Dۘt5I��*��i���9�ݓ*���X?�S+Fllդ���T[�U��蝷��K<��iߜwg�73�{��;o��&���� �[n�D�Y|[��[�i�� ����{�M�/��K:���?O~��Pv��ux ���M]}�_<�=@�0��n�K���������v��Q�[�yDg�9�K����+�� _��� ���JK��ob����c��n^F���on��{1�6c?I��v�k�?��c0g�U [��<����!��&u%(I ��k�����?$�Y�v�j�cA��OrȐY�P��Z����D�pcJؠ������c85Y;�6�0�͝�,�X�O| #�ˡ����� ���T[���v^�h�,�.��E�Byd�0Ċ'�Rp�Qc��G�n���NkR|F��)���^�SCIK圔�(y�w�/�|������k��i��/�w��S!S�����)�Shp� mt+��c�ovx<yT`��4^N���W����Q���N��z�:7���Ѫ�MQ��p����)O�1b}c�5�V�bN5fhp�*��� ��zm^� ���,T� ��Sd�^�p(6&p(zj��Y��E� B�r���<���LN_6�i�6�V�o�M`g�� �� zlԁ��wNI0/cZ�8i��);\��9��v�G5�<� �C/2�V������I!QX:� ����r"T���`yWȟ/[���9⢞�w{�y�� K ��?�W9��o���;]8��q�a�5����;� ���&�E=�u��r6g]��@L+=~�9��.P��A�h��~S@p�}���<"+�l$2�RF~��r�y����Kb:��J��Y=��N?{#��̍�6qM;�&��ƹS���V�E#��2��h��������u+oھ�P^�-�`����""�b��4.����|�H� �ĒbW�����4�G�3w��'ϒR�:����r{&o�^y����d���%U������c�m>�����U�J��6���/Maf��h�4Pc�؄�M���#"#2#�L"�b�VQ�䋏Tɺ��2�H��+�"�J�����,G�^F]�]���/ў�,Q�����z���[Zv��.^=���cX���oU�Q��=|ϐ�a����F-d�"���OZ��`$I&�<��CH M?M ޠi�U�w%)\��Ҹ��$T@��\&ۈrը�� t�t�Ll�c�|H�ގ��]�_�2g�s�[v�w������y|q_��S��!R`ԟ���p'J�0 8-�]:٠9kr����dnȿ'�\�oN&C>�a4H>i�?Ll�A�%���K��4Nۉ%��"I��G�R�� �����M�}|nvm쮾�M�Wt�fr�n!_,�G���ys���n���F��ZzX�`��L���q�0����)�����X[f{b\� �DŽ� �h�4��)�e%e��H�w"��$8���}���+���}ǎ�3��2[����=���`��Q������66��lY�B��l�|r�N �d��J�F,��'k{�*c���>�|�$H����T?&�H_��赽#\�޳^��.�F}�8 g�ܷ�w�#?E��?w�?��2�j�-+;2�B�l�ؘm8��Ǣ �`�����0aڟ�I>�1�|�Y�e��T4� �T���Ց�Oʴ������7}��o�B�nXS՛�������z= Hn� 뺯����_������\[�w?���k.[_��D͝�v�j034�!,3ģ�!mb��aێ��F�D��h� a��H>C�P��2�|-ZD��gyA&G�_U���JuZ|�]����ٓ�K�o%�n� �b9�W��.!A�s�<2�`�%�#� }0����� ѣ�˒�]��(�2q��Y[OT�r�_���n'��>n�� f��"�aE���( �o)�����[:� L��1�ĸ��O5�� �2�snM��k�!꿇�zt�a��km��ѱ�V���8�wB-�qi,�.��z�n��Y ����s����&��x���%�yW���x=\ɾ�ڗ��6�?␜���<δ<���:<��m[�#�~��x��Fڇ>�:` �W�Mv3��bȇ"(�r�P� �+�k���{/�w�q��w�,4K�Vc���pU;���p=;�BI���G k\� �(�v`7>`˹�H� =ԔK!�$�˅�HѲ��X0�>��T8�q���˄;�m�p��n��Ε�Zbf�U�f-}�+'Ua�(H��Y��r[QW�U�,a��3�<��[�7E�q���_K��TBR/��#��꯴�����)�`�ZF�����c$������Fl���8l��X����4+~���F^ed��J�]A2�H�I%��?�i`� endstream endobj 111 0 obj <> stream x�UQI� �� >��{������Z�$�)�D��A�i� �M\;y��f������@�"$�9(W�' �eA��Q��v4d춗56�}���F(e�+Lҝtv��Ӫ6��JXM��a� R�|�S*����Z.�zU�����ߘ�b�&��d�Pʖ�3=�z�c�H"�.F�K��'�?�o�ԎpEe��)��^�5'�0�Zv�O{�")X� endstream endobj 112 0 obj <>>> endobj 113 0 obj <> stream x�UQAr�0��| [���ٝN��_ �I��X� 
�n�I�N2{D�т��ӊ7�OÐ�=!��y��Qp�N B�འuz5�\��v�<׷�|��mn�ӣ�T�9B�Hv��^M��0뷒��O��W���cge���� \��F�-�C�����EeC��LD ��YHkC�ӳ�^ ��������ƶ�mӪ 0������ Qj��-����鿖�i������_� endstream endobj 114 0 obj <> stream x�UTI��0����]b���S�,^�M¯� ��f�G߭7�v/�wZ��~��c����=��U�Y"«��;�7�?�����[&,��f���t=�ÀĞ�֌/˄wM�s�n�hܓ�t�Fm�H�cDhNWnL+h�*��3-=��X�@� X����]���$��*��f!��U����Pa�@GH��F�"��� ~�(�z�~���ThG�+\!J��$ E���S�Aw8S(5)0aXv��"�W6*�J��婮���R�n���~g�DF��9�r,m+G��K����ޤ�����R��D�.Ҟ���]ª�=�>�Èm`6��_�x�ت�0���$F/y�d�fŻ�q nH���fP�Ti��u��� ��舩){��^��w}��{��{�"2&W��e���U��1�]\��:�p;7�@ś{�3L߻����������2�;�u��q~> stream x�MTI�9ܿS�z!& ��G/���:���HLJt�6���m�:���>�B��!߉o��D�I;0'j?8w�6�u��|� �ԯ�^!�����,4�7�v��aB�����T�g��4[ՙ��Q&�:��v�@��^�B]� '�U)��k�h T���R�g��t���j�������@l��� d��"�5«Tx^0����F��vc�p�Cw�#p���� ��"��u�sUoܰ8�:�1�)^��rg#�<�W����EɊs}A�#��B�:A�m�Q���� �� �6 �iD!&;�z�.c�-���*���l���7��F�,vyBM������B�¬��--UrքH���y������������'���z��)"�d�J��� �S�vTc��z�� ���]g�-����Oഹ��kFx�LrҶ�Ac�j���G�� �ٲ--oJ�QL�ݐ+���l������?Ms�]���'�� ��M����m�����B9U�0!:�m�w���|L�O �#��e�u׀y�;u�|��Af��1�E3�v���b5�ƚ�z>Sy�>��FF�m���Т툭Up� ʹ��*oK;�ߚ���R,���Ǘdt������� ,a'� endstream endobj 116 0 obj <> stream x�M�9�0 {����3���oq ��<;ڵKGªX:Z�HC����F�J���]�fqH^���]?\*�O���8�� endstream endobj 117 0 obj <>/CharProcs<>>> endobj 118 0 obj <> stream x�MSM�� ��^ y�y�y�f�s�mj���@�""�zn��7vo���B��x>��a��r���n�G��R���lPe�� ���$��y�����XE�G*�J-M%,Ce���^�� ֆ.qJ�A���ܪ�]mZx�T����nukp]�sQ�Ug9�(�Cb�<��ٜ��d^(��'�� ��d����/�gޞ���'�tA7j&5��$��L�Ԅ�������$+?׿���BQ����0FX� �d���W�V��7�\]�"�d�~��(&'R|S��89œiOa7��Rj�d�*�a�O����b���-�q�Un��E�Eho� ��IA̽��R���wl}��K��IT�NV{�U�v�������C(О6��?����o�N�� endstream endobj 119 0 obj <> stream x�3�4T0P03Vе04P067�R �� �\0�.� 2����4Dc�����Ɔ(Lcs�dؖ� �4.��� endstream endobj 120 0 obj <> stream x�UVG�1 ���������u�|��u7HP�:H`@d�!�Vj�Z�6WqE�*���C������O��&~[�H-�g�zM�P����c�i� � %��:�G�f-M�!-��P9��V��io��^��Gڻ���=/����M/H�1O=���48��x���8:���Х<��D���xA��u���LI=�^]q�u��}��a�� �e�vAk� )̆�2�j���Ua�קk�����Ŧ亝:K{�A9l˥+�~|�.�����eW���{/K��_/q�ֺ��7��H#��'�}�-�� �UtF\-0cUb�6&����Y.5�eIL��#Ϗ3n_���| ���AF66�n����3ʞ����ށ�J��' �]��$�]v^R��u ��ڎ�T��c��`κ�U�v�_��<���=��i ��u �x�8L�҇L��x��b� �"� UQ|d;g�M��:>��9���\k+�ւ�[����|ݥ��MN?�L�3ᓊy?0��Sj {p�Ug6 q��z�01g����KE�jO�� (�-�@1a��8P��S+`]�� j0� �5=�J���H+@�YǓ>4_U"�Kjr�b��1�xc� �FI�-����9Yﮜ9� b@���'t��of�FA�'�'jENi�|c�<� ��IF?bΣރ��w�[�Z� .�}?H� �S�Wx�lOn��5�-� �׾Iw�2��풸�j���)P�����y:$�̐��8���H@:(�z��!z��7�K^>����|�3�Ĉr��݌�}�����5޶�E���+��^J����@un�$�����n�q�^{�?әΤ�E�A7�+|�3�۷?NѬ$oo���mv�k�� �S.�,�;q�p��U\����G��hb�s��jM�1������6����m���;�Q�;���w��o"�0����"� endstream endobj 121 0 obj <> stream x�UTI�1��+��@zOO8|���$9ܗFK*I�.�7Y��Mu�վ��0�<��K�����F�a0��C��VjSڋ��6sDvE� X���jS�� ��x�g���&��<����C:���,�����XyP�p�-�/q��L4����t�U��~go�4�1���Y2/2��&ܳU׬'R����eMT�CRg��0�h_=� 8A ��� �T�N ϑ��&N�s�<�B$� �� ��Y|P��C�Ϙ^]k�����]uO��#q���Iy2 �Һ��N.c�*8f LlAg������cĦ���V1yĉ�@NM�࿾l�'��(��)T�i�o�� I��N��O���^��8� tnb��MO?�Y�2�c�C�ӻ[��M���:J������x����^�*Y���|��X1�^�z�*�&ͣ6d��]�@Z��nle�r���zu�T[�d�K��C<��LcT� �[w@�vd0ԏ6Vħ�S�J�d1>,�> endobj 123 0 obj <>/W[0[443.35938 0 0 0 249.02344]39[654.29688]70[562.98828 0 563.47656 540.52734 358.39844 570.80078 0 265.13672]82[560.05859 565.42969 0 0 0 514.16016 337.89063]93[501.95313]]/DW 0>> endobj 124 0 obj <> stream x�MTI�,7��)t���@������N@��zӠbH�Ay�9��c����ч, ��!���ǩ�"k��Ǝ�P >1�+:V�,�җ�͠�6T�A��(���� ��RZ<4Sl�cM�'�w�LD���^�?��ߕ�d��W~3��H��gP��+�f�L��>�=؁C�y�:y�{D��W�GUo�RQ<��G�u2/��눬U�n���� �6�)�q��#T�@���=(,��gC"_�����r�*�o(���2��!��di���-�8-Q��$M��Ni5,΀h-L�p�Flh:; eQ> 
Ѽ�R˴[�f�Ki��u�r��a(�(�U������4L�$�Ȕ����b��R�^�,AJ���3}�kV#w*��S7f��4�;?tz嵞5< h��E���3�ְ� �A��p�ua�"��.0{�R������n����,G �~c=��N�h�=�������χ'�%�|/��GC=�������V�V4���21I�xۙM��������e�����>�X$�&7%��%x3��X�i��2��:�=��� K�}�� w);܋M��N1���7�^/~�A��]�]z���k�r�[��J��[�����l־�ۄ��u������_������1O endstream endobj 125 0 obj <> stream x�UQAr�0��| [���ٝN��_ �I��X� �n�I�N2{D�т��ӊ7�OÐ�=!��y��Qp�N B�འuz5�\��v�<׷�|��mn�ӣ�T�9B�Hv��^M��0뷒��O��W���cge���� \��F�-�C�����EeC��LD ��YHkC�ӳ�^ ��������ƶ�mӪ 0������ Qj��-����鿖�i������_� endstream endobj 126 0 obj <> endobj 127 0 obj <> stream x�353T0P03Pе04P050�R ��@t.L<���� �465R�51�3!� �HL���%�624�i1A�DEW,"^ endstream endobj 128 0 obj <> stream x�]��n�0E���Y��C�d��Ҵ�X���|��Tl˘_?h*ua�����8�4O��w�X�!��Y-�!�8 I� �`v��ͦN�ę�u�85rP������Yaw�����F�v�K��v��'�(�k�8�J/�~�&�$�� wya׽��)>W���8��0�q�C��IEi ��Z��_����:C���))-ӚTy9�|�\8.6ͳ�2��kNy�C��1�3�ܪ��omm�^T�b�lSƜo�/���S�y����h��˟��� endstream endobj 129 0 obj <> stream x325S0��C.�� endstream endobj 130 0 obj <>/CharProcs<>>> endobj 131 0 obj <> stream x32�T0P03Qе04P047�R ��� �(�+� �8X endstream endobj 132 0 obj <> stream x�]R�j�0��+tL��y�1�N >�A�~�#�SA- Y9��+� T ����+'usl� > stream x�UQI� �� >��{������Z�$�)�D��A�i� �M\;y��f������@�"$�9(W�' �eA��Q��v4d춗56�}���F(e�+Lҝtv��Ӫ6��JXM��a� R�|�S*����Z.�zU�����ߘ�b�&��d�Pʖ�3=�z�c�H"�.F�K��'�?�o�ԎpEe��)��^�5'�0�Zv�O{�")X� endstream endobj 134 0 obj <> endobj 135 0 obj <> stream x�M�� �0 D����d'����ٿ%���~OgcQ؅-Thb; �[otS6��D!�+8Q������`U�G�‰�j9 endstream endobj 136 0 obj <>/CharProcs<>>> endobj 137 0 obj <> stream x�UTIr�0����]b���R9$����N�� t�B�6J-2�ic-T�����fw�]�Z�\��!S�r����J�@�G9y��}kz\����Ў�گ� �4ۢ>��Mݒi�;,�ve[��N��V+����鿔 ,l�f��w&L 6���}� (B=-Qͦ>�t ǒ*a�����E�� �=֩Â��A�j������½�3��c��R��W�xC�#`��f��4C��Dp,�l��P^�+�����S:7�2�I'*tQ$֤A����:� 젻:����e����G�t�t��B�0U�:�|'9��P�DԂ���C��Ţ� �5 d�،���.�лQ���?Q������7*I2 ��Y$�7=N�Ԗa1�� ��ܞ?&?&�}]��S7k%?�+aq5�1B��Or�؟�q�|�<�}.������}|��/���� endstream endobj 138 0 obj <> stream x333W0��C.� endstream endobj 139 0 obj <>>> endobj 140 0 obj <> endobj 141 0 obj <> stream x�MQIr�0��|`\,b{�S���_�g]h���D3����/$R�wٗM�k�a�f3D~H?+_���92�LN�8��sa�C��2�8�]A�p��d�%��+j�9�n�M�ڻ�֡i���koν�l�R�WHAK���5�#ɧ�d+� Avذ��)��:��O�{�!ɂ����*����� �{ U�u��[���>�u_�?�{�M�VB endstream endobj 142 0 obj <>>> endobj 143 0 obj <>/W[0[443.35938 0 0 0 247.55859 0 0 0 561.52344 732.42188 0 0 341.79688 347.65625 0 0 196.28906 275.87891 263.18359 412.10938]20 29 561.52344 30[242.1875]37[652.34375 622.55859 650.87891 655.76172 568.35938 552.73438 681.15234 712.89063 271.97266 551.75781 0 538.08594 873.04688 712.89063 687.5 630.85938 0 615.72266 593.26172 596.67969 648.4375 0 887.20703]69[543.94531 561.03516 523.4375 563.96484 529.78516 347.16797 561.03516 550.78125 242.67578 238.76953 506.83594 242.67578 876.46484 551.75781 570.3125 561.03516 568.35938 338.37891 515.625 326.66016 551.26953 484.375 751.46484 495.60547 473.14453]149[656.25 0 0 199.70703]193[247.55859]]/DW 0>> endobj 144 0 obj <> stream x32�T0��C.�� endstream endobj 145 0 obj <> stream x�UTK�� ��\�.��3�RY������,< ֧�4V�֛J�mz���l_t11���\��K��L��nz�QW���Y ����nY�ՈvQљE#_BT���"�m��F��aZݠk����C�����w#tZ�����ԣ)�Jɂ i��z�,H�IZE�H� 1�XH%���G]4�G^����\�'n����$�gi=�҉�r)��R��<�2�w@�&Վ����F�S�d���! 
Q�h�� ̧�Ž���1�DZ�{�S:� ���¾oE�9�eO��o�*yȡ�f^���ib���H��w,�}M�E�44�`4��r�V�y{}fF��gd�_3�.��Vߕ����u�K�3 endstream endobj 146 0 obj <> stream x�]UK��0����]�'�yf*�����B�8��B|� p덬݆�ب��o�p@�?v~(~!���3��r�G� ھ�) �q�ɖ�h�G9lC��ԇ�}]���U�=Dd6�i�]�a��Ӓ��>+��@�D�� ��g��/1�,xNF:SOg��=��0*�d����hy���(�,���q������n���\'���H��ũ��Ly�IuD��M1y'O0��(�O�b؀�I@�&��#���4�ʪœ ��E�5�)�y���IJ��m*��]���� �۰ �R��a5X���~��e�yLo��Ĝ 6��{�G�<�� �=��8q�9�� �7�[ك�;�$�����V���M�� A� �Z:���3u�+$_�,��bf��E�3 b/C����K��~�`�����������g��a�ڈ���'*"�i�U(�l�s �Z9Ά��J!l�}f��K����&5�������p-:�^~9�Qى��mEZ[N�'TƱl�!P�G�%�M��!޷,ч]��;9_ܟ�f�i�����1�7��|�����Әᙩp���qbP�oVoWTqvF>{+KA��e����m{V9k�%��!9�O�;H'�\z�eb�e�W%����(�[�>h_�����vR� endstream endobj 147 0 obj <> stream x�323U0P05T�5��T02�R �L��t.��55�P��� 1 �4XMWH���H=��z$&H>��+� �t� endstream endobj 148 0 obj <>>> endobj 149 0 obj <>>> endobj 150 0 obj <> endobj 151 0 obj <> endobj 152 0 obj <> stream x�eTI�1 ��+���Sz��R9���.R�S� Ts�@Ȋ:`���>�a����������1��x�@�����\��L�Fq���-����Y�#I�k72��hj�4I62��D*6� YL��5�&�&�`sEჁ��UO�nX-ֹ�Sͫ<.9A��_J&:�s1�� �U�d���<�G$5��w�܅�2�w�(�T�q]��ǚ^�wr�:Sن!� �4��@6���:+P+X���&^cS��A���$&8 8J��)��`��s�>C��w����f�bR5I�K�*Щw���&���-Ш�M�Ð!s�P��pD�cܰ��9;B����G��,��> ����Dp.3�\0��̔���F����Ȏ��8]\;; #��UNFv�5��Q �� LPثA���70�[��3aC\�GH=�x�[oє���b�.����+�U�vPz���5��s��~]�R� endstream endobj 153 0 obj <> stream x�EQI�!�� >�!��{�cb���v�U3F�, $h�I�� 2 ���2��>���Ub��3%M��Bq�n?��ќ��UX��A[��� n�o/�(����@�N�am��'݈=���}2ō�:�d��C�ͩξ�tQ]I�����*�����xs3������PAk 9t)�P���ܔ�n�IJ610ɍ�^ʵ�ۙ`�.yh��W�����Ϲ�o�)_�eU� endstream endobj 154 0 obj <> stream x�UQI� ���@���'U�C��kg!����,�����2�t*pWco,�ꯄ>&���?�Lr�R��a5��vH9��J�3�AFuV���j1+{EC sL�i �¢��FE�fVI�:���&o�(y�C�fQX��*t9aY5:�dwW)d[�3� 9%K������@#�@�ro/6�h]o��X<8��o�n���-��bqUn endstream endobj 155 0 obj <>/W[0[448.24219 0 0 0 245.11719]20 22 549.80469 37[637.69531 0 636.23047 641.11328 556.15234 541.01563 0 0 268.55469 0 0 526.85547 851.5625]55[580.56641 583.49609]69[532.71484 0 512.69531 551.75781 518.55469 341.79688 549.31641 539.0625 240.23438 0 496.58203 240.23438 854.98047 540.03906 558.10547 549.31641 0 333.00781 504.88281 321.77734 539.55078 474.60938 733.88672 0 463.86719]]/DW 0>> endobj 156 0 obj <> stream x325S0��C.�� endstream endobj 157 0 obj <> stream x�MTI�$! 
��+�@�����Tk4���_'���%�*0�Kx�j��n�bj���/���ѿ��v@���}c����j��{�3�T���� Lc�����rOx&�a�N�a�J"V��,X;+��c��~'�93yc��S�-�aݏ'��G%Ew���XvA��8�+���Zɂ/�!�������q�%��.}EV��VY�Gi3K������R���r�;���*i��d��ᩞL(XvRI��e������=$0��#�j?".|�[>��p�;���[Y��JI�W���37�H>>> endobj 159 0 obj <> stream x�U�K�0��� 4�Sjd��z��P����3�H(��3����3��k��]p@8��M'mP�~������F& ��&y�3=�Uu|����q$� endstream endobj 160 0 obj <> stream x�eQK�!�{�\�|��<�o�,��mб6�BMP&��W�Ҏ7��}��.N'U˥0�Cb1�9���B�5�b�^)�$��<��i���1����4��}���[rP��e��E��Y2΋W\i�t\��e:l��S���8���TsP����@2�0x�P05̱m�D�=�D��!�< �)e*�S�\�W�X�u~�.��|�uW� endstream endobj 161 0 obj <> stream x32�P0P0�Tе04P0�0�R ���� �(�+� �L^ endstream endobj xref 0 162 0000000000 65535 f 0000000015 00000 n 0000000482 00000 n 0000001179 00000 n 0000001263 00000 n 0000001885 00000 n 0000002596 00000 n 0000002905 00000 n 0000003421 00000 n 0000003505 00000 n 0000004240 00000 n 0000004546 00000 n 0000004822 00000 n 0000004985 00000 n 0000006144 00000 n 0000006878 00000 n 0000007179 00000 n 0000007531 00000 n 0000007616 00000 n 0000008239 00000 n 0000008324 00000 n 0000013367 00000 n 0000021052 00000 n 0000021263 00000 n 0000021958 00000 n 0000022327 00000 n 0000022437 00000 n 0000023060 00000 n 0000023204 00000 n 0000023746 00000 n 0000024179 00000 n 0000024623 00000 n 0000026366 00000 n 0000027668 00000 n 0000028291 00000 n 0000028454 00000 n 0000028684 00000 n 0000029226 00000 n 0000029379 00000 n 0000029899 00000 n 0000030400 00000 n 0000030567 00000 n 0000030709 00000 n 0000030757 00000 n 0000031078 00000 n 0000031522 00000 n 0000031607 00000 n 0000031835 00000 n 0000031979 00000 n 0000032248 00000 n 0000032768 00000 n 0000033271 00000 n 0000033463 00000 n 0000033770 00000 n 0000033848 00000 n 0000034346 00000 n 0000035080 00000 n 0000035512 00000 n 0000035597 00000 n 0000035740 00000 n 0000036070 00000 n 0000036346 00000 n 0000036860 00000 n 0000037414 00000 n 0000037687 00000 n 0000038419 00000 n 0000038801 00000 n 0000038907 00000 n 0000039030 00000 n 0000039197 00000 n 0000039443 00000 n 0000039997 00000 n 0000040426 00000 n 0000040511 00000 n 0000040977 00000 n 0000085825 00000 n 0000086112 00000 n 0000086857 00000 n 0000087163 00000 n 0000087296 00000 n 0000087419 00000 n 0000087488 00000 n 0000088005 00000 n 0000088151 00000 n 0000089479 00000 n 0000089594 00000 n 0000089754 00000 n 0000089948 00000 n 0000090233 00000 n 0000090578 00000 n 0000091432 00000 n 0000091931 00000 n 0000092096 00000 n 0000092515 00000 n 0000092824 00000 n 0000093290 00000 n 0000094024 00000 n 0000094345 00000 n 0000094679 00000 n 0000094764 00000 n 0000094887 00000 n 0000094973 00000 n 0000095403 00000 n 0000096149 00000 n 0000096282 00000 n 0000096509 00000 n 0000096939 00000 n 0000097055 00000 n 0000097184 00000 n 0000097592 00000 n 0000097757 00000 n 0000100627 00000 n 0000100914 00000 n 0000101175 00000 n 0000101477 00000 n 0000102020 00000 n 0000102719 00000 n 0000102870 00000 n 0000104310 00000 n 0000104779 00000 n 0000104923 00000 n 0000106091 00000 n 0000106738 00000 n 0000106869 00000 n 0000107232 00000 n 0000107945 00000 n 0000108247 00000 n 0000108363 00000 n 0000108510 00000 n 0000108856 00000 n 0000108942 00000 n 0000110587 00000 n 0000110697 00000 n 0000111082 00000 n 0000111369 00000 n 0000111423 00000 n 0000111567 00000 n 0000113006 00000 n 0000113539 00000 n 0000113625 00000 n 0000113816 00000 n 0000113940 00000 n 0000114226 00000 n 0000114431 00000 n 0000115266 00000 n 0000115352 00000 n 0000115896 00000 n 0000116629 00000 n 0000116774 00000 n 0000117030 00000 n 0000117188 00000 n 0000117225 
00000 n 0000117458 00000 n 0000118007 00000 n 0000118300 00000 n 0000118574 00000 n 0000119148 00000 n 0000119234 00000 n 0000119960 00000 n 0000120111 00000 n 0000120279 00000 n 0000120567 00000 n trailer <> startxref 120677 %%EOF