package
stringlengths
1
122
pacakge-description
stringlengths
0
1.3M
abstract-api
Abstract API Python SDKThis is a simple and intuitive Python SDK for usingAbstractAPI'sservices.Current supported services:All AbstractAPI services are supported by this SDK (16th. January 2023).Email ValidationPhone ValidationVAT Validation/Calculation/CategoriesIBAN ValidationIP GeolocationHolidays LookupExchange Rates Live/Convert/HistoricalCompany EnrichmentTimezone Current/ConversionAvatars GenerationWebsite ScreenshotWebsite ScrapeImage ProcessingInstallation:Install usingpip:pip install abstract-apiGetting Started:To use any service, you must have your API key for that service.To do that, you have two options:Export your API key as an environment variable:To export an API key for a service, you should follow this scheme:ABSTRACTAPI_{SERVICE_NAME}_API_KEYNote:SERVICE_NAMEis all uppercase and underscore separated.For example, to export your Email Validation service API key, use the following environment variable name:ABSTRACTAPI_EMAIL_VALIDATION_API_KEYExample in terminal:exportABSTRACTAPI_AVATARS_API_KEY=612345e4a63044b47a1234567a53cc81In initialization, you don't have to pass an API key:fromabstract_apiimportEmailValidationservice=EmailValidation()Pass your API key during service class instantiation:Example:fromabstract_apiimportEmailValidationservice=EmailValidation(api_key="612345e4a63044b47a1234567a53cc81")Note: If both options were used simultaneously, the API key that was passed to constructor is used.Examples:Notes:Each service response is represented as a response class to provide an intuitivePythonic way for handling responses.All public interfaces of all services classes are modeled after AbstractAPI endpoints interfaces and responses.Example: Email Validation service endpoint expects the following parameters:api_keyemailauto_correctTheEmailValidationclass'scheck()method expects the same parametersemailandauto_correct.(No need to passapi_key. It is already passed during service instantiation.)Recommended:Check service class and service class response documentations.Response fields used in examples are not only the ones. 
Check documentation to seeall the capabilities.Email Validationfromabstract_apiimportEmailValidationservice=EmailValidation(api_key="612345e4a63044b47a1234567a53cc81")response=service.check("example@example.com")ifresponse.is_valid_format:print("Email is valid!")ifresponse.is_disposable_email:print("Email is disposable, not this time :( ")EmailValidationdocumentation can be foundhereEmailValidationResponsedocumentation can be foundherePhone Validationfromabstract_apiimportPhoneValidationservice=PhoneValidation(api_key="612345e4a63044b47a1234567a53cc81")response=service.check("20123456789")ifresponse.valid:print("Phone number is valid!")PhoneValidationdocumentation can be foundherePhoneValidationResponsedocumentation can be foundhereVAT Validation/Calculation/Inquiryfromabstract_apiimportVATservice=VAT(api_key="612345e4a63044b47a1234567a53cc81")validation_response=service.check("SE556656688001")calculation_response=service.calculate(amount=100,country_code="EG")categories_response=service.categories("EG")VATdocumentation can be foundhereVATValidationResponsedocumentation can be foundhereVATCalculationResponsedocumentation can be foundhereVATCategoriesResponsedocumentation can be foundhereIBAN Validationfromabstract_apiimportIBANValidationservice=IBANValidation(api_key="612345e4a63044b47a1234567a53cc81")response=service.check("BE71096123456769")ifresponse.is_valid:print("IBAN is valid!")IBANValidationdocumentation can be foundhereIBANValidationResponsedocumentation can be foundhereIP Geolocationfromabstract_apiimportIPGeolocationservice=IPGeolocation(api_key="612345e4a63044b47a1234567a53cc81")response=service.check("156.215.70.7",fields=["city"])print("IP is in: ",response.city)IPGeolocationdocumentation can be foundhereIPGeolocationResponsedocumentation can be foundhereHolidays Lookupfromabstract_apiimportHolidaysservice=Holidays(api_key="612345e4a63044b47a1234567a53cc81")response=service.check("EG")print(response.holidays)Holidaysdocumentation can be foundhereHolidaysResponsedocumentation can be foundhereExchange Rates Live/Convert/Historicalfromabstract_apiimportExchangeRatesservice=ExchangeRates(api_key="612345e4a63044b47a1234567a53cc81")live_response=service.live("USD","EGP")conversion_response=service.convert("USD","EGP","2023-01-17",150)historical_response=service.historical("USD","2023-01-17",150)ExchangeRatesdocumentation can be foundhereLiveExchangeRatesResponsedocumentation can be foundhereHistoricalExchangeRatesResponsedocumentation can be foundhereExchangeRatesConversionResponsedocumentation can be foundhereCompany Enrichmentfromabstract_apiimportCompanyEnrichmentservice=CompanyEnrichment(api_key="612345e4a63044b47a1234567a53cc81")response=service.check("EG")print(response.holidays)CompanyEnrichmentdocumentation can be foundhereCompanyEnrichmentResponsedocumentation can be foundhereTimezone Current/Conversionfromabstract_apiimportTimezoneservice=Timezone(api_key="612345e4a63044b47a1234567a53cc81")current_response=service.current("Cairo, Egypt","82.111.111.111")conversion_response=service.convert((30.0594627,31.1758899),"Cairo, Egypt")Timezonedocumentation can be foundhereCurrentTimezoneResponsedocumentation can be foundhereTimezoneConversionResponsedocumentation can be foundhereAvatars Generationfromabstract_apiimportAvatarsservice=Avatars(api_key="612345e4a63044b47a1234567a53cc81")response=service.create("John Doe",200)file=open("logo.png","wb+")file.write(response.content)Avatarsdocumentation can be foundhereAvatarsResponsedocumentation can be foundhereWebsite 
Screenshotfromabstract_apiimportWebsiteScreenshotservice=WebsiteScreenshot(api_key="612345e4a63044b47a1234567a53cc81")response=service.capture("https://www.github.com",capture_full_page=False)file=open("github-home-screenshot.png","wb+")file.write(response.content)WebsiteScreenshotdocumentation can be foundhereWebsiteScreenshotResponsedocumentation can be foundhereWebsite Scrapefromabstract_apiimportWebScrapingservice=WebScraping(api_key="612345e4a63044b47a1234567a53cc81")response=service.scrape("https://www.github.com",proxy_country="EG")file=open("github-home-screenshot.png","wb+")file.write(response.content)WebScrapingdocumentation can be foundhereWebScrapingResponsedocumentation can be foundhereImage Processingfromabstract_apiimportImageProcessingfromabstract_api.image_processing.strategiesimportCrop,Exactresize=Exact(height=200,width=200)service=ImageProcessing(api_key="612345e4a63044b47a1234567a53cc81")image=open('example.png','rb')response=service.upload(image,lossy=False,resize=resize)print(response.url)response=service.url("https://example.com/image.jpeg",lossy=False,resize=resize)print(response.url)ImageProcessingdocumentation can be foundhereImageProcessingResponsedocumentation can be foundhereHandling ErrorsIf something wrong happened on client side:fromabstract_apiimportImageProcessingfromabstract_api.core.exceptionsimportClientRequestErrorservice=ImageProcessing(api_key="612345e4a63044b47a1234567a53cc81")try:service.url("https://example.com/image.jpeg",quality=150)exceptClientRequestErrorase:print("Some error happended from client's side")print(str(e))'quality must be in range from 1 to 100 (inclusive)'If the service endpoint returns a status code that is not 200 or 204.(200 and 204 are -currently- the only accepted status codes.)fromabstract_apiimportImageProcessingfromabstract_api.core.exceptionsimportAPIRequestErrorservice=ImageProcessing(api_key="612345e4a63044b47a1234567a53cc81")try:service.url("https://example.com/image.jpeg",quality=150)exceptAPIRequestErrorase:ife.status_code==500:print("AbstractAPI service is currently having a problem")print(str(e))
abstract-audio
Certainly! Here is the comprehensive README for the 'abstract_audio' module:Abstract AudioThe 'abstract_audio' module provides functionalities to capture and manipulate audio input from a microphone and save it into a text file. It also includes an abstract GUI to display the state of audio recording and playback.InstallationYou can install the module using pip:pipinstallabstract_audioUsageSpeech-to-Text ApplicationThe 'speech_to_text_gui.py' script provides a simple speech-to-text application with a graphical user interface (GUI). The application uses the SpeechRecognition library to perform speech recognition and PySimpleGUI for GUI implementation. It allows you to record your voice, transcribe the audio to text, and save it in a file called 'voice.txt'.To use the speech-to-text application, run the following command:pythonspeech_to_text_gui.pyThe GUI window will open, displaying a 'record' button. Click on the 'record' button to start recording your voice. The GUI screen will turn green to indicate that it is recording. Once you stop speaking, the recorded audio will be processed using the Google Web Speech API, and the recognized text will be displayed in the GUI window.Volume CommandsThe 'volume_commands.py' module provides a set of volume control commands for Windows operating systems. It allows you to control the system's volume, mute, and unmute the system, check the current volume, and change the volume incrementally.To use the volume commands, import the 'volume_commands' object from the 'volume_commands.py' module:fromabstract_audio.volume_states.volume_commandsimportvolume_commandsThen, you can use the methods provided by the 'volume_commands' object to control the system's volume:# Check the current volumecurrent_volume=volume_commands.check_volume()print("Current Volume:",current_volume)# Set the volume to a specific valuevolume_commands.set_volume(50)# Sets the volume to 50%# Change the volume incrementallyvolume_commands.change_volume(10)# Increases the volume by 10%Note: The volume control commands are currently supported only on Windows operating systems.Source Code MapThe 'src' directory contains the following packages and modules:'audio_player': Contains platform-specific implementations for audio playback.'volume_states': Provides volume control commands for Windows operating systems.'init.py': Initialization module for the 'abstract_audio' package.'speech_to_text_gui.py': Main script for the speech-to-text application.RequirementsThe 'abstract_audio' module has the following dependencies:abstract_utilities>=0.0.1740abstract_gui>=0.0.53.5pydub>=0.25.1SpeechRecognition>=3.10.0PySimpleGUI (already included in 'speech_to_text_gui.py' script)Please ensure that these dependencies are installed before using the module.LicenseThis module is licensed under the MIT License. See the 'LICENSE' file for more details.ContactFor any inquiries or feedback, you can contact the author atpartners@abstractendeavors.com.AcknowledgmentsThe 'abstract_audio' module was developed by putkoff.For more information, visit the GitHub repository:https://github.com/AbstractEndeavors/abstract_essentials/tree/main/abstract_audioThe setup.py file and source code provided in the 'src' directory are parts of the 'abstract_audio' module.Please note that the 'audio_player' and 'volume_states' packages in the 'src' directory contain platform-specific implementations for audio playback and volume control. 
The module currently supports volume control Windows and Linux operating systemss.The 'abstract_audio' module is in the alpha development stage and may have limitations or issues. Please refer to the GitHub repository for updates and contributions.Author: putkoff Date: 8/1/23Thank you for using the 'abstract_audio' module! We hope it fulfills your audio recording and manipulation needs. If you have any questions or suggestions, feel free to reach out to us atpartners@abstractendeavors.com. Happy coding!
abstract-audio-test
Certainly! Here is the comprehensive README for the 'abstract_audio' module:Abstract AudioThe 'abstract_audio' module provides functionalities to capture and manipulate audio input from a microphone and save it into a text file. It also includes an abstract GUI to display the state of audio recording and playback.InstallationYou can install the module using pip:pipinstallabstract_audioUsageSpeech-to-Text ApplicationThe 'speech_to_text_gui.py' script provides a simple speech-to-text application with a graphical user interface (GUI). The application uses the SpeechRecognition library to perform speech recognition and PySimpleGUI for GUI implementation. It allows you to record your voice, transcribe the audio to text, and save it in a file called 'voice.txt'.To use the speech-to-text application, run the following command:pythonspeech_to_text_gui.pyThe GUI window will open, displaying a 'record' button. Click on the 'record' button to start recording your voice. The GUI screen will turn green to indicate that it is recording. Once you stop speaking, the recorded audio will be processed using the Google Web Speech API, and the recognized text will be displayed in the GUI window.Volume CommandsThe 'volume_commands.py' module provides a set of volume control commands for Windows operating systems. It allows you to control the system's volume, mute, and unmute the system, check the current volume, and change the volume incrementally.To use the volume commands, import the 'volume_commands' object from the 'volume_commands.py' module:fromabstract_audio.volume_states.volume_commandsimportvolume_commandsThen, you can use the methods provided by the 'volume_commands' object to control the system's volume:# Check the current volumecurrent_volume=volume_commands.check_volume()print("Current Volume:",current_volume)# Set the volume to a specific valuevolume_commands.set_volume(50)# Sets the volume to 50%# Change the volume incrementallyvolume_commands.change_volume(10)# Increases the volume by 10%Note: The volume control commands are currently supported only on Windows operating systems.Source Code MapThe 'src' directory contains the following packages and modules:'audio_player': Contains platform-specific implementations for audio playback.'volume_states': Provides volume control commands for Windows operating systems.'init.py': Initialization module for the 'abstract_audio' package.'speech_to_text_gui.py': Main script for the speech-to-text application.RequirementsThe 'abstract_audio' module has the following dependencies:abstract_utilities>=0.0.1740abstract_gui>=0.0.53.5pydub>=0.25.1SpeechRecognition>=3.10.0PySimpleGUI (already included in 'speech_to_text_gui.py' script)Please ensure that these dependencies are installed before using the module.LicenseThis module is licensed under the MIT License. See the 'LICENSE' file for more details.ContactFor any inquiries or feedback, you can contact the author atpartners@abstractendeavors.com.AcknowledgmentsThe 'abstract_audio' module was developed by putkoff.For more information, visit the GitHub repository:https://github.com/AbstractEndeavors/abstract_essentials/tree/main/abstract_audioThe setup.py file and source code provided in the 'src' directory are parts of the 'abstract_audio' module.Please note that the 'audio_player' and 'volume_states' packages in the 'src' directory contain platform-specific implementations for audio playback and volume control. 
The module currently supports volume control Windows and Linux operating systemss.The 'abstract_audio' module is in the alpha development stage and may have limitations or issues. Please refer to the GitHub repository for updates and contributions.Author: putkoff Date: 8/1/23Thank you for using the 'abstract_audio' module! We hope it fulfills your audio recording and manipulation needs. If you have any questions or suggestions, feel free to reach out to us atpartners@abstractendeavors.com. Happy coding!
abstract-authority-replica-test
This is the README for module abstract_authority_replica_test
abstract-blockchain
Abstract BlockchainAbstract Blockchain is a Python package designed to streamline and simplify interactions with blockchain networks and smart contracts. It consists of various utilities that enable users to manage RPC parameters, work with smart contract ABIs, and facilitate user-friendly interactions using graphical user interfaces (GUIs).Available Modulesabstract_abis.py: This module provides theABIBridgeclass, an interface to Ethereum smart contract ABIs. It allows interactions with contract functions and retrieves read-only functions. Additionally, it categorizes RPC parameters for easier blockchain interaction.abstract_apis.py: Houses theAPIBridgeclass for managing API URL creation and their respective calls. It contains GUI-enabled tools to build API URLs or fetch preselected call parameters.abstract_rpcs.py: This module offers theRPCBridgeclass that manages the RPC parameters for different blockchain networks. It provides a GUI for filtering and selecting RPC parameters and organizes them for easy use.abstract_accounts.py: TheACCTBridgeclass in this module allows interfacing with your personal wallet. You can build transaction information, derive public keys, and send/verify transactions.abstract_contract_console.py: This section of the module integrates all classes for a harmonious interaction with smart contracts.abstract_gui.py: This submodule provides utilities for creating GUIs that enhance user interaction with blockchain-related features.main.py: This is the entry point of the package where files are uploaded.InstallationThe package is available onPyPI. You can install it using pip withpip install abstract-blockchain.The package is available onPyPI. You can install it using pip withpip install abstract-blockchain.Example Usagefromabstract_abisimportABIBridgefromabstract_apisimportChoose_RPC_Parameters_GUI,RPCData# Example usage of ABIBridgeabi_manager=ABIBridge(contract_address='0x3dCCeAE634f371E779c894A1cEa43a09C23af8d5',rpc=default_rpc())read_only_functions=abi_manager.get_read_only_functions()foreachinread_only_functions:inputs=abi_manager.get_required_inputs(each)iflen(inputs)==0:result=abi_manager.call_function(each)print(each,result)else:print(each,inputs)# Example usage of RPCData and GUIrpc_data=Choose_RPC_Parameters_GUI()rpc_manager=RPCData(rpc_data)w3=rpc_manager.w3# Your blockchain interactions using w3...InstallationTheabstract_blockchainpackage can be installed using pip:pipinstallabstract_blockchainModule - abstract_accounts.pyThis module, under theabstract_blockchainpackage, includes theACCTBridgeclass, providing an interface to the user's personal wallet. 
It interacts with Ethereum accounts, allowing the user to perform transactions, estimate gas, retrieve transaction counts, sign and send transactions, and handle Ethereum addresses.The module primarily leverages other modules and classes fromabstract_rpcs.pyandabstract_apis.py, requiring theeth_accountpackage to interact with Ethereum accounts.The critical methods within theACCTBridgeclass include:__init__: Initializes theACCTBridgeobject, establishing an RPC bridge for interaction and retrieves the private key and account address.check_priv_key&get_address_from_private_key: Manages operations related to the private key and converts it into an Ethereum address.build_txn&get_txn_info: Allows the user to build a transaction, accounting for multiple variables.check_sum&try_check_sum: Manages the conversion of the address to a checksum address.get_transaction_count: Fetches the transaction count of the Ethereum account.sign_transaction&send_transaction: Handles the signing and sending of transactions.estimate_gas: Estimates the gas fee for Ethereum transactions.The module employs a rate limiting manager to manage the frequency of requests, preventing the exceeding of API rate limits.Module - abstract_abis.pyPart of theabstract_blockchainpackage, theabstract_abis.pymodule is a critical component intended to streamline interactions with Ethereum smart contracts, providing a Pythonic interface to their Application Binary Interfaces (ABIs). The core of this module is theABIBridgeclass, which offers an encompassing interface for managing Ethereum contract ABIs.Just like theabstract_accounts.pymodule,abstract_abis.pyperforms various tasks in collaboration with other modules, principallyabstract_rpcs.pyandabstract_apis.py.It houses several methods responsible for:ValidatingEthereum addressesCreatingABI bridgesEnumeratingcontract functionsAccessingread-only functions from contract ABIsInvokingcontract functionsAcquiringandcategorizingRPC parameters that are crucial for interaction with the blockchainManagingrate limiting for API requestsTheABIBridgeclass also emphasizes contracts' functions, offering tools to list all functions, obtain necessary input details, and invoke functions smoothly. It also provides mechanisms to create functions ready to be executed in future operations.A default_rpc() function is also included, providing a default RPC configuration used when an instance of ABIBridge is created. This function underlines how to utilize the ABIBridge class for tasks like interacting with Ethereum contracts, managing user interaction, retrieving read-only functions, obtaining required input details, and invoking contract functions.Module - abstract_api_gui.pyPart of theabstract_blockchainpackage,abstract_api_gui.pyis a module that builds a Graphical User Interface (GUI) to simplify interactions with APIs. This module utilisesPySimpleGUIfor creating the GUI, andabstract_utilities.list_utilsfor utility functions. 
The primary features of this module include:Declaring hard-coded API descriptions throughapiCallDescDefining options for API actions usingoptionsStreamlining API parameters withinputsAdditionally, it introduces several functions to generate API GUI and manage user interactions:get_revised_dict(): This function modifies a dictionary based on required keys.generate_api_variables(): This function produces API variables based on user input.generate_api_gui(api_desc): Generates the GUI layout with the given API description.make_invisible_unless(window,values,visible_list,example): Manages UI elements' visibility based on user input.choose_api_gui(): Renders the main GUI displaying available APIs, parameters, and execution results.In the main application execution,choose_api_guifunction gets called to display the user interface. As user actions are detected in the dropdown or interaction buttons, relevant changes are made in the GUI, providing a user-friendly environment for API interaction.Theabstract_apis.pyScriptTheabstract_apis.pyscript is a core component of the abstract-blockchain module. This script houses theAPIBridgeclass which serves as the robust engine for API URL construction and execution, and efficient GUI management for RPC parameters.Key ComponentsAPIBridgeclass: This primary class initializes with optional parameters,api_data,rpc, andaddress. To ensure smooth functionality and improve efficiency, the class imports from external modules namely,abstract_webtoolsandabstract_utilities.Methods of theAPIBridgeclass include:get_api_data_string: This method prepares the appropriate API data string based on the providedapi_data_typeandaddress.api_keys: An essential method that retrieves API keys from the environment settings contingent on the scanner in use.get_http_variants,get_api_variants: These methods lend a hand in navigating different versions of the API URL based on request requirements.try_api_url: As the name suggests, this method obtains the API request URL.get_try,get_request: Key in sending a request to the API URL with a focus on 'Rate-Limiting'.get_response: This method parses the JSON response from the API.check_sum,try_check_sum: Integral in ensuring the validity of an Ethereum address and converting it to a checksum address.safe_json_loads: A safety-oriented method that loads JSON data as a dictionary or a list.Remember to study the functional activities of this module and understand its relevance in the larger abstract-blockchain module. Join us in the next section where we detail another integral part of the module, the abstract-api-gui.py script.abstract_contract_console.pyTheabstract_contract_console.pymodule within theabstract-blockchainpackage offers a Graphical User Interface(GUI). This GUI is ideal for users interested in engaging interactively with the Smart Contracts on the Ethereum Blockchain. At its core, it is designed to provide a user-friendly console for interacting with Ethereum Smart Contract functions.The module imports required dependencies and presents helper functions to perform data conversions and error checks. This setup lays the foundation for an interface that allows users to input a contract address on the Ethereum Blockchain. Once the address is provided, the interface makes a connection with a network node that will process the desired transactions and fetch available contract functions.All available functions are displayed in the GUI, which features input fields and buttons to facilitate interaction. 
The GUI shows the return value for functions having 'read' or 'pure' mutability; for transaction-triggering functions, the transaction hash is displayed.Notably, this module also permits users to manually input their account details, vital for signing transactions when calling non-read-only functions. Determining the Endpoint URL to connect to an Ethereum node is made simple with theget_account_layoutanddetermine_correct_rpcfunctions.Operating within a GUI Event Loop, the interface maintains responsiveness and interactive capabilities. When a button associated with a contract function is clicked, input values are collected, the function is executed, and the output is displayed.In summary,abstract_contract_console.pyemulates a local Ethereum interaction console, simplifying users' interactions with the Ethereum Blockchain and providing an all-in-one abstraction service.Furthermore, the module features anabstract_contract_console_main(). This function leverages various classes from the package for smart contract interaction and accepts an optional 'rpc_list' argument. If no argument is provided, it defaults to the RPC list from the RPCBridge class. To launch a new window titled 'New Blockchain Console,' it retrieves the RPC layout and values, constructs a final layout consisting of the account, ABI, and RPC layouts. The new_window_mgr object'swhile_basic()method responsibly handles the main application loop, managing events and rendering.OverviewTheabstract_contracts.pyscript is a part of theabstract_blockchainmodule which is designed to facilitate the interaction of users with the Ethereum network. The script is built around a suite of utility functions which assist users in communicating with smart contracts on the blockchain, providing an easy and effective way to perform a variety of tasks from simple data conversions to transaction generation.Key FeaturesVersatile Utility Functions:The script offers utility functions which handle a wide range of features including estimation of average gas price for transactions, verification and handling of different input types (e.g., boolean, integers, and addresses), and user interaction management through GUIs.Data Type Conversion:It supports conversion of different data types (e.g., addresses, integers, bytes, boolean values, and strings) to their required format for interaction with the Ethereum blockchain.User-friendly Interaction:The script provides a comfortable user interface, allowing for inputs confirmation for a smart contract function or generation of a new transaction.Error Handling and Validation:It manages various situations and edge cases, such as checking the validity of a potential Ethereum address and dealing with batch inputs. It provides intuitive feedback and correction options throughout the code execution.abstract_contract ModuleTheabstract_contracts.pyscript within the abstract_blockchain module is designed to enhance user interaction with blockchain contracts associated with the Ethereum network. This script extensively uses the web3 module to enable these interactions while also leveraging PySimpleGUI to deliver a simple yet effective graphical user interface.The script initiates with the importation of needed modules and subsequently defines several utility functions. 
These functions specialize in aiding users in contract communication and boosting their utilization of smart contracts within the Ethereum network.Several features are encompassed by these functions such as average gas price estimation for transactions, verification and management of multiple input types including boolean, integers, and addresses, user interaction facilitation via GUIs, and tasks like check-summing for Ethereum addresses. Further, they empower users to interact directly with the blockchain, such as confirming inputs for a smart contract function or even producing a new transaction.The script introduces functions to convert a diverse range of data types to a format that is suitable for the interaction with the Ethereum blockchain. Data types like addresses, integers, bytes, boolean values, and strings are all catered to. They also support list processing of these data types, allowing for batch operations.The functions are coded to handle a broad array of situations and edge cases such as validating potential Ethereum addresses and managing batch inputs. The code regularly provides intuitive feedback and correction options. Overall, this script serves as a comprehensive suite of tools for users interested in interacting with smart contract careers on the Ethereum network.abstract_rpcs ModuleThe abstract_rpcs.py script is part of the Abstract Blockchain package and it offers the RPCBridge class that is designed to manage RPC parameters for different blockchain networks. RPC parameters are necessary for remote procedure calls (RPC), a protocol that one program can use to request a service from a program located in another computer on a network without having to understand the network's details. In the context of blockchain, RPC parameters are used to interact with the blockchain network.The RPCBridge class in abstract_rpcs.py provides several key functionalities. It allows users to filter and select RPC parameters through a graphical user interface (GUI), streamlining the process of defining these parameters. This makes it user-friendly even for those without deep knowledge of RPC parameters.The RPCBridge class also categorizes and organizes RPC parameters, so that they can easily be used in blockchain interactions. Information is thus made more accessible, and the usage of blockchain technology is simplified as a result.Overall, the abstract_rpcs.py script plays a crucial role in the Abstract Blockchain package by providing efficient and streamlined ways to manage RPC parameters, one of the essential parts of interacting with blockchain networks.In conclusion,abstract_blockchainis a powerful tool that allows developers to interact with Ethereum networks and contracts. While the package is easy to install and use, we always recommend carefully reading the documentation and understanding how blockchain technologies work before getting started. Happy coding!For more info regarding license, please visithere.
abstract-bots
#magnets... how do they work? who invented them?
abstract-builder
Abstract BuilderУстановкаpip install abstract-builderИспользованиеAbstract builder запускается одной командой:abstract-builder [path]где[path]- путь к каталогу, в котором находятся необработанные файлы. В случае, если нужно обработать файлы в текущем каталоге, можно выполнить команду:abstract-builder $(pwd)
abstractcp
AbstractCP -- Abstract Class PropertyThis package allows one to create classes with abstract class properties. The initial code was inspired bythis question(and accepted answer) -- in addition to me strugling many time with the same issue in the past. I firtst wanted to just post this as separate answer, however since it includes quite some python magic, and I would like to include quite some tests (and possibly update the code in the future), I made it into a package (even though I'm not a big fan of small packages that hardly do anything).The package is python3.6 and higher. I could consider creating a version for pre-3.6; if you want that, please create anissue on github.TL;DR ExamplesNote: the examples usePEP-526 type hints; this is obviously optional.All examples assume the following imports:importtyingastimportabstractcpasacpNote that all typing (including theimport typing as tis optional. In addition, for python < 3.8, theLiteraltype hint can be found intyping_extensions.classParser(acp.Abstract):PATTERN:str=acp.abstract_class_property(str)@classmethoddefparse(cls,s):m=re.fullmatch(cls.PATTERN,s)ifnotm:raiseValueError(s)returncls(**m.groupdict())classFooBarParser(Parser):PATTERN=r"foo\s+bar"def__init__(...):...classSpamParser(Parser):PATTERN=r"(spam)+eggs"def__init__(...):...Example with (more) type hints:classArray(acp.Abstract):payload:np.ndarrayDIMENSIONS:int=acp.abstract_class_property(int)def__init__(self,payload):assertlen(payload)==type(self).DIMENSIONSclassVector(Array):DIMENSIONS:t.Literal[1]=1classMatrix(Array):DIMENSIONS:t.Literal[2]=2Note that in the previous example, we actually fix the value forDIMENSIONSusingt.Literal. This is allowed in mypy (however it may actually be a bug that it's allowed). It would possibly feel more natural to use at.Finalhere, however mypy doesn't allow this.Note that if we forget to assign a value for DIMENSIONS, an error will occur:classOtherArray(Array):pass>TypeError:ClassOtherArraymustdefineabstractclasspropertyDIMENSIONS,orhaveAbstractasdirectparentIn some cases, however, we might indeed intend for theOtherArrayclass to be abstract as well (because we will subclass this later). If so, make OtherArray inherit from Abstract directly to fix this:classOtherArray(Array,acp.Abstract):...classOtherVector(OtherArray):DIMENSIONS=1IntroductionI quite often find myself in a situation where I want to store some configuration in a class-variable, so that I can get different behaviour in different subclasses. Quite often this starts with a top-level base class that has the methods, but without a reasonable value to use in the configuration. In addition, I want to make sure that I don't accidentally forget to set this configuration for some child class -- exactly the behaviour that one would expect from abstract classes. However Python doesn't have a standard way to define abstract class variables (or class constants). The search for a solution initially led me tothis question-- the accepted answer works well, as long as you accept that each subclass of the parent must be non-abstract. 
In addition, it would not play nice at all with type-hinting and tools likemypy.So I decided to write something myself -- it started as a small StackOverflow answer, however since I felt lots of tests and docs would be required, better make it a proper module.Design ConsiderationsI had some clear requirements in mind when writing this package:Pythonic syntaxWorks well withPEP-526 style type hintsand static type checkers (if possible without any# type: ignorein either this code, and the code using this module).No runtime slowdowns (i.e.: all the work gets done at setup-time)Useful error messages -- stuff needs to be explicit, no silent failures.No need to define all abstract class properties directly in the first child -- so an abstract class can have abstract children.InstallationThe package is a 100% python package. Installation is as simple aspipinstallabstractcpUseThe system consists of 2 elements: TheAbstractbase class. Each class that is abstract (i.e. that has abstract class properties -- this is completely independent of the ways to make a class abstract inabc) must inheritdirectlyfromAbstract, meaning thatAbstractshould be a direct parent. This is done so that it's explicit which classes are abstract (and hence, we can throw an error if a class is abstract and does not inheritdirectlyfromAbstract).The second part of the system is the_AbstractClassPropertyclass. Every abstract class property gets assigned an_AbstractClassProperty()instance, through theacp.abstract_class_property(...)method. Note that this method has typehints to return the exact class that you provide, so from a type checker point of view,acp.abstract_class_property(int)is identical to3(or4, or any otherintinstance). This means that we can be more flexible here, for instance doingacp.abstract_class_property(t.Dict[str, int]), however note thatacp.abstract_class_property(t.Mapping[str, int])does not work, since mypy wants a concrete type there.Note thatabstract_class_property()can only be assigned in classes that haveAbstractas direct parent.See the Examples section above for exact use.Update from 0.9.1Note that since 0.9.1 the syntax has changed a bit. Rather than writing:classA(acp.Abstract):i=acp.AbstractInt()you now useclassA(acp.Abstract):i=acp.abstract_class_property(int)It results in cleaner code, and also means that we don't have to make our own classes for new types.FAQI'm gettingArgument 1 to "abstract_class_property" has incompatible type "object"; expected "Type[<nothing>]"errorsThis happens when you try to feed something that is not actually a type to abstract_class_property, for instancex = acp.abstract_class_property(t.Union[str, int])(or even, more correctly,t.Type[t.Union[str, int]]ort.Union[t.Type[str], t.Type[int]]. Alsox = acp.abstract_class_property(t.Type[Employee])will not work (sincet.Typedoes not actually make something a type; in this case usetype(Employee)instead (which would give you an abstract property that could receive some subclass of Employee).Note that the argument toabstract_class_propertyis only for readability and used in the__repr__of the_AbstractClassPropertyclass -- and for static typing. 
So as long as you satisfy static typing, all will be fine:T = t.TypeVar("T", int, str) class A(t.Generic[T], acp.Abstract): VALUE_TYPE: t.Type[T] = acp.abstract_class_property(t.cast(t.Type[t.Type[T]], "union of int and str")) def to_value(self) -> T: ...Note the doublet.Type, since acp.abstract_class_property will remove 1 t.Type.Why am I getting warnings when I inherit a class fromacp.Asbtractbut don't define any abstract fieldsYou will get a Python warning if you run the following code:class A(acp.Abstract): i = 3You are defining classAto be abstract, however it has no fields withabstract_class_property. In almost all cases this means that either you should add an abstract class property, or remove theacp.Abstractinherritance.Defining a class like this used to result in aTypeErrorin versions <= 0.9.8, but is a warning from version 0.9.9 forward.You can safely ignore the warning (if you understand what you're doing; for instance if you just commented out the abstract class property during development for a moment), or if you really want to silence the warning forever in production code, add the following code to your program:import warnings warnings.filterwarnings("ignore", category=acp.AbstractClassWithoutAbstractPropertiesWarning)If you do this, I would appreciate if you drop me a line, since it probably means you've found a novel use for the package that I'd be happy to learn about (and possibly document).
abstract-data-types
Abstract data typesThis is a package that adds some complex data structures like arrays and graphs that are tailored to suit our needs. Over time we added methods that helped us in specific situations such as the generation of a Voronoi diagram that is included in the matrix, which is perhaps not so related to its operation, however, we added them anyway because the objective to create this library was to learn more about python and try to make as many advances from scratchAuthorsArcos Juan-ArcosJuan.Carazo Medley Matías-Matimed.Licenseabstract-data-types as a whole is licensed under the MIT License - Look theLICENSEfile for details.
abstractdomain
Failed to fetch description. HTTP Status Code: 404
abstractEasyT
abstractEasyTeasyT, easyTo trade, eastTo use!
abstractfactory
Copyright (c) 2016-2017 David BetzSee test_provider.py unit test for usage.Basically an implementation of an abstract factory pattern.In one system where I use this, I create factories for eachs type of thing in my system. So, SearchFactory, CloudStorageFactory, QueueFactory, AristotleFactory, etc… These would implement for ID interface like ICloudStorageProvider (in Node, it’s just a class).Each of these would have their own switch/case (or whatever) to create the factory for it. So, for example, I may have config in a YAML file specifying that I want to use Mongo for my Aristotle provider (“Aristotle” is what most people incorrectly call “NoSQL”).To begin, create the factory (do this one for the entirety of your system):abstractFactory = AbstractFactory()Then, add your factories:abstractFactory.set(SearchFactory) abstractFactory.set(CloudStorageFactory) abstractFactory.set(QueueFactory) abstractFactory.set(AristotleFactory)When the time comes, just ask for your provider:provider = abstractFactory.resolve(IAristotleProvider)Your code SHOULD. NOT. CARE. ABOUT. MONGO. It should the your configuration or something handle that. Don’t tightly couple your providers.Also note that the resolver also accepts various arguments for extra flexibility:provider = abstractFactory.resolve(IAristotleProvider, "alternateConnectionString", collection="log")Despite what random bloggers say, service locators are awesome and provide excellent decoupling.Look at theMockexamples provided with the tests; they’re rather extensive.
abstract-gui
Abstract GUI ModuleThe Abstract GUI module provides a dynamic and abstract approach to manage PySimpleGUI windows and events. The module contains powerful classes such asWindowGlobalBridgeandWindowManager, geared for handling global variables and PySimpleGUI windows respectively. Also, this module offers a plethora of utility functions aimed at various GUI operations.InstallationTheSimpleGuiFunctionsManagermodule can be imported to obtain theSimpleGuiFunctionsManager,SimpleGuiFunctionsManagerSingletonandsgclasses, and other handy utility functions. These functions assist in managing the creaton and manipulation of PySimpleGUI window elements.Function Definitions for the Abstract GUI Moduleensure_nested_list(obj): Ensures that an object is a list.create_row(*args): Creates a row (list) from the arguments.create_column(*args): Creates a column (list of lists) from the arguments.concatenate_rows(*args): Merges multiple lists into a single list.concatenate_layouts(*args): Joins all argument lists into a single list.create_row_of_buttons(*args): Produces a row of button elements.get_buttons(*args): Fetches a list of button components.make_list_add(obj, values): Appends the listified values to the listified object.if_not_window_make_window(window): Make a new window if the passed argument is not a window.while_quick(window, return_events, exit_events, event_return): Provides a simplified avenue for the window event loop.... and many more designed to create different types of GUI layouts and manage GUI events.GUIManagerClassThe GUIManager class in the abstract_gui module is used to manage GUI events in a given window. It offers a systematized representation of event handling looping through a window until the window is either closed or deleted. It functions in conjunction with the WindowManager class to manage multiple windows and provide enhanced GUI manipulations.AbstractWindowManagerClassThe AbstractWindowManager class plays a crucial role in managing multiple PySimpleGUI windows. It functions to record the window elements and regulate their sizes and their respective events.This class possesses prominent methods such asget_screen_size(),set_window_size(max_size, height, width),add_window(),while_window()exists(),close_window()each dedicated to offering versatile and high-level control of the GUI windows.Installation:You can install this module via pip by running the commandpip install abstract_gui.get_buttons(*args)This function generates button elements for the GUI based on the provided arguments. It accepts any number of arguments that can be single or nested instances of strings, dictionaries, lists, or tuples that define the specifications for the buttons to be created.if_not_window_make_window(window)This function ensures the passed object is a valid window object. If not, it treats the object as a dictionary holding layout information for a new window which is then created.create_window_manager(script_name='default_script_name', global_var=globals())This function is used to initialize a WindowManager instance for managing various PySimpleGUI windows and their events. Thescript_nameparameter is used to define the name of the script using the WindowManager, andglobal_varis used to provide the global variables associated with that script. The output is a tuple containing the WindowManager, bridge, and script name.Note:The above functions are part of the abstract_gui module. 
For event management and other advanced functionalities, you need to use the methods provided in the WindowManager and WindowGlobalBridge classes.This segment of code contains function definitions for window handling in Abstract GUI Module. Below is a brief explanation of the functions.Functionwhile_quickhandles window events for a specified PySimpleGUI window by checking against a list of events for when to close it and whether to return a value or an event.FunctionChoose_RPC_Parameters_GUIlaunches a GUI window for choosing Remote Procedure Call (RPC) parameters. It uses a provided list of parameters or fetches a default one, if not provided.Functionverify_argsvalidates and updates window arguments, setting default values if required.Functionget_windowcreates a PySimpleGUI window with the provided name, layout, and additional arguments.Functionget_browser_layoutis used to create either a File Browser or a Folder Browser GUI depending on the 'type' argument given. If type is not 'Folder' or 'Directory', it defaults to 'File'.Functionget_yes_no_layoutcreates a layout for a 'Yes or No' window prompt. This layout includes a text message and two buttons for 'Yes' and 'No'.Remember, a helper functionget_gui_funis used across functions, which gets a callable object for a specific PySimpleGUI function with the provided arguments. It takes the function name as a string and a dictionary of arguments.## Helper Functions (Continued)get_input_layout()A function that returns an input layout for a GUI. You can customize the window's title, the prompt message, the default text in the input field, and any additional arguments for creating the window.get_yes_no()This function creates a Yes/No interface, allows custom settings such as window's title, prompt message, exit events, return events, and whether to return the clicked event or the current window's values.get_input()It allows creating a GUI that gets text input from the user with customizable settings.get_browser()This function creates a browser for selecting files or directories. You can define the window's title, browser type, initial folder, exit events, return events, and additional arguments for creating the window.get_menu()Develop a menu for a GUI. The menu structure is defined by a nested list.get_push()Fetches the 'Push' function from the GUI module to allow operations on the GUI.text_to_key()Converts a given text into a format suitable for a key in PySimpleGUI.get_event_key_js()Parses an event key into related components to facilitate key management.get_screen_size()Retrieves the screen's size to assist in positioning and sizing GUI elements.GUIManagerClassA class within theabstract_guimodule that handles long-running operations in GUI windows managed byWindowManager.Methods:__init__(self,window_mgr)Initializes theGUIManagerInstance.Args:window_mgr(WindowManager): The window manager instance.long_running_operation(self,function=None,args={})Simulates a long running operation by calling a function with its arguments.Args:function(function): The function to be executed.args(dict): The dictionary of arguments to be passed to the function.Returns:any: The results returned byfunction.start_long_operation_thread(self,window_name)Starts a long running operation in a thread and ties it to a specific windowArgs:window_name(str): The name of the window to be associated.Returns:str: The Name of the thread.GUIManager Class ExplanationTheGUIManagerclass is responsible for managing GUI events in a given window. 
This class includes functions such asrun, which reads the window and handles events until the corresponding event thread ends. The functions inGUIManagermake it easier to manage GUI events and provide an abstract layer of control for developers.This class maintains a dictionary namedevent_threads, where the keys correspond to window names and the values are eitherTrueorFalse, depending on whether the window is open or closed.The primary methods ofGUIManagerinclude:run(self, window_name, window, event_handlers=[], close_events=[])This method controls the event loop for a given window. It takes as parameters the name of the window, the window object itself, a list of event handlers, and optionally, a list of events that would close the window.Within the loop, therunmethod reads from the window usingself.window_mgr.read_window(), and handles events returned from the window using the provided event handlers.This loop continues to run until the window is closed or deleted.AbstractWindowManager Class ExplanationTheAbstractWindowManageris a high-level manager for PySimpleGUI windows. It keeps track of multiple windows, their sizes, and their events.Some main functions include:get_screen_size()This function retrieves the screen size.set_window_size(max_size, height, width)Sets the window size ensuring the dimensions are valid and within a maximum size.add_window(window_title, layout, name, default_name, close_events, event_handlers, match_true_bool, sizes, **kwargs)Adds a window to the global windows list and returns the window name.while_window(window_name, window, close_events=[], event_handlers=[])Handles a window's events until the window is closed.exists(window_name, window)Checks if a window exists.close_window(window_name, window)Closes a window.For more robust details on each function, please refer to the source code or the detailed documentation.###GUIManagerMethodsset_window_size(max_size, height, width)This function set the window size.Args:max_size(tuple): Maximum size of the window.height(int): Height of the window.width(int): Width of the window.Returns:tuple: A tuple containing the width and height of the window.add_window(title, layout, window_name, default_name, set_current, window_height, window_width, close_events, event_handlers, match_true, set_size, *args, **kwargs)This function adds a window to the GUI manager.Args:title(str): Title for the window.layout(list): Layout of the window.window_name(str): Name of the window.default_name(bool): If true, a default name is given to the window.set_current(bool): If true, the window added is set as the current window.window_height(int): Height of the window.window_width(int): Width of the window.close_events(list): List of event handlers to be triggered when the window is closed.event_handlers(list): List of additional event handlers to be attached with the window.match_true(bool): If true, matches the window with existing windows.set_size(bool): If true, sets the size of the window.*args(tuple): Additional arguments.**kwargs(dict): Additional keyword arguments.Returns:str: Name of the window added.while_window(window_name, window, close_events, event_handlers)This function handles a window's events until the window is closed.Args:window_name(str): Name of the window.window(PySimpleGUI.window): The window object.close_events(list): List of event handlers to be triggered when the window is closed.event_handlers(list): List of additional event handlers to be attached with the window.exists(window_name, window)This function checks if a 
window exists.Args:window_name(str): Name of the window.window(PySimpleGUI.window): The window object.Returns:boolean: True if the window exists, False otherwise.close_window(window_name, window)This function closes a window.Args:window_name(str): Name of the window.window(PySimpleGUI.window): The window object.get_window(window_name, window)This function returns a window.Args:window_name(str): Name of the window.window(PySimpleGUI.window): The window object.Returns:PySimpleGUI.window: The requested window.append_output(key, new_content, window_name, window)This function updates the output in a window.Args:key(str): Key of the element to update.new_content(str): New content to be appended.window_name(str): Name of the window.window(PySimpleGUI.window): The window object.update_value(key, value, args, window_name, window)This function updates the value of a key in a window.Args:key(str): Key of the element to update.value(any): New value to be set.args(dict): Additional arguments.window_name(str): Name of the window.window(PySimpleGUI.window): The window object.defexpand_elements(self,window_name=None,window=None,element_keys=None):"""Expand the specified elements in the window.Args:- window_name (str, optional): The name of the window.- window (object, optional): Direct window object.- element_keys (list, optional): List of keys of the elements to be expanded."""# Get the window using its name or direct objecttarget_window=self.get_window(window_name=window_name,window=window)# If no element_keys are provided, use the default set of keysdefault_keys=['-TABGROUP-','-ML CODE-','-ML DETAILS-','-ML MARKDOWN-','-PANE-']element_keys=element_keysordefault_keys# Expand the elementsforkeyinelement_keys:ifkeyintarget_window:target_window[key].expand(True,True,True)expand_elements(self, window_name=None, window=None, element_keys=None)This method expands the specified elements in the window.Parameters:window_name(str, optional): The name of the window, default toNone.window(any, optional): Direct window object, default toNone.element_keys(list, optional): A list of keys that corresponds to the elements to be expanded, default toNone, and if no element_keys are provided, the default set of keys will be used, which are ['-TABGROUP-', '-ML CODE-', '-ML DETAILS-', '-ML MARKDOWN-', '-PANE-'].WindowGlobalBridgeClassThis class manages shared global variables between different scripts.global_vars(dict): Used to store the global variables for each script.__init__(self)Initializes theWindowGlobalBridgewith an empty dictionary forglobal_vars.retrieve_global_variables(self, script_name, global_variables)Stores the global variables of a script in theglobal_varsdictionary.Args:script_name(str): The name of the script.global_variables(dict): The global variables to store for the script.return_global_variables(self, script_name)Returns the global variables of a script.Args:script_name(str): The name of the script.Returns:dict: The global variables of the script. 
If no global variables are found, it returns an empty dictionary.WindowManagerClassThis class manages PySimpleGUI windows and their events.global_bridge: Global bridge to access shared variables between different scripts.global_vars(dict): Stores global variables for this script.For details about each method of this class, see thedetailed documentbelow.Helper FunctionsSome additional notable helper functions within this module are:expandable(size: tuple = (None, None)): Returns a dictionary with window parameters for creating an expandable PySimpleGUI window.get_browser(title:str=None,type:str='Folder',args:dict={},initial_folder:str=get_current_path()): Functional and customizable browser input statement.Example UsageHere is an example of how to useabstract_guito create and manage a PySimpleGUI window:# Import the moduleimportabstract_gui(...)# Run the event loop for the windowwindow_manager.while_basic(window)# Retrieve all registered windows and their detailsall_windows=window_manager.get_all_windows()For full example and more practical uses, refer toExample Usagesection.ContributingFork this repository and open a pull request to add snippets or make improvements.ContactShould you have any inquiries, you can reach us atpartners@abstractendeavors.com.LicenseThis project is licensed under the MIT License. See theLICENSEfile for details.Authorsputkoff - Main developerLast Update: May 29, 2023.
abstract-http-client
Abstract Http ClientThis project is a starting template for quickly implementing a python REST api client. A concrete base class encapsulating the popularrequestslibrary is provided. Abstract base class building blocks for integrating other http libraries (such as aiohttp, etc.) are also available. The main advantage to using this base class as a starting point is to save on common boilerplate actions, which are highlighted below.Basic FeaturesSetup of session object and common general attributeshelper methods to send requests with sessionValidation and debug logging of all requests sentinternal counter of requests sentContext manager to manage session tokens (login / logout methods must be implemented)Composable client and service classesInstallationpip install abstract-http-clientBasic UsageSample implementation of api client that has no authfromabstract_http_client.http_clients.requests_clientimportRequestsClientimportjsonclassJsonPlaceholderApiClient(RequestsClient):def__init__(self,host):super().__init__(host=host,use_https=True)defget_users(self):returnself.rest_service.request_get(uri="/users").json()defget_posts(self):returnself.rest_service.request_get("/posts").json()defadd_post(self):returnself.rest_service.request_post("/posts",data={"post":"my_post"})if__name__=="__main__":api=JsonPlaceholderApiClient(host="jsonplaceholder.typicode.com")users=api.get_users()print(json.dumps(users[:2],indent=4))print(f"total requests sent{api.rest_service.request_counter}")Auth Client SampleExample that requires login and authorization header on every requestImplement a login method - here it stores header on session objectBase Class context managers call Logout onexitby defaultLogin not inenterby default, can be added if prefer to have context manager trigger login. Or just put intoinitfromabstract_http_client.http_clients.requests_clientimportRequestsClientclassSampleAuthClient(RequestsClient):def__init__(self,host,user,password):super().__init__(host=host,user=user,password=password)self.login()deflogin(self):""" sample login - getting token and storing on requests session object """data={"user":self.user,"password":self.password}self.token=self.rest_service.request_put(uri="/login",json=data)self.rest_service.session.headers.update({"Authorization":self.token})deflogout(self):""" sample logout - invalidating token and clearing session auth header """self.rest_service.request_delete(uri=f"/logout/{self.token}")self.rest_service.session.headers.pop({"Authorization":self.token})defget_stuff(self)->dict:""" NOTE: this is pseudocode, not real endpoint """returnself.rest_service.request_get("/stuff").json()if__name__=="__main__":# Context manager will handle api logoutwithSampleAuthClient(host="192.168.1.3",user="admin",password="admin")asapi:# call your api herestuff=api.get_stuff()# Do more stuff
abstract-images
abstract_imagesModule - Image and PDF UtilitiesPart of theabstract_essentialsPackageGitHub Repository:abstract_essentialsContact Email:partners@abstractendeavors.comDate: 08/27/2023Version: 0.0.0.1This module, part of theabstract_essentialspackage, provides a collection of utility functions for working with images and PDFs, including loading and saving images, extracting text from images, capturing screenshots, processing PDFs, and more.Image Utilities -image_utils.pyTheimage_utils.pymodule contains functions for image-related operations.Paths to Image Data:get_dimensions(image_path: str): Return dimensions (height, width) of the image.img_to_str(image_path: str): Convert image to text using pytesseract.get_pix(image_path: str): Return pixel data of the image.image_to_bytes(image_path: str, format: str = "PNG"): Convert an image to bytes.get_pixel_data(image_path: str): Get pixel data from the image and save the resultant image.open_image(image_path: str): Open and return the image using PIL.read_image(image_path: str): Read image using OpenCV and return it as a numpy array.Paths to Save:save_url_img(url: str , image_path:str, format: str = "PNG"): Download an image from URL and save it.screenshot(image_path: str = "screenshot.png"): Take a screenshot and save it.save_image(image:Union[Image.Image, ndarray], image_path:str,format:str="PNG"): Save an image to the specified path.Data to Image:get_image_bytes(image_data: bytes): Convert image data in bytes format to a BytesIO stream.pix_to_img(pixel_data: List[List[Tuple[int, int, int]]], image_path: str = "image.png"): Convert pixel data to an image and save it.show_image(image: Union[Image.Image, ndarray]): Display an image.PDF Utilities -pdf_utils.pyThepdf_utils.pymodule provides functions for PDF processing.Function Descriptions:if_none_return(obj, obj_2): Return primary object if secondary object isNone.write_pdf(): Initialize and return a new PDF writer object.read_pdf(file): Read a PDF from a given path and return a PDF reader object.is_pdf_path(file): Check if a file path corresponds to a valid PDF file.get_pdf_obj(pdf_obj): Process a PDF object or file path and return its content.split_pdf(input_path, output_folder, file_name): Split a PDF file into separate pages.pdf_to_img_list(pdf_list, output_folder, file_name, paginate, extension): Convert PDF files into images.img_to_txt_list(img_list, output_folder, file_name, paginate, extension): Convert images to text using OCR.open_pdf_file(pdf_file_path): Open a PDF file using the default system application.image_to_text(image_path): Convert an image to text using Tesseract OCR.get_pdfs_in_directory(directory): Get a list of PDF filenames in a directory.get_all_pdf_in_directory(file_directory): Get full paths of all PDFs in a directory.collate_pdfs(pdf_list, output_pdf_path): Merge a list of PDFs into one.Example Usage:To showcase thepdf_utilsmodule, here's an example combining several utility 
functions:

import os

from abstract_images.pdf_utils import (
    get_file_name,
    get_directory,
    mkdirs,
    split_pdf,
    pdf_to_img_list,
    img_to_txt_list,
)

pdf_path = "path_to_pdf"
file_name = get_file_name(pdf_path)
directory = get_directory(pdf_path)
pdf_folder = mkdirs(os.path.join(directory, file_name))

pdf_split_folder = mkdirs(os.path.join(pdf_folder, "split"))
pdf_list = split_pdf(input_path=pdf_path, output_folder=pdf_split_folder, file_name=file_name)

pdf_Image_folder = mkdirs(os.path.join(pdf_folder, "images"))
img_list = pdf_to_img_list(pdf_list=pdf_list, output_folder=pdf_Image_folder, paginate=False, extension="png")

pdf_Text_folder = mkdirs(os.path.join(pdf_folder, "text"))
text_list = img_to_txt_list(img_list=img_list, output_folder=pdf_Text_folder, paginate=False, extension="txt")

Note: For queries, bug reports, or feature requests, please raise an issue on the GitHub repository or contact us through the provided email: partners@abstractendeavors.com. Ensure that you have the required dependencies installed, and for OCR operations, ensure Tesseract is properly set up and its path is correctly specified.
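The image_utils module listed above can be exercised in a similar way. The following short sketch is hypothetical and built only from the function names and signatures documented in this README; the import path and the (height, width) return order of get_dimensions() are assumptions.

from abstract_images.image_utils import (  # assumed import path
    screenshot, get_dimensions, img_to_str, save_url_img,
)

# Capture the current screen to a file.
screenshot(image_path="screenshot.png")

# Inspect the capture and run OCR on it (requires a working Tesseract install).
height, width = get_dimensions("screenshot.png")
text = img_to_str("screenshot.png")
print(f"{width}x{height} screenshot, first OCR characters: {text[:80]!r}")

# Download a remote image and save it as PNG (placeholder URL).
save_url_img("https://example.com/logo.png", image_path="logo.png", format="PNG")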
abstract-instrument-interface
abstract_instrument_interface

This package contains general classes for interfaces and GUIs, which are inherited by other packages to control lab instruments.

Installation

Use the package manager pip to install:

pip install abstract-instrument-interface

Used by

This is a list of all instrument interfaces that use this package (and that are compatible with Ergastirio):
- pyThorlabsPM100x
- pyThorlabsAPT
abstraction
|project abstraction|NOTE====Please note that abstraction is a *project*, not a finished product.setup=====The following Bash commands, that have been tested on Ubuntu 15.10,should install prerequisites and check out abstraction... code:: bash# Install ROOT.sudo apt-get -y install festivalsudo apt-get -y install pylintsudo apt-get -y install snakefoodsudo apt-get -y install sqlitesudo apt-get -y install python-nltksudo python -m nltk.downloader allsudo easy_install -U gensimsudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0-cp27-none-linux_x86_64.whlsudo pip install git+git://github.com/google/skflow.gitsudo pip install abstractiongit clone https://github.com/wdbm/abstraction.gitThe function ``abstraction.setup()`` should be run.upcoming========Under consideration is a requirement for arcodex to ensure the existenceof a response to an utterance before saving to database.logging=======Updating logging procedures is under consideration because of possiblelogging conflicts. It could be beneficial currently to run using Bashanonymous pipes, in a way like the following:.. code:: bashpython script.py 2> >(grep -E -v "INFO|DEBUG")data====feature scaling---------------Standardization of datasets is a common requirement for many machinelearning estimators implemented in the scikit; they might behave badlyif the individual features do not more or less look like standardnormally-distributed data: Gaussian with zero mean and unit variance --often called a standard scores. Many machine learning algorithms assumethat all features are centered around zero and have variance of the sameorder. A feature with a variance that is orders of magnitude larger thatothers might dominate the objective function and make the estimatorunable to learn from other features. The scikit function ``scale``provides a quick way to perform this operation on a single array-likedataset.SUSY Data Set-------------- https://archive.ics.uci.edu/ml/datasets/SUSY- http://arxiv.org/abs/1402.4735The SUSY Data Set is a classification problem to distinguish between asignal process which produces supersymmetric particles and a backgroundprocess which does not. In the data, the first column is the class label(1 for signal, 0 for background), followed by 18 features (8 low-levelfeatures and 10 high-level features):- lepton 1 pT- lepton 1 eta- lepton 1 phi- lepton 2 pT- lepton 2 eta- lepton 2 phi- missing energy magnitude- missing energy phi- MET\_rel- axial MET- M\_R- M\_TR\_2- R- MT2- S\_R- M\_Delta\_R- dPhi\_r\_b- cos(theta\_r1)This data has been produced by MadGraph5 Monte Carlo simulations of 8TeV proton collisions, with showering and hadronisation performed byPythia 6 and detector response simulated by Delphes. The first 8features are kinematic properties measured by simulated particledetectors. The next 10 features are functions of the first 8 features;they are high-level features derived by physicists to help discriminatebetween the two classes. There are 46% positive examples in the SUSYdata set. The features were standardised over the entiretraining/testing sets with mean zero and standard deviation one, exceptfor those features with values strictly greater than zero; these werescaled such that the mean value was one.Caffe=====introduction------------Caffe is a deep learning framework developed by the Berkeley Vision andLearning Center (BVLC) with cleanliness, readability and speed in mind.It has a clean architecture which enables rapid deployment. 
It isreadable and modifiable, encouraging active development. It is a fastCNN implementation. It has command line, Python and MATLAB interfacesfor day-to-day usage, interfacing with research code and rapidprototyping. While Caffe is essentially a C++ library, it has a modularinterface for development with cmdcaffe, pycaffe and matcaffe.The Caffe core software packages are as follows:- Caffe- CUDA- cuDNN- OpenBLAS- OpenCV- BoostCaffe other dependencies are as follows:- protobuf- google-glog- gflags- snappy- leveldb- lmdb- hdf5The Caffe build tools are CMake and make.command line------------The command line interface cmdcaffe is a Caffe tool for model training,scoring and diagnostics. Run it without arguments for help. It is atdirectory ``caffe/build/tools``.train~~~~~``caffe train`` learns models from scratch, resumes learning from savedsnapshots and fine-tunes models to new data and tasks. All trainingrequires a solver configuration through the option``-solver solver.prototxt``. Resuming requires the option``snapshot model_item_1000.solverstate`` argument to load the solversnapshot... code:: bash# train LeNetcaffe train -solver examples/mnist/lenet_solver.prototxt# train on GPU 2caffe train -solver examples/mnist/lenet_solver .prototxt -gpu 2test~~~~``caffe test`` scores models by running them in the test phase andresport the network output as its score. The network architecture mustbe defined properly to output an accuracy measure or loss as its output.The per-batch score is reported and then the grand average is reportedlast... code:: bash# score the learned LeNet model on the validation setas defined in the model architecture lenet_train_test.prototxtcaffe test - model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000 -gpu 0 -iterations 100benchmark~~~~~~~~~``caffe time`` benchmarks model execution layer-by-layer through timingand synchronisation. This is useful to check system performance andmeasure relative execution times for models... code:: bash# time LeNet training on CPU for 10 iterationscaffe time -model examples/mnist/lenet_train_test.prototxt -iterations 10# time LeNet training on GPU for the default 50 iterationscaffe time -model examples/mnist/lenet_train_test.prototxt - gpu 0diagnose~~~~~~~~``caffe device_query`` reports GPU details for reference and checkingdevice ordinals for running on a device in multi-GPU machines... code:: bash# query the first devicecaffe device_query -gpu 0pycaffe-------The Python interface ``pycaffe`` is the caffe module and its scripts areat the directory ``caffe/python``. Run ``import caffe`` to load models,do forward and backward, handle IO, visualise networks and instrumentmodel-solving. All model data, derivatives and parameters are exposedfor reading and writing.``caffe.Net`` is the central interface for loading, configuring andrunning models. ``caffe.Classifier`` and ``caffe.Detector`` provideconvenience interfaces for common tasks. ``caffe.SGDSolver`` exposes thesolving interface. ``caffe.io`` handles input and output withpreprocessing and protocol buffers. ``caffe.draw`` visualises networkarchitectures. Caffe blobs are exposed as numpy ndarrays for ease-of-useand efficiency.MATLAB------The MATLAB interface ``matcaffe`` is the Caffe MATLAB MEX file and itshelper m-files are at the directory caffe/matlab. There is example code``caffe/matlab/caffe/matcaffe_demo.m``.models------The directory structure of models is as follows:.. 
code:: bash.├── bvlc_alexnet│   ├── deploy.prototxt│   ├── readme.md│   ├── solver.prototxt│   └── train_val.prototxt├── bvlc_googlenet│   ├── bvlc_googlenet.caffemodel│   ├── deploy.prototxt│   ├── quick_solver.prototxt│   ├── readme.md│   ├── solver.prototxt│   └── train_val.prototxt├── bvlc_reference_caffenet│   ├── deploy.prototxt│   ├── readme.md│   ├── solver.prototxt│   └── train_val.prototxt├── bvlc_reference_rcnn_ilsvrc13│   ├── deploy.prototxt│   └── readme.md└── finetune_flickr_style├── deploy.prototxt├── readme.md├── solver.prototxt└── train_val.prototxtdraw a graph of network architecture------------------------------------.. code:: bash"${CAFFE}"/python/draw_net.py "${CAFFE}"/models/bvlc_googlenet/deploy.prototxt bvlc_googlenet_deploy.pngsetup-----.. code:: bashsudo apt-get -y install libprotobuf-devsudo apt-get -y install libleveldb-devsudo apt-get -y install libsnappy-devsudo apt-get -y install libopencv-devsudo apt-get -y install libhdf5-devsudo apt-get -y install libhdf5-serial-devsudo apt-get -y install protobuf-compilersudo apt-get -y install --no-install-recommends libboost-all-devsudo apt-get -y install libatlas-base-devsudo apt-get -y install python-devsudo apt-get -y install libgflags-devsudo apt-get -y install libgoogle-glog-devsudo apt-get -y install liblmdb-devsudo apt-get -y install python-pydot.. code:: bashsudo pip install protobufsudo pip install scikit-image.. code:: bashcdgit clone https://github.com/BVLC/caffe.gitcd caffecp Makefile.config.example Makefile.configEdit the makefile. Uncomment ``CPU_ONLY := 1`` for a non-GPU compilation(without CUDA). It may be necessary to include the following lines:::INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial.. code:: bashtime make alltime make testtime make runtesttime make pycaffe.. code:: bashPYTHONPATH="/home/"${USER}"/caffe/python:${PYTHONPATH}"CAFFE="/home/"${USER}"/caffe"Download Caffe models from the Model Zoo.- http://caffe.berkeleyvision.org/model_zoo.html- https://github.com/BVLC/caffe/wiki/Model-Zoo.. code:: bash~/caffe/scripts/download_model_binary.py models/bvlc_googlenetTorch=====setup-----.. code:: bashcurl -s https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bashgit clone https://github.com/torch/distro.git ~/torch --recursivecd ~/torch; ./install.shCPU versus GPU for deep learning================================Roelof Pieters set some benchmarks in 2015-07 for deep dreaming videoprocessing using CPU and GPU hardware. 
The CPU hardware was Amazon EC2 g2.2xlarge Intel Xeon E5-2670 (Sandy Bridge), 8 cores, 2.6 GHz/3.3 GHz turbo, and the GPU hardware was Amazon EC2 g2.2xlarge 2 x 4 Gb GPU.

input image resolution (pixels) | CPU processing time for 1 image | GPU processing time for 1 image | CPU processing time for 2 minute video | GPU processing time for 2 minute video
540 x 360                       | 45 s                            | 1 s                             | 1 d 21 h                               | 60 minutes
1024 x 768                      | 144 s                           | 3 s                             | 6 d                                    | 3 h

So, the GPU hardware was ~45 -- ~48 times faster than the CPU hardware.

introduction

Project abstraction is a natural language processing project utilising curated conversation data as neural network training data.

bags of words, skip-grams and word vectors

Word vectors are an efficient implementation of bag-of-words and skip-gram architectures for computing vector representations of words. These representations can be used in natural language processing applications and research.

An n-gram is a contiguous sequence of n items from a sequence of text or speech. The items can be phonemes, syllables, letters, words or base pairs depending on the application. Skip-grams are a generalisation of n-grams in which the components (typically words) need not be consecutive in the text under consideration, but may have gaps that are skipped. They are one way of overcoming the data sparsity problem found in conventional n-gram analysis.

Formally, an n-gram is a consecutive subsequence of length n of some sequence of tokens w_n. A k-skip-n-gram is a length-n subsequence in which components occur at a distance of at most k from each other. For example, in the text:

the rain in Spain falls mainly on the plain

the set of 1-skip-2-grams includes all of the 2-grams and, in addition, the following sequences:

the in, rain Spain, in falls, Spain mainly, mainly the, on plain

It has been demonstrated that skip-gram language models can be trained such that it is possible to perform 'word arithmetic'. For example, with an appropriate model, the expression ``king - man + woman`` evaluates to very close to ``queen``.

- "Efficient Estimation of Word Representations in Vector Space", Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, http://arxiv.org/abs/1301.3781

The bag-of-words model is a simplifying representation used in natural language processing. In this model, a text is represented as a bag (multiset -- a set in which members can appear more than once) of its words, disregarding grammar and word order but keeping multiplicity. The bag-of-words model is used commonly in methods of document classification, for which the frequency of occurrence of each word is used as a feature for training a classifier.

Word vectors are continuous distributed representations of words. The tool word2vec takes a text corpus as input and produces word vectors as output. It constructs a vocabulary from the training text data and then learns vector representations of words. A word2vec model is formed by training on raw text. It records the context, or usage, of each word encoded as word vectors.
The significance of a word vector is defined asits usefulness as an indicator of certain larger meanings or labels.curated conversation data=========================Curated conversation data sourced from Reddit is used for theconversation analysis and modelling. Specifically, conversationalexchanges on Reddit are recorded. An exchange consists of an utteranceand a response to the utterance, together with associated data, such asreferences and timestamps. A submission to Reddit is considered as anutterance and a comment on the submission is considered as a response tothe utterance. The utterance is assumed to be of good quality and theresponse is assumed to be appropriate to the utterance based on thecrowd-curated quality assessment inherent in Reddit.translation with word vectors=============================In the paper `"Exploiting Similarities among Languages for MachineTranslation" <http://arxiv.org/abs/1309.4168>`__, Tomas Milokovdescribes how after training two monolingual modes, a translation matrixis generated on the most frequently occurring 5000 words. Using thistranslation matrix, the accuracy of the translations was tested on 1000words. A description Milokov gave of the general procedure is asfollows:- Create matrix ``M`` with dimensionality ``I`` times ``O``, where``I`` is the size of input vectors and ``O`` is the size of theoutput vectors.- Iterate over the training set several times with decreasing learningrate and update ``M``.- For each training sample, compute outputs by multiplying the inputvector by ``M``.- Compute the gradient of the error (target vector - output vector).- Update the weights in ``M`` (with reference to how the weights areupdated between the hidden layer and the output layer in word2veccode).abstraction code picture========================.. figure:: packages_abstraction.png:alt:module abstraction==================The module abstraction contains functions used generally for projectabstraction. Many of the programs of the project use its functions.arcodex: archive collated exchanges===================================The program arcodex is a data collation and archiving programspecialised to conversational exchanges. It can be used to archive todatabase exchanges on Reddit.The following example accesses 2 utterances from the subreddit"worldnews" with verbosity:.. code:: basharcodex.py --numberOfUtterances 2 --subreddits=worldnews --verboseThe following example accesses 2 utterances from each of the subreddits"changemyview" and "worldnews" with verbosity:.. code:: basharcodex.py --numberOfUtterances 2 --subreddits=changemyview,worldnews --verboseThe following example accesses 30 utterances from all of the listedsubreddits with verbosity:.. code:: basharcodex.py --numberOfUtterances 30 --subreddits=askreddit,changemyview,lgbt,machinelearning,particlephysics,technology,worldnews --verboseThe standard run 2014-10-28T202832Z is as follows:.. code:: basharcodex.py --numberOfUtterances 200 --subreddits=askreddit,changemyview,lgbt,machinelearning,particlephysics,technology,worldnews --verbosevicodex, vicodex\_2: view collated exchanges============================================The program vicodex\_2 (and vicodex) is a viewing program specialised toconversational exchanges. It can be used to access and view a databaseof exchanges.The following example accesses database "database.db" and displays itsexchanges data:.. 
code:: bashvicodex_2.py --database="database.db"inspect-database: quick printout of database============================================The program inspect-database provides a simple, comprehensive printoutof the contents of a database. Specifically, for every table in thedatabase it prints all of the column contents for every entry... code:: bashinspect-database.py --database="database.db"The program Sqliteman can be used to provide a view of databaseinformation:.. code:: bashsqliteman database.db::SELECT * FROM exchanges;vcodex: word vectors====================The program vcodex converts conversational exchanges in an abstractiondatabase to word vector representations and adds or updates anabstraction database with these vectors... code:: bashvcodex.py --database="database.db" --wordvectormodel=Brown_corpus.wvmThe program vcodex increases the file size of abstraction databaseversion 2015-01-06T172242Z by a factor of ~5.49. On an i7-5500U CPUrunning at 2.40 GHz, the conversion rate is ~25 exchanges per second.reducodex: remove duplicate collated exchanges==============================================The program reducodex inspects an existing database of conversationalexchanges, removes duplicate entries, creates simplified identifiers forentries and then writes a new database of these entries.The following examples access database "database.db", remove duplicateentries, create simplified identifiers for entries and output database"database\_1.db":.. code:: bashreducodex.py --inputdatabase="database.db".. code:: bashreducodex.py --inputdatabase="database.db" --outputdatabase="database_1.db"fix\_database: fix the data structures of database entries==========================================================.. code:: bashfix_database.py --verbose 2> >(grep -E -v "INFO|DEBUG")abstraction development testing===============================.. code:: bash./arcodex.py --numberOfUtterances 10 --subreddits=askreddit,changemyview,lgbt,machinelearning,particlephysics,technology,worldnews --database=2015-10-12T1612Z.db --verbose.. code:: bash./vicodex.py --database=2015-10-12T1612Z.dbsaving models=============Note that the file ``checkpoint`` in the saved model directory containsfull paths... |project abstraction| image:: http://img.youtube.com/vi/v9zJ9noLeok/0.jpg:target: https://www.youtube.com/watch?v=v9zJ9noLeok
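The translation-matrix procedure described in the "translation with word vectors" section above can be sketched in a few lines of numpy. This is an illustrative gradient-descent loop that follows Mikolov's description, not the project's actual implementation; the vector dimensions and the random data are made up.

import numpy as np

def train_translation_matrix(X, Z, epochs=50, lr0=0.1):
    """Learn M so that x @ M approximates z for paired word vectors (X: source, Z: target)."""
    I, O = X.shape[1], Z.shape[1]
    M = np.zeros((I, O))
    for epoch in range(epochs):
        lr = lr0 / (1 + epoch)            # decreasing learning rate, as described
        for x, z in zip(X, Z):
            out = x @ M                   # output = input vector multiplied by M
            grad = np.outer(x, z - out)   # gradient of the error (target vector - output vector)
            M += lr * grad                # update the weights in M
    return M

# Toy usage; in practice the 5000 most frequent word pairs would be used.
X = np.random.randn(100, 300)
Z = np.random.randn(100, 300)
M = train_translation_matrix(X, Z)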
abstraction-bnadeau
abstractionThis is a simple example package. You can useGithub-flavored Markdownto write your content.I think you should use an<addr>element here instead.
abstractions-aimedic
Abstractions forAIMedic's training pipelineAbstractions for main components inAIMedic'stensorflow-based projects.GuidePlz don't be lazy andreadthedocs! :)
abstractions-pkg-aimedic
abstractioutils
abstractioutilsCommon packages used in Abstractio
abstract-ip-geolocation-api
IP Geolocation for Python (geolocation-api-python)Python library for Abstract free IP Geolocation API.Full documentation can be found on AbstractIP Geolocation APIpage.Getting startedGetting started with Abstract IP Geolocation API is very simple, you just need to install the library into your project as follow:pipinstallabstract-ip-geolocation-apiFrom there you can then call the geolocationapi as follow:# Import abstract ip geolocation api moduleimportimportlibabstract_ip_geolocation_api=importlib.import_module("abstract-ip-geolocation-api")# Initiate the geolocation api with a free API key retrieved on https://www.abstractapi.com/ip-geolocation-apigeolocation_api=abstract_ip_geolocation_api.v1('YOUR_API_KEY')# Fetch location data for a given IP# Note: If you don't provide an ip_address value, then the requester IP will be usedlocation_data=geolocation_api.geolocate(ip_address="ANY_IP_ADDRESS")# Process location data and potential errorsif'ip_address'inlocation_data:# Location data has been successfully retrievedcountry=location_data['country']city=location_data['city']print(country)elif'error'inlocation_data:# Handle Abstract related errorserror=location_data['error']print(error)else:# No location data available for this IPprint('No location data available for this IP')
abstract.jwrotator
abstract.jwrotator Package Readme

Overview

abstract.jwrotator is a simple wrapper around JW Image Rotator (http://www.jeroenwijering.com/?item=jw_image_rotator). It simply provides a jwrotator_view for Folder and Topics.

TODO:
- make portlet configurable

Changelog for abstract.jwrotator

abstract.jwrotator - 0.3
- added JWRotator Portlet [Steven Shade]
- added some other configurable features

abstract.jwrotator - 0.2
- Changed namespace to abstract.*
- Fix an issue with the automatic "Plone Default" skin switch after install [DaviLima]

plone.jwrotator - 0.1 Unreleased
- Initial package structure. [zopeskel]
abstract-kernel-for-svms
abstract-melody-parser
Introduction

This Python module is designed to offer a collection of functions and classes that can be used to give a meaningful interpretation of a succession of MIDI note messages, in terms of rhythm and melody. The development of this project started with the idea of creating parsing software that outputs rhythmic and melodic symbols, which can then be further processed by some machine learning algorithm (based on Markov models, grammars or similar techniques).

The output symbols of the parsing are based on the Impro-Visor software notation (https://www.cs.hmc.edu/~keller/jazz/improvisor/) and the theory supporting it can be found at this link: https://www.cs.hmc.edu/~keller/jazz/improvisor/papers.html.

Installation

The package can easily be installed using the pip package manager.

pip install abstract_melody_parser

MidiQueue

To parse note_on/note_off messages, a particular data structure called MidiQueue is used. This queue stores MIDI note messages with a timestamp, which is needed to parse the rhythmic structure of a melody. This data structure is supposed to collect only melodies; chords are not supported. If multiple note_on messages are pushed into it with timestamps too close to each other, only the first one will be kept.

In order to insert a new note message with a timestamp included, the get_timestamp_msg function must be used.

import abstract_melody_parser as amp

note_queue = amp.MidiNoteQueue()
note_queue.push(amp.get_timestamp_msg('note_on', 47))
# some temporal delay...
note_queue.push(amp.get_timestamp_msg('note_off', 47))

Before parsing a midi queue, it's suggested to clean the note_on messages that are still missing the relative note_off message.

note_queue.clean_unclosed_note_ons()

Some other methods are exposed for extra flexibility.

msg = note_queue.pop()                    # gets the oldest note message, removing it from the queue
list_of_msg = note_queue.get_container()  # to deal directly with the data container
musical_notes = note_queue.get_notes()    # to obtain a list of notes in std notation
note_queue.clear()                        # clears the queue

The use of the MidiQueue for the melodic and rhythmic parsing will be explained in the following sections.

Parsing melodies

Melody parsing translates a MIDI note number into an abstract melody notation. This parser supports three different abstract melody symbols.

c: chord tone
l: color tone
x: random tone

An abstract melody can be realized on a particular chord; in order to obtain the correct abstract melody symbol, it is necessary to input both a note in standard notation and a chord.
To obtain a note in standard notation from a MIDI note value, an additional parsing function is needed.

midi_msg = note_queue.pop()                         # getting a midi message
note_midi = midi_msg['note']                        # getting the midi note number
note_std_notation = amp.parse_midi_note(note_midi)  # from midi note number to std note notation
chord = 'CM'                                        # C major chord
note_abstract_melody_notation = amp.parse_musical_note(note_std_notation, chord)  # from std note notation to abstract note notation

The notes in standard notation are uppercase letters, and a sharp symbol can be present (flats are not used):

musical_notes = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

To modify the particular tones (chord or color tones) of a chord, or to add a new one, the chord_tones dictionary inside the chord.py script can be edited.

# excerpt of the dictionary
chord_tones = {
    'CM': {
        'c': ['C', 'E', 'G'],       # chord tones can be added here
        'l': ['B', 'D', 'F', 'A'],  # color tones can be added here
    },
    # the list continues ...
}

Parsing rhythm

The rhythmic parsing process translates the series of notes composing a melody into symbols describing the duration of each note. These are the symbols used to represent the rhythm; they are related to a certain bpm and they refer to a 4/4 time signature.

1: whole note
2: half note
4: quarter note
4dot: dotted quarter note
4t: quarter note triplet
8: eighth note
8t: eighth note triplet
16: sixteenth note
16t: sixteenth note triplet

To define a rhythmic frame for the analysis, a duration dictionary must be created and updated every time the bpm changes.

bpm = 120.5
durations = amp.get_durations(bpm)

After defining the durations from the bpm, the parsing can follow. It will return a list of symbols for all the notes in the melody.

note_queue.clean_unclosed_note_ons()  # always do that before parsing
rhythmic_symbols = amp.parse_rhythm(note_queue, durations)

Putting all together

Sometimes the informative content of a melody can only be found in the rhythm or in the harmonic relations between the notes and a chord, but on many other occasions it is the underlying relation between the two that really expresses the message a musician is trying to share with his/her performance. Following this perspective, it can be useful to perform both melodic and rhythmic parsing from a single MidiQueue, and read the resulting symbols as pairs. Let's suppose we parse a MidiQueue that contains a melody playing on a Dm chord:

current_chord = 'Dm'
bpm = 125
durations = amp.get_durations(bpm)

note_queue.clean_unclosed_note_ons()  # again, don't forget to clean the unclosed note_on messages
notes = note_queue.get_notes()

# list comprehension to map the std notes to an abstract melody
abstract_melody = [amp.parse_musical_note(note, current_chord) for note in notes]

# getting the rhythm too
rhythm = amp.parse_rhythm(note_queue, durations)

The resulting lists can then be merged into a single one, pairing each melodic symbol with its rhythmic symbol.

full_melody = [x + y for x, y in zip(abstract_melody, rhythm)]
abstract-metrics
✨Abstract✨ metrics

A lightweight framework for defining arbitrary business metrics as Python code.

Installation

pip install abstract-metrics

Basic Usage
- Install
- Subclass one of the metric types (e.g. ApplicationMetricBoolean)
- Add an optimistic compute method that returns your metric.
- Register a runner with your metric and run it!

Example

Let's say you want to write a metric that checks if one of your applications follows your organization's application naming standards. We will use the ApplicationMetricBoolean type to do this. All metrics are formatted as {subject}Metric{type}. In this case, our subject is an Application and our type is Boolean.

Because Applications are our subject, we need to subclass ApplicationMetricBoolean. This class has a few methods that we can use to help us compute our metric. The most important one is compute. This method is called by the runner and should return the boolean value. Additionally, we should use the application property (via the Application wrapper) to store any context we need to compute this metric.

from metric import Application, ApplicationMetricBoolean
from metric.utils import MatrixRunner
from metric.types import OutputFormat

def main():
    # Construct list of basic Application objects
    apps = [Application(app) for app in [
        "myorg-finance-app",
        "myorg-marketing-service",
        "myorg-sales-app",
        "sandbox-dinasour",
    ]]

    # Construct a list of our metrics
    metrics = [AppNameCompliance]

    # This will not only construct our Application Metrics but it will also execute
    # them and print them to stdout as a table.
    MatrixRunner(format=OutputFormat.OUTPUT_FORMAT_TABLE).run(apps, metrics)

class AppNameCompliance(ApplicationMetricBoolean):
    r"""All Application names should start with 'myorg-'."""

    def compute(self):
        if self.application.name.startswith("myorg-"):
            return True
        return False

if __name__ == "__main__":
    main()

If we run this code, we'll get the following output:

Application               AppNameCompliance
myorg-finance-app         True
myorg-marketing-service   True
myorg-sales-app           True
sandbox-dinasour          False
abstractMLBaseModelTask
No description available on PyPI.
abstractModel
Abstract Model Basic Class. You only need to fully implement _load_model and _forward; the model can then be used for thread-safe inference.
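A minimal sketch of the contract described above (implement _load_model and _forward, then use the model for thread-safe inference). The base-class name, import path, and method signatures below are guesses for illustration only; check the package source for the real interface.

from abstractModel import AbstractModel  # hypothetical import; the real name may differ

class EchoModel(AbstractModel):
    def _load_model(self):
        # Load weights / runtime objects once, before any inference call.
        self.model = lambda x: x

    def _forward(self, inputs):
        # Per the description, the base class is expected to make inference
        # calls thread-safe; this just runs the loaded model.
        return self.model(inputs)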
abstract-modules
#abstract_modulesPython Module Upload to PyPIThis utility script allows you to easily upload your Python module to the Python Package Index (PyPI) using Twine. It automates several steps of the packaging and distribution process, making it easier to share your module with the Python community.PrerequisitesBefore using this script, ensure you have the following prerequisites:Python 3.x installed on your systemtwine,build,abstract_utilities,abstract_guiandpexpectpackages installed. You can install them usingpip:pipinstalltwinebuildpexpectGetting StartedClone the repository or download the script file (upload_to_pypi.py) to your local machine.Navigate to the directory where your Python module is located using the command line.Optional: If you use a virtual environment, activate it before proceeding.UsageRun the scriptupload_to_pypi.pywith Python 3:python3upload_to_pypi.pyThe script will guide you through the following steps:Selecting Module Directory: You will be prompted to pick the module directory using a GUI window. This directory should contain the necessary files, including thesetup.pyfile.Updating Version Number: If the version number in thesetup.pyfile matches an existing version in thedistdirectory, you will be asked to enter a new version number.Building the Module: The script will build your module using thesetup.pyscript. The distribution files (wheels) will be placed in thedistdirectory.Uploading to PyPI: The script will prompt you to enter your PyPI username and password securely. It will then upload the module to PyPI using Twine.Installing the Module: After successful upload, you will have the option to install the module using pip for testing purposes.Examplefromabstract_modules.upload_utilsimportupload_mainupload_main("path_to_parent_directory")ContributingContributions are welcome! If you find any issues or have suggestions for improvements, please feel free to create an issue or submit a pull request.LicenseThis utility script is open-source and distributed under theMIT License.AcknowledgmentsThis script utilizes the following packages and resources:pexpect- For automating interactive command-line applications.Twine- For securely uploading Python packages to PyPI.Python- The Python programming language.DisclaimerThis script is provided "as is" without warranty of any kind. Use it at your own risk.SupportIf you encounter any issues or need assistance, pleasecreate an issueor seek support in the Python community forums.Thank you for using our utility script! If you have any feedback or questions, don't hesitate to contact us. Happy packaging and distributing!
abstract-open-traffic-generator
Open Traffic Generator Abstract Python PackageThis package is an autogenerated abstract python package based on the Open Traffic Generator models.
abstractor
No description available on PyPI.
abstract-package-test
This is the README for project abstract_package_test
abstract-python-core
AbstractAPI python-core libraryPure Python library for usingAbstract API. This is core library that has some shared logic and functions used by other libraries.See otherTo use and maintain this library, please see the following:Python Email ValidationPython IP GeolocationPython Phone Validation
abstract-python-email-validation
AbstractAPI python-email-validation libraryIntegrate the powerfulemail validation API from Abstractin your Python project in a few lines of code.Abstract's Email Validation and Verification API is a fast, lightweight, modern, and RESTful JSON API for determining the validity and other details of email addresses.It's very simple to use: you only need to submit your API key and an email address, and the API will respond an assessment of its validity, as well as additional details like quality score if it's a disposable email, a catchall address, and more.Validating and verifying email addresses is a critical step to reducing the chances of low-quality data and fraudulent or risky users in your website or application.DocumentationSupported Python VersionsThis library supports thePython version 3.6and higher.InstallationYou can installpython-email-validationvia PyPi or by downloading the source.Via PyPi:python-email-validationis available on PyPi as theabstract-python-email-validationpackage:pipinstallabstract-python-email-validationAPI keyGet your API key for free and without hassle from theAbstact website.QuickstartVerify email# Verify email using Abstract's Email Validation and Verification API and Pythonfrompython_email_validationimportAbstractEmailValidationEMAIL_VAL_API_KEY="YYYYYY";# Get your API Key from https://app.abstractapi.com/api/email-validation/documentationAbstractEmailValidation.configure(EMAIL_VAL_API_KEY)AbstractEmailValidation.verify("contact.email@gmail.com")API responseThe API response is returned in aEmailValidationDataobject.PARAMETERTYPEDETAILSemailStringThe value for "email" that was entered into the request.auto_correctStringIf a typo has been detected then this parameter returns a suggestion of the correct email (e.g.,johnsmith@gmial.com=>johnsmith@gmail.com). If no typo is detected then this is empty.deliverabilityStringAbstract's evaluation of the deliverability of the email. Possible values are: DELIVERABLE, UNDELIVERABLE, RISKY, and UNKNOWNquality_scoreNumberAn internal decimal score between 0.01 and 0.99 reflecting Abstract's confidence in the quality and deliverability of the submitted email.is_valid_formatBooleanIs true if the email follows the format of "address @ domain . TLD". If any of those elements are missing or if they contain extra or incorrect special characters, then it returns false.is_free_emailBooleanIs true if the email's domain is found among Abstract's list of free email providers (e.g., Gmail, Yahoo, etc).is_disposable_emailBooleanIs true if the email's domain is found among Abstract's list of disposable email providers (e.g., Mailinator, Yopmail, etc).is_role_emailBooleanIs true if the email's local part (e.g., the "to" part) appears to be for a role rather than individual. Examples of this include "team@", "sales@", info@", etc.is_catchall_emailBooleanIs true if the domain is configured to catch all email.is_mx_foundBooleanIs true if MX Records for the domain can be found. Only available on paid plans. Will return null and UNKNOWN on free plans.is_smtp_validBooleanIs true is the SMTP check of the domain was successful. Only available on paid plans. 
Will return null and UNKNOWN on free plans.

Detailed documentation

You will find additional information and request examples in the Abstract help page.

Getting help

If you need help installing or using the library, please contact Abstract's Support. For bug reports and feature suggestions, please use this repository's issues page.

Contribution

Contributions are always welcome, as they improve the quality of the libraries we provide to the community. Please provide your changes covered by the appropriate unit tests, and post them in the pull requests page.

Setup

To install the requirements, run:

python3 setup.py install --user

Once you have implemented all your changes and the unit tests, run the following command to run the tests:

EMAIL_VAL_API_KEY=YYYYYY python3 tests/test_python_email_validation.py
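As a small usage sketch of the response fields documented above: the snippet below assumes the EmailValidationData response exposes those parameters as attributes, which is an assumption based on the field table rather than confirmed API.

from python_email_validation import AbstractEmailValidation

AbstractEmailValidation.configure("YYYYYY")  # your Email Validation API key
result = AbstractEmailValidation.verify("contact.email@gmail.com")

# Gate a sign-up form on the documented response fields.
if result.deliverability == "DELIVERABLE" and not result.is_disposable_email:
    print(f"accepted (quality score {result.quality_score})")
elif result.auto_correct:
    print(f"did you mean {result.auto_correct}?")
else:
    print("rejected")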
abstract-python-exchange-rates
AbstractAPI python-exchange-rates libraryIntegrate the powerfulExchange Rates API from Abstractin your Python project in a few lines of code.The Exchange Rate API is an REST API that allows you to:look up the latest exchange rates for 80+ currencies via theliveendpointget historical exchange rates using thehistoricalendpointconvert an arbitrary amount from one currency to another using theconvertendpointIt's very simple to use: you only need to submit your API key and a currency symbol (such as "USD"), and the API will respond with current exchange rate, historical data, or convertion rates.DocumentationSupported Python VersionsThis library supports thePython version 3.6and higher.InstallationYou can installpython-exchange-ratesvia PyPi or by downloading the source.Via PyPi:python-exchange-ratesis available on PyPi as theabstract-python-exchange-ratespackage:pipinstallabstract-python-exchange-ratesAPI keyGet your API key for free and without hassle from theAbstact website.QuickstartGet exchange ratesimportpprintfrompython_exchange_ratesimportAbstractExchangeRatesEXCHANGE_RATES_API_KEY="YYYYYY";# Get your API Key from https://app.abstractapi.com/api/exchange-rates/documentationAbstractExchangeRates.configure(EXCHANGE_RATES_API_KEY)# Get live exchange rates using Abstract's Exchange Rates API and Pythonresponse=AbstractExchangeRates.live("EUR")pprint(response)# Get historical exchange rates using Abstract's Exchange Rates API and Pythonresponse=AbstractExchangeRates.historical('EUR','2021-05-01');pprint(response)# Convert currency using Abstract's Exchange Rates API and Pythonresponse=AbstractExchangeRates.convert('EUR','USD');pprint(response)API responseThe API response contains the following fields:liveresponse parametersParameterTypeDetailsbaseStringThe base currency used to get the exchange rates.last_updatedStringThe Unix timestamp of when the returned data was last updated.exchange_ratesObjectA JSON Object containing each of the target currency as the key and its exchange rate versus the base currency as that key's value.historicalresponse parametersParameterTypeDetailsbaseStringThe base currency used to get the exchange rates.dateStringThe date the currencies were pulled from, per the successful request.exchange_ratesObjectA JSON Object containing each of the target currency as the key and its exchange rate versus the base currency as that key's value.convertresponse parametersParameterTypeDetailsbaseStringThe base currency used to get the exchange rates.targetStringThe target currency that the base_amount was converted into.dateStringThe date the currencies were pulled from, per the successful request.base_amountFloatThe amount of the base currency from the request.converted_amountFloatThe amount of the target currency that the base_amount has been converted intoexchange_rateFloatThe exchange rate used to convert the base_amount from the base currency to the target currencyDetailed documentationYou will find additional information and request examples in theAbstract help page.Getting helpIf you need help installing or using the library, please contactAbstract's Support.For bug report and feature suggestion, please usethis repository issues page.ContributionContributions are always welcome, as they improve the quality of the libraries we provide to the community.Please provide your changes covered by the appropriate unit tests, and post them in thepull requests page.SetupTo install the requirements, run:python3setup.pyinstall--userOnce you implementer all your changes and the unit 
tests, run the following command to run the tests:EXCHANGE_RATES_API_KEY=YYYYYYpython3tests/test_python_exchange_rates.py
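Since the quickstart above simply pprint()s the responses, the sketch below assumes they behave like plain dictionaries keyed by the parameter names in the tables above; that access pattern is an assumption, not confirmed API.

from python_exchange_rates import AbstractExchangeRates

AbstractExchangeRates.configure("YYYYYY")  # your Exchange Rates API key

# Read the live EUR -> USD rate.
live = AbstractExchangeRates.live("EUR")
usd_rate = live["exchange_rates"]["USD"]
print(f"1 EUR = {usd_rate} USD")

# Convert and sanity-check the arithmetic against the quoted rate
# (converted_amount should equal base_amount * exchange_rate per the field table).
conversion = AbstractExchangeRates.convert("EUR", "USD")
expected = conversion["base_amount"] * conversion["exchange_rate"]
assert abs(conversion["converted_amount"] - expected) < 1e-6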
abstract-python-ip-geolocation
AbstractAPI python-ip-geolocation libraryIntegrate the powerfulIP Geolocation API from Abstractin your Python project in a few lines of code.Abstract's IP Geolocation API is a fast, lightweight, modern, and RESTful JSON API allowing you to look up the location, timezone, country details, and more of an IPv4 or IPv6 address.It's very simple to use: you only need to submit your API key and an IP address, and the API will respond with an assessment of its geographical location, as well as additional details like the timezone, if it's a VPN address, and more.Validating and verifying IP addresses is a critical step to reducing the chances of low-quality data and fraudulent or risky users in your website or application.DocumentationSupported Python VersionsThis library supports thePython version 3.6and higher.InstallationYou can installpython-ip-geolocationvia PyPi or by downloading the source.Via Composer:python-ip-geolocationis available on PyPi as theabstract-python-ip-geolocationpackage:pipinstallabstract-python-ip-geolocationAPI keyGet your API key for free and without hassle from theAbstact website.QuickstartGeolocation from an IP Address# Get a Geolocation from an IP Address Abstract's IP Geolocation API and Pythonfrompython_ip_geolocationimportAbstractIpGeolocationIP_GEOLOCATION_API_KEY="YYYYYY";# Get your API Key from https://app.abstractapi.com/api/ip-geolocation/documentationAbstractIpGeolocation.configure(IP_GEOLOCATION_API_KEY)AbstractIpGeolocation.look_up("108.177.16.0")API responseThe API response is returned in aIpGeolocationDataobject.PARAMETERTYPEDETAILSParameterTypeDetailsip_addressStringThe requested IP addresscityStringCity's name.city_geoname_idStringCity's geoname ID.regionStringState or province in which the the city is located.region_iso_codeChar[2]State or province's ISO 3166-2 code.region_geoname_idStringState or province's geoname ID.postal_codeStringZIP or postal code.countryStringCountry's name.country_codeChar[2]Country's ISO 3166-1 alpha-2 code.country_geoname_idStringCountry's geoname ID.country_is_euBooleanTrue if the country is in the EU, false if it is not.continentStringContinent's name.continent_codeChar[2]2 letter continent code: AF, AS, EU, NA, OC, SA, ANcontinent_geoname_idStringContinent's geoname ID.longitudeFloatDecimal of the longitude.latitudeFloatDecimal of the latitude.security > is_vpnBooleanWhether the IP address is using from a VPN or using a proxytimezone > nameStringTimezone's name from the IANA Time Zone Database.timezone > abbreviationStringTimezone's abbreviation, also from the IANA Time Zone Database.timezone > gmt_offsetStringTimezone's offset from Greenwich Mean Time (GMT).timezone > current_timeStringCurrent time in the local time zone.timezone > is_dstBooleanTrue if the location is currently in Daylight Savings Time (DST).flag > svgStringLink to a hosted version of the country's flag in SVG format.flag > pngStringLink to a hosted version of the country's flag in PNG format.flag > emojiStringCountry's flag as an emoji.flag > unicodeStringCountry's flag in unicode.currency > currency_nameStringThe currency's name.currency > currency_codeStringThe currency's code in ISO 4217 format.connection > connection_typeStringType of network connection: Dialup, Cable/DSL, Cellular, Corporateconnection > autonomous_system_numberUint32Autonomous System numberconnection > autonomous_system_organizationStringAutonomous System Organization name.connection > isp_nameStringInternet Service Provider (ISP) name.connection > 
organization_name String Organization name.

Detailed documentation

You will find additional information and request examples in the Abstract help page.

Getting help

If you need help installing or using the library, please contact Abstract's Support. For bug reports and feature suggestions, please use this repository's issues page.

Contribution

Contributions are always welcome, as they improve the quality of the libraries we provide to the community. Please provide your changes covered by the appropriate unit tests, and post them in the pull requests page.

Setup

To install the requirements, run:

python3 setup.py install --user

Once you have implemented all your changes and the unit tests, run the following command to run the tests:

IP_GEOLOCATION_API_KEY=YYYYYY python3 tests/test_python_ip_geolocation.py
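A short usage sketch of the fields documented above; the nested attribute access (for example security > is_vpn) is an assumption about how the IpGeolocationData object exposes the table's parameters.

from python_ip_geolocation import AbstractIpGeolocation

AbstractIpGeolocation.configure("YYYYYY")  # your IP Geolocation API key
geo = AbstractIpGeolocation.look_up("108.177.16.0")

# Flag risky traffic and show where and when the request comes from.
if geo.security.is_vpn:
    print("request appears to come through a VPN or proxy")
print(f"{geo.city}, {geo.country} ({geo.timezone.name}, local time {geo.timezone.current_time})")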
abstract-python-phone-validation
AbstractAPI python-phone-validation libraryIntegrate the powerfulPhone Validation API from Abstractin your Python project in a few lines of code.Abstract's Phone Number Validation and Verification API is a fast, lightweight, modern, and RESTful JSON API for determining the validity and other details of phone numbers from over 190 countries.It's very simple to use: you only need to submit your API key and a phone number, and the API will respond as assessment of its validity, as well as additional details like the carrier details, line type, region and city details, and more.Validating and verifying phone numbers is a critical step to reducing the chances of low quality data and fraudulent or risky users in your website or application.DocumentationSupported Python VersionsThis library supports thePython version 3.6and higher.InstallationYou can installpython-phone-validationvia PyPi or by downloading the source.Via Composer:python-phone-validationis available on Packagist as theabstract-python-phone-validationpackage:pipinstallabstract-python-phone-validationAPI keyGet your API key for free and without hassle from theAbstact website.QuickstartVerify phone# Verify phone using Abstract's Phone Validation and Verification API and Pythonfrompython_phone_validationimportAbstractPhoneValidationPHONE_VAL_API_KEY="YYYYYY";# Get your API Key from https://app.abstractapi.com/api/phone-validation/documentationAbstractPhoneValidation.configure(PHONE_VAL_API_KEY)AbstractPhoneValidation.verify("14154582468")API responseThe API response is returned in aIpGeolocationDataobject.PARAMETERTYPEDETAILSnumberStringThe phone number submitted for validation and verification.validBooleanIs true if the phone number submitted is valid.local_formatStringThe local or national format of the submitted phone number. For example, it removes any international formatting, such as "+1" in the case of the US.international_formatStringThe international format of the submitted phone number. This means appending the phone number's country code and a "+" at the beginning.country_nameStringThe name of the country in which the phone number is registered.country_codeStringThe country's two letter ISO 3166-1 alpha-2 code.country_prefixThe country's calling code prefix.registered_locationStringAs much location details as are available from our data. This can include the region, state / province, and in some cases down to the city.carrierStringThe carrier that the number is registered with.line_typeStringThe type of phone number. The possible values are: Landline, Mobile, Satellite, Premium, Paging, Special, Toll_Free, and Unknown.Detailed documentationYou will find additional information and request examples in theAbstract help page.Getting helpIf you need help installing or using the library, please contactAbstract's Support.For bug report and feature suggestion, please usethis repository issues page.ContributionContributions are always welcome, as they improve the quality of the libraries we provide to the community.Please provide your changes covered by the appropriate unit tests, and post them in thepull requests page.SetupTo install the requirements, run:python3setup.pyinstall--userOnce you implementer all your changes and the unit tests, run the following command to run the tests:PHONE_VAL_API_KEY=YYYYYYpython3tests/test_python_phone_validation.py
abstractqueue
# abstractqueue
abstract-queue
No description available on PyPI.
abstract_rendering
UNKNOWN
abstracts
Abstract class and interface definitions.Create anabstract.AbstractionAnAbstractionis ametaclassfor defining abstract classes.Let’s define an abstractAFooclass and give it an abstractdo_foomethod.Like any python class, anAbstractioncan have any name, but it may be helpful to distinguish abstract classes from others by prefixing their name withA.>>>importabc>>>importabstracts>>>classAFoo(metaclass=abstracts.Abstraction):......@abc.abstractmethod...defdo_foo(self):...raiseNotImplementedErrorAbstract classescannotbe instantiated directly.>>>AFoo()Traceback (most recent call last):...TypeError:Can't instantiate abstract class AFoo with abstract method... do_fooCreate animplementerfor anabstract.AbstractionIn order to make use ofAFoo, we need to create an implementer for it.>>>@abstracts.implementer(AFoo)...classFoo:...passThe implementermustimplement all of the abstract methods, defined by its abstract classes.>>>Foo()Traceback (most recent call last):...TypeError:Can't instantiate abstract class Foo with abstract method... do_foo>>>@abstracts.implementer(AFoo)...classFoo2:......defdo_foo(self):...return"DID FOO">>>Foo2()<__main__.Foo2 object at ...>An implementer inherits from itsAbstractionsAnimplementerclass is a subclass of itsAbstraction.>>>issubclass(Foo2,AFoo)TrueLikewise an instance of an implementer is an instance of itsAbstraction>>>isinstance(Foo2(),AFoo)TrueTheAbstractionclass can be seen in the classbases, and the methods of theAbstractioncan be invoked by the implementer.>>>importinspect>>>AFooininspect.getmro(Foo2)TrueCreate animplementerthat implements multipleAbstractions.An implementer can implement multiple abstractions.Let’s create a second abstraction.>>>classABar(metaclass=abstracts.Abstraction):......@abc.abstractmethod...defdo_bar(self):...raiseNotImplementedErrorAnd now we can create an implementer that implememts both theAFooandABarAbstractions.>>>@abstracts.implementer((AFoo,ABar))...classFooBar:......defdo_foo(self):...return"DID FOO"......defdo_bar(self):...return"DID BAR">>>FooBar()<__main__.FooBar object at ...>Defining abstract propertiesProperties can be defined in an abstract class, and just like with normal methods, they must be implemented by any implementers.>>>classAMover(metaclass=abstracts.Abstraction):......@property...@abc.abstractmethod...defspeed(self):...return5......@property...@abc.abstractmethod...defdirection(self):...return"forwards"Callingsuper()on anabstractmethodJust like with pythons “Abstract Base Classes” you can callsuper()in anabstractmethod, to invoke an abstract implementation.>>>@abstracts.implementer(AMover)...classMover:......@property...defdirection(self):...return"backwards"......@property...defspeed(self):...returnsuper().speedThis custom implementation ofAMovermustimplement bothspeedanddirection, even if its implementation invokes the abstract implementation.In this case it uses the default/abstract implementation ofspeedwhile providing its own implementation ofdirection.>>>mover=Mover()>>>mover<__main__.Mover object at ...>>>>mover.speed5>>>mover.direction'backwards'Defining anabstracts.InterfaceclassAnInterfaceis much like anAbstraction, but with a few differences.AnInterfacecan only define methods with the@interfacemethoddecorator.It cannot define normal methods or methods with the@abstractmethod, only methods with@interfacemethod.An@interfacemethodif invoked will always raise anNotImplementedError, and therefore cannot be used as an abstract implementation.Lets add anInterfaceclass that we can use.In the way that it may be 
helpful to distinguish anAbstractionfrom other types of classes, it may be also useful to distinguish anInterfaceby using anIprefix when naming them.>>>classIGeared(metaclass=abstracts.Interface):......@property...@abstracts.interfacemethod...defnumber_of_gears(self):...# Raising an error is ~superfluous as the decorator will raise...# anyway if the method is invoked....raiseNotImplementedErrorImplementing anInterfaceJust like with anAbstraction, anInterfacecan be implemented using the@implementerdecorator.An implementer, can implement a combination ofAbstractionsandInterfaces.>>>@abstracts.implementer((AMover,IGeared))...classBicycle:......@property...defdirection(self):...returnsuper().direction......@property...defspeed(self):...returnsuper().speed......@property...defnumber_of_gears(self):...return7>>>Bicycle().number_of_gears7An implementer doesnotinherit from itsInterfacesAnimplementerclass is a subclass of itsInterfaces.>>>issubclass(Bicycle,AMover)True>>>issubclass(Bicycle,IGeared)TrueLikewise an instance of an implementer is an instance of itsInterfaces>>>isinstance(Bicycle(),AMover)True>>>isinstance(Bicycle(),IGeared)TrueUnlike withAbstractionsit doesnothowever, inherit from itsInterfaces.>>>AMoverininspect.getmro(Bicycle)True>>>IGearedininspect.getmro(Bicycle)False@interfacemethodscan never be invokedThe key thing to remember is that you cannot callsuper()on any@interfacemethod, or directly invoke it.If it was defined as part of anInterfaceyou will receive anAttributeError, as the implementation does not inherit directly from the interface.>>>@abstracts.implementer((AMover,IGeared))...classBrokenBicycle:......@property...defdirection(self):...returnsuper().direction......@property...defspeed(self):...returnsuper().speed......@property...defnumber_of_gears(self):...returnsuper().number_of_gears>>>BrokenBicycle().number_of_gearsTraceback (most recent call last):...AttributeError:'super' object has no attribute 'number_of_gears'WarningMisuse of this class can haveunintended consequencesIf you invokesuper()on an@interfacemethoddefined as part of anAbstractionit will raiseNotImplementedError.As anInterfacecan only hold this type of method, you can never invoke any of its methods. Doing so directly will raising aNotImplementedError.>>>IGeared.number_of_gears.__get__(Bicycle())Traceback (most recent call last):...NotImplementedErrorCombining@abstractmethodand@interfacemethodin anAbstractionAsInterfacesare “pure”, they cannot use@abstractmethodor contain any implementation.AnAbstractionon the other hand can combine both.Lets create a pureInterfacethat represents a “shed”.>>>classIShed(metaclass=abstracts.Interface):......@property...@abstracts.interfacemethod...defsize(self):...raiseNotImplementedErrorWe can use this interface to create anABikeShedAbstraction>>>classABikeShed(IShed,metaclass=abstracts.Abstraction):......@property...@abstracts.interfacemethod...defmax_bike_size(self):...raiseNotImplementedError......@abc.abstractmethod...defget_capacity(self):...returnint(self.size/self.max_bike_size)We can now create an implementation.It will need to define both thesizeand themax_bike_size, as these areinterfacemethods.It can, however, make use of the abstract implementation ofget_capacity, even if it must be defined.>>>@abstracts.implementer(ABikeShed)...classBikeShed:......@property...defmax_bike_size(self):...return7......@property...defsize(self):...return161......defget_capacity(self):...returnsuper().get_capacity()>>>bikeshed=BikeShed()>>>bikeshed.get_capacity()23
abstract-scorm-xblock
Abstract Scorm XBlockYet another SCORM XBlock for Open edX®.Supports course export/import. Editable within Open edx Studio. Saves student state and reports scores to the progress tab of the course. Currently supports SCORM 1.2 and SCORM 2004 standards.Developed byAbstract Technology, based onedx_xblock_scormbyRaccoon Gang.InstallationInstall package withpip install abstract-scorm-xblockUsageAddabstract_scorm_xblockto the list of advanced modules in the advanced settings of a course.Add ascormcomponent to your Unit.Upload a zip file containing your content package.Theimsmanifest.xmlfile must be at the root of the zipped package. Make sure you don't have an additional directory at the root of the zip archive.Publish your content as usual.DevelopmentSetupTo setup the development environment:create a Python3 virtualenv. If direnv is installed adirenv allowshould be enough.install derex withpip install -r requirements.txtsetup the derex project. Readhttps://derex.page/quickstart.html#quickstartfor further informations.Development and DebuggingIn order to be able to develop and debug effectively some steps may be taken:get a shell inside the container:cd derex_project ddc-project exec cms shsetup the package in editable mode. This will allow for testing changes without the need to reinstall the package:pip install -e /openedx/derex.requirements/abstract_scorm_xblocklaunch the Django debug server manually and bind it on port81:python manage.py cms runserver 0:81on your browserhttps://studio.scorm.localhost:81should now be available. You should now be able to insert debug code in both Python and JS files and benefit from Django runserver auto reload feature.Running testsTests can be run with:ddc-project run --rm lms python manage.py lms test abstract_scorm_xblock --keepdbThe first time this command is run it will initialize the test database. Remove the--keepdbflag if you want the test database to be created/destroyed each time.To run a coverage report:ddc-project run -e COVERAGE_RCFILE=../derex.requirements/abstract_scorm_xblock/.coveragerc --rm lms sh -c "coverage run manage.py lms test abstract_scorm_xblock --keepdb && coverage html"This will produce an HTML coverage report in theabstract_scorm_xblock/htmlcovdirectory.You can also use the Makefile shortcuts:make test make coverageCaveatsIf a SCORM package is deleted from the course "Files & Uploads" section, the Import/Export functionality will export a course with a broken XBlock.TODODelete extracted old SCORM packages from default storage
abstract-security
Abstract SecurityTheabstract_securitymodule is a Python utility that provides functionality for managing environment variables and securely loading sensitive information from.envfiles. It is designed to simplify the process of accessing and managing environment variables within your Python applications.Table of ContentsFeaturesInstallationUsageFunctionsLicenseContactFeaturesFlexible.envFile Location: Searches for.envfiles in current working directory, home directory, and a special.envy_alldirectory within the home directory.Clean and Secure Key Retrieval: Offers functionality to cleanly split strings at equals signs and safely retrieve environment variable values.InstallationInstallabstract_securityusing pip:pipinstallabstract-securityUsageBasic UsageHere's a simple example to get started:fromabstract_securityimportget_env_valueenv_key='YOUR_ENV_VARIABLE_KEY'value=get_env_value(key=env_key)Advanced UsageTheAbstractEnvclass can be used for more advanced scenarios, including custom paths and file names for the.envfile.fromabstract_securityimportAbstractEnv# Initialize with custom parametersabstract_env=AbstractEnv(key='YOUR_ENV_VARIABLE_KEY',file_name='custom.env',path='/custom/path')value=abstract_env.env_value##FunctionsAbstractEnvClassTheAbstractEnvclass allows you to manage environment variables and securely load values from a.envfile. Here's how to use it:Initializing anAbstractEnvObject# Create an AbstractEnv object with default settingsabstract_env=AbstractEnv()You can also customize the initialization by specifying the key, file name, and path as follows:# Custom initializationabstract_env=AbstractEnv(key='MY_PASSWORD',file_name='.env',path='/path/to/.env')Getting Environment Variable ValuesYou can retrieve the value of a specific environment variable using theget_env_valuemethod of theAbstractEnvobject:# Retrieve the value of a specific environment variablevalue=abstract_env.get_env_value(key='YOUR_ENV_VARIABLE')get_env_valueFunctionAlternatively, you can use theget_env_valuefunction to directly retrieve the value of an environment variable without creating anAbstractEnvobject:fromabstract_securityimportget_env_value# Retrieve the value of a specific environment variablevalue=get_env_value(key='YOUR_ENV_VARIABLE',path='/path/to/.env')API ReferenceAbstractEnvClassAbstractEnv(key='MY_PASSWORD', file_name='.env', path=os.getcwd())Initializes anAbstractEnvobject to manage environment variables.key(str, optional): The key to search for in the.envfile. Defaults to 'MY_PASSWORD'.file_name(str, optional): The name of the.envfile. Defaults to '.env'.path(str, optional): The path where the.envfile is located. Defaults to the current working directory.re_initialize(key='MY_PASSWORD', file_name='.env', path=os.getcwd())Re-initializes anAbstractEnvobject with new settings.key(str, optional): The key to search for in the.envfile. Defaults to 'MY_PASSWORD'.file_name(str, optional): The name of the.envfile. Defaults to '.env'.path(str, optional): The path where the.envfile is located. Defaults to the current working directory.get_env_value(key='MY_PASSWORD', path=os.getcwd(), file_name='.env')Retrieves the value of the specified environment variable.key(str): The key to search for in the.envfile.path(str): The path to the environment file.file_name(str): The name of the environment file.get_env_valueFunctionget_env_value(key=None, path=None, file_name=None)Retrieves the value of a specified environment variable from a.envfile.key(str, optional): The key to search for in the.envfile. 
Defaults to None.path(str, optional): The path to the.envfile. Defaults to None.file_name(str, optional): The name of the.envfile. Defaults to None.LicenseThis module is distributed under theMIT License.For more information and usage examples, please refer to theGitHub repositoryandPyPI package.If you encounter any issues or have questions, feel free to open an issue on GitHub or contact the author, putkoff, for assistance.ContactAuthor: putkoffEmail:partners@abstractendeavors.comProject Link:https://github.com/AbstractEndeavors/abstract_security
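To tie the pieces above together, here is a minimal, hypothetical end-to-end sketch: it writes a throwaway `.env` file in the current working directory and then reads a key back with `get_env_value`, relying only on the defaults documented above. The key name and value are made up for illustration, and the expected output is inferred from the description rather than verified.

```python
from pathlib import Path

from abstract_security import get_env_value

# Write a throwaway .env file in the current working directory
# (the key name and value here are made up for illustration).
Path(".env").write_text("MY_PASSWORD=super-secret-value\n")

# With no path or file_name given, the documented defaults are used:
# current working directory, ".env", and key "MY_PASSWORD".
value = get_env_value(key="MY_PASSWORD")
print(value)  # expected: "super-secret-value"
```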
abstract-security-test
This is the README for module abstract_security_test
abstract-server
abstract ServerTable of ContentsIntroductionInstallationGetting StartedDocumentationContactLicenseIntroductionInstallation ofabstract-serverTo installabstract_server, you can either use pip or manually set it up by cloning the repository:Using pip:pipinstallabstract-serverNote:abstract_serverrequires Python 3.6 or later. Ensure you meet this requirement before proceeding with the installation.Getting StartedHere is a basic example of usingabstract_server:Documentationabstract_serverconsists of the following Python files and their corresponding functionalities:1.response_handling.py:2.api_call.py:Sure, here's an exhaustivereadme.mdfor theapi_calls.pycomponent of theabstract_aimodule:api_calls.py- Abstract AI Moduleapi_calls.pyis a component of the Abstract AI module, designed to facilitate API calls to OpenAI's GPT-3 model. This module is intended to simplify the interaction with the GPT-3 API and handle responses in a structured manner.Table of ContentsOverviewInstallationUsageClasses and FunctionsPromptManagerhard_requestquick_requestExamplesContributingLicenseOverviewapi_calls.pyserves as a bridge between your application and the OpenAI GPT-3 API. It provides a convenient interface to send requests, manage responses, and control the behavior of the API calls. This module is highly customizable, allowing you to define prompts, instructions, and response handling logic.InstallationInstall the required Python packages:pipinstallopenaiSet your OpenAI API key as an environment variable. By default, the module looks for an environment variable namedOPENAI_API_KEYto authenticate API calls.UsageClasses and FunctionsPromptManager Classhard_request FunctionThehard_requestfunction sends a hard request to the OpenAI API with the provided parameters. It is a simplified way to make API calls.quick_request FunctionThequick_requestfunction sends a quick request to the OpenAI API with simple configurations and prints the result. It is a convenient shortcut for quick API interactions.ExamplesFor detailed examples and usage scenarios, refer to theexamplesdirectory in this repository. You'll find practical code samples demonstrating how to use theabstract_server.pymodule for various tasks.ContributingIf you'd like to contribute to the development of theabstract_servermodule or report issues, please refer to theContributing Guidelines.LicenseThis module is licensed under theMIT License, which means you are free to use and modify it as per the terms of the license. Make sure to review the license file for complete details.Feel free to useapi_calls.pyto enhance your interactions with OpenAI's GPT-3 model in your projects.3.endpoints.py:4.tokenization.pyContactShould you have any issues, suggestions or contributions, please feel free to create a new issue on ourGithub repository.Licenseabstract_serveris released under theMIT License.
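The README above names `hard_request` and `quick_request` but does not show their signatures, so rather than guess them, here is a rough sketch of the kind of raw OpenAI call such helpers wrap, using the pre-1.0 `openai` client that the installation step installs. The model name, prompt, and parameters below are assumptions for illustration, not abstract_server's actual defaults.

```python
import os

import openai  # installed via `pip install openai`; pre-1.0 client API assumed

# The module documentation says the key is read from the OPENAI_API_KEY
# environment variable.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Roughly what a "quick request" boils down to; every value below is an
# illustrative assumption, not abstract_server's actual configuration.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Summarize what an abstraction layer is in one sentence.",
    max_tokens=64,
)
print(response["choices"][0]["text"].strip())
```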
abstract-singleton
Abstract-Singleton
A Singleton that also enforces that abstract methods are implemented.
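The package description is only one line, so as a point of reference, here is a minimal sketch (not the library's actual implementation) of how a metaclass can combine singleton behavior with `abc.ABCMeta` so that abstract methods are still enforced.

```python
import abc


class SingletonABCMeta(abc.ABCMeta):
    """A metaclass that is both a singleton factory and an ABCMeta (illustrative)."""

    _instances = {}

    def __call__(cls, *args, **kwargs):
        # Instantiation still fails with TypeError if abstract methods are
        # unimplemented, so abstractness is enforced before caching.
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]


class AbstractWorker(metaclass=SingletonABCMeta):
    @abc.abstractmethod
    def run(self):
        ...


class Worker(AbstractWorker):
    def run(self):
        return "working"


assert Worker() is Worker()  # singleton behaviour
# AbstractWorker()           # would raise TypeError: abstract method run
```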
abstract-telegram-processor
No description available on PyPI.
abstract-test-package
abstract_test_packageThis is a Python package that facilitates testing with abstract scenarios. Utilizing PyTest, it offers extra utilities to streamline the creation and execution of abstract tests.You can locate this package in theabstract_essentialsproject atgithub.io/abstract_endeavors/abstract_essentials/abstract_test_package/.InstallationYou can install theabstract_test_packagemodule via pip:pipinstallabstract_test_packageOr directly from the source:gitclonehttps://github.io/abstract_endeavors/abstract_essentials/abstract_test_package/cdabstract_test_package pythonsetup.pyinstallUsageBelow is a usage example of theabstract_test_package:fromabstract_test_packageimportcreate_test,execute_testtest=create_test(name="Test 1",scenario=[...])execute_test(test)This example creates an abstract test and then executes it.DocumentationTheabstract_test_packagemodule provides the following classes and functions:create_test(name: str = 'Test', scenario: list = [...])Generates an abstract test scenario with the provided name and details.execute_test(test: any)Executes the given abstract test scenario.validate_test(test: any) -> boolVerifies if the given object is a valid test scenario.calculate_test_results(test: any) -> dictCalculates the results of a given abstract test scenario.compare_test_results(test1: any, test2: any) -> boolCompares two test scenarios and returns True if their results are equivalent, False otherwise.... and many more!Please refer to the source code for the complete list of classes and functions provided by the module, as well as their detailed documentation.ContributingContributions are welcome! Please fork this repository and open a pull request to add snippets, make grammar tweaks, etc.ContactIf you have any questions, feel free to reach out to us atpartners@abstractendeavors.com.LicenseThis project is licensed under the MIT License. See theLICENSEfile for details.Authorsputkoff - main developerThis README file was last updated on May 29, 2023.
abstract-tracker
Welcome to abstract_tracker Documentation
📔 See Full Documentation HERE.
This library helps you track the status of business-critical tasks as they move from pending (todo) to in_progress, and then to either failed (with error traceback information) or succeeded. If a task fails too many times, it is marked as exhausted. If you never want to see it again, it is marked as ignored.
The library provides an abstraction layer that can work with arbitrary backends: a local file, AWS S3, a SQL database, AWS DynamoDB, Redis, MongoDB, …, as you wish.
Install
abstract_tracker is released on PyPI, so all you need is:
$ pip install abstract-tracker
To upgrade to the latest version:
$ pip install --upgrade abstract-tracker
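The README describes the status lifecycle but shows no code, so here is a small, purely illustrative sketch of the kind of tracker abstraction it describes: an abstract base class over an arbitrary backend, plus a trivial in-memory backend. The statuses come from the README; none of the class or method names are the library's real API.

```python
import abc
import traceback


class BaseTracker(abc.ABC):
    """Illustrative only: the statuses come from the README, the API does not."""

    @abc.abstractmethod
    def set_status(self, task_id, status, error=None): ...

    def run(self, task_id, func, max_retries=3):
        for _ in range(max_retries):
            self.set_status(task_id, "in_progress")
            try:
                result = func()
            except Exception:
                self.set_status(task_id, "failed", error=traceback.format_exc())
            else:
                self.set_status(task_id, "succeeded")
                return result
        # Too many failures: give up on the task.
        self.set_status(task_id, "exhausted")


class InMemoryTracker(BaseTracker):
    def __init__(self):
        self.statuses, self.errors = {}, {}

    def set_status(self, task_id, status, error=None):
        self.statuses[task_id] = status
        if error:
            self.errors[task_id] = error


tracker = InMemoryTracker()
tracker.run("job-1", lambda: 1 / 0)   # always fails, ends up "exhausted"
print(tracker.statuses["job-1"])      # -> "exhausted"
```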
abstractTradeSimpleT
abstractTrade-simpleTrade
This project holds the abstractions for the following classes and ensures they work with the same structure across other projects:
Initialization
Rates
Tick
Timeframe
Trade
abstractTrade-simpleTrade
abstractTrade-simpleTrade
This project holds the abstractions for the following classes and ensures they work with the same structure across other projects:
Initialization
Rates
Tick
Timeframe
Trade
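Since the description only lists the class names, here is a hypothetical sketch of what "holding the abstraction" for one of them (Trade) could look like with `abc`; the attributes and method are invented for illustration and are not the package's real API. A concrete project would subclass this (and the Rates, Tick, and Timeframe equivalents) and fill in the broker-specific details.

```python
import abc


class AbstractTrade(abc.ABC):
    """Hypothetical shape of a Trade abstraction; not the package's real API."""

    @property
    @abc.abstractmethod
    def symbol(self) -> str: ...

    @property
    @abc.abstractmethod
    def volume(self) -> float: ...

    @abc.abstractmethod
    def execute(self) -> bool:
        """Send the trade to the concrete broker implementation."""
```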
abstracttree
This Python package contains a few abstract base classes for tree data structures. Trees are very common data structure that represents a hierarchy of common nodes. This package defines abstract base classes for these data structure in order to make code reusable.Abstract base classesfromabstracttreeimportto_mermaidto_mermaid(AbstractTree)graph TD; AbstractTree[AbstractTree]; UpTree[UpTree]; Tree[Tree]; MutableTree[MutableTree]; DownTree[DownTree]; Tree[Tree]; MutableTree[MutableTree]; MutableDownTree[MutableDownTree]; MutableTree[MutableTree]; AbstractTree-->UpTree; UpTree-->Tree; Tree-->MutableTree; AbstractTree-->DownTree; DownTree-->Tree; DownTree-->MutableDownTree; MutableDownTree-->MutableTree;Downtrees are trees that have links to their direct children. Uptrees are trees that link to their parent. A Tree has links in both directions.ABCInherits fromAbstract MethodsMixin MethodsAbstractTreenid,eqv()UpTreeAbstractTreeparentroot,is_root,ancestors,pathDownTreeAbstractTreechildrennodes,descendants,leaves,levels,is_leaf,transform()TreeUpTree,DownTreesiblingsMutableDownTreeDownTreeadd_child(),remove_child()add_children()MutableTreeTree,MutableDownTreedetach()In your own code, you can inherit from these trees. For example, if your tree only has links to children:importabstracttreefromabstracttreeimportprint_treeclassMyTree(abstracttree.DownTree):def__init__(self,value,children=()):self.value=valueself._children=childrendef__str__(self):return"MyTree "+str(self.value)@propertydefchildren(self):returnself._childrentree=MyTree(1,children=[MyTree(2),MyTree(3)])print_tree(tree)# This generates the following output:# MyTree 1# ├─ MyTree 2# └─ MyTree 3AdapterIn practice, not all existing tree data structures implement one of these abstract classes. As a bridge, you can useastreeto convert these trees to aTreeinstance. 
However, whenever possible, it's recommended to inherit fromTreedirectly for minimal overhead.Examples:# Trees from built-ins and standard libraryastree(int)astree(ast.parse("1 + 1 == 2"))astree(pathlib.Path("abstracttree"))# Anything that has parent and children attributes (anytree / bigtree / littletree)astree(anytree.Node())# Nested listastree([[1,2,3],[4,5,6]])# Tree from json-datadata={"name":"a","children":[{"name":"b","children":[]},{"name":"c","children":[]}]}astree(data,children=operator.itemgetter["children"])# pyqt.QtWidgetastree(widget,children=lambdaw:w.children(),parent=lambdaw:w.parent())# Tree from treelibastree(tree.root,children=lambdanid:tree.children(nid),parent=lambdanid:tree.parent(nid))# itertreeastree(tree,children=iter,parent=lambdat:t.parent)# Infinite binary treeinf_binary=astree(0,children=lambdan:(2*n+1,2*n+2))Utility functionsPretty printingtree=astree(seq,children=lambdax:[x[:-2],x[1:]]ifxelse[])print_tree(tree)print(to_string(tree))# ['a', 'b', 'c', 'd']# ├─ ['a', 'b']# │ └─ ['b']# └─ ['b', 'c', 'd']# ├─ ['b']# └─ ['c', 'd']# └─ ['d']Plotting with matplotlibimportmatplotlib.pyplotaspltexpr=ast.parse("y = x*x + 1")plot_tree(expr)plt.show()Export to various formatsto_dot(tree)to_mermaid(tree)to_latex(tree)to_image(Path('.'),"filetree.png",how="dot")to_image(AbstractTree,"class_hierarchy.svg",how="mermaid")to_pillow(tree).show()Find distance between nodesimportheapqfromabstracttreeimportHeapTree,Routetree=HeapTree([5,4,3,2,1])heapq.heapify(tree.heap)left_child=tree.children[0]right_child=tree.children[1]route=Route(left_child,right_child)print(f"{route.lca= }")# => HeapTree([1, 2, 3, 5, 4], 0)print(f"{route.nodes.count()= }")# => 3print(f"{route.edges.count()= }")# => 2A few concrete tree implementationsanytreetreelibbigtreeitertreedendropyetelittletree- also by meTree visualisationPrettyPrintTree- colored terminal output
abstract-turtle
abstract_turtle
A reimplementation of Python's turtle module that allows arbitrary rendering backends.
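To make the one-line description concrete, here is a rough sketch (not the library's actual interface) of what "arbitrary rendering backends" usually means: drawing calls are routed through a small backend protocol that can be swapped out.

```python
import abc


class Backend(abc.ABC):
    """Hypothetical backend protocol; abstract_turtle's real one may differ."""

    @abc.abstractmethod
    def draw_line(self, start, end, color): ...


class LoggingBackend(Backend):
    def draw_line(self, start, end, color):
        print(f"line {start} -> {end} in {color}")


class Turtle:
    def __init__(self, backend):
        self.backend = backend
        self.position = (0.0, 0.0)

    def forward(self, distance):
        # Move along the x-axis and delegate the drawing to the backend.
        start = self.position
        self.position = (start[0] + distance, start[1])
        self.backend.draw_line(start, self.position, "black")


Turtle(LoggingBackend()).forward(10)  # -> line (0.0, 0.0) -> (10.0, 0.0) in black
```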
abstract-utilities
Abstract UtilitiesAbstract Utilities is a comprehensive collection of utility modules assembled to assist a multitude of common tasks. The aim of this collection is to eliminate the need to perpetually rewrite trivial but essential functions that are frequently used across projects. With minimal dependencies, this suite is an amalgam of bare essential functions, ranging from data comparison, string manipulation, mathematical operations, time tracking to JSON handling and others.The module's ease of use and detailed implementation allow for practical and efficient usage across different programs.ContentsIntroductionModulesInstallationLicenseInstallationThe package can be installed via pip by entering the following command:pip install abstract_utilities ~~or~~ pip3 install abstract_utilitiesEnsure that you're using Python 3.11 or later, and the following dependencies are installed: 'pathlib>=1.0.1', 'abstract_security>=0.0.1', 'yt_dlp>=2023.10.13', 'pexpect>=4.8.0'.After you've installed the module, you can import any utility into your scripts using:from abstract_utilities import module_nameModulesThe repository contains several utility files, each dealing with a specific theme or operation. For example,compare_utils.pyhandles string and object comparison,list_utils.pyhelps with list manipulation,json_utils.pydeals with JSON handling, and so on.In-depth analysis and explanation of each utility will be provided in the upcoming sections. On a high level, utilities covered are:Class utilitiesCommand line utilitiesCollator utilitiesComparison utilitiesGlobal utilitiesHistory utilitiesJSON utilitiesList utilitiesMath utilitiesPath utilitiesRead-Write utilitiesString utilitiesThread utilitiesTime utilitiesType utilitiesLicenseAbstract Utilities is licensed under the MIT License.(Remainder of README coming soon. Each specific utility will be documented in the upcoming sections.)Class Utilitiesclass_utils.pyThis module includes helper methods tailored for manipulating and handling classes, objects, and modules. 
Some of the operations accomplished in the class utilities module include:Fetching and verifying object types.Manipulating global variables.Checking an object's membership status in a module.Inspecting function signatures and their arguments within a module.Executing functions with supplied arguments.Converting layout definitions into components.Accessing attributes and methods of a module.Key Functions:get_type_list:This function helps to procure a list of common Python types.remove_key:This function is used to remove a specific key from a provided dictionary.get_module_obj:This function facilitates the retrieval of an object from a specified module.spec_type_mod:This function checks whether an object matches a specific type.get_type_mod:This function retrieves the type of a given object.is_module_obj:This function checks if a provided object is part of a module.inspect_signature:This function fetches the signature of a specified function within a module.get_parameter_defaults:This method helps in fetching the default parameter values for a provided function in a module.convert_layout_to_components:This utility converts a layout definition to its component representation.get_dir:This function aids in listing all attributes and methods of a specified module.get_proper_args:This function calls a function using either positional or keyword arguments based on the args type.get_fun:This method parses a dictionary to retrieve a function and call it.if_none_change:This function replaces a None object with a default value.call_functions:Executes a specified function or method using provided arguments.process_args:Evaluates and processes nested function calls in arguments.has_attribute:Checks if a function exists in a module.get_all_functions_for_instance:Retrieves all callable methods/functions of an object instance.Dependencies:This utility module depends on two other Python modules: 'inspect' and 'json'. 
Each utility method in the module has its own docstring, offering a more in-depth explanation of its purpose, expected inputs, and outputs.Command UtilitiesThecommand_utils.pyhouses a collection of utilities for executing commands with various functionalities, including handling of sudo, logging outputs, and interacting with commands that expect input.Functionsget_all_params(instance, function_name): This function retrieves information about the parameters of a callable method/function of an instance.mk_fun(module,function): This function checks if a function exists in a given module and prints a statement indicating whether the function exists or not.get_output_text(parent_dir): Provides the path to an 'output.txt' file in a given directory.get_env_value(key, env_path): Fetches the environment value associated with a provided key from a specific .env file path.print_cmd(input, output):Prints the executed command alongside its corresponding output.get_sudo_password(key): Retrieves the sudo password stored in an environment file.cmd_run_sudo(cmd, password, key, output_text): Executes a command with sudo privileges, either using a given password or retrieving it from an environment file.cmd_run(cmd, output_text): Executes a command and captures its output in a specified file.pexpect_cmd_with_args(command, child_runs, output_text): Interacts with a command's expected input using the pexpect library and logs the command's output.get_output(p): This function retrieves the output generated by a subprocess command execution.get_cmd_out(st): This function executes a shell command and retrieves the output generated by the command's execution.Dependenciesostimepexpectsubprocessabstract_security.envy_it: find_and_read_env_file, get_env_valueabstract_utilities.time_utils: get_sleepNotesEnsure the necessary dependencies are installed.Make sure you have appropriate permissions to execute commands, especially sudo-based ones.Always keep environment files secure and away from public access to ensure sensitive information like passwords remains confidential.Command Line UtilitiesCommandLine utilities provide functions to interface with command line and appropriately handle its output and prompts. The utility includes methods for executing shell commands (including sudo commands), handling command prompts, and retrieving environment values.Below are descriptions for some of the key functions available in the command line utilities section:cmd_input(st: str) -> subprocess.PopenThis function executes a shell command using subprocess module. It takes a command string as an argument and returns a subprocess.Popen object for communication.get_output_text(parent_dir: str = os.getcwd()) -> strThis function fetches the path to the 'output.txt' file in the provided directory. 
If no directory is provided it takes the current working directory.get_env_value(key: str = None, env_path: str = None) -> strThis function retrieves an environment value based on a key from a specified .env file.print_cmd(input: str, output: str) -> NoneThis function prints the input command along with its corresponding output.cmd_run_sudo(cmd: str, password: str = None, key: str = None, output_text: str = None) -> NoneThis function executes a command with sudo privileges.cmd_run(cmd: str, output_text: str = None) -> NoneThis function executes a command and logs its output in a specified file.pexpect_cmd_with_args(command: str, child_runs: list, output_text: str = os.getcwd()) -> intThis function executes a command using pexpect and handles its prompts with specified responses.Collator UtilitiesCollator utilities offer functions related to operations with alphabets and numbers. It includes generating lists of alphabetic characters and numbers, finding an index of a character in a list, and more.Key functions available in the collator utilities section are:get_alpha_list() -> listThis function generates a list containing all lowercase alphabets.get_num_list() -> listThis function generates a list of numbers in string format.find_it_alph(ls: list, y: any) -> intThis function finds the index of an element in a list.get_alpha(k: Union[int,float]) -> strThis function retrieves the alphabetic character corresponding to the given index.Compare UtilitiesCompare Utilities offer functions for comparing strings and objects. These include methods for calculating string similarity and comparing the lengths of objects.Comparison UtilitiesThis section provides information about different functions present in thecompare_utils.pymodule. These functions aid in various string comparison operations, counting specific characters in a string, safely getting length of the string etc.Functions include:get_comp(string:str, string_2:str): This function calculates the similarity between two strings based on overlapping sequences of characters.get_lower(obj, obj2): This function compares the lengths of two objects or their string representations and returns the one with shorter length.is_in_list(obj: any, ls: list = []): This function checks whether a given object is present in the list or not.safe_len(obj: str = ''): This function gets the length of the string representation of a given object in a manner that avoids exceptions.line_contains(string: str = None, compare: str = None, start: int = 0, length: int = None): This function checks whether a substring is present in another string starting from a specific index.count_slashes(url: str) -> int: This function counts the number of slashes in a given URL.get_letters() -> list: This function returns a list of lowercase letters from 'a' to 'z'.get_numbers() -> list: This function returns a list of numeric digits from 0 to 9.percent_integer_of_string(obj: str, object_compare: str = 'numbers') -> float: This function calculates the percentage of characters in a string that are either letters or numbers.return_obj_excluded(list_obj:str, exclude:str, substitute='*'): This function replaces all occurrences of a specified substring with a substitute string in a given list_obj.determine_closest(string_comp:str, list_obj:str): This function finds the closest consecutive substrings from comp in list_obj.longest_consecutive(list_cons:list): This function calculates the length of the longest consecutive non-empty elements in a list of strings.combined_list_len(list_cons:list): 
This function calculates the total length of a list of strings by summing their individual lengths.percent_obj(list_cons:list, list_obj:str): This function calculates the percentage of the combined length of a list of strings relative to the length of a target string.get_closest_match_from_list(comp_str:str, total_list:list, case_sensative:bool=True): This function finds the closest match from a list of strings based on various criteria such as longest consecutive substring, combined length of consecutive substrings, and percentage of combined length relative to the length of the target string.untuple(obj): This function returns the first element of a tuple if the provided input is a tuple.Comparison UtilitiesThecompare_utils.pymodule provides functions that help compare and identify patterns or similarities between strings or group of strings.get_closest_match_from_list(comp_str:str, total_list:list,case_sensative:bool=True)This function finds the closest match from a list of strings to a target string based on various criteria such as longest consecutive substring, combined length of consecutive substrings, and percentage of combined length relative to the length of the target string. It returns the string from the list that best matches the target, or None if no match is found.make_list(obj)Converts an object into a list. Valid for set and tuple types.create_new_name(name=None, names_list=None, default=True, match_true=False, num=0)Creates a new name that does not exist in provided list of names. It can be used to avoid name collision when generating file names, variable names, etc. The function generates a unique name by appending an incrementing number at the end. The base name and the list of existing names can be provided as arguments. If not provided, it uses 'Default_name' as the base name.get_last_comp_list(string, compare_list)Finds and returns the last string in the 'compare_list' that contains the target 'string'. Returns None if no match is found.Global UtilitiesInglobal_utils.py, it provides functions to manage and manipulate global variables.global_registry(name:str,glob:dict)It records the name and the dictionary of a global variable to a global registry. If the name is not in the registry, it adds it and the provided dictionary. It returns the index of the name in the registry.get_registry_number(name:str)It returns the index of a name in the registry.update_registry(var:str, val:any, name:str)It updates a global variable with a new value.get_global_from_registry(name:str)It gets a dictionary of a global variable recorded in the registry using the name as reference.return_globals() -> dictIt returns the global variables dictionary.change_glob(var: str, val: any, glob: dict = return_globals()) -> anyIt changes the value of a global variable and returns the new value.get_globes(string:str='', glob:dict=return_globals())It gets a specified global variable.if_none_default(string:str, default:any, glob:dict=return_globals())It checks if a global variable isNone, if it is, it assigns it a default value and updates the global variable.History UtilitiesThehistory_utils.pymodule comprises theHistoryManagerclass which allows to handle the history of objects, states or actions for undo/redo features.HistoryManager()This is the constructor for theHistoryManagerclass. 
It initializes a 'history_names' dictionary to store the history of different objects.add_history_name(self, name, initial_data='')This method adds a new object to the history with an initial state.transfer_state(self, primary_data, secondary_data)This method transfers the latest state from primary_data to secondary_data and returns the modified primary and secondary data.add_to_history(self, name, data)This method adds a new state to the history of an object.redo(self, name)This method reverts the object to the next state in the redo history. If no redo history exists, the object remains unchanged.JSON Utilities'json_utils' is a utility module that allows you to work with JSON data. Its functionalities include:Converting JSON strings to dictionaries and vice versa.Merging, adding to, updating, and removing keys from dictionaries.Retrieving keys, values, specific items, and key-value pairs from dictionaries.Recursively displaying values of nested JSON data structures with indentation.Loading from and saving dictionaries to JSON files.Validating and cleaning up JSON strings.Searching and modifying nested JSON structures based on specific keys, values, or paths.Inverting JSON data structures.Creating and reading from JSON files.The module contains functions like 'json_key_or_default', 'all_try_json_loads', 'safe_dump_to_file', etc. Each function comes with elaborate Python docstrings that provide detailed usage instructions.The utility, for instance, provides a function named 'create_and_read_json' that allows you to create a JSON file if it does not exist, and then read from it. It also offers functions like 'is_valid_json' which checks whether a given string is a valid JSON string.Take this function 'safe_write_to_json', it safely writes data to a JSON file. If an error occurs during writing, the data is written to a temporary file first, and then the temporary file is replaced with the original one.Other functions like 'safe_read_from_json', 'find_keys', 'all_try', 'safe_json_loads', 'try_json_loads', and 'unified_json_loader' provide ways to work with JSON data and make handling JSON in Python easier and more efficient. []path_utils.pyThis module contains utility functions for processing file paths, directories, and files. This includes operations such as getting the home directory, checking if a path is a file, updating global variables, listing directory contents, and working with file sizes and directory sizes. 
The implemented functions are:get_home_folder(): This function returns the path to the home directory of the current user.is_file(path: str) -> bool: This function checks if the provided path is a file.update_global_variable(name: str, value) -> None: This function updates the global variable with the provided name and value.list_directory_contents(path: str) -> list: This function returns a list of directory contents or a list with a single file, if the path is a file.trunc(a: float, x: int) -> float: This function truncates a float number to a specific number of decimal places.mkGb(k) -> float: This function converts a value to Gigabytes (GB).mkGbTrunk(k) -> float: This function converts a value to GB and truncates the result to five decimal places.mkGbTrunFroPathTot(k) -> float: This function fetches the file size from a path, converts it to GB, and truncates the result to five decimal places.get_abs_name_of_this() -> Path: This function returns the absolute name of the current module.createFolds(ls: list) -> None: This function creates multiple directories from a list of paths.mkdirs(path: str) -> str: This function creates a directory and any necessary intermediate directories.file_exists(file_path: str) -> bool: This function checks if a file exists at the specified path.dir_exists(path: str) -> bool: This function checks if a directory exists at the specified path.file_size(path:str): This function returns the size of a file in bytes, if the path is a file, else it returns 0.get_size(path: str) -> int: This function calculates the size of a file or a directory in bytes.get_total_size(folder_path: str) -> int: This function calculates the total size of a directory and its subdirectories in bytes.get_files(directory): This function returns a list of all files in a directory including the ones in its subdirectories.For detailed information about each function, please refer to their respective documentation in the module.Read-Write UtilitiesTheread_write_utils.pymodule contains a variety of utility functions to assist with file I/O operations. If you need to perform read or write operations to files in your software, this utility can ease the process and shorten your codebase. Notably, it enables you to quickly write contents to a file, read contents from a file, or check if a string includes a file extension.Here are the primary functions:Write content to a fileread_write_utils.write_to_file(file_path:str,contents:any)This function writes the provided contents to a file at the specified path. If the file doesn't exist, it will be created.Read content from a fileread_write_utils.read_from_file(file_path:str)This function reads and returns the contents of a file at the specified file path.Check if a string has a file extensionread_write_utils.is_file_extension(obj:str)This function checks whether a provided string includes a file extension and returns a boolean value accordingly.Read from or write to a file depending on the number of argumentsread_write_utils.determine_path_and_content(*args,**kwargs)This function determines the file path and the contents based on the provided arguments. It can be used when you want to infer the operation (read/write) based on the kind and count of arguments.Create a file if it does not exist, then read from itread_write_utils.create_and_read_file(file_path:str,contents:str)This function attempts to open a file from its path. 
If the file doesn't exist, it creates the file, writes the provided contents to it, and then reads the file content back.All the utility functions are designed to be easily incorporated into your code and have detailed docstrings explaining their usage.Note: All file paths need to be absolute paths, and the file operations are conducted with 'UTF-8' encoding. If a function is called with incorrect arguments, it will alert the user with an 'Too many arguments' or 'Missing file path or contents.' message.Please refer to theread_write_utils.pysource code for more details and to understand the inner workings of these utilities for optimal usage. []Thread Utilitiesall_alive: This method returns a dictionary indicating whether each thread is alive or not. The keys are the thread names, and the values are boolean.all_thread_names: This method returns the keys (names) of all threads in the dictionary.get_last_result: In the absence of a specific thread name, this method returns the result of the last thread inthread_name_list. If a thread name is specified, it first checks the validity of the name usingcheck_nameand then returns the result.check_name: This method checks if the provided thread name is present in existing threads.Time Utilitiesget_time_stamp: Returns the current timestamp in seconds.get_milisecond_time_stamp: Returns the current timestamp in milliseconds.get_day: Returns the current day of the week.get_date: Returns the current date in YYYY-MM-DD format.save_last_time: Saves the last timestamp to a file named 'last.txt'.get_day_seconds: Returns the number of seconds in a day.get_week_seconds: Returns the number of seconds in a week.get_hour_seconds: Returns the number of seconds in an hour.get_minute_seconds: Returns the number of seconds in a minute.get_24_hr_start: Returns the timestamp for the start of the current day.create_timestamp: Accepts a date string and military time string to create a timestamp.Time Utilitiestime_utilsis a module inabstract_utilitiesthat provides functions to work with time stamps, get the current date and time, and manage chronological operations.Here is an example of a function provided in this module:get_second(): This function returns the value of one second as a float.Additional functions in time utilities include:get_time_stamp()get_milisecond_time_stamp()get_day()get_time()get_date()save_last_time()get_day_seconds()get_week_seconds()get_hour_seconds()get_minute_seconds()get_24_hr_start()Each function in thetime_utilsmodule provides a unique operation relating to time management in your programs.Type Utilitiestype_utilsis another utility module that provides type checking and conversion functionality. This module incorporates features such as determining the type of an object, checking if an object is of a certain type, and facilitating type conversion. 
This simplifies data handling across different data types and ensures consistent behavior.Some mainstay functions within this module include:is_iterable(obj: any) -> boolis_number(obj: any) -> boolis_str(obj: any) -> boolis_int(obj: any) -> boolis_float(obj: any) -> boolis_bool(obj: any) -> boolis_list(obj: any) -> boolis_tuple(obj: any) -> boolis_set(obj: any) -> boolis_dict(obj: any) -> boolis_frozenset(obj: any) -> boolis_bytearray(obj: any) -> boolis_bytes(obj: any) -> boolis_memoryview(obj: any) -> boolis_range(obj: any) -> boolis_enumerate(obj: any) -> boolis_zip(obj: any) -> boolis_filter(obj: any) -> boolis_map(obj: any) -> boolis_property(obj: any) -> boolis_slice(obj: any) -> boolis_super(obj: any) -> boolis_type(obj: any) -> boolis_Exception(obj: any) -> boolis_none(obj: any) -> boolis_str_convertible_dict(obj: any) -> boolis_dict_or_convertable(obj: any) -> booldict_check_conversion(obj: any) -> Union[dict, any]These functions and more form thetype_utilsmodule, playing an integral part in ensuring type compatibility and facilitating data conversion.Type UtilitiesThetype_utils.pymodule encompasses numerous functions that help in identifying the type of data structures, converting strings to their appropriate data types, and checking if the data can be represented in a specific format. Here is a description of the functions available in this module.is_iterable():Determines whether the given object is iterable or not.get_type(obj):Determines the type of the given object and updates it accordingly.is_number(obj):Checks whether the given object can be represented as a number.is_object(obj):Checks whether the given object is of type 'object'.is_str(obj):Checks whether the given object is of type 'str'.is_int(obj):Checks whether the given object is of type 'int'.is_float(obj):Checks whether the given object is of type 'float'.is_bool(obj):Checks whether the given object is of type 'bool'.The following functions check if the object is of respective data types (list, tuple, dictionary, frozenset, bytearray etc.)is_list(obj)is_tuple(obj)is_set(obj)is_dict(obj)is_frozenset(obj)is_bytearray(obj)is_bytes(obj)is_memoryview(obj)is_range(obj)is_enumerate(obj)is_zip(obj)is_filter(obj)is_map(obj)is_property(obj)is_slice(obj)is_super(obj)is_type(obj)is_Exception(obj)is_none(obj)dict_check_conversion(obj):Converts the given object to a dictionary if possible, otherwise returns the original object.make_list(obj):Converts the given object to a list if it's not already a list.make_list_lower(ls):Converts all string elements in a list to lowercase.Please note to replaceobjandlswith the object and list you want to analyze or manipulate, respectively.
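As a short, hedged illustration of the type utilities listed above, the following shows make_list, make_list_lower, and dict_check_conversion in use. The behaviour and expected outputs are inferred from the function descriptions rather than verified, and the import follows the `from abstract_utilities import module_name` pattern documented earlier.

```python
from abstract_utilities import type_utils

# make_list wraps non-list values; lists pass through unchanged.
print(type_utils.make_list("alpha"))             # expected: ["alpha"]
print(type_utils.make_list(["alpha", "beta"]))   # expected: ["alpha", "beta"]

# make_list_lower lowercases the string elements of a list.
print(type_utils.make_list_lower(["Alpha", "BETA"]))  # expected: ["alpha", "beta"]

# dict_check_conversion converts the object to a dict if possible,
# otherwise returns the original object.
print(type_utils.dict_check_conversion('{"a": 1}'))    # expected: {"a": 1}
print(type_utils.dict_check_conversion("not a dict"))  # expected: "not a dict"
```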
abstract-utilities-test
This is the README for module abstract_utilities_test
abstract-webtools
Abstract WebToolsProvides utilities for inspecting and parsing web content, including React components and URL utilities, with enhanced capabilities for managing HTTP requests and TLS configurations.Features:URL Validation: Ensures URL correctness and attempts different URL variations.HTTP Request Manager: Custom HTTP request handling, including tailored user agents and improved TLS security through a custom adapter.Source Code Acquisition: Retrieves the source code of specified websites.React Component Parsing: Extracts JavaScript and JSX source code from web pages.Comprehensive Link Extraction: Collects all internal links from a specified website.Web Content Analysis: Extracts and categorizes various web content components such as HTML elements, attribute values, attribute names, and class names.abstract_webtools.pyDescription:Abstract WebTools offers a suite of utilities designed for web content inspection and parsing. One of its standout features is its ability to analyze URLs, ensuring their validity and automatically attempting different URL variations to obtain correct website access. It boasts a custom HTTP request management system that tailors user-agent strings and employs a specialized TLS adapter for heightened security. The toolkit also provides robust capabilities for extracting source code, including detecting React components on web pages. Additionally, it offers functionalities for extracting all internal website links and performing in-depth web content analysis. This makes Abstract WebTools an indispensable tool for web developers, cybersecurity professionals, and digital analysts.Dependencies:requestssslHTTPAdapterfromrequests.adaptersPoolManagerfromurllib3.poolmanagerssl_fromurllib3.utilurlparse,urljoinfromurllib.parseBeautifulSoupfrombs4UrlManagerTheUrlManageris a Python class designed to handle and manipulate URLs. It provides methods for cleaning and normalizing URLs, determining the correct version of a URL, extracting URL components, and more. This class is particularly useful for web scraping, web crawling, or any application where URL management is essential.UsageTo use theUrlManagerclass, first import it into your Python script:fromabstract_webtoolsimportUrlManagerInitializing a UrlManager ObjectYou can create aUrlManagerobject by providing an initial URL and an optionalrequestssession. 
If no URL is provided, it defaults to 'www.example.com':url_manager=UrlManager(url='https://www.example.com')URL Cleaning and NormalizationTheclean_urlmethod takes a URL and returns a list of potential URL variations, including versions with and without 'www.', 'http://', and 'https://':cleaned_urls=url_manager.clean_url()Getting the Correct URLTheget_correct_urlmethod tries each possible URL variation with an HTTP request to determine the correct version of the URL:correct_url=url_manager.get_correct_url()Updating the URLYou can update the URL associated with theUrlManagerobject using theupdate_urlmethod:url_manager.update_url('https://www.example2.com')Extracting URL ComponentsTheurl_to_piecesmethod extracts various components of the URL, such as protocol, domain name, path, and query:url_manager.url_to_pieces()print(url_manager.protocol)print(url_manager.domain_name)print(url_manager.path)print(url_manager.query)Additional Utility Methodsget_domain_name(url): Returns the domain name (netloc) of a given URL.is_valid_url(url): Checks if a URL is valid.make_valid(href, url): Ensures a relative or incomplete URL is valid by joining it with a base URL.get_relative_href(url, href): Converts a relative URL to an absolute URL based on a base URL.Compatibility NoteTheget_domainmethod is kept for compatibility but is inconsistent. Use it only for "webpage_url_domain." Similarly,url_basename,base_url, andurljoinmethods are available for URL manipulation.ExampleHere's a quick example of using theUrlManagerclass:fromabstract_webtoolsimportUrlManagerurl_manager=UrlManager(url='https://www.example.com')cleaned_urls=url_manager.clean_url()correct_url=url_manager.get_correct_url()url_manager.update_url('https://www.example2.com')print(f"Cleaned URLs:{cleaned_urls}")print(f"Correct URL:{correct_url}")DependenciesTheUrlManagerclass relies on therequestslibrary for making HTTP requests. Ensure you have therequestslibrary installed in your Python environment.SafeRequestTheSafeRequestclass is a versatile Python utility designed to handle HTTP requests with enhanced safety features. It integrates with other managers likeUrlManager,NetworkManager, andUserAgentManagerto manage various aspects of the request, such as user-agent, SSL/TLS settings, proxies, headers, and more.UsageTo use theSafeRequestclass, first import it into your Python script:fromabstract_webtoolsimportSafeRequestInitializing a SafeRequest ObjectYou can create aSafeRequestobject with various configuration options. By default, it uses sensible default values, but you can customize it as needed:safe_request=SafeRequest(url='https://www.example.com')Updating URL and UrlManagerYou can update the URL associated with theSafeRequestobject using theupdate_urlmethod, which also updates the underlyingUrlManager:safe_request.update_url('https://www.example2.com')You can also update theUrlManagerdirectly:fromurl_managerimportUrlManagerurl_manager=UrlManager(url='https://www.example3.com')safe_request.update_url_manager(url_manager)Making HTTP RequestsTheSafeRequestclass handles making HTTP requests using thetry_requestmethod. 
It handles retries, timeouts, and rate limiting:response=safe_request.try_request()ifresponse:# Process the response hereAccessing Response DataYou can access the response data in various formats:safe_request.source_code: HTML source code as a string.safe_request.source_code_bytes: HTML source code as bytes.safe_request.source_code_json: JSON data from the response (if the content type is JSON).safe_request.react_source_code: JavaScript and JSX source code extracted from<script>tags.Customizing Request ConfigurationTheSafeRequestclass provides several options for customizing the request, such as headers, user-agent, proxies, SSL/TLS settings, and more. These can be set during initialization or updated later.Handling Rate LimitingThe class can handle rate limiting scenarios by implementing rate limiters and waiting between requests.Error HandlingTheSafeRequestclass handles various request-related exceptions and provides error messages for easier debugging.DependenciesTheSafeRequestclass relies on therequestslibrary for making HTTP requests. Ensure you have therequestslibrary installed in your Python environment:pipinstallrequestsExampleHere's a quick example of using theSafeRequestclass:fromabstract_webtoolsimportSafeRequestsafe_request=SafeRequest(url='https://www.example.com')response=safe_request.try_request()ifresponse:print(f"Response status code:{response.status_code}")print(f"HTML source code:{safe_request.source_code}")SoupManagerTheSoupManagerclass is a Python utility designed to simplify web scraping by providing easy access to the BeautifulSoup library. It allows you to parse and manipulate HTML or XML source code from a URL or provided source code.UsageTo use theSoupManagerclass, first import it into your Python script:fromabstract_webtoolsimportSoupManagerInitializing a SoupManager ObjectYou can create aSoupManagerobject with various configuration options. By default, it uses sensible default values, but you can customize it as needed:soup_manager=SoupManager(url='https://www.example.com')Updating URL and Request ManagerYou can update the URL associated with theSoupManagerobject using theupdate_urlmethod, which also updates the underlyingUrlManagerandSafeRequest:soup_manager.update_url('https://www.example2.com')You can also update the source code directly:source_code='<html>...</html>'soup_manager.update_source_code(source_code)Accessing and Parsing HTMLTheSoupManagerclass provides easy access to the BeautifulSoup object, allowing you to search, extract, and manipulate HTML elements easily. You can use methods likefind_all,get_class,has_attributes, and more to work with the HTML content.elements=soup_manager.find_all(tag='a')Extracting LinksThe class also includes methods for extracting all website links from the HTML source code:all_links=soup_manager.all_linksExtracting Meta TagsYou can extract meta tags from the HTML source code using themeta_tagsproperty:meta_tags=soup_manager.meta_tagsCustomizing ParsingYou can customize the parsing behavior by specifying the parser type during initialization or updating it:soup_manager.update_parse_type('lxml')DependenciesTheSoupManagerclass relies on theBeautifulSouplibrary for parsing HTML or XML. 
Ensure you have thebeautifulsoup4library installed in your Python environment:pipinstallbeautifulsoup4ExampleHere's a quick example of using theSoupManagerclass:fromabstract_webtoolsimportSoupManagersoup_manager=SoupManager(url='https://www.example.com')all_links=soup_manager.all_linksprint(f"All Links:{all_links}")LinkManagerTheLinkManagerclass is a Python utility designed to simplify the extraction and management of links (URLs) and associated data from HTML source code. It leverages other classes likeUrlManager,SafeRequest, andSoupManagerto facilitate link extraction and manipulation.UsageTo use theLinkManagerclass, first import it into your Python script:fromabstract_webtoolsimportLinkManagerInitializing a LinkManager ObjectYou can create aLinkManagerobject with various configuration options. By default, it uses sensible default values, but you can customize it as needed:link_manager=LinkManager(url='https://www.example.com')Updating URL and Request ManagerYou can update the URL associated with theLinkManagerobject using theupdate_urlmethod, which also updates the underlyingUrlManager,SafeRequest, andSoupManager:link_manager.update_url('https://www.example2.com')Accessing Extracted LinksTheLinkManagerclass provides easy access to extracted links and associated data:all_links=link_manager.all_desired_linksCustomizing Link ExtractionYou can customize the link extraction behavior by specifying various parameters during initialization or updating them:link_manager.update_desired(img_attr_value_desired=['thumbnail','image'],img_attr_value_undesired=['icon'],link_attr_value_desired=['blog','article'],link_attr_value_undesired=['archive'],image_link_tags='img',img_link_attrs='src',link_tags='a',link_attrs='href',strict_order_tags=True,associated_data_attr=['data-title','alt','title'],get_img=['data-title','alt','title'])DependenciesTheLinkManagerclass relies on other classes within theabstract_webtoolsmodule, such asUrlManager,SafeRequest, andSoupManager. Ensure you have these classes and their dependencies correctly set up in your Python environment.ExampleHere's a quick example of using theLinkManagerclass:fromabstract_webtoolsimportLinkManagerlink_manager=LinkManager(url='https://www.example.com')all_links=link_manager.all_desired_linksprint(f"All Links:{all_links}")##Overall Usecasesfromabstract_webtoolsimportUrlManager,SafeRequest,SoupManager,LinkManager,VideoDownloader# --- UrlManager: Manages and manipulates URLs for web scraping/crawling ---url="example.com"url_manager=UrlManager(url=url)# --- SafeRequest: Safely handles HTTP requests by managing user-agent, SSL/TLS, proxies, headers, etc. 
---request_manager=SafeRequest(url_manager=url_manager,proxies={'8.219.195.47','8.219.197.111'},timeout=(3.05,70))# --- SoupManager: Simplifies web scraping with easy access to BeautifulSoup ---soup_manager=SoupManager(url_manager=url_manager,request_manager=request_manager)# --- LinkManager: Extracts and manages links and associated data from HTML source code ---link_manager=LinkManager(url_manager=url_manager,soup_manager=soup_manager,link_attr_value_desired=['/view_video.php?viewkey='],link_attr_value_undesired=['phantomjs'])# Download videos from provided links (list or string)video_manager=VideoDownloader(link=link_manager.all_desired_links).download()# Use them individually, with default dependencies for basic inputs:standalone_soup=SoupManager(url=url).soupstandalone_links=LinkManager(url=url).all_desired_links# Updating methods for manager classesurl_1='thedailydialectics.com'print(f"updating URL to{url_1}")url_manager.update_url(url=url_1)request_manager.update_url(url=url_1)soup_manager.update_url(url=url_1)link_manager.update_url(url=url_1)# Updating URL manager referencesrequest_manager.update_url_manager(url_manager=url_manager)soup_manager.update_url_manager(url_manager=url_manager)link_manager.update_url_manager(url_manager=url_manager)# Updating source code for managerssource_code_bytes=request_manager.source_code_bytessoup_manager.update_source_code(source_code=source_code_bytes)link_manager.update_source_code(source_code=source_code_bytes)LicenseThis project is licensed under the MIT License - see theLICENSEfile for details.Module Information-Author: putkoff -Author Email:partners@abstractendeavors.com-Github:https://github.com/AbstractEndeavors/abstract_essentials/tree/main/abstract_webtools-PYPI:https://pypi.org/project/abstract-webtools-Part of: abstract_essentials -Date: 10/10/2023 -Version: 0.1.4.54
abstra-runtimes
abstra-runtimes-libCollects runtimes/executions for Abstra Cloud
abstrys-core
This repository contains core modules used by a number of different applications. It consists of the following modules:
abstrys/app_settings.py – stores application settings in a directory named after the application (for example, ~/.dhop/settings.json). It's basically just a dict that knows how to store and restore itself.
Installing it
If you've installed any of my other applications, this library is likely to be installed already. However, if you want to install it from source, just run the setup script:
./setup.py install --user
License
This is provided as open-source software under the BSD 3-clause license. See the LICENSE file provided with this repository for details.
About the author
This library was written by Eron Hennessey <eron@abstrys.com>.
About Python
If you want to learn more about Python (an incredibly practical and elegant programming language), head over to <https://python.org/>!
How to contribute
If you find my apps (or this library) useful, you may want to contribute. The easiest way, by far, is by creating a pull request!
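The description calls app_settings "just a dict that knows how to store and restore itself", so here is a minimal generic sketch of that idea. This is not the library's actual class; the class name, constructor argument, and JSON location are assumptions based on the description.

```python
import json
from pathlib import Path


class AppSettings(dict):
    """A dict that persists itself as ~/.<appname>/settings.json (illustrative)."""

    def __init__(self, app_name):
        super().__init__()
        self.path = Path.home() / f".{app_name}" / "settings.json"
        if self.path.exists():
            # Restore previously saved settings into the dict.
            self.update(json.loads(self.path.read_text()))

    def save(self):
        self.path.parent.mkdir(parents=True, exist_ok=True)
        self.path.write_text(json.dumps(self, indent=2))


settings = AppSettings("dhop")
settings["last_location"] = "/tmp"
settings.save()                               # written to ~/.dhop/settings.json
print(AppSettings("dhop")["last_location"])   # restored on the next load
```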
abstrys-toolkit
UNKNOWN
absu
absu: Azure Blob Storage UpdaterWhat is it?absuis a tool that helps you syncing a local folder to the $web container of an Azure Blob Storage. This is useful if you want to host a website that was generated with a static website generator likehugo,mkdocs,Jekyll,next.jsand so on on Azure Blob Storage...absudoes the following things:create a resource group in Azure (if not existing)create a Storage Account within that resource group (if not existing)create a $web container within that storage (if not existing)delete all files in that containerupload all data from a local folder into that containerYou can skip the first two steps by providing a connection string for an existing Azure Blob Storage.How to installYou will need these tools installed:Azure CLI2.20.0 or higherPython3.9 or higherUse the following commands to make sure your installations works.Python:python--versionPip:pip--versionAz:az--versionThen install theabsupackage with pip:pipinstallabsuHow to useShow help:python-mabsu-husage: __main__.py [-h] [-c CONNECTIONSTRING] [-s STORAGE] [-r RESOURCEGROUP] [-f FOLDER] [-v] Please provide at least one of the following: 1) a Azure Blob Storage name OR 2) a connection string. optional arguments: -h, --help show this help message and exit -c CONNECTIONSTRING, --connectionstring CONNECTIONSTRING Azure Blob Storage connection string. -s STORAGE, --storage STORAGE Azure Blob Storage resource name. Creates new one if not existing. -r RESOURCEGROUP, --resourcegroup RESOURCEGROUP The Azure Blob Storage is in this resource group. Default: blogs-rg -f FOLDER, --folder FOLDER Folder with static website data. Will be pushed to the storage. -v, --verbose Verbose, use this flag for debugging.Execute the tool with default parameters:python-mabsuabsuwill ask you for the Azure Blob Storage name and for the local folder in the command line. If you have access to multiple subscriptions, then it will also ask you which subscription you want to use.The resource group will be called "blob-rg" per default. It can be changed with the --resourcegroup parameter.Provide a connection string:python-mabsu--connectionstring"DefaultEndpointsProtocol=https;AccountName=STORAGENAME;AccountKey=PASSWORD;EndpointSuffix=core.windows.net"Provide a local folder (mywebsite) a storage account name (mystorage01) and a resource group (mybloggroup):python-mabsu--foldermywebsite--resourcegroupmybloggroup--storagemystorage01Debugging:python-mabsu--verboseBuild this projectBuild locally:pipinstall.Build dist files:pythonsetup.pysdistUpload tohttps://test.pypi.org:twineupload--repository-urlhttps://test.pypi.org/legacy/dist/*Upload tohttps://pypi.org:twineuploaddist/*
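absu's own internals are not shown above, but the final steps it describes (clearing the $web container and uploading the local folder) can be sketched with the azure-storage-blob SDK. The connection string source and folder name below are placeholders; this is an illustration of the idea, not absu's code.

```python
import os

from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # placeholder source
service = BlobServiceClient.from_connection_string(conn_str)
container = service.get_container_client("$web")

# Delete everything currently in the $web container.
for blob in container.list_blobs():
    container.delete_blob(blob.name)

# Upload every file from the local folder, keeping relative paths as blob names.
local_folder = "mywebsite"
for root, _, files in os.walk(local_folder):
    for name in files:
        path = os.path.join(root, name)
        blob_name = os.path.relpath(path, local_folder).replace(os.sep, "/")
        with open(path, "rb") as data:
            container.upload_blob(name=blob_name, data=data, overwrite=True)
```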
absum
absum - Abstractive Summarization for Data AugmentationIntroductionImbalanced class distribution is a common problem in ML. Undersampling combined with oversampling are two methods of addressing this issue. A technique such as SMOTE can be effective for oversampling, although the problem becomes a bit more difficult with multilabel datasets.MLSMOTEhas been proposed, but the high dimensional nature of numerical vectors created from text can sometimes make other forms of data augmentation more appealing.absum is an NLP library that uses abstractive summarization to perform data augmentation in order to oversample under-represented classes in datasets. Recent developments in abstractive summarization make this approach optimal in achieving realistic data for the augmentation process.It uses the latestHuggingface T5model by default, but is designed in a modular way to allow you to use any pre-trained or out-of-the-box Transformers models capable of abstractive summarization. absum is format agnostic, expecting only a dataframe containing text and all features. It also uses multiprocessing to achieve optimal performance.Singular summarization calls are also possible.AlgorithmAppend counts or the number of rows to add for each feature are first calculated with a ceiling threshold. Namely, if a given feature has 1000 rows and the ceiling is 100, its append count will be 0.For each feature it then completes a loop from an append index range to the append count specified for that given feature. The append index is stored to allow for multi processing.An abstractive summarization is calculated for a specified size subset of all rows that uniquely have the given feature. If multiprocessing is set, the call to abstractive summarization is stored in a task array later passed to a sub-routine that runs the calls in parallel using themultiprocessinglibrary, vastly reducing runtime.Each summarization is appended to a new dataframe with the respective features one-hot encoded.InstallationVia pippipinstallabsumFrom sourcegitclonehttps://github.com/aaronbriel/absum.git pipinstall[--editable].orpipinstallgit+https://github.com/aaronbriel/absum.gitUsageabsum expects a DataFrame containing a text column which defaults to 'text', and the remaining columns representing one-hot encoded features. If additional columns are present that you do not wish to be considered, you have the option to pass in specific one-hot encoded features as a comma-separated string to the 'features' parameter. All available parameters are detailed in the Parameters section below.importpandasaspd fromabsumimportAugmentorcsv='path_to_csv'df=pd.read_csv(csv)augmentor=Augmentor(df,text_column='review_text')df_augmented=augmentor.abs_sum_augment()# Store resulting dataframe as a csvdf_augmented.to_csv(csv.replace('.csv','-augmented.csv'),encoding='utf-8',index=False)Running singular summarization on any chunk of text is simple:text = chunk_of_text_to_summarize augmentor = Augmentor(min_length=100, max_length=200) output = augmentor.get_abstractive_summarization(text)NOTE: When running any summarizations you may see the following warning message which can be ignored: "Token indices sequence length is longer than the specified maximum sequence length for this model (2987 > 512). Running this sequence through the model will result in indexing errors". 
For more information refer tothis issue.ParametersNameTypeDescriptiondf(:class:pandas.Dataframe,optional, defaults to None)Dataframe containing text and one-hot encoded features.text_column(:obj:string,optional, defaults to "text")Column in df containing text.features(:obj:string,optional, defaults to None)Comma-separated string of features to possibly augment data for.device(:class:torch.device,optional, 'cuda' or 'cpu')Torch device to run on cuda if available otherwise cpu.model(:class:~transformers.T5ForConditionalGeneration,optional, defaults to T5ForConditionalGeneration.from_pretrained('t5-small'))Model used for abstractive summarization.tokenizer(:class:~transformers.T5Tokenizer,optional, defaults to T5Tokenizer.from_pretrained('t5-small'))Tokenizer used for abstractive summarization.return_tensors(:obj:str,optional, defaults to "pt")Can be set to ‘tf’, ‘pt’ or ‘np’ to return respectively TensorFlow tf.constant, PyTorch torch.Tensor or Numpy :oj: np.ndarray instead of a list of python integers.num_beams(:obj:int,optional, defaults to 4)Number of beams for beam search. Must be between 1 and infinity. 1 means no beam search. Default to 1.no_repeat_ngram_size(:obj:int,optional, defaults to 4If set to int > 0, all ngrams of size no_repeat_ngram_size can only occur once.min_length(:obj:int,optional, defaults to 10)The min length of the sequence to be generated. Between 0 and infinity. Default to 10.max_length(:obj:int,optional, defaults to 50)The max length of the sequence to be generated. Between min_length and infinity. Default to 50.early_stopping(:obj:bool,optional, defaults to True)bool if set to True beam search is stopped when at least num_beams sentences finished per batch. Defaults to False as defined in configuration_utils.PretrainedConfig.skip_special_tokens(:obj:bool,optional, defaults to True)Don't decode special tokens (self.all_special_tokens). Default: False.num_samples(:obj:int,optional, defaults to 100)Number of samples to pull from dataframe with specific feature to use in generating new sample with Abstractive Summarization.threshold(:obj:int,optional, defaults to 3500)Maximum ceiling for each feature, normally the under-sample max.multiproc(:obj:bool,optional, defaults to True)If set, stores calls to abstractive summarization in array which is then passed to run_cpu_tasks_in_parallel to allow for increasing performance through multiprocessing.debug(:obj:bool,optional, defaults to True)If set, prints generated summarizations.CitationPlease referencethis libraryand the HuggingFacepytorch-transformerslibrary if you use this work in a published or open-source project.
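Putting the constructor parameters from the table above together, a typical call might look like the sketch below; the CSV path and the one-hot column names are made up for illustration:

import pandas as pd
from absum import Augmentor

df = pd.read_csv("reviews-one-hot.csv")  # a text column plus one-hot encoded feature columns

augmentor = Augmentor(
    df,
    text_column="review_text",
    features="toxic,spam",  # only augment these one-hot columns
    num_samples=50,         # rows sampled per feature for each summarization
    threshold=1000,         # ceiling: features with at least this many rows are skipped
    multiproc=True,
)
df_augmented = augmentor.abs_sum_augment()
df_augmented.to_csv("reviews-augmented.csv", index=False, encoding="utf-8")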
absurdia
Official Absurdia Bindings for Python. A Python library for Absurdia's API. Install: You can install this package by using the pip tool and installing: $ pip install absurdia Signing up to Absurdia backtesting: Sign up for Absurdia at https://app.absurdia.markets/signup. Using the package: Create a new agent in your dashboard (https://app.absurdia.markets/agents) and download the credential file. Use the agent token with the client as in the example below. from absurdia import Client # Create client client = Client('<Your Agent Token>') # Get your account account = client.accounts.current() Alternatively, use the environment variable ABSURDIA_TOKEN, or put the credential file in the same directory as your Python script. Import a Freqtrade backtest: Freqtrade backtests are run using its CLI. This Python library also comes with a CLI that can work together with Freqtrade's commands. First, add a token to authenticate your agent: $ absurdia login --token '<Your Agent Token>' Once authenticated, simply append absurdia backtest -- to your Freqtrade backtesting command. For example: $ absurdia backtest -- freqtrade backtesting --strategy AwesomeStrategy --timeframe 1m License: Licensed under the BSD 3 license, see LICENSE.
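The description above mentions the ABSURDIA_TOKEN environment variable as an alternative to passing the token explicitly. The sketch below assumes the client falls back to that variable when constructed without arguments; that fallback is an assumption based on the sentence above, not documented behaviour:

import os
from absurdia import Client

os.environ["ABSURDIA_TOKEN"] = "<Your Agent Token>"  # normally exported in your shell rather than hard-coded
client = Client()  # assumed to pick up ABSURDIA_TOKEN when no token is passed
account = client.accounts.current()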
abs-web-testing
No description available on PyPI.
absynthe
Absynthe: A (branching) Behavior Synthesizer. Motivation: Absynthe came about in response to the need for test data for analyzing the performance and accuracy of log analysis algorithms. Even though plenty of real-life logs are available, e.g. /var/log/ on unix-based laptops, they do not serve the purpose of test data. For that, we need to understand the core application logic that is generating these logs. A more interesting situation arises while trying to test log analytic (and anomaly detection) solutions for distributed applications where multiple sources or modules emit their respective log messages into a single log queue or stream. This means that consecutive log lines could have originated from different, unrelated application components. Absynthe provides ground truth models to simulate such situations. You need Absynthe if you wish to simulate the behavior of any well-defined process -- whether it's a computer application or a business process flow. Overview: Each business process or computer application is modelled as a control flow graph (or CFG), which typically has one or more root (i.e. entry) nodes and multiple leaf (i.e. end) nodes. Tree-like CFG: An example of a simple, tree-like CFG generated using Absynthe is shown below. This is like a tree since nodes are laid out in levels, and nodes at level i have outgoing edges only to nodes at level i + 1. Each behavior is the sequence of nodes encountered while traversing this CFG from a root to a leaf. Of course, a CFG might contain loops which could be traversed multiple times before arriving at the leaf. Moreover, if there are multiple CFGs, then Absynthe can synthesize interleaved behaviors. This means that a single sequence of nodes might contain nodes from multiple CFGs. We are ultimately interested in this interleaving behavior, which is produced by multiple CFGs. The above screenshot shows logs generated by Absynthe. Each log line starts with a time stamp, followed by a session ID, CFG ID, and a log message. At present, the log message is simply a random concatenation of the node ID to which the log message corresponds. A single CFG might participate in multiple sessions, where each session is a different traversal of the CFG. Therefore, we maintain both session ID and CFG ID in the log line. Directed Cyclic CFG: An example of a more complex CFG, a directed cyclic graph, is shown in the figure below. It expands the tree-like graph illustrated above by: attaching loops to some of the nodes, constructing skip-level edges, i.e. edges from a node at level i to a node at level ≥ (i + 2), and optionally, upward edges (not shown here), i.e. edges from a node at level i to a node at level ≤ (i - 1). The identifiers of nodes appearing in loops are helpfully prefixed with the identifiers of the nodes where these loops start and finish. Moreover, loops could be traversed multiple times in a single behavior, as illustrated in the figure below. Installation: This package has been developed with Python 3.6.* and depends on scipy 1.2.1. Things might not work with Python 3.7.* or scipy 1.3.*. Therefore, consider creating a virtual environment if your default configuration differs. The latest release is available on PyPI; simply pip install absynthe.
Themasterbranch of this repository will always provide the latest release.For the latest features not yet released, clone or download thedevelopbranch and then:# Change dir to absynthecd/path/to/absynthe# Install dependenciespipinstall-rrequirements.txt# Install absynthepipinstall.UsageIt is possible to start using Absynthe with two classes:any concrete implementation of the abstractGraphBuilderclass, which generates CFGs, andany concrete implementation of the abstractBehaviorclass, which traverses the CFGs generated above and emits log messages.For instance, consider thebasicLogGenerationmethod in./examples/01_generateSimpleBehavior.py:fromabsynthe.graph_builderimportTreeBuilderfromabsynthe.behaviorimportMonospaceInterleavingdefbasicLogGeneration(numRoots:int=2,numLeaves:int=4,branching:int=2,numInnerNodes:int=16,loggerNodeTypes:str="SimpleLoggerNode"):# Capture all the arguments required by GraphBuilder classtree_kwargs={TreeBuilder.KW_NUM_ROOTS:str(numRoots),TreeBuilder.KW_NUM_LEAVES:str(numLeaves),TreeBuilder.KW_BRANCHING_DEGREE:str(branching),TreeBuilder.KW_NUM_INNER_NODES:str(numInnerNodes),TreeBuilder.KW_SUPPORTED_NODE_TYPES:loggerNodeTypes}# Instantiate a concrete GraphBuilder. Note that the# generateNewGraph() method of this class returns a# new, randomly generated graph that (more or less)# satisfies all the parameters provided to the# constructor, viz. tree_kwargs in the present case.simpleTreeBuilder=TreeBuilder(**tree_kwargs)# Instantiate a concrete behavior generator. Some# behavior generators do not print unique session ID# for each run, but it's nice to have those.wSessionID:bool=TrueexBehavior=MonospaceInterleaving(wSessionID)# Add multiple graphs to this behavior generator. The# behaviors that it will synthesize would essentially# be interleavings of simultaneous traversals of all# these graphs.exBehavior.addGraph(simpleTreeBuilder.generateNewGraph())exBehavior.addGraph(simpleTreeBuilder.generateNewGraph())exBehavior.addGraph(simpleTreeBuilder.generateNewGraph())exBehavior.addGraph(simpleTreeBuilder.generateNewGraph())# Specify how many behaviors are to be synthesized,# and get going.numTraversalsOfEachGraph:int=2forlogLineinexBehavior.synthesize(numTraversalsOfEachGraph):print(logLine)returnIn order to generate behaviors from a directed cyclic CFG, create a DCG as shown in./examples/03_generateControlFlowDCG.pyand then generate behaviors after adding the DCG to a behavior object as shown in the code snippet above.Note:When generating a behavior, i.e. when traversing a graph, successors of nodes are chosen based on the probability distributions associated with those nodes. Different nodes rely on different distributions and these nodes are randomly assigned in the graphs that are constructed bygenerateNewGraph()methods, resulting in graphs with a mix of nodes.Release NotesNote:This tool is still in alpha stage, so backward compatibility is not guaranteed between releases. However, inasmuch as users stick to graph builders'generateNewGraph()methods, they will stay away from compatibility problems.Major changes in v0.0.2Added new graph builders, viz.DAGBuilderandDCGBuilder, which build CFGs with skip-level edges and loops respectively.Added new node, viz.BinomialNode, which exploits the binomial distribution in order to select its successors at the time of graph traversal.Added a separate utility class calledUtilsinabsynthe.cfg.utils.pyto create a newNodeobject from any of the concrete implementations ofNodeat random. 
All concrete implementations of Node are therefore transparently available to graph builders (and everyone else) through this utility. Coming up in future releases: sophisticated interleaving behaviors; logger nodes that emit more life-like log messages; anomalous behaviors.
abt
No description available on PyPI.
ab-telegram-bot
No description available on PyPI.
abtem
No description available on PyPI.
abtesify
No description available on PyPI.
abtest
A - B Test Platform. Key Features: allows you to find the distribution of the testing values; time period detection (year, quarter, month, week, week-part, day, hour) added as subgroups; subgroups of testing are available; schedule your test daily, monthly, weekly, hourly; the confidence level can be assigned automatically, and tests are run for each confidence level (e.g. 0.01 and 0.05 applied to tests individually). Running Platform. Test Parameters: test_groups : if there are any sub-groups of the active and control group, the framework can run test results for each subgroup. This parameter must be the column name that exists on the given data set for both Active and Control groups. groups : The column name that represents the active and control group flag. feature : The column name that represents the actual values that are tested according to the two main groups. data_source : The location where the data is stored or the query (check data source for details). data_query_path : Type of data source to import data to the platform (optional Ms SQL, PostgreSQL, AWS RedShift, Google BigQuery, csv, json, pickle). time_period : The additional time period (optional year, month, day, hour, week, week day, day part, quarter) (check details time periods). This parameter must be assigned when the A/B Test is scheduled. time_indicator : If the test is running periodically, the column name related to time must be assigned. This parameter must be assigned when the A/B Test is scheduled. exporting_data : Output results of export as CSV format (optional). The path alone is enough for importing data with .csv format. The output will be '_results.csv' with the test executed date, e.g. 20201205.results.csv. This parameter is by default True. When you don't want to create a result file, assign False and collect data via get_results. export_path : Output results of export as csv format. The path alone is enough for importing data with .csv format. Output will be '_results.csv' with the test executed date, e.g. 20201205.results.csv. This parameter is crucial, otherwise the docs folder can not be copied to the given path. connector : if there are connection parameters such as user, password, host, port, this allows us to assign them in dictionary format (e.g. {"user": ***, "pw": ****}). confidence_level : The confidence level of test results (list or float). boostrap_sample_ratio : Bootstrapping randomly selected sample data rate (between 0 and 1). boostrap_iteration : Number of iterations for bootstrapping. time_schedule : When the AB Test needs to be scheduled, only the period of time is required. Available time periods are 'Hourly', 'Monthly', 'Weekly', 'Mondays', ..., 'Sundays'. This parameter must be assigned when the A/B Test is scheduled. Data Source: Here are the data sources that you can connect to with your SQL queries: Ms SQL Server, PostgreSQL, AWS RedShift, Google BigQuery, .csv, .json, pickle. Connection PostgreSQL - MS SQL - AWS RedShift: data_source = "postgresql" connector = {"user": ***, "password": ***, "server": "127.0.0.1", "port": ****, "db": ***} data_main_path =""" SELECT groups, test_groups feature, time_indicator FROM table """ Connection Google BigQuery: data_source = "googlebigquery" connector = {"data_main_path": "./json_file_where_you_stored", "db": "flash-clover-*********.json"} data_main_path =""" SELECT groups, test_groups feature, time_indicator FROM table """ Connection csv - .json - .pickle: It is crucial that when the data source is assigned as 'csv' - 'json' - 'pickle', the file path must be assigned directly to the file with the format.
For instance data_source is 'csv' and 'data_main_path must be '/data_where_you_store/data_where_you_store_2/../data_that_you_want_to_import.csv'data_source = "csv" data_main_path = "./data_where_you_store/***.csv"Running ABTestgroups = "groups" test_groups = "test_groups" feature = "feature" data_source = "postgresql" connector = {"user": ***, "password": ***, "server": "127.0.0.1", "port": ****, "db": ***} data_main_path =""" SELECT groups, test_groups feature, time_indicator FROM table """ confidence_level = [0.01, 0.05] boostrap_ratio = [0.1, 0.2] export_path = abspath("") + '/data' ab = ABTest(test_groups=test_groups, groups=groups, feature=feature, data_source=data_source, data_query_path=query, time_period=time_period, time_indicator=time_indicator, time_schedule=time_schedule, export_path=export_path, connector=connector, confidence_level=confidence_level, boostrap_sample_ratio=boostrap_ratio) ab.ab_test_init()Get Resultsab = ABTest(test_groups=test_groups, groups=groups, feature=feature, data_source=data_source, data_query_path=query, time_period=time_period, time_indicator=time_indicator, time_schedule=time_schedule, export_path=None, connector=connector, confidence_level=confidence_level, boostrap_sample_ratio=boostrap_ratio) ab.ab_test_init() results = ab.get_results()SchedulePlatform allows you to schedule your ABTest weekly, daily, monthly, hourly, every Monday, Tuesday, ..., Sunday.time_schedule :Additional to ABTest parameters, this parameter allows you to fix the time period.daily schedule: Dailymonthly schedule: Monthlyday of week schedule: Monday - Mondays, Tuesday - Tuesdays, Wednesday - Wednesdayshourly schedule: Hourly from ab_test_platform.executor import ABTestgroups = "groups" test_groups = "test_groups" feature = "feature" data_source = "postgresql" data_source = "postgresql" connector = {"user": ***, "password": ***, "server": "127.0.0.1", "port": ****, "db": ***} data_main_path =""" SELECT groups, test_groups feature, time_indicator FROM table """ confidence_level = [0.01, 0.05] boostrap_ratio = [0.1, 0.2] export_path = abspath("") + '/data' ab = ABTest(test_groups=test_groups, groups=groups, feature=feature, data_source=data_source, data_query_path=query, time_period=time_period, time_indicator=time_indicator, time_schedule=time_schedule, export_path=export_path, connector=connector, confidence_level=confidence_level, boostrap_sample_ratio=boostrap_ratio) ab.schedule_test()Every 1 hour at 00:50:00 do run_ab_test() (last run: [never], next run: 2020-12-03 22:50:00)Once you have assign the parameter time_schedule, A/B Test will be run with the recent date and recent date will be updated by *time_periodandtime_schedule.e.g.1st iteration:recent date = 2020-12-05 00:00, time_schedule=Hourly.2nd iteration:recent date = 2020-12-05 01:00 (updated).e.g.1st iteration:recent date = 2020-12-05 00:00, start_date = 2020-11-29 00:00 (recent date - 1 week) time_schedule=Hourly. time_period=Weekly,2nd iteration:recent date = 2020-12-05 01:00 (updated) start_date = 2020-11-29 01:00 (recent date - 1 week)This parameter must be assigned when A/B Test is scheduling.
ab-test
No description available on PyPI.
ab-test-client
No description available on PyPI.
abtesting
No description available on PyPI.
ab-testing-analysis
A/B-testing: A/B testing is a process that allows a developer/data scientist to analyze and evaluate the performance of products in an experiment. In this process two or more versions of a variable (web page, page element, products etc.) are shown to different segments of website visitors at the same time to determine which version has the greatest impact and drives business metrics. In A/B testing, A refers to the original testing variable, whereas B refers to a new version of the original testing variable. The impact of the results can be evaluated based on: Conversion Rate; Significance test. Documentation can be found on ab-testing-analysis.readthedocs.io. Installation & Usage: Installing the library from pypi - it only depends on pandas & numpy: pip install ab-testing-analysis Usages & working sample - Tutorial. Example code: from ab_testing import ABTest from ab_testing.data import Dataset df = Dataset().data() ab_obj = ABTest(df, response_column='Response', group_column='Group') print(ab_obj.conversion_rate(), '\n', '-'*10) print(ab_obj.significance_test(), '\n', '-'*10) print(df.head()) Output: Conversion Rate / Standard Deviation / Standard Error: A 20.20% 0.401 0.018; B 22.20% 0.416 0.0186 ---------- z statistic: -0.77 p-value: 0.439 Confidence Interval 95% for A group: 16.68% to 23.72% Confidence Interval 95% for B group: 18.56% to 25.84% The Group A fail to perform significantly different than group B. The P-Value of the test is 0.439 which is above 0.05, hence Null hypothesis Hₒ cannot be rejected. ---------- Users Response Group: 0 IS36FC7AQJ 0 A; 1 LZW2YNYHZG 1 A; 2 9588IGN0RN 1 A; 3 HSAH1TYQFF 1 A; 4 5D9G147941 0 A Contribution: All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome. A detailed overview of how to contribute can be found in the contributing guide. Code of Conduct: As contributors and maintainers of this project, you are expected to abide by the code of conduct. More information can be found at Code of conduct. License: MIT. Misc links and information: Recent talk in The Data Science Hub @ Northeastern University; Slide deck for library demo - AB Test analysis - PPT/Deck; Colab Notebook for walkthrough - Notebook ipynb
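The example above uses the bundled Dataset; since the constructor shown there takes any dataframe with a response column and a group column, the same analysis can be run on your own data. The column values below are made up for illustration:

import pandas as pd
from ab_testing import ABTest

df = pd.DataFrame({
    "Group": ["A", "A", "A", "B", "B", "B"],
    "Response": [0, 1, 0, 1, 1, 0],
})

ab_obj = ABTest(df, response_column="Response", group_column="Group")
print(ab_obj.conversion_rate())
print(ab_obj.significance_test())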
abtest-sdk
No description available on PyPI.
abtestsizer
No description available on PyPI.
ab-test-toolkit
ab-test-toolkit. Install: pip install ab_test_toolkit. Imports: from ab_test_toolkit.generator import (generate_binary_data, generate_continuous_data, data_to_contingency, contingency_from_counts,) from ab_test_toolkit.power import (simulate_power_binary, sample_size_binary, simulate_power_continuous, sample_size_continuous,) from ab_test_toolkit.plotting import (plot_power, plot_distribution, plot_betas, plot_binary_power,) from ab_test_toolkit.analyze import p_value_binary. Binary target (e.g. conversion rate experiments). Sample size: We can calculate the required sample size with the function "sample_size_binary". The input needed is: Conversion rate control: cr0. Conversion rate variant for minimal detectable effect: cr1 (for example, if we have a conversion rate of 1% and want to detect an effect of at least 20% relative, we would set cr0=0.010 and cr1=0.012). Significance threshold: alpha. Usually set to 0.05, this defines our tolerance for falsely detecting an effect if in reality there is none (alpha=0.05 means that in 5% of the cases we will detect an effect even though the samples for control and variant are drawn from the exact same distribution). Statistical power. Usually set to 0.8. This means that if the effect is the minimal effect specified above, we have an 80% probability of identifying it as statistically significant (and hence a 20% probability of not identifying it). one_sided: If the test is one-sided (one_sided=True) or two-sided (one_sided=False). As a rule of thumb, if there are very strong reasons to believe that the variant cannot be inferior to the control, we can use a one-sided test. In case of doubt, using a two-sided test is better. Let us calculate the sample size for the following example: n_sample = sample_size_binary(cr0=0.01, cr1=0.012, alpha=0.05, power=0.8, one_sided=True,) print(f"Required sample size per variant is {int(n_sample)}.") Required sample size per variant is 33560. n_sample_two_sided = sample_size_binary(cr0=0.01, cr1=0.012, alpha=0.05, power=0.8, one_sided=False,) print(f"For the two-sided experiment, required sample size per variant is {int(n_sample_two_sided)}.") For the two-sided experiment, required sample size per variant is 42606. Power simulations: What happens if we use a smaller sample size? And how can we understand the sample size? Let us analyze the statistical power with synthetic data. We can do this with the simulate_power_binary function. We are using some default arguments here, see this page for more information. # simulation = simulate_power_binary() Note: The simulation object returns the total sample size, so we need to split it per variant. # simulation Finally, we can plot the results (note: the plot function shows the sample size per variant): # plot_power(# simulation,# added_lines=[{"sample_size": sample_size_binary(), "label": "Chi2"}],# ) Compute p-value: n0 = 5000 n1 = 5100 c0 = 450 c1 = 495 df_c = contingency_from_counts(n0, c0, n1, c1) df_c users converted not_converted cvr (by group): 0: 5000 450 4550 0.090000; 1: 5100 495 4605 0.097059 p_value_binary(df_c) 0.11824221841149218 The problem of peaking: wip. Continuous target (e.g. average): Here we assume normally distributed data (which usually holds due to the central limit theorem). Sample size: We can calculate the required sample size with the function "sample_size_continuous". The input needed is: mu1: Mean of the control group. mu2: Mean of the variant group assuming the minimal detectable effect (e.g.
if the mean is 5, and we want to detect an effect as small as 0.05, mu1=5.00 and mu2=5.05). sigma: Standard deviation (we assume the same for variant and control; it should be estimated from historical data). alpha, power, one_sided: as in the binary case. Let us calculate an example: n_sample = sample_size_continuous(mu1=5.0, mu2=5.05, sigma=1, alpha=0.05, power=0.8, one_sided=True) print(f"Required sample size per variant is {int(n_sample)}.") Let us also do some simulations. These show results for the t-test as well as Bayesian testing (only 1-sided). # simulation = simulate_power_continuous() # plot_power(# simulation,# added_lines=[# {"sample_size": continuous_sample_size(), "label": "Formula"}# ],# ) Data Generators: We can also use the data generators for example data to analyze or visualize as if they were experiments. Distribution without effect: df_continuous = generate_continuous_data(effect=0) # plot_distribution(df_continuous) Distribution with effect: df_continuous = generate_continuous_data(effect=1) # plot_distribution(df_continuous) Visualizations: Plot beta distributions for a contingency table: df = generate_binary_data() df_contingency = data_to_contingency(df) # fig = plot_betas(df_contingency, xmin=0, xmax=0.04) False positives: # simulation = simulate_power_binary(cr0=0.01, cr1=0.01, one_sided=False) # plot_power(simulation, is_effect=False) # simulation = simulate_power_binary(cr0=0.01, cr1=0.01, one_sided=True) # plot_power(simulation, is_effect=False)
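Putting the pieces shown above into one short end-to-end binary example: generate synthetic conversion data, build the contingency table, and compute the p-value. Only functions already shown in this description are used:

from ab_test_toolkit.generator import generate_binary_data, data_to_contingency
from ab_test_toolkit.analyze import p_value_binary

df = generate_binary_data()               # synthetic control/variant conversion data
df_contingency = data_to_contingency(df)  # users/converted counts per group
print(p_value_binary(df_contingency))     # significance of the observed difference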
abtoast
abtoastis a collection of tools for A/B testing in Python.
abtools
No description available on PyPI.
abtor
Failed to fetch description. HTTP Status Code: 404
abtpackage
This is a simple example package. You can use [Github-flavored Markdown](https://github.com/codeArrow/packaging) to write your content.
abtrap
abtrap
abu
UNKNOWN
abu.admin
UNKNOWN
abulafia
𝚊𝚋𝚞𝚕𝚊𝚏𝚒𝚊: A tool for fair and reproducible crowdsourcing𝚊𝚋𝚞𝚕𝚊𝚏𝚒𝚊 is a tool for creating and deploying tasks on the theTolokacrowdsourcing platform.The tool allows you to create crowdsourcing tasks using pre-defined task interfaces and to configure their settings usingYAMLfiles.For a description of the tool and the motivation for its development, see thispublication.Please cite the following publication if you use the tool in your research.Tuomo Hiippala, Helmiina Hotti, and Rosa Suviranta. 2022. Developing a tool for fair and reproducible use of paid crowdsourcing in the digital humanities. InProceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 7–12, Gyeongju, Republic of Korea. International Conference on Computational Linguistics.For convenience, you can use the BibTeX entry below.@inproceedings{hiippala-etal-2022-developing, title = "Developing a tool for fair and reproducible use of paid crowdsourcing in the digital humanities", author = "Hiippala, Tuomo and Hotti, Helmiina and Suviranta, Rosa", booktitle = "Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature", month = oct, year = "2022", address = "Gyeongju, Republic of Korea", publisher = "International Conference on Computational Linguistics", url = "https://aclanthology.org/2022.latechclfl-1.2", pages = "7--12", abstract = "This system demonstration paper describes ongoing work on a tool for fair and reproducible use of paid crowdsourcing in the digital humanities. Paid crowdsourcing is widely used in natural language processing and computer vision, but has been rarely applied in the digital humanities due to ethical concerns. We discuss concerns associated with paid crowdsourcing and describe how we seek to mitigate them in designing the tool and crowdsourcing pipelines. We demonstrate how the tool may be used to create annotations for diagrams, a complex mode of expression whose description requires human input.", }InstallationYou can install the tool fromPyPIusing the following command:pip install abulafiaAlternatively, you can clone this repository and install the tool locally. Move to the directory that contains the repository and type:pip install .UsageSee the directoryexamplesfor documentation and practical examples.To deploy your crowdsourcing tasks to Toloka, the tool needs to read your credentials from a JSON file e.g.creds.json. Never add this file to version control.The file must contain the following key/value pairs in JSON:{"token":"YOUR_OAUTH_TOKEN","mode":"SANDBOX"}When you have tested your tasks in the Toloka sandbox, change the value for"mode"from"SANDBOX"to"PRODUCTION"to deploy the tasks on Toloka.The screenshot below illustrates tool in action.ContactIf you have questions about the tool, feel free to contact tuomo.hiippala (at) helsinki.fi or open an issue on GitHub.
abu-package
No description available on PyPI.
abupy
abu aims for intelligent strategies that can be described in a single sentence. abu helps users automatically refine their strategies, proactively analyzes the trades a strategy produces, and intelligently intercepts strategy-generated orders that are likely to fail. At this stage, quantitative strategies are still hand-written code; the abu quantitative trading system is designed to move toward having the computer implement the entire workflow automatically, including writing the quantitative strategies themselves. Our expectation for the future is that abupy users only need to provide a few simple seed strategies, and the computer keeps learning and growing on top of these seeds, creating new strategies and intelligently adjusting strategy parameters as the time-series data evolves. ### Features: uses multiple machine learning techniques to intelligently optimize strategies; guides strategies during live trading to improve their real-market performance and beat the market. ### Supported investment markets: US stocks, A-shares, Hong Kong stocks; futures, options; Bitcoin, Litecoin. ### Engineering design goals: separate the base strategies from the strategy optimization and supervision modules to improve flexibility and adaptability.
abu-quant
UNKNOWN
abu.rpc
Failed to fetch description. HTTP Status Code: 404
abus
ABuS is a script for backing up (and restoring) your files to a local disk.The backups are encrypted, compressed, and deduplicated. It is assumed that another program (e.g. rsync) is used to make off-site copies of the backups (see below).Content of this document:CaveatsInstallationDocumentationPurgingRestoringOff-site copiesIndex DatabaseConfiguration fileCommand line switchesHistoryCaveatsABuS only works on Windows.ABuS only backs up file content. In particular the backups do not include permissions, symbolic links, hard links, or special files.If you use ABuS in anger (inspite of the lack of guarantees in the licence), please pay particular attention to what the documentation below says aboutoff-site backupsthepasswordoptionInstallationInstall Python 3.6 from python.orginclude pipit helps to add python to pathFrom the command line, “as administrator” if python has been installed “for all users”:c:\path\to\python36\scripts\pip install abusCreate minimal config file, e.g.:logfile c:/my/home/abus.log archive e:/backups password password1234 just kidding! [include] c:/my/homeInitialise the backup directory and the index database with:c:\path\to\python36\scripts\abus.exe -f c:/my/home/abus.cfg --initAdd to Task Scheduler:c:\path\to\python36\scripts\abus.exe -f c:/my/home/abus.cfg --backupIf there are any problems that prevent ABuS from getting as far as opening the log file (and Windows permissions can cause many such problems), then use cmd.exe to allow redirection:cmd /c c:\path\to\python36\scripts\abus.exe -f c:/my/home/abus.cfg --backup >c:\abus.err 2>&1DocumentationOverviewABuS is a single script for handling backups. Its command line parameters determine whether the backups are to be created, listed, or restored. The backups are stored in subdirectories of thebackup directorywhich must be on a local filesystem. For off-site copies another program is to be used, for example rsync.Warning:Off-site copies must be made correctly to minimise the risk of propagating any local corruption (see below).A configuration file is used to point to the backup directory, define the backup set, and some options. ABuS finds the configuration file either via a command line parameter or an environment variable.PurgingOld backup files are deleted after every backup. In order to determine which backups are deleted, time is divided into slots and only the latest version of a file in each slot is retained while the others are subject to purging. As slots get old they are combined into bigger slots.The configuration file defines the slot sizes usingfreq/agepairs of numbers, which define that 1 version infreqdays is to be retained for backups up toagedays old.For example, if the retention values are 1 7, 7 30, 28 150, then for each file one version a day is kept from the versions that are up to 7 days old, one a week is kept for versions up to 30 days old, and one every four weeks is kept up to 150 days.There is also a single slot older than the highestagedefined, called “slot 0”. In the example above one file older than 150 days will be kept as well.Purging of deleted filesThe time that a file deletion is detected (i.e. a file previously backed up no longer exists) must fall into slot 0 before the last backup of the file is purged. E.g. with default retention values, 150 days after a file is deleted its backups will be purged.RestoringBackup files can be restored from the backups using the–restorecommand line option.By default the backups to be restored are the latest version of each known file. 
The set of files can be restricted using “glob” positional arguments. As for exclusions, a*matches the directory separator. A backup is restored if its path matches any of the glob arguments. Slashes and backslashes can be used interchangeably.With the-doption the latest version of each backup before the given time is restored rather than the latest version before now. With-aall versions (before the cut-off time) are restored and a timestamp is added to the restored files’ names.After the set of restore files has been determined, ABuS removes the common part of their paths and creates the remaining relative paths in the current directory. E.g. if these the files were to be restored:c:/home/project/file_a c:/home/project/src/file_b c:/home/project/src/file_cThen they would be restored as:./file_a ./src/file_b ./src/file_cDeletionsFiles that have been deleted at the cut-off time are not restored. Note, however, that ABuS does not track historic deletions; for example, assume a certain file was last changed on Monday, deleted on Tuesday, and recreated on Wednesday. A restore with an end-of-Tuesday cut-off would restore the Monday version.ListingThe–listingoption lists backed up files. It takes the same options as–restoreand lists exactly those backup versions that would be restored.The–listingoption is implied if any of the restore filters are used without a–restore.Off-site copiesABuS only backs up to local filesystems. This means that the backups themselves are at risk of corruption, for example from ramsomware. It is important that another copy of the backup is made and that it fulfills these criteria:It must not be on a locally accessible filesystem or network share, so that the machine being backed up cannot corrupt it.Files must never be overwritten, once created, so that any local corruption does not propagate.As a consequence, partially transferred files must be removed at the destination.The following is an example of an rsync command that would copy the local backup directory to an off-site location:rsync --recursive --ignore-existing \ --exclude index.sl3 --exclude '*.part' \ /my/local/backups/ me@offsite:/backups/index.sl3need not be transferred because it changes and it can be rebuilt from the static files. Files with.partextension are backup files that are currently being written and will be renamed once complete. Excluding them ensures that incomplete backup files are not transferred.Off-site purgingSince it is not advisable to propagate changed files - and therefor deletions - to the off-site copy of the backup files, these must be purged independently.To that end ABuS creates acontent filein the backup directory which lists all backup files. The content file is compressed with gzip and its file name is that of the last backup run with a .gz extension. When such a file is written, the previous one is removed. Since the run names are basically ISO dates, a script on the off-site server can easily pick up the latest and remove all backup files that are not listed in it.N.B.:The following is only an outline of such a script to convey the idea. You must not use it without checking it first:cd .../offsite-copy keep_list=$(ls *.gz | tail -n 1) (find -type f -printf '%P\n'; zcat $keep_list $keep_list) | sort | uniq -u >/tmp/remove [[ $(wc -l /tmp/remove) -lt 50 ]] || exit # sanity check xargs rm </tmp/removeIndex DatabaseThe index database duplicates backup meta data for quicker access. Since it is changed during normal operation, it cannot be included in the off-site copy. 
There are therefore command line options to rebuild the index database from the backup files.Important:Before rebuilding the index database, check the integrity of the content file, for example by comparing it with its off-site copy.It is important that the index database be not rebuilt from corrupt backup data. Since the backup files are encrypted, corruption would normally show, but amissingbackup file would not. The integrity of the content file (seeOff-site purgingabove), which is not encrypted, must therefore be ascertained before rebuilding the index database.Configuration fileThe file has three sectionsparameters at the beginninginclusionsexclusionsABuS uses slashes as path separators internally. All filenames given in the config file or on the command line may use backslashes or slashes; all backslashes are converted to slashes.ParametersThe first word of each line is a parameter name, the following words form the value. Leading and trailing spaces are trimmed while spaces within the value are preserved.logfileSpecifies the path of a file to which all log entries are made. The parameter should be given first so that any subsequent errors in the configuration can be reported to the log.archiveSpecifies the path to the root backup directory containing all backup files.indexdbSpecifies the path to the index database. By default this isindex.sl3inside the backup directory, but it might be preferable to place it on a faster disk, for example.passwordSpecifies the encryption password to be used for all backup files. The encryption allows copying the backup archive to an off-site location.N.B.:Make sure the the config file is UTF-8 encoded, so that any special characters in the password are interpreted in a well-defined way.N.B.:Once a backup has been created the password must not be changed, since ABuS does no keep track of which backup files use which password (obviously). If you want to change the password, you need to create a new archive.retainSpecifies how old backups are pruned. The keyword is followed by a space-separated list of numbers formingfreqandagepairs, meaning: “keep one backup perfreqdays for files up toagedays old”. SeePurgingabove.Theagevalues must not repeat and thefreqvalues must be multiples of each other.freqcan be a float, e.g.0.25for six hours.The retention values default to:retain 1 7 56 150compressed_extensionsSpace-separated list of file extensions that ABuS assumes belong to files that are already compressed. All other files will be compressed before they are encrypted.The extensions are shell global patterns and are matched ignoring case. Thusjp*gis matched byjpg,JPG, andjpeg;*would switch compression off completely.Defaults to:7z arj avi bz2 flac gif gz jar jpeg jpg lz lzmo lzo mov mp3 mp4 png rar tgz tif tiff wma xz zipthreadsSets the maximum number simultaneous backups in order to limit the strain on CPU, IO, and memory. The default value is one less than the number of hardware threads on the system, but at most 8.InclusionsA line containing the header[include]starts the inclusion section, each line of which is a directory path which will be backed up recursively. There must be at least one inclusion.ExclusionsA line containing the header[exclude]starts the exclusion section, each line of which is a shell global pattern. 
All file paths that would be backed up (or directory paths that would be searched for files) are skipped if they match any of the patterns.A * in the patterns also matches the directory separators.*.bakignores any file with the extension .bak;*/~*ignores any file or directory starting with a tilde.Command line switchesRunabus--helpfor detailed command line switch help.Historyv11 2018-06-21Configuration option for maximum number of simultaneous backups (fixes MemoryError in lzma module on 32-bit Python)fix: possible ZeroDivisionError at restore “progress bar”v10 2018-06-17configuration option for extensions of already-compressed filesfix: matching of already-compressed extensions was case-sensitivefix: uncaught exceptions when writing encrypted filesv9 2017-12-31handling deletions correctly at list, restore, and rebuilddefault action is to report version rather than list all fileslist/restore glob argument now case-insensitive and allows backslashesfix: list and restore were not including all files when used without a date argumentfix: restore did not allow restoring single filev8 (beta) 2017-12-10purges backups of deleted files (see above)much reduced size of index databasev7 (beta) 2017-11-19fix: index database on different drive caused exception at purgefix: restore could not handle paths from different drivesfix: exception for u64 file numbersv6 (beta) 2017-11-12retries if file changes while readingconfig file option “indexdb” to set location of index databaseimproved restore performanceprogress indicators during restorefix: exception when no files matched during restorev5 (beta) 2017-11-05feature: content files allow safe purging of off-site copiesindex database upgrades ifself on startupfix: spaces in filenames caused index-rebuild to fall overv4 (alpha) 2017-10-22feature: purging of old backupsfix: -a and -d options didn’t work with –listfix: timestamp rounding error at index-rebuildfix: –init could not create backup directoryv3 (alpha) 2017-10-15feature: rebuilding of index database from backup meta datav2 (alpha) 2017-10-07not excruciatingly slow any morev1 (alpha) 2017-10-04first version
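To tie the configuration options documented above together (logfile, archive, indexdb, password, retain, threads, [include], [exclude]), here is a slightly fuller illustrative config file; all paths and values are made up, and the retain values are the ones used in the purging example:

logfile c:/my/home/abus.log
archive e:/backups
indexdb d:/fast-disk/abus-index.sl3
password correct horse battery staple
retain 1 7 7 30 28 150
threads 4

[include]
c:/my/home
c:/my/projects

[exclude]
*.bak
*/~*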
abuscom-libs
No description available on PyPI.
abuse
Abuse: World's First Profanity Library. Behold, the power of Abuse v1. Abuse allows you to use profanity when you need it, without the need to get angry. There's no need to manually search curse words on Google, or to ask anybody. Abuse is 100% automatic, thanks to Python. Besides, all the cool kids are doing it. You should try it! Let's import the beast: >>> import abuse as ab >>> ab.ListAbusesFrom("a") ['...', '...', '...'] # gives a list of abuses starting with the letter 'a' Feature Support: Abuse is in its beta. ListAbusesFrom(): Returns a list of abuses starting from a specific letter. RandomAbuseFrom(): Returns a random abuse starting from a specific letter. ListAnyAbuse(): Returns any random abuse from its built-in dataset. ListAllAbuses(): Returns the list of abuses in its database. Abuse officially supports Python 3.4 to 3.7, but can also run on Python 2. Installation: To install Abuse, simply use pipenv (or pip, of course): For Linux: $ pip install abuse OR $ pipenv install abuse For Windows: C:\Python34\Scripts> pip install abuse OR C:\>py -3 -m pip install abuse Satisfaction guaranteed. Documentation: Fantastic documentation will be available in a few days, for a limited time only.
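The other helpers listed above can be called the same way as ListAbusesFrom; the sketch below assumes RandomAbuseFrom takes the starting letter as its argument, as the feature list implies, and the printed values will vary because the picks are random:

import abuse as ab

print(ab.RandomAbuseFrom("b"))  # one random entry starting with 'b'
print(ab.ListAnyAbuse())        # any random entry from the built-in dataset
print(len(ab.ListAllAbuses()))  # size of the whole built-in list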
abuseACL
A python script to automatically list vulnerable Windows ACEs/ACLs. Installation: You can install it from pypi (latest version is) with this command: sudo python3 -m pip install abuseACL OR from source: git clone https://github.com/AetherBlack/abuseACL cd abuseACL sudo python3 -m pip install -r requirements.txt sudo python3 setup.py install OR with pipx: python3 -m pipx install git+https://github.com/AetherBlack/abuseACL/ Examples: You want to list vulnerable ACEs/ACLs for the current user: abuseACL $DOMAIN/$USER:"$PASSWORD"@$TARGET You want to list vulnerable ACEs/ACLs for another user/computer/group: abuseACL -principal Aether $DOMAIN/$USER:"$PASSWORD"@$TARGET You want to list vulnerable ACEs/ACLs for a list of users/computers/groups: abuseACL -principalsfile accounts.txt $DOMAIN/$USER:"$PASSWORD"@$TARGET Here is an example of principalsfile content: Administrateur Group aether Machine$ You want to list vulnerable ACEs/ACLs on the Schema or on adminSDHolder: abuseACL -extends $DOMAIN/$USER:"$PASSWORD"@$TARGET You can look in the documentation of DACL to find out how to exploit the rights and use dacledit to exploit the ACEs. How it works: The tool will connect to the DC's LDAP to list users/groups/computers/OUs/certificate templates and their nTSecurityDescriptor, which will be parsed to check for vulnerable rights. Credits: @_nwodtuhs for the helpful DACL documentation, @fortra for developing impacket. License: GNU General Public License v3.0
abuse-finder
No description available on PyPI.
abuseipdb
AbuseIpDbWrapper around the AbuseIPDb service APIThis was a project born of having to do this in an automated fashion for our internal systems, and not finding a decent Python 2.7 package worth installing.In order to use it, all you need to do is:importabuseipdbOnce imported into your project, configure the API key for further use (you need to sign up for a webmaster account for this):abuseipdb.configure_api_key("[API KEY]")This just updates the internal api key value in use. Update that as needed if you need to report into multiple accounts over the course of your script.Following that, there are 3 main methods for use within the module. They are modelled against the AbuseIPDb API. These methods are:check_ipabuseip.check_ip(ip="[IP]",days="[DAYS]")check_cidrcheck_cidr(cidr="[CIDR]",days="[DAYS]")report_ipreport_ip(categories="[CATEGORIES]",comment="[OPTIONAL COMMENT]",ip="[IP]")Out of these 3 methods, the parameters follow the rules set forth by AbuseIPDb posted here:Abuse IP DB APIFieldRequiredDefaultExampleDescription[IP]YNA8.8.8.8IPv4 Address[DAYS]N3030Check for IP Reports in the last 30 days.[CIDR]YNA207.126.144.0/20IPv4 Address Block in CIDR notation[CATEGORIES]YNA10,12,15Comma delineated list of category IDs[OPTIONAL COMMENT]NNAThis is a comment.Describe the type of malicious activity[API KEY]YNATzmp1...quWvaiOYour API key.Source code can be found here:####AbuseIpDB Repository
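Filling in the placeholders above with the example values from the parameter table gives a runnable sketch; the API key is a placeholder and the report comment is the sample one from the table:

import abuseipdb

abuseipdb.configure_api_key("YOUR_API_KEY")
print(abuseipdb.check_ip(ip="8.8.8.8", days="30"))
print(abuseipdb.check_cidr(cidr="207.126.144.0/20", days="30"))
abuseipdb.report_ip(categories="10,12,15", comment="This is a comment.", ip="8.8.8.8")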
abuseipdb2iptables
abuseipdb2iptablesSmall python utility to convert abuseipdb json file into iptables rules.InstallWith pip :pipinstallabuseipdb2iptablesUsageIt will group similar ip addresses with networks CIDR.abuseipdb2iptablespath/to/abuseipdb.json -AINPUT-s192.168.1.12/32-jDROP -AINPUT-s172.16.0.0/31-jDROP -AINPUT-s10.9.8.7/31-jDROP ...
abuseipdb-wrapper
Info: python wrapper for the abuseipdb API -> https://docs.abuseipdb.com/#introduction; gives you information about the abuse level of specified IP addresses; focused on local db caching and viewing. Install: stable version from pypi: pip install abuseipdb-wrapper or newest version from github: pip install git+https://github.com/streanger/abuseipdb-wrapper.git Command-line usage: abuse Python usage: init `AbuseIPDB` object: Init an AbuseIPDB object using the API KEY created on https://www.abuseipdb.com/. Optionally you can provide db_file for your local database. It is recommended because this project focuses on storing data for further quick access without the need for more requests. from abuseipdb_wrapper import AbuseIPDB API_KEY = 'YOUR_API_KEY' abuse = AbuseIPDB(API_KEY=API_KEY, db_file='abuseipdb.json') abuse.colors_legend() check list of IP's: Specify a list of IP's to check and apply them using the add_ip_list method. Next step: run the check method and wait. ips = ['1.2.3.4', '5.6.7.8', '9.10.11.12', '13.14.15.16'] abuse.add_ip_list(ips) abuse.check() abuse.tor_info_enrich() # new feature from v.0.1.7 # get info about tor exit nodes no db caching approach: If you are not interested in caching data in a local database and only want to request IP addresses one by one, use the following code. Keep in mind that the .check_ip method enriches results and removes the reports section. If using a wrapper is overkill for your project, go to: https://docs.abuseipdb.com/?python#check-endpoint from abuseipdb_wrapper import AbuseIPDB API_KEY = 'YOUR_API_KEY' abuse = AbuseIPDB(API_KEY=API_KEY) ips = ['1.2.3.4', '2.3.4.5', '3.4.5.6'] for IP in ips: result = abuse.check_ip(IP) # enriched with url and request time result = abuse.check_ip_orig(IP) # results in original form print(result) show local db: To display collected information use the show_db call. A data table should be displayed in the terminal. Alternatively call print on your AbuseIPDB object. Before showing the db you can specify the columns to be displayed. Do it using the apply_columns_order method. columns = ['ipAddress', 'abuseConfidenceScore', 'totalReports', 'countryCode', 'domain', 'isp'] abuse.apply_columns_order(columns) # show db by print or using .show_db method print(abuse) abuse.show_db(matched_only=False, table_view=True) db viewer: For interactive IP checks, use the .viewer method. It lets you provide a list of IP's or a single one.
Use help for more information.abuse.viewer()# commands inside interactive viewcolumns[columnslist]# shows or apply columns orderexport[csv,html,xlsx]# export to fileall# show all databaseexport db to csv fileabuse.export_csv('out.csv',matched_only=False)export db to styled html fileabuse.export_html_styled('out.html',matched_only=False)export db to styled xlsx fileabuse.export_xlsx_styled('out.xlsx',matched_only=False)convert to dataframe objectdf=abuse.get_df(matched_only=False)json columnsabuseConfidenceScorecountryCodedate # additionaldomainhostnamesipAddressipVersionisPublicisWhitelistedisplastReportedAtnumDistinctUserstotalReportsurl # additionalusageTypeisTorNode # additionalScreenshotscli entrypointcolors legendinteractive viewer helpchecking IPsshowing IPs in vertical modeshowing IPs in table modeIdeaswrap text in table columns (not only cut off with dots)allow for justify/center tableallow for db sorting (specified by user)IP ranges for viewer -> 1.2.3.0/24think of more info than ‘data’ section in api response: reports -> comments, categoriescheck subnet 1.2.3.4/24 ->https://www.abuseipdb.com/check-block/1.2.3.4/24allow passing arguments (colors) for style_df function from abuse class levelexport html (from rich)Changelogv.0.1.7:abuseentrypointcolumnscommand in interactive viewexportcommand in interactive view (to .csv, .html, .xlsx)tor exit nodes enrichmentstoring db file in user home directoryoriginal API request ->.check_ip_origgetpass and keyring for API_KEY read & storev.0.1.6and before:black background for better view in powershellexport to html (from pandas df)export to xlsxexport to csvwrap text in table cells - made using rich tablereturn dataframe objectdate of last check
abuse_whois
abuse_whois: A Sigma and RDAP/Whois based abuse contacts finder. This tool is highly inspired by the following libraries: https://github.com/bradleyjkemp/abwhose https://github.com/certsocietegenerale/abuse_finder How It Works: Query a given address via RDAP (falls back to Whois if RDAP fails). Check the query result with Sigma rules and find contacts (falls back to regex if there is no match). Requirements: Python 3.10+ Installation: pip install abuse_whois # or if you want to use the built-in REST API pip install abuse_whois[api] Usage: As a library: from abuse_whois import get_abuse_contacts await get_abuse_contacts("1.1.1.1") await get_abuse_contacts("github.com") await get_abuse_contacts("https://github.com") await get_abuse_contacts("foo@example.com") As a CLI tool: abuse_whois 1.1.1.1 abuse_whois example.com abuse_whois foo@example.com abuse_whois http://example.com As a REST API: $ uvicorn abuse_whois.api.main:app INFO: Started server process [2283] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) $ http localhost:8000/api/whois/address=https://github.com With Docker: git clone https://github.com/ninoseki/abuse_whois cd abuse_whois docker build . -t abuse-whois docker run -i -d -p 8000:8000 abuse-whois Settings: All settings can be done via environment variables or a .env file. Name / Type / Default / Desc.: QUERY_TIMEOUT int 10 Timeout value for whois lookup (seconds); QUERY_CACHE_SIZE int 1024 Cache size for whois lookup; QUERY_CACHE_TTL int 3600 Cache TTL value for whois lookup (seconds); QUERY_MAX_RETRIES int 3 Max retries on timeout error; RULE_EXTENSIONS CommaSeparatedStrings yaml,yml Rule file extensions; ADDITIONAL_WHOIS_RULE_DIRECTORIES CommaSeparatedStrings Additional directories containing whois rule files; ADDITIONAL_SHARED_HOSTING_RULE_DIRECTORIES CommaSeparatedStrings Additional directories containing shared hosting rule files. Contributions: abuse_whois works based on a combination of static rules and the parsed result of the whois response. Rules: Registrar and hosting provider; Shared hosting provider. Please submit a PR (or submit a feature request) if you find something missing.
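The library calls above are awaitable, so from a plain script they need an event loop; asyncio.run from the standard library is enough, as in this small sketch:

import asyncio
from abuse_whois import get_abuse_contacts

async def main():
    contacts = await get_abuse_contacts("1.1.1.1")
    print(contacts)

asyncio.run(main())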
abusify-id
AbusifyIDAbusiveness Verification in Bahasa Indonesia. Predict the abusiveness level of a sentence, detect abusive words, and filter abusive words.Live Demo:https://abusifyid.streamlit.app/RequirementsAll requirements below have been installed automatically. Install manually if there are problems:[Python 2.6 or higher]scikit-learnpandasnltkpymysqlpython-decouplefuzzywuzzypython-LevenshteinInstallationInstall usingpip.pip install abusify-idHow to UsePredict AbusivenessPredict the abusiveness level of a sentence, using text input or.txtfile input.import abusify_id as ai text = "Anjing, lu tolol ya!" level = ai.predict_abusiveness(text) print(level) ... 99.59%import abusify_id as ai ai.predict_abusiveness_file("input.txt", "output.txt") ... The results have been saved in a file: output.txtwithdecimal_places:import abusify_id as ai text = "Anjing, lu tolol ya!" level = ai.predict_abusiveness(text, decimal_places=5) print(level) ... 99.59093%import abusify_id as ai ai.predict_abusiveness_file("input.txt", "output.txt", decimal_places=4) ... The results have been saved in a file: output.txtAbusive Word Detectorimport abusify_id as ai text = "Anjing, lu tolol ya!" detect = ai.abusiveword_detector(text) print(detect) ... [Anjing](https://stopucapkasar.com/detail.php?id=9), [Tolol](https://stopucapkasar.com/detail.php?id=95)Abusive Word Filterimport abusify_id as ai text = "Anjing, lu tolol ya!" filter = ai.abusiveword_filter(text) print(filter) ... Sialan, lu bebal ya!WebsiteVisit our website:https://stopucapkasar.com/
abutils
abutils: Models, functions and visualization tools for working with adaptive immune receptor repertoire (AIRR) data. The primary purpose of abutils is to provide generalizable tools suitable for direct use in analyzing bulk AIRR datasets, and it is used by scab for single-cell AIRR analysis. abutils is a core component of the ab[x] toolkit for AIRR data analysis. Source code: github.com/briney/abutils Documentation: abutils.readthedocs.org Download: pypi.python.org/pypi/abutils install: pip install abutils api: We've tried to design the abutils API to be intuitive yet powerful, with the goal of enabling both interactive analyses (via environments like Jupyter notebooks) as well as integration of abutils tools into more complex analysis pipelines and/or standalone software tools. See the documentation for more detail about the API. As always, any feedback is greatly appreciated!! testing: You can run the complete abutils test suite by first installing pytest: pip install pytest followed by: git clone https://github.com/briney/abutils cd abutils pytest This test suite is automatically run following every commit, and is tested against all supported versions of Python. requirements: python 3.8+, abstar, baltic, biopython, celery, ete3, fastcluster, matplotlib, mnemonic, natsort, numpy, pandas, paramiko, parasail, pytest, python-circos, python-Levenshtein, pyyaml, sample-sheet, scikit-learn, scipy, seaborn, smart_open. All of the above dependencies can be installed with pip, and will be installed automatically when installing abutils with pip. abutils packages several additional external binaries that are required for specific functions: abutils.tl.mafft uses MAFFT; abutils.tl.muscle uses MUSCLE; abutils.tl.cluster requires: CD-HIT, MMseqs2, VSEARCH; abutils.tl.fasttree requires FastTree. Although these binaries are all packaged into abutils, each respective abutils function provides the option to supply a different binary path in case you'd prefer to use a different version or an alternate compilation.
abuu
ABUU - Another Bunch of Uselessfull UtilsJust a collection of utils I've used time and again in different python projects put together.Main aim of this repo and package is to allow me to use it across multiple projects without having to copy paste code.