diff --git "a/data/markdown/data.json" "b/data/markdown/data.json" new file mode 100644--- /dev/null +++ "b/data/markdown/data.json" @@ -0,0 +1,100 @@ +{"size":5540,"ext":"md","lang":"Markdown","max_stars_count":5.0,"content":"# AWS::Neptune::DBParameterGroup<\/a>\n\n `AWS::Neptune::DBParameterGroup` creates a new DB parameter group\\. This type can be declared in a template and referenced in the `DBParameterGroupName` parameter of `AWS::Neptune::DBInstance`\\.\n\n**Note** \nApplying a parameter group to a DB instance might require the instance to reboot, resulting in a database outage for the duration of the reboot\\.\n\nA DB parameter group is initially created with the default parameters for the database engine used by the DB instance\\. To provide custom values for any of the parameters, you must modify the group after creating it using *ModifyDBParameterGroup*\\. Once you've created a DB parameter group, you need to associate it with your DB instance using *ModifyDBInstance*\\. When you associate a new DB parameter group with a running DB instance, you need to reboot the DB instance without failover for the new DB parameter group and associated settings to take effect\\.\n\n**Important** \nAfter you create a DB parameter group, you should wait at least 5 minutes before creating your first DB instance that uses that DB parameter group as the default parameter group\\. This allows Amazon Neptune to fully complete the create action before the parameter group is used as the default for a new DB instance\\. This is especially important for parameters that are critical when creating the default database for a DB instance, such as the character set for the default database defined by the `character_set_database` parameter\\. 
You can use the *Parameter Groups* option of the Amazon Neptune console or the *DescribeDBParameters* command to verify that your DB parameter group has been created or modified\\.\n\n## Syntax<\/a>\n\nTo declare this entity in your AWS CloudFormation template, use the following syntax:\n\n### JSON<\/a>\n\n```\n{\n \"Type\" : \"AWS::Neptune::DBParameterGroup\",\n \"Properties\" : {\n \"[Description](#cfn-neptune-dbparametergroup-description)\" : String,\n \"[Family](#cfn-neptune-dbparametergroup-family)\" : String,\n \"[Name](#cfn-neptune-dbparametergroup-name)\" : String,\n \"[Parameters](#cfn-neptune-dbparametergroup-parameters)\" : Json,\n \"[Tags](#cfn-neptune-dbparametergroup-tags)\" : [ [Tag](https:\/\/docs.aws.amazon.com\/AWSCloudFormation\/latest\/UserGuide\/aws-properties-resource-tags.html), ... ]\n }\n}\n```\n\n### YAML<\/a>\n\n```\nType: AWS::Neptune::DBParameterGroup\nProperties: \n [Description](#cfn-neptune-dbparametergroup-description): String\n [Family](#cfn-neptune-dbparametergroup-family): String\n [Name](#cfn-neptune-dbparametergroup-name): String\n [Parameters](#cfn-neptune-dbparametergroup-parameters): Json\n [Tags](#cfn-neptune-dbparametergroup-tags): \n - [Tag](https:\/\/docs.aws.amazon.com\/AWSCloudFormation\/latest\/UserGuide\/aws-properties-resource-tags.html)\n```\n\n## Properties<\/a>\n\n`Description` <\/a>\nProvides the customer\\-specified description for this DB parameter group\\. \n*Required*: Yes \n*Type*: String \n*Update requires*: [Replacement](https:\/\/docs.aws.amazon.com\/AWSCloudFormation\/latest\/UserGuide\/using-cfn-updating-stacks-update-behaviors.html#update-replacement)\n\n`Family` <\/a>\nMust be `neptune1`\\. \n*Required*: Yes \n*Type*: String \n*Update requires*: [Replacement](https:\/\/docs.aws.amazon.com\/AWSCloudFormation\/latest\/UserGuide\/using-cfn-updating-stacks-update-behaviors.html#update-replacement)\n\n`Name` <\/a>\nProvides the name of the DB parameter group\\. 
\n*Required*: No \n*Type*: String \n*Update requires*: [Replacement](https:\/\/docs.aws.amazon.com\/AWSCloudFormation\/latest\/UserGuide\/using-cfn-updating-stacks-update-behaviors.html#update-replacement)\n\n`Parameters` <\/a>\nThe parameters to set for this DB parameter group\\. \nThe parameters are expressed as a JSON object consisting of key\\-value pairs\\. \nChanges to dynamic parameters are applied immediately\\. During an update, if you have static parameters \\(whether they were changed or not\\), it triggers AWS CloudFormation to reboot the associated DB instance without failover\\. \n*Required*: Yes \n*Type*: Json \n*Update requires*: [No interruption](https:\/\/docs.aws.amazon.com\/AWSCloudFormation\/latest\/UserGuide\/using-cfn-updating-stacks-update-behaviors.html#update-no-interrupt)\n\n`Tags` <\/a>\nThe tags that you want to attach to this parameter group\\. \n*Required*: No \n*Type*: List of [Tag](https:\/\/docs.aws.amazon.com\/AWSCloudFormation\/latest\/UserGuide\/aws-properties-resource-tags.html) \n*Update requires*: [No interruption](https:\/\/docs.aws.amazon.com\/AWSCloudFormation\/latest\/UserGuide\/using-cfn-updating-stacks-update-behaviors.html#update-no-interrupt)\n\n## Return values<\/a>\n\n### Ref<\/a>\n\nWhen you pass the logical ID of this resource to the intrinsic `Ref` function, `Ref` returns the resource name\\.\n\nFor more information about using the `Ref` function, see [Ref](https:\/\/docs.aws.amazon.com\/AWSCloudFormation\/latest\/UserGuide\/intrinsic-function-reference-ref.html)\\.","avg_line_length":65.1764705882,"max_line_length":715,"alphanum_fraction":0.7703971119} +{"size":514,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"---\ndescription: Google Tag Manager Event Tracker for Paella Player\n---\n\n# Paella GTM Tracker\n\n## Installation\n\ndfgdfdf\n\n## Google Tag Manager Setup\n\nfgdfg\n\n```text\n$ give me super-powers\n```\n\n{% hint style=\"info\" %}\nSuper-powers are granted randomly so please submit 
an issue if you're not happy with yours.\n{% endhint %}\n\nOnce you're strong enough, save the world:\n\n{% code title=\"hello.sh\" %}\n```bash\n# Ain't no code for that yet, sorry\necho 'You got to trust me on this, I saved the world'\n```\n{% endcode %}\n\n","avg_line_length":16.0625,"max_line_length":91,"alphanum_fraction":0.6906614786} +{"size":33642,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"---\ntitle: Service Bus access control with Shared Access Signatures\ndescription: Overview of Service Bus access control using Shared Access Signatures, details about SAS authorization with Azure Service Bus.\nms.topic: article\nms.date: 11\/03\/2020\nms.custom: devx-track-csharp\nms.openlocfilehash: f71320613682f7d4b9f3b706845e68f581b3dc10\nms.sourcegitcommit: fa90cd55e341c8201e3789df4cd8bd6fe7c809a3\nms.translationtype: MT\nms.contentlocale: it-IT\nms.lasthandoff: 11\/04\/2020\nms.locfileid: \"93339411\"\n---\n# <\/a>Service Bus access control with Shared Access Signatures\n\n*Shared Access Signatures* (SAS) are the primary security mechanism for Service Bus messaging. This article discusses SAS, how they work, and how to use them in a platform-agnostic way.\n\nSAS guards access to Service Bus based on authorization rules that are configured either on a namespace, or on a messaging entity (relay, queue, or topic). An authorization rule has a name, is associated with specific rights, and carries a pair of cryptographic keys. You use the rule's name and key via the Service Bus SDK, or in your own code, to generate a SAS token. 
A client can then pass the token to Service Bus to prove authorization for the requested operation.\n\n> [!NOTE]\n> Azure Service Bus supports authorizing access to a Service Bus namespace and its entities using Azure Active Directory (Azure AD). Authorizing users or applications using an OAuth 2.0 token returned by Azure AD provides superior security and ease of use compared to shared access signatures (SAS). With Azure AD, you don't need to store tokens in your code and risk potential security vulnerabilities.\n>\n> Microsoft recommends using Azure AD with your Azure Service Bus applications when possible. For more information, see the following articles:\n> - [Authenticate and authorize an application with Azure Active Directory to access Azure Service Bus entities](authenticate-application.md).\n> - [Authenticate a managed identity with Azure Active Directory to access Azure Service Bus resources](service-bus-managed-service-identity.md)\n\n## <\/a>Overview of SAS\n\nShared Access Signatures (SAS) are a claims-based authorization mechanism that uses simple tokens. When you use SAS, keys are never passed on the wire. Keys are used to cryptographically sign information that can later be verified by the service. SAS can be used much like a username and password scheme, where the client is in immediate possession of an authorization rule name and a matching key. 
It can also be used much like a federated security model, where the client receives a time-limited, signed access token from a security token service without ever coming into possession of the signing key.\n\nSAS authentication in Service Bus is configured with named [Shared Access Authorization Rules](\/dotnet\/api\/microsoft.servicebus.messaging.sharedaccessauthorizationrule) that have associated access rights and a pair of primary and secondary cryptographic keys. The keys are 256-bit values in Base64 representation. You can configure rules at the namespace level, and on Service Bus [relays](..\/azure-relay\/relay-what-is-it.md), [queues](service-bus-messaging-overview.md#queues), and [topics](service-bus-messaging-overview.md#topics).\n\nThe [Shared Access Signature](\/dotnet\/api\/microsoft.servicebus.sharedaccesssignaturetokenprovider) token contains the name of the chosen authorization rule, the URI of the resource to be accessed, an expiry instant, and an HMAC-SHA256 cryptographic signature computed over these fields using either the primary or the secondary cryptographic key of the chosen authorization rule.\n\n## <\/a>Shared Access Authorization policies\n\nEach Service Bus namespace and each Service Bus entity has a Shared Access Authorization policy made up of rules. The policy at the namespace level applies to all entities inside the namespace, irrespective of their individual policy configuration.\n\nFor each authorization policy rule, you decide on three pieces of information: **name**, **scope**, and **rights**. The **name** is a unique name within that scope. The scope is simple enough: it's the URI of the resource in question. 
For a Service Bus namespace, the scope is the fully qualified domain name (FQDN), such as `https:\/\/.servicebus.windows.net\/`.\n\nThe rights conferred by the policy rule can be a combination of:\n\n* \"Send\": confers the right to send messages to the entity\n* \"Listen\": confers the right to listen (relay) or receive (queue, subscriptions) and all related message handling\n* \"Manage\": confers the right to manage the topology of the namespace, including creating and deleting entities\n\nThe \"Manage\" right includes the \"Send\" and \"Receive\" rights.\n\nA namespace or entity policy can hold up to 12 Shared Access Authorization rules, providing room for three sets of rules, each covering the basic rights and the combination of Send and Listen. This limit underlines that the SAS policy store is not intended to be a user or service account store. If your application needs to grant access to Service Bus based on user or service identities, it should implement a security token service that issues SAS tokens after an authentication and access check.\n\nAn authorization rule is assigned a *primary key* and a *secondary key*. These are cryptographically strong keys. They can't be lost, because they are always available in the [Azure portal][Azure portal]. You can use either of the generated keys, and you can regenerate them at any time. If you regenerate or change a key in the policy, all previously issued tokens based on that key become instantly invalid. 
However, ongoing connections created based on such tokens continue to work until the token expires.\n\nWhen you create a Service Bus namespace, a policy named **RootManageSharedAccessKey** is automatically created. This policy has Manage permissions for the entire namespace. It's recommended that you treat this rule like an administrative **root** account and don't use it in your application. You can create additional policy rules in the **Configure** tab for the namespace in the portal, via PowerShell, or via the Azure CLI.\n\n## <\/a>Best practices when using SAS\nWhen you use shared access signatures in your application, you need to be aware of two potential risks:\n\n- If a SAS is leaked, it can be used by anyone who obtains it, which can potentially compromise your Service Bus resources.\n- If a SAS provided to a client application expires and the application is unable to retrieve a new SAS from your service, the application's functionality may be hindered.\n\nTo mitigate these risks, we recommend the following practices when using shared access signatures:\n\n- **Have clients automatically renew the SAS if necessary**: Clients should renew the SAS well before its expiration. If the SAS is meant to be used for a small number of immediate, short-lived operations that are expected to complete within the expiration period, renewal may be unnecessary, as the SAS is not expected to be renewed. 
However, if you have clients that routinely make requests using the SAS, the possibility of expiration comes into play. The key consideration is to balance the need for the SAS to be short-lived (as previously stated) with the need to ensure that the client requests renewal early enough (to avoid disruption due to the SAS expiring before a successful renewal).\n- **Be careful with the SAS start time**: If you set the start time for the SAS to **now**, then due to clock skew (differences in current time according to different machines), failures may be observed intermittently for the first few minutes. In general, set the start time to be at least 15 minutes in the past. Or, don't set it at all, which makes it valid immediately in all cases. The same generally applies to the expiry time as well. Remember that you may observe up to 15 minutes of clock skew in either direction on any request. \n- **Be specific with the resource to be accessed**: A security best practice is to provide a user with the minimum required privileges. If a user only needs read access to a single entity, then grant them read access to that single entity, and not read\/write\/delete access to all entities. It also helps lessen the damage if a SAS is compromised, because the SAS has less power in the hands of an attacker.\n- **Don't always use SAS**: Sometimes the risks associated with a particular operation against Service Bus outweigh the benefits of a SAS. 
For such operations, create a middle-tier service that writes to Service Bus after business rule validation, authentication, and auditing.\n- **Always use HTTPS**: Always use HTTPS to create or distribute a SAS. If a SAS is passed over HTTP and intercepted, an attacker performing a man-in-the-middle attack can read the SAS and then use it just as the intended user could have, potentially compromising sensitive data or allowing data corruption by a malicious user.\n\n## <\/a>Configuration for Shared Access Signature authentication\n\nYou can configure the [SharedAccessAuthorizationRule](\/dotnet\/api\/microsoft.servicebus.messaging.sharedaccessauthorizationrule) rule on Service Bus namespaces, queues, or topics. Configuring a [SharedAccessAuthorizationRule](\/dotnet\/api\/microsoft.servicebus.messaging.sharedaccessauthorizationrule) on a Service Bus subscription is currently not supported, but you can use rules configured on a namespace or topic to secure access to subscriptions. 
For a working sample that illustrates this procedure, see [Using Shared Access Signature authentication with Service Bus Subscriptions](https:\/\/code.msdn.microsoft.com\/Using-Shared-Access-e605b37c).\n\n![SAS](.\/media\/service-bus-sas\/service-bus-namespace.png)\n\nIn this figure, the *manageRuleNS*, *sendRuleNS*, and *listenRuleNS* authorization rules apply to both queue Q1 and topic T1, while *listenRuleQ* and *sendRuleQ* apply only to queue Q1, and *sendRuleT* applies only to topic T1.\n\n## <\/a>Generate a Shared Access Signature token\n\nAny client that has access to the name of an authorization rule and one of its signing keys can generate a SAS token. The token is generated by crafting a string in the following format:\n\n```\nSharedAccessSignature sig=&se=&skn=&sr=\n```\n\n* **`se`** - Token expiry instant. Integer reflecting seconds since `00:00:00 UTC` on January 1, 1970 (UNIX epoch) when the token expires.\n* **`skn`** - Name of the authorization rule.\n* **`sr`** - URI of the resource being accessed.\n* **`sig`** - Signature.\n\nThe `signature-string` is the SHA-256 hash computed over the resource URI (**scope** as described in the previous section) and the string representation of the token expiry instant, separated by LF.\n\nThe hash computation looks similar to the following pseudo code and returns a 256-bit\/32-byte hash value.\n\n```\nSHA-256('https:\/\/.servicebus.windows.net\/'+'\\n'+ 1438205742)\n```\n\nThe token contains the non-hashed values so that the recipient can recompute the hash with the same parameters, verifying that the issuer is in possession of a valid signing key.\n\nThe resource URI is the full URI of the Service Bus resource to which access is claimed. 
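The token format and signature computation described above can be sketched as follows. This is a minimal sketch, not an official SDK helper: the namespace, rule name, and key value below are hypothetical placeholders, and the signature is computed as an HMAC-SHA256 keyed with the rule's key over the percent-encoded resource URI and the expiry instant, separated by LF.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse


def generate_sas_token(uri: str, key_name: str, key: str, ttl_seconds: int = 3600) -> str:
    """Build a SAS token string for the given resource URI and rule."""
    # Expiry instant: seconds since the UNIX epoch (the `se` field).
    expiry = int(time.time() + ttl_seconds)
    # The resource URI must be percent-encoded before signing.
    encoded_uri = urllib.parse.quote_plus(uri)
    # signature-string: encoded URI + LF + expiry, signed with the rule's key.
    string_to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), string_to_sign, hashlib.sha256).digest()
    )
    return (
        "SharedAccessSignature "
        f"sig={urllib.parse.quote_plus(signature)}"
        f"&se={expiry}&skn={key_name}&sr={encoded_uri}"
    )


# Hypothetical namespace, entity, and key value for illustration only.
token = generate_sas_token(
    "https://contoso.servicebus.windows.net/contosoTopics/T1",
    "RootManageSharedAccessKey",
    "base64-key-placeholder",
)
print(token)
```

Note that `quote_plus` performs the percent-encoding that the `sr` field and the signed string both require.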
For example, `http:\/\/.servicebus.windows.net\/` or `sb:\/\/.servicebus.windows.net\/`; that is, `http:\/\/contoso.servicebus.windows.net\/contosoTopics\/T1\/Subscriptions\/S3`. \n\n**The URI must be [percent-encoded](\/dotnet\/api\/system.web.httputility.urlencode).**\n\nThe Shared Access Authorization rule used for signing must be configured on the entity specified by this URI, or by one of its hierarchical parents. For example, `http:\/\/contoso.servicebus.windows.net\/contosoTopics\/T1` or `http:\/\/contoso.servicebus.windows.net` in the previous example.\n\nA SAS token is valid for all resources prefixed with the `` used in the `signature-string`.\n\n> [!NOTE]\n> For examples of generating a SAS token using different programming languages, see [Generate a SAS token](\/rest\/api\/eventhub\/generate-sas-token). \n\n## <\/a>Regenerating keys\n\nIt's recommended that you periodically regenerate the keys used in the [SharedAccessAuthorizationRule](\/dotnet\/api\/microsoft.servicebus.messaging.sharedaccessauthorizationrule) rule. The primary and secondary key slots exist so that you can rotate keys gradually. If your application generally uses the primary key, you can copy the primary key into the secondary key slot, and only then regenerate the primary key. The new primary key value can then be configured into the client applications, which have continued access using the old primary key in the secondary slot. 
Once all clients are updated, you can regenerate the secondary key to finally retire the old primary key.\n\nIf you know or suspect that a key is compromised and you must revoke the keys, you can regenerate both the [PrimaryKey](\/dotnet\/api\/microsoft.servicebus.messaging.sharedaccessauthorizationrule) and the [SecondaryKey](\/dotnet\/api\/microsoft.servicebus.messaging.sharedaccessauthorizationrule) of a [SharedAccessAuthorizationRule](\/dotnet\/api\/microsoft.servicebus.messaging.sharedaccessauthorizationrule), replacing them with new keys. This procedure invalidates all tokens signed with the old keys.\n\n## <\/a>Shared Access Signature authentication with Service Bus\n\nThe scenarios described below include configuration of authorization rules, generation of SAS tokens, and client authorization.\n\nFor a full working sample of a Service Bus application that illustrates the configuration and uses SAS authorization, see [Shared Access Signature authentication with Service Bus](https:\/\/code.msdn.microsoft.com\/Shared-Access-Signature-0a88adf8). 
A related sample that illustrates the use of SAS authorization rules configured on namespaces or topics to secure Service Bus subscriptions is available here: [Using Shared Access Signature authentication with Service Bus Subscriptions](https:\/\/code.msdn.microsoft.com\/Using-Shared-Access-e605b37c).\n\n## <\/a>Accessing Shared Access Authorization rules on an entity\n\nWith the Service Bus .NET Framework libraries, you can access a [Microsoft.ServiceBus.Messaging.SharedAccessAuthorizationRule](\/dotnet\/api\/microsoft.servicebus.messaging.sharedaccessauthorizationrule) object configured on a Service Bus queue or topic through the [AuthorizationRules](\/dotnet\/api\/microsoft.servicebus.messaging.authorizationrules) collection in the corresponding [QueueDescription](\/dotnet\/api\/microsoft.servicebus.messaging.queuedescription) or [TopicDescription](\/dotnet\/api\/microsoft.servicebus.messaging.topicdescription) objects.\n\nThe following code shows how to add authorization rules for a queue.\n\n```csharp\n\/\/ Create an instance of NamespaceManager for the operation\nNamespaceManager nsm = NamespaceManager.CreateFromConnectionString(\n );\nQueueDescription qd = new QueueDescription( );\n\n\/\/ Create a rule with send rights with keyName as \"contosoQSendKey\"\n\/\/ and add it to the queue description.\nqd.Authorization.Add(new SharedAccessAuthorizationRule(\"contosoSendKey\",\n SharedAccessAuthorizationRule.GenerateRandomKey(),\n new[] { AccessRights.Send }));\n\n\/\/ Create a rule with listen rights with keyName as \"contosoQListenKey\"\n\/\/ and add it to the queue description.\nqd.Authorization.Add(new SharedAccessAuthorizationRule(\"contosoQListenKey\",\n SharedAccessAuthorizationRule.GenerateRandomKey(),\n new[] { AccessRights.Listen }));\n\n\/\/ Create a rule with manage 
rights with keyName as \"contosoQManageKey\"\n\/\/ and add it to the queue description.\n\/\/ A rule with manage rights must also have send and receive rights.\nqd.Authorization.Add(new SharedAccessAuthorizationRule(\"contosoQManageKey\",\n SharedAccessAuthorizationRule.GenerateRandomKey(),\n new[] {AccessRights.Manage, AccessRights.Listen, AccessRights.Send }));\n\n\/\/ Create the queue.\nnsm.CreateQueue(qd);\n```\n\n## <\/a>Use Shared Access Signature authorization\n\nApplications that use the Azure .NET SDK with the Service Bus .NET libraries can use SAS authorization through the [SharedAccessSignatureTokenProvider](\/dotnet\/api\/microsoft.servicebus.sharedaccesssignaturetokenprovider) class. The following code illustrates the use of the token provider for sending messages to a Service Bus queue. As an alternative to the usage shown here, a previously issued token can also be passed to the token provider factory method.\n\n```csharp\nUri runtimeUri = ServiceBusEnvironment.CreateServiceUri(\"sb\",\n , string.Empty);\nMessagingFactory mf = MessagingFactory.Create(runtimeUri,\n TokenProvider.CreateSharedAccessSignatureTokenProvider(keyName, key));\nQueueClient sendClient = mf.CreateQueueClient(qPath);\n\n\/\/Sending hello message to queue.\nBrokeredMessage helloMessage = new BrokeredMessage(\"Hello, Service Bus!\");\nhelloMessage.MessageId = \"SAS-Sample-Message\";\nsendClient.Send(helloMessage);\n```\n\nYou can also use the token provider directly for issuing tokens to pass to other clients.\n\nConnection strings can include a rule name (*SharedAccessKeyName*) and a rule key (*SharedAccessKey*), or a previously issued token (*SharedAccessSignature*). 
When these are present in the connection string passed to a constructor or a factory method that accepts a connection string, the SAS token provider is automatically created and populated.\n\nNote that to use SAS authorization with Service Bus relays, you can use SAS keys configured on the Service Bus namespace. If you explicitly create a relay on the namespace ([NamespaceManager](\/dotnet\/api\/microsoft.servicebus.namespacemanager) with a [RelayDescription](\/dotnet\/api\/microsoft.servicebus.messaging.relaydescription) object), you can set the SAS rules just for that relay. To use SAS authorization with Service Bus subscriptions, you can use SAS keys configured on a Service Bus namespace or on a topic.\n\n## <\/a>Use the Shared Access Signature (at HTTP level)\n\nNow that you know how to create Shared Access Signatures for any entities in Service Bus, you are ready to perform an HTTP POST:\n\n```http\nPOST https:\/\/.servicebus.windows.net\/\/messages\nContent-Type: application\/json\nAuthorization: SharedAccessSignature sr=https%3A%2F%2F.servicebus.windows.net%2F&sig=&se=1438205742&skn=KeyName\nContentType: application\/atom+xml;type=entry;charset=utf-8\n```\n\nThis works for everything. You can create a SAS for a queue, topic, or subscription.\n\nIf you give a sender or client a SAS token, they don't have the key directly, and they cannot reverse the hash to obtain it. As such, you have control over what they can access, and for how long. 
An important thing to remember is that if you change the primary key in the policy, any Shared Access Signatures created from it are invalidated.\n\n## <\/a>Use the Shared Access Signature (at AMQP level)\n\nIn the previous section, you saw how to use the SAS token with an HTTP POST request for sending data to Service Bus. As you know, you can access Service Bus using the Advanced Message Queuing Protocol (AMQP), which is the preferred protocol to use for performance reasons in many scenarios. The use of SAS tokens with AMQP is described in the document [AMQP Claim-Based Security Version 1.0](https:\/\/www.oasis-open.org\/committees\/download.php\/50506\/amqp-cbs-v1%200-wd02%202013-08-12.doc), which has been in working draft status since 2013 but is supported by Azure today.\n\nBefore starting to send data to Service Bus, the publisher must send the SAS token inside an AMQP message to a well-defined AMQP node named **\"$cbs\"**. You can see it as a \"special\" queue used by the service to acquire and validate all the SAS tokens. The publisher must specify the **ReplyTo** field inside the AMQP message; this is the node in which the service replies to the publisher with the result of the token validation. It's a simple request\/reply pattern between the publisher and the service. This reply node is created on the fly, as \"dynamic creation of remote node\" described in the AMQP 1.0 specification. 
After checking that the SAS token is valid, the publisher can go forward and start to send data to the service.\n\nThe following steps show how to send the SAS token with the AMQP protocol using the [AMQP.NET Lite](https:\/\/github.com\/Azure\/amqpnetlite) library. This is useful if you can't use the official Service Bus SDK (for example, on WinRT, .NET Compact Framework, .NET Micro Framework, and Mono) when developing in C \\# . Of course, this library is useful to understand how claims-based security works at the AMQP level, just as you saw how it works at the HTTP level (with an HTTP POST request and the SAS token sent inside the \"Authorization\" header). If you don't need such deep knowledge about AMQP, you can use the official Service Bus SDK with .NET Framework applications, which will do it for you.\n\n### <\/a>C#\n\n```csharp\n\/\/\/ \n\/\/\/ Send claim-based security (CBS) token\n\/\/\/ <\/summary>\n\/\/\/ Shared access signature (token) to send<\/param>\nprivate bool PutCbsToken(Connection connection, string sasToken)\n{\n bool result = true;\n Session session = new Session(connection);\n\n string cbsClientAddress = \"cbs-client-reply-to\";\n var cbsSender = new SenderLink(session, \"cbs-sender\", \"$cbs\");\n var cbsReceiver = new ReceiverLink(session, cbsClientAddress, \"$cbs\");\n\n \/\/ construct the put-token message\n var request = new Message(sasToken);\n request.Properties = new Properties();\n request.Properties.MessageId = Guid.NewGuid().ToString();\n request.Properties.ReplyTo = cbsClientAddress;\n request.ApplicationProperties = new ApplicationProperties();\n request.ApplicationProperties[\"operation\"] = \"put-token\";\n request.ApplicationProperties[\"type\"] = \"servicebus.windows.net:sastoken\";\n request.ApplicationProperties[\"name\"] = 
Fx.Format(\"amqp:\/\/{0}\/{1}\", sbNamespace, entity);\n cbsSender.Send(request);\n\n \/\/ receive the response\n var response = cbsReceiver.Receive();\n if (response == null || response.Properties == null || response.ApplicationProperties == null)\n {\n result = false;\n }\n else\n {\n int statusCode = (int)response.ApplicationProperties[\"status-code\"];\n if (statusCode != (int)HttpStatusCode.Accepted && statusCode != (int)HttpStatusCode.OK)\n {\n result = false;\n }\n }\n\n \/\/ the sender\/receiver may be kept open for refreshing tokens\n cbsSender.Close();\n cbsReceiver.Close();\n session.Close();\n\n return result;\n}\n```\n\nIl metodo `PutCbsToken()` riceve la *connessione* , vale a dire l'istanza della classe di connessione AMQP indicata dalla [libreria AMQP .NET Lite](https:\/\/github.com\/Azure\/amqpnetlite), che rappresenta la connessione TCP al servizio e il parametro *sasToken* , ovvero il token di firma di accesso condiviso da inviare.\n\n> [!NOTE]\n> \u00c8 importante che la connessione venga creata con il **meccanismo di autenticazione SASL impostato su ANONYMOUS** (e non sul valore predefinito PLAIN con nome utente e password usato quando non \u00e8 necessario inviare il token SAS).\n>\n>\n\nSuccessivamente, il server di pubblicazione crea due collegamenti AMQP per inviare il token SAS e ricevere la risposta (il risultato di convalida del token) dal servizio.\n\nIl messaggio AMQP contiene un insieme di propriet\u00e0 e altre informazioni rispetto a un semplice messaggio. Il token SAS rappresenta il corpo del messaggio (tramite il relativo costruttore). La propriet\u00e0 **\"ReplyTo\"** \u00e8 impostata sul nome del nodo per la ricezione del risultato di convalida sul collegamento ricevitore (se si desidera modificare il nome, \u00e8 possibile farlo e verr\u00e0 creato in modo dinamico dal servizio). 
The last three custom\/application properties are used by the service to indicate what kind of operation it has to execute. As described by the CBS draft specification, they must be the **name of the operation** (\"put-token\"), the **type of token** (in this case, `servicebus.windows.net:sastoken`), and the **\"name\" of the audience** to which the token applies (the entire entity).\n\nAfter sending the SAS token on the sender link, the publisher must read the reply on the receiver link. The reply is a simple AMQP message with an application property named **\"status-code\"** that can contain the same values as an HTTP status code.\n\n## <\/a>Rights required for Service Bus operations\n\nThe following table shows the access rights required for performing operations on Service Bus resources.\n\n| Operation | Claim Required | Claim Scope |\n| --- | --- | --- |\n| **Namespace** | | |\n| Configure authorization rules on a namespace |Manage |Any namespace address |\n| **Service Registry** | | |\n| Enumerate private policies |Manage |Any namespace address |\n| Begin listening on a service namespace |Listen |Any namespace address |\n| Send messages to a listener in a namespace |Send |Any namespace address |\n| **Queue** | | |\n| Create a queue |Manage |Any namespace address |\n| Delete a queue |Manage |Any valid queue address |\n| Enumerate queues |Manage |\/$Resources\/Queues |\n| Get the queue description |Manage |Any valid queue address |\n| Configure authorization rules for a queue |Manage |Any valid queue address |\n| Send into the queue |Send |Any valid queue address |\n| Receive messages from a queue |Listen |Any valid queue address |\n| Abandon or complete messages after receiving the message in peek-lock mode |Listen |Any valid queue address |\n| Defer a message for later retrieval |Listen |Any valid queue address |\n| Move a message to the dead-letter queue |Listen |Any valid queue address |\n| Get the state associated with a message queue session |Listen |Any valid queue address |\n| Set the state associated with a message queue session |Listen |Any valid queue address |\n| Schedule a message for later delivery; for example, [ScheduleMessageAsync()](\/dotnet\/api\/microsoft.azure.servicebus.queueclient.schedulemessageasync#Microsoft_Azure_ServiceBus_QueueClient_ScheduleMessageAsync_Microsoft_Azure_ServiceBus_Message_System_DateTimeOffset_) |Listen | Any valid queue address\n| **Topic** | | |\n| Create a topic |Manage |Any namespace address |\n| Delete a topic |Manage |Any valid topic address |\n| Enumerate topics |Manage |\/$Resources\/Topics |\n| Get the topic description |Manage |Any valid topic address |\n| Configure authorization rules for a topic |Manage |Any valid topic address |\n| Send to the topic |Send |Any valid topic address |\n| **Subscription** | | |\n| Create a subscription |Manage |Any namespace address |\n| Delete a subscription |Manage |..\/myTopic\/Subscriptions\/mySubscription |\n| Enumerate subscriptions |Manage |..\/myTopic\/Subscriptions |\n| Get the subscription description |Manage |..\/myTopic\/Subscriptions\/mySubscription |\n| Abandon or complete messages after receiving the message in peek-lock mode |Listen |..\/myTopic\/Subscriptions\/mySubscription |\n| Defer a message for later retrieval |Listen |..\/myTopic\/Subscriptions\/mySubscription |\n| Move a message to the dead-letter queue |Listen |..\/myTopic\/Subscriptions\/mySubscription |\n| Get the state associated with a topic session |Listen |..\/myTopic\/Subscriptions\/mySubscription |\n| Set the state associated with a topic session |Listen |..\/myTopic\/Subscriptions\/mySubscription |\n| **Rules** | | |\n| Create a rule |Manage |..\/myTopic\/Subscriptions\/mySubscription |\n| Delete a rule |Manage |..\/myTopic\/Subscriptions\/mySubscription |\n| Enumerate rules |Manage or Listen |..\/myTopic\/Subscriptions\/mySubscription\/Rules\n\n## <\/a>Next steps\n\nTo learn more about Service Bus messaging, see the following topics.\n\n* [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md)\n* [How to use Service Bus queues](service-bus-dotnet-get-started-with-queues.md)\n* [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)\n\n[Azure portal]: https:\/\/portal.azure.com","avg_line_length":103.8333333333,"max_line_length":973,"alphanum_fraction":0.7998335414} +{"size":3025,"ext":"md","lang":"Markdown","max_stars_count":4.0,"content":"---\ntitle: Configure and use public environments in Azure DevTest Labs | Microsoft Docs\ndescription: This article describes how to configure and use public environments (Azure Resource Manager templates in a Git repo) in Azure DevTest Labs.\nms.topic: article\nms.date: 06\/26\/2020\n---\n\n# Configure and use public environments in Azure DevTest Labs\nAzure DevTest Labs has a [public 
repository of Azure Resource Manager templates](https:\/\/github.com\/Azure\/azure-devtestlab\/tree\/master\/Environments) that you can use to create environments without having to connect to an external GitHub source yourself. This repository includes frequently used templates such as Azure Web Apps, Service Fabric clusters, and development SharePoint farm environments. This feature is similar to the public repository of artifacts that is included for every lab that you create. The environment repository lets you quickly get started with pre-authored environment templates that require minimal input parameters, giving you a smooth getting-started experience for PaaS resources within labs. \n\n## Configuring public environments\nAs a lab owner, you can enable the public environment repository for your lab during lab creation. To enable public environments for your lab, select **On** for the **Public environments** field while creating a lab. \n\n![Enable public environment for a new lab](media\/devtest-lab-configure-use-public-environments\/enable-public-environment-new-lab.png)\n\n\nFor existing labs, the public environment repository is not enabled. Manually enable it to use templates in the repository. For labs created using Resource Manager templates, the repository is disabled by default as well.\n\nYou can enable or disable public environments for your lab, and also make only specific environments available to lab users, by using the following steps: \n\n1. Select **Configuration and policies** for your lab. \n2. In the **VIRTUAL MACHINE BASES** section, select **Public environments**.\n3. To enable public environments for the lab, select **Yes**. Otherwise, select **No**. \n4. If you enabled public environments, all the environments in the repository are enabled by default. You can deselect an environment to make it unavailable to your lab users. 
\n\n![Public environments page](media\/devtest-lab-configure-use-public-environments\/public-environments-page.png)\n\n## Use environment templates as a lab user\nAs a lab user, you can create a new environment from the enabled list of environment templates by simply selecting **+Add** from the toolbar in the lab page. The list of bases includes the public environment templates enabled by your lab admin at the top of the list.\n\n![Public environment templates](media\/devtest-lab-configure-use-public-environments\/public-environment-templates.png)\n\n## Next steps\nThis repository is an open-source repository to which you can contribute, adding frequently used and helpful Resource Manager templates of your own. To contribute, simply submit a pull request against the repository. \n","avg_line_length":86.4285714286,"max_line_length":730,"alphanum_fraction":0.8036363636} +{"size":2960,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"---\nlayout: post\ntitle: \"Spring 2017, Side B\"\nauthors:\n - Joe Gantdelaine\ncategory:\n - Editorial\n - Compilation\ncover: printemps-2017-b.jpg\ndescription: >\n Where we talk toons and tunes.\n---\n\nLike the greater roadrunner slaloming between the traps of Wile Ethelbert\nCoyote, _Popfolkrockus Musicalis_ always remains elusive for _Homo\nDeadroosteriens Compilatorus_. Season after season, the latter manages, somehow,\nto capture a few pieces of it. But barely has he finished with the winter\nspecimens when he must already put his ears back to work, trying to cage a few\nof the finest spring representatives.\n\nAnd that's not all! As soon as the ingredients are chosen, the compiler must\nstill work out their arrangement. He finds himself dissecting each track,\ntrying to combine it with others, checking whether the end of one fits neatly\nwith the beginning of the next. And little by little a kind of schema takes\nshape, like an ACME blueprint sent to the Coyote along with the spare parts of\nthe rail-mounted rocket.\n\nAnd what if Wile E. Coyote didn't act alone? One can easily picture the\nsnout-against-snout collisions in the middle of a great hairpin turn and the\ndouble-deck-sandwich crushings between three Arizona rocks. Well, since a single\ncompiler is not enough, here he is teaming up with a second song-hunting\nenthusiast (nothing to do with P. Sevran). Then the difficulties of teamwork\nappear: \"but why did you let that one go, it was good!\", \"Are you sure about\nthat one? The sax, there, are you sure? Shall we take it anyway? Fine…\"\nFortunately, each compiler has his own little patch, his half of side B, where\nhe can, alone in his corner, build a little track trap all to himself. He is\nproud of it: he is the one who listened to, collected, and assembled all the\ntracks.\n\nSo here you are, before a superb case of two contraptions cleverly put together\nby yours truly. Count yourselves lucky: it's not in the cartoons that you'll\nsee Wile E. enjoy such a feast!\n\nAt the next \"beep-beep\", it will be exactly time for Summer 2017.\n\n{% spotify 2FhlVbCDvPojqLvF0ZIrQH guiguilele %}\n\n### Joe's side\n\n1. Diamond Rugs - _Blue Mountains_\n1. Pere Ubu - _Non-Alignment Pact_\n1. Habibi - _Far From Right_\n1. Darker My Love - _Backseat_\n1. Tall Firs - _Hairdo_\n1. Galaxie 500 - _Strange_\n1. The Chills - _Pink Frost_\n1. Jonathan Wilson - _Coming Into Los Angeles_\n1. Moondog - _Do Your Thing_\n1. Arthur Lee - _Everybody's Gotta Live_\n\n### Dirty's side\n\n1. Yo La Tengo - _You Can Have It All_\n1. Courtney Barnett - _How to Boil an Egg_\n1. Cass McCombs - _That's That_\n1. Grizzly Bear - _Three Rings_\n1. Spoon - _WhisperI'lllistentohearit_\n1. Okkervil River - _John Allyn Smith Sails_\n1. Half Man Half Biscuit - _National Shite Day_\n1. Fleet Foxes - _Third of May \/ Ōdaigahara_\n1. PJ Harvey - _I'll Be Waiting_\n1. Sam Cooke - _A Change Is Gonna Come_\n","avg_line_length":40.5479452055,"max_line_length":80,"alphanum_fraction":0.7658783784} +{"size":1618,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"---\ntitle: SQL expressions (Access desktop database reference)\nTOCTitle: SQL expressions\nms:assetid: 91722f18-8589-d9fc-79ef-0be4ab11f822\nms:mtpsurl: https:\/\/msdn.microsoft.com\/library\/Ff197629(v=office.15)\nms:contentKeyID: 48546349\nms.date: 09\/18\/2015\nmtps_version: v=office.15\nms.openlocfilehash: 5a8d340f008fd198068d6dacc1b2bf847838ede1\nms.sourcegitcommit: 45feafb3b55de0402dddf5548c0c1c43a0eabafd\nms.translationtype: MT\nms.contentlocale: zh-CN\nms.lasthandoff: 11\/07\/2018\nms.locfileid: \"26025733\"\n---\n# <\/a>SQL expressions\n\n**Applies to**: Access 2013, Office 2013\n\nAn SQL expression is a string that makes up all or part of an SQL statement. For example, the **FindFirst** method of the **Recordset** object uses an SQL expression made up of the selection criteria in an SQL [WHERE clause](https:\/\/docs.microsoft.com\/office\/vba\/access\/Concepts\/Structured-Query-Language\/where-clause-microsoft-access-sql).\n\nThe Microsoft Access database engine uses the Microsoft Visual Basic for Applications (VBA) expression service to perform simple arithmetic and function evaluation. All of the operators used in Microsoft Access database engine SQL expressions (except **[Between](https:\/\/docs.microsoft.com\/office\/vba\/access\/concepts\/miscellaneous\/and-operator)**, **[In](https:\/\/docs.microsoft.com\/office\/vba\/access\/concepts\/miscellaneous\/in-operator-microsoft-access-sql)**, and **[Like](https:\/\/docs.microsoft.com\/office\/vba\/access\/Concepts\/Structured-Query-Language\/like-operator-microsoft-access-sql)**) are defined by the VBA expression service. In addition, more than 100 VBA functions provided by the VBA expression service can be used in SQL expressions. For example, you can use these VBA functions to compose SQL queries in the Microsoft Access query Design view, and you can also use them in SQL queries in Microsoft Visual C++, Microsoft Visual Basic, and Microsoft Excel code through the DAO **OpenRecordset** method.\n\n## <\/a>See also\n\n- [Access VBA concepts](https:\/\/docs.microsoft.com\/office\/vba\/access\/concepts\/miscellaneous\/concepts-access-vba-reference)","avg_line_length":62.2307692308,"max_line_length":690,"alphanum_fraction":0.7911001236} +{"size":1919,"ext":"md","lang":"Markdown","max_stars_count":1.0,"content":"# [Concatenation](@id Concatenation)\n\nDue to the limitations of the PDF format, creating highlights spanning two or more pages is\nrather complicated. 
Similar to [Readwise](https:\/\/readwise.io\/)'s solution to this problem,\nthis package supports the concatenation of highlights using comments.\n\n## Concatenation IDs\n\nTo connect two or more highlights, the comments of these highlights must start with the\nidentifiers `.c1`, `.c2`, and so on. The order of the identifiers must coincide with the\norder of display of the highlights themselves: left to right, top-down within one page,\nand in ascending order of pages otherwise.\n\nExample: highlight with the text \"Hello\" and the comment \".c1 General\" on page 1, a\nhighlight with the text \"there!\" and the comment \".c2 Kenobi\" on page 2. Concatenation\nresult: highlight with the text \"Hello there!\" and the comment \"General Kenobi\".\n\n## Word hyphenation\n\nConcatenation does some magic under the hood. It removes concatenation identifiers\n(even if the chain consists of one highlight) and also connects hyphenated words.\n\nExample: highlight with the text \"Mine-\" and the comment \".c1\" on page 1, a\nhighlight with the text \"craft\" and the comment \".c2\" on page 2. Concatenation\nresult: highlight with the text \"Minecraft\" and an empty comment.\n\n## The keyword\n\nThe [`import_highlights`](@ref) function uses concatenation by default. 
Functions for\n[getting pieces](@ref ExtractingData) containing the words `highlights`, `comments`, or\n`pages` in their name support the keyword `concatenate`.\n\nExample without concatenation:\n\n```@setup pdf\nusing PDFHighlights\npdf = joinpath(pathof(PDFHighlights) |> dirname |> dirname, \"test\", \"pdf\", \"TestPDF.pdf\")\n```\n\n```@example pdf\nget_highlights_comments(pdf; concatenate = false)\n```\n\nExample with concatenation:\n\n```@example pdf\nget_highlights_comments(pdf)\n```\n\nwhere `pdf` is the path to the PDF used by this package in the tests.\n","avg_line_length":37.6274509804,"max_line_length":91,"alphanum_fraction":0.767587285} +{"size":8658,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"---\nlayout: page\ntitle: \"Lab 7: Type Analysis\"\nexcerpt: \"Lab 7: Type Analysis\"\ntags: [\"project\"]\ncontext: project\nsubcontext: ms2\n---\n\n{% include _toc.html %}\n\nIn this lab, you define name bindings and corresponding constraints for MiniJava in Statix. The\nconcepts you are going to use in Statix are described in the following papers:\n\n1. H. van Antwerpen, C. B. Poulsen, A. Rouvoet, E. Visser: [Scopes as Types](https:\/\/researchr.org\/publication\/AntwerpenPRV18), OOPSLA 2018\n2. P. Neron, A. Tolmach, E. Visser, G. Wachsmuth: [A Theory of Name Resolution](https:\/\/researchr.org\/publication\/TUD-SERG-2015-001), ESOP 2015\n3. H. van Antwerpen, P. Neron, A. Tolmach, E. Visser, G. Wachsmuth: [A Constraint Language for Static Semantic Analysis based on Scope Graphs](https:\/\/researchr.org\/publication\/AntwerpenNTVW16), PEPM 2016\n\nUpdate your Spoofax installation for this lab, to get the latest bug fixes.\nSee the [update instructions](\/documentation\/spoofax#updating).\n{: .notice .notice-info}\n\n## Overview\n\n### Objectives\n\nSpecify type analysis for MiniJava in Statix. The specification should include:\n\n1. Name binding constraints for\n * method calls.\n2. 
Typing constraints for\n * integer and boolean literals,\n * unary and binary expressions,\n * variable and field references,\n * object creation,\n * `this` expressions,\n * method calls,\n * and the subtyping relation on classes.\n\n### Submission\n\nYou need to submit your MiniJava project with a merge request against branch `assignment-7-submission` on GitLab.\nThe [Git documentation](\/documentation\/git.html#submitting-an-assignment) explains how to file such a request.\n\nThe deadline for submission is November 23, 2019, 23:59.\n{: .notice .notice-warning}\n\n### Grading\n\nYou can earn up to 100 points for the correctness of your type analysis. Therefore, we run several\ntest cases against your implementation. You earn points when your implementation passes test\ncases. The total number of points depends on how many test cases you pass in each of the following\ngroups:\n\n* name binding (20 points)\n * method declarations\n * method references\n* typing rules (35 points)\n * literals (4 points)\n * unary expressions (5 points)\n * binary expressions (8 points)\n * variable and field references (4 points)\n * object creation (4 points)\n * `this` expression (5 points)\n * method call (5 points)\n* constraints (45 points)\n * expressions (10 points)\n * statements (10 points)\n * subtyping in assignments, return expressions, method calls (10 points)\n * cyclic inheritance (2 points)\n * method overloading and method overriding (10 points)\n * main class usage (3 points)\n\n### Early Feedback\n\nWe provide early feedback for your language implementation. This feedback gives you an indication\nof which parts of your name and type analysis might still be wrong. It includes a summary of how many\ntests you pass and how many points you earn by passing them. You have 3 early feedback attempts.\n\n## Detailed Instructions\n\n### Git Repository\n\nYou continue with your work from the previous assignment. 
See the\n[Git documentation](\/documentation\/git.html#continue-from-previous-assignment) on how to create the\n`assignment-7-develop` branch from your previous work.\n\nFor the grading to work correctly, you have to pull some changes from the template.\nSee the [instructions](\/git.html\/#pulling-in-changes-from-template) on how to do that.\n{: .notice .notice-info}\n\n### Example specification\n\nA complete example specification for the simply typed lambda calculus can be found at [STLC](lab5-example.html). Furthermore, we would recommend looking at the [Tiger Statix implementation](https:\/\/github.com\/MetaBorgCube\/metaborg-tiger\/blob\/master\/org.metaborg.lang.tiger.statix\/trans\/static-semantics.stx).\n\n### Declaring Types\n\nIn this lab we use the _scopes as types_ idea to model class types. Instead of using the syntactic\ntype constructors from the syntax directly, we introduce semantic types. These are the types that we\nuse as the types of declarations and AST nodes. The types you should use are given by the following\nsignature, which you should add to your specification:\n\n signature\n sorts TYPE constructors\n CLASS : scope -> TYPE\n INT : TYPE\n BOOL : TYPE\n INTARRAY : TYPE\n\nWe use _scopes as types_ for classes, where the class is identified by its scope. Sometimes it is\nuseful (and for this lab required for grading) to map the scope back to the original class\nname. Therefore, you must define a relation that associates the class name with the class scope:\n\n signature\n relations\n className : -> ID\n \nIn the rule for class declarations, use the constraint `!className[x] in s_cls`, where `x` is the\nclass name, and `s_cls` is the class scope, to store the class name. *If you don't do this, some of\nthe grading tests will fail!*\n\nYou also need to add a type constructor for methods. This type should have two arguments, the return\ntype, and a list of argument types of the method. 
We do not test for this type directly, so you can\nchoose its name yourself.\n\n### Predicate Rules for Typing\n\nBuilding on the predicate rules of lab 5, we need to extend the rules for AST nodes that have types\nwith an extra type parameter. Instead of adding this as a regular parameter to the predicate, we\nusually prefer to use the functional syntax for these. For example, the predicate for typing\nexpressions would get the following signature and rules:\n\n expOk : scope * Exp -> TYPE\n \n expOk(s, True()) = BOOL().\n\nWhen using the function syntax, the predicate calls have to be used in term positions, for example,\nin an equality constraint like `expOk(s, e) == T`. Alternatively, you can use regular predicates and\nadd an extra parameter. In that case, the signature and rules look like this:\n\n expOk : scope * Exp * TYPE\n \n expOk(s, True(), BOOL()).\n\nIn the latter case, the predicate is used as a predicate, not a term, and the usage would be\n`expOk(s, e, T)`. Although we usually prefer the functional form, both are equivalent, and you are\nfree to choose either of them. Note that the functional form is translated to the predicate form,\nwhich you can see using the `Spoofax > Syntax > Format Normalized AST` menu.\n\nThe type is not automatically associated with the AST node. We do this by setting the `type`\nproperty on an AST node, using the constraint `@e.type := T`. For example, the rule for `true` could\nbe written as:\n\n expOk(s, e@True()) = T@BOOL() :- @e.type := T.\n\nThe convenience syntax `v@t` is inline notation for `v == t`, for a variable `v` and a term `t`.\n\n### Typing Declarations\n\nIn this lab we need to associate types with the declarations in the scope graph. 
Therefore, we\nreplace the `decl` and `scopeOf` relations with a new `type` relation with the following signature:\n\n signature\n relations\n type : occurrence -> TYPE\n\nDeclarations that were introduced with `s -> OCCURRENCE` must now add a type as well, using the\nsyntax `s -> OCCURRENCE with type TYPE`. Similarly, resolution queries such as `Var{x} in s |-> [(_,\nVar{x'})]` must now query the `type` relation as `type of Var{x} in s |-> [(_, (Var{x'}, T))]`.\n\n### Subtyping\n\nMiniJava supports subtyping between classes. To support this, you will have to implement a predicate\nto check whether one type is a subtype of another type. This predicate must be used instead of type\nequality wherever subtyping is allowed.\n\nSubtyping between classes can be checked by using the structure of the scope graph. For this you can\nuse a query that does not query a specific relation, but only considers the final scope of a\npath. The syntax for that is:\n\n query () filter ... and { s :- ... } min ... and { s1, s2 :- ... } in s |-> ps\n\nwhere the result `ps` has type `list((path * scope))`.\n\n### Type-dependent Name Resolution\n\nUsing types we can now implement resolution of method names. Use the class types to resolve method\nnames in the right scope. The _scopes as types_ representation of class types should make this\neasy. Do not forget to add the `@x.ref := x'` property on method references, to ensure editor\nresolution and testing work.\n\nRemember that you can check if method resolution works by *Ctrl\/Cmd + click* on a method call in an\nexample program.\n{: .notice .notice-info}\n\n### Overriding\n\nFinally, check if overriding is done correctly. If a method overrides a method from the super class,\nits parameter types must be the same. Also check that the return type is correct. Be careful, it 
Incorrect overrides (including\noverloads) should result in an error.\n","avg_line_length":42.8613861386,"max_line_length":308,"alphanum_fraction":0.7447447447} +{"size":47181,"ext":"md","lang":"Markdown","max_stars_count":1.0,"content":"## `php:8.1.0beta2-alpine3.13`\n\n```console\n$ docker pull php@sha256:d2843c8e5da34a1aea4edc7e89acf448aa7164717a8e3c8bcb5be42f7567f922\n```\n\n-\tManifest MIME: `application\/vnd.docker.distribution.manifest.list.v2+json`\n-\tPlatforms: 7\n\t-\tlinux; amd64\n\t-\tlinux; arm variant v6\n\t-\tlinux; arm variant v7\n\t-\tlinux; arm64 variant v8\n\t-\tlinux; 386\n\t-\tlinux; ppc64le\n\t-\tlinux; s390x\n\n### `php:8.1.0beta2-alpine3.13` - linux; amd64\n\n```console\n$ docker pull php@sha256:30e48c6a944d69135fdb75791616f2c193fb0dc97688c5b01f9b79fe67d559e2\n```\n\n-\tDocker Version: 20.10.7\n-\tManifest MIME: `application\/vnd.docker.distribution.manifest.v2+json`\n-\tTotal Size: **32.2 MB (32162167 bytes)** \n\t(compressed transfer size, not on-disk size)\n-\tImage ID: `sha256:1b8ec2f9ef3222fa44081519ca3c5adff91f519afd4b208fc27774f586ad2b06`\n-\tEntrypoint: `[\"docker-php-entrypoint\"]`\n-\tDefault Command: `[\"php\",\"-a\"]`\n\n```dockerfile\n# Wed, 14 Apr 2021 19:19:39 GMT\nADD file:8ec69d882e7f29f0652d537557160e638168550f738d0d49f90a7ef96bf31787 in \/ \n# Wed, 14 Apr 2021 19:19:39 GMT\nCMD [\"\/bin\/sh\"]\n# Wed, 14 Apr 2021 23:59:18 GMT\nENV PHPIZE_DEPS=autoconf \t\tdpkg-dev dpkg \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkgconf \t\tre2c\n# Wed, 14 Apr 2021 23:59:20 GMT\nRUN apk add --no-cache \t\tca-certificates \t\tcurl \t\ttar \t\txz \t\topenssl\n# Wed, 14 Apr 2021 23:59:22 GMT\nRUN set -eux; \taddgroup -g 82 -S www-data; \tadduser -u 82 -D -S -G www-data www-data\n# Wed, 14 Apr 2021 23:59:22 GMT\nENV PHP_INI_DIR=\/usr\/local\/etc\/php\n# Wed, 14 Apr 2021 23:59:23 GMT\nRUN set -eux; \tmkdir -p \"$PHP_INI_DIR\/conf.d\"; \t[ ! 
-d \/var\/www\/html ]; \tmkdir -p \/var\/www\/html; \tchown www-data:www-data \/var\/www\/html; \tchmod 777 \/var\/www\/html\n# Wed, 14 Apr 2021 23:59:24 GMT\nENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n# Wed, 14 Apr 2021 23:59:24 GMT\nENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n# Wed, 14 Apr 2021 23:59:24 GMT\nENV PHP_LDFLAGS=-Wl,-O1 -pie\n# Thu, 10 Jun 2021 22:27:46 GMT\nENV GPG_KEYS=528995BFEDFBA7191D46839EF9BA0ADA31CBD89E 39B641343D8C104B2B146DC3F9C39DC0B9698544 F1F692238FBC1666E5A5CCD4199F9DFEF6FFBAFD\n# Thu, 05 Aug 2021 22:21:46 GMT\nENV PHP_VERSION=8.1.0beta2\n# Thu, 05 Aug 2021 22:21:47 GMT\nENV PHP_URL=https:\/\/downloads.php.net\/~ramsey\/php-8.1.0beta2.tar.xz PHP_ASC_URL=https:\/\/downloads.php.net\/~ramsey\/php-8.1.0beta2.tar.xz.asc\n# Thu, 05 Aug 2021 22:21:47 GMT\nENV PHP_SHA256=c0b8d45639f171fcbbaaf6b239a4759126130cf14628eaf74c4916408cb16ce0\n# Thu, 05 Aug 2021 22:21:53 GMT\nRUN set -eux; \t\tapk add --no-cache --virtual .fetch-deps gnupg; \t\tmkdir -p \/usr\/src; \tcd \/usr\/src; \t\tcurl -fsSL -o php.tar.xz \"$PHP_URL\"; \t\tif [ -n \"$PHP_SHA256\" ]; then \t\techo \"$PHP_SHA256 *php.tar.xz\" | sha256sum -c -; \tfi; \t\tif [ -n \"$PHP_ASC_URL\" ]; then \t\tcurl -fsSL -o php.tar.xz.asc \"$PHP_ASC_URL\"; \t\texport GNUPGHOME=\"$(mktemp -d)\"; \t\tfor key in $GPG_KEYS; do \t\t\tgpg --batch --keyserver keyserver.ubuntu.com --recv-keys \"$key\"; \t\tdone; \t\tgpg --batch --verify php.tar.xz.asc php.tar.xz; \t\tgpgconf --kill all; \t\trm -rf \"$GNUPGHOME\"; \tfi; \t\tapk del --no-network .fetch-deps\n# Thu, 05 Aug 2021 22:21:53 GMT\nCOPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in \/usr\/local\/bin\/ \n# Thu, 05 Aug 2021 22:31:42 GMT\nRUN set -eux; \tapk add --no-cache --virtual .build-deps \t\t$PHPIZE_DEPS \t\targon2-dev \t\tcoreutils \t\tcurl-dev \t\tlibedit-dev \t\tlibsodium-dev \t\tlibxml2-dev 
\t\tlinux-headers \t\toniguruma-dev \t\topenssl-dev \t\tsqlite-dev \t; \t\texport CFLAGS=\"$PHP_CFLAGS\" \t\tCPPFLAGS=\"$PHP_CPPFLAGS\" \t\tLDFLAGS=\"$PHP_LDFLAGS\" \t; \tdocker-php-source extract; \tcd \/usr\/src\/php; \tgnuArch=\"$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)\"; \t.\/configure \t\t--build=\"$gnuArch\" \t\t--with-config-file-path=\"$PHP_INI_DIR\" \t\t--with-config-file-scan-dir=\"$PHP_INI_DIR\/conf.d\" \t\t\t\t--enable-option-checking=fatal \t\t\t\t--with-mhash \t\t\t\t--with-pic \t\t\t\t--enable-ftp \t\t--enable-mbstring \t\t--enable-mysqlnd \t\t--with-password-argon2 \t\t--with-sodium=shared \t\t--with-pdo-sqlite=\/usr \t\t--with-sqlite3=\/usr \t\t\t\t--with-curl \t\t--with-libedit \t\t--with-openssl \t\t--with-zlib \t\t\t\t--with-pear \t\t\t\t$(test \"$gnuArch\" = 's390x-linux-musl' && echo '--without-pcre-jit') \t\t\t\t${PHP_EXTRA_CONFIGURE_ARGS:-} \t; \tmake -j \"$(nproc)\"; \tfind -type f -name '*.a' -delete; \tmake install; \tfind \/usr\/local\/bin \/usr\/local\/sbin -type f -perm +0111 -exec strip --strip-all '{}' + || true; \tmake clean; \t\tcp -v php.ini-* \"$PHP_INI_DIR\/\"; \t\tcd \/; \tdocker-php-source delete; \t\trunDeps=\"$( \t\tscanelf --needed --nobanner --format '%n#p' --recursive \/usr\/local \t\t\t| tr ',' '\\n' \t\t\t| sort -u \t\t\t| awk 'system(\"[ -e \/usr\/local\/lib\/\" $1 \" ]\") == 0 { next } { print \"so:\" $1 }' \t)\"; \tapk add --no-cache $runDeps; \t\tapk del --no-network .build-deps; \t\tpecl update-channels; \trm -rf \/tmp\/pear ~\/.pearrc; \t\tphp --version\n# Thu, 05 Aug 2021 22:31:42 GMT\nCOPY multi:efd917b98407edb5d558edb0edbd8e63c9318f701892aaa449794d019a092f37 in \/usr\/local\/bin\/ \n# Thu, 05 Aug 2021 22:31:44 GMT\nRUN docker-php-ext-enable sodium\n# Thu, 05 Aug 2021 22:31:44 GMT\nENTRYPOINT [\"docker-php-entrypoint\"]\n# Thu, 05 Aug 2021 22:31:44 GMT\nCMD [\"php\" \"-a\"]\n```\n\n-\tLayers:\n\t-\t`sha256:540db60ca9383eac9e418f78490994d0af424aab7bf6d0e47ac8ed4e2e9bcbba` \n\t\tLast Modified: Wed, 14 Apr 
2021 17:59:29 GMT \n\t\tSize: 2.8 MB (2811969 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:933cf2f4a68ffb603d67468c6e390ce893a1410ea927dc00e8faabfd01032afa` \n\t\tLast Modified: Thu, 15 Apr 2021 02:15:25 GMT \n\t\tSize: 1.7 MB (1702188 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:93c5cc202a60c205410f5462131556b8ecfba3092bceab1bf75723d1a356c7fb` \n\t\tLast Modified: Thu, 15 Apr 2021 02:15:23 GMT \n\t\tSize: 1.3 KB (1260 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:74403c16157d84037726eebe566275f9e5fdb3f301ce6c101eeb3fb37b8914ef` \n\t\tLast Modified: Thu, 15 Apr 2021 02:15:22 GMT \n\t\tSize: 269.0 B \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:024be38df7cd4354407a4340e93a2b9e6d82ba65ca889ecb7be10df6623724a1` \n\t\tLast Modified: Thu, 05 Aug 2021 22:51:24 GMT \n\t\tSize: 11.6 MB (11619218 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:15c14917cbbb8a0edd6bc421ab86b545230146bcaa0b355671c89f8a7e5a809b` \n\t\tLast Modified: Thu, 05 Aug 2021 22:51:23 GMT \n\t\tSize: 497.0 B \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:c948d496d31f9e28a89bc6f347a892e509299e4ae701e5e227cd5a3ef64d1a4c` \n\t\tLast Modified: Thu, 05 Aug 2021 22:51:27 GMT \n\t\tSize: 16.0 MB (16006903 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:dc141d30aefa7b699134e195b6885332edc5f7ada8688b5c86ad85bc1f379139` \n\t\tLast Modified: Thu, 05 Aug 2021 22:51:23 GMT \n\t\tSize: 2.3 KB (2267 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:fdad99220ceb8ed8c42a3ff9275fddae2ecfc5590f37de24236e1928eec3b5e1` \n\t\tLast Modified: Thu, 05 Aug 2021 22:51:23 GMT \n\t\tSize: 17.6 KB (17596 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\n### `php:8.1.0beta2-alpine3.13` - linux; arm variant v6\n\n```console\n$ docker 
pull php@sha256:7e3cd75e12263182d9e0e8d32050b354ad217f4b246d4ad2192f7aa61729a390\n```\n\n-\tDocker Version: 20.10.7\n-\tManifest MIME: `application\/vnd.docker.distribution.manifest.v2+json`\n-\tTotal Size: **30.3 MB (30322502 bytes)** \n\t(compressed transfer size, not on-disk size)\n-\tImage ID: `sha256:3ef980f20b393e360fe1959b87f2c7d253f7f47a8819659227791a35f157ce2a`\n-\tEntrypoint: `[\"docker-php-entrypoint\"]`\n-\tDefault Command: `[\"php\",\"-a\"]`\n\n```dockerfile\n# Fri, 30 Jul 2021 17:49:55 GMT\nADD file:4479f0a51530e039edf231d87201896dcff908aa542a613cdccb015f93dda8a3 in \/ \n# Fri, 30 Jul 2021 17:49:55 GMT\nCMD [\"\/bin\/sh\"]\n# Sat, 31 Jul 2021 00:44:27 GMT\nENV PHPIZE_DEPS=autoconf \t\tdpkg-dev dpkg \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkgconf \t\tre2c\n# Sat, 31 Jul 2021 00:44:30 GMT\nRUN apk add --no-cache \t\tca-certificates \t\tcurl \t\ttar \t\txz \t\topenssl\n# Sat, 31 Jul 2021 00:44:32 GMT\nRUN set -eux; \taddgroup -g 82 -S www-data; \tadduser -u 82 -D -S -G www-data www-data\n# Sat, 31 Jul 2021 00:44:33 GMT\nENV PHP_INI_DIR=\/usr\/local\/etc\/php\n# Sat, 31 Jul 2021 00:44:34 GMT\nRUN set -eux; \tmkdir -p \"$PHP_INI_DIR\/conf.d\"; \t[ ! 
-d \/var\/www\/html ]; \tmkdir -p \/var\/www\/html; \tchown www-data:www-data \/var\/www\/html; \tchmod 777 \/var\/www\/html\n# Sat, 31 Jul 2021 00:44:35 GMT\nENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n# Sat, 31 Jul 2021 00:44:35 GMT\nENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n# Sat, 31 Jul 2021 00:44:35 GMT\nENV PHP_LDFLAGS=-Wl,-O1 -pie\n# Sat, 31 Jul 2021 00:44:36 GMT\nENV GPG_KEYS=528995BFEDFBA7191D46839EF9BA0ADA31CBD89E 39B641343D8C104B2B146DC3F9C39DC0B9698544 F1F692238FBC1666E5A5CCD4199F9DFEF6FFBAFD\n# Thu, 05 Aug 2021 22:34:35 GMT\nENV PHP_VERSION=8.1.0beta2\n# Thu, 05 Aug 2021 22:34:35 GMT\nENV PHP_URL=https:\/\/downloads.php.net\/~ramsey\/php-8.1.0beta2.tar.xz PHP_ASC_URL=https:\/\/downloads.php.net\/~ramsey\/php-8.1.0beta2.tar.xz.asc\n# Thu, 05 Aug 2021 22:34:36 GMT\nENV PHP_SHA256=c0b8d45639f171fcbbaaf6b239a4759126130cf14628eaf74c4916408cb16ce0\n# Thu, 05 Aug 2021 22:34:44 GMT\nRUN set -eux; \t\tapk add --no-cache --virtual .fetch-deps gnupg; \t\tmkdir -p \/usr\/src; \tcd \/usr\/src; \t\tcurl -fsSL -o php.tar.xz \"$PHP_URL\"; \t\tif [ -n \"$PHP_SHA256\" ]; then \t\techo \"$PHP_SHA256 *php.tar.xz\" | sha256sum -c -; \tfi; \t\tif [ -n \"$PHP_ASC_URL\" ]; then \t\tcurl -fsSL -o php.tar.xz.asc \"$PHP_ASC_URL\"; \t\texport GNUPGHOME=\"$(mktemp -d)\"; \t\tfor key in $GPG_KEYS; do \t\t\tgpg --batch --keyserver keyserver.ubuntu.com --recv-keys \"$key\"; \t\tdone; \t\tgpg --batch --verify php.tar.xz.asc php.tar.xz; \t\tgpgconf --kill all; \t\trm -rf \"$GNUPGHOME\"; \tfi; \t\tapk del --no-network .fetch-deps\n# Thu, 05 Aug 2021 22:34:45 GMT\nCOPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in \/usr\/local\/bin\/ \n# Thu, 05 Aug 2021 22:39:09 GMT\nRUN set -eux; \tapk add --no-cache --virtual .build-deps \t\t$PHPIZE_DEPS \t\targon2-dev \t\tcoreutils \t\tcurl-dev \t\tlibedit-dev \t\tlibsodium-dev \t\tlibxml2-dev 
\t\tlinux-headers \t\toniguruma-dev \t\topenssl-dev \t\tsqlite-dev \t; \t\texport CFLAGS=\"$PHP_CFLAGS\" \t\tCPPFLAGS=\"$PHP_CPPFLAGS\" \t\tLDFLAGS=\"$PHP_LDFLAGS\" \t; \tdocker-php-source extract; \tcd \/usr\/src\/php; \tgnuArch=\"$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)\"; \t.\/configure \t\t--build=\"$gnuArch\" \t\t--with-config-file-path=\"$PHP_INI_DIR\" \t\t--with-config-file-scan-dir=\"$PHP_INI_DIR\/conf.d\" \t\t\t\t--enable-option-checking=fatal \t\t\t\t--with-mhash \t\t\t\t--with-pic \t\t\t\t--enable-ftp \t\t--enable-mbstring \t\t--enable-mysqlnd \t\t--with-password-argon2 \t\t--with-sodium=shared \t\t--with-pdo-sqlite=\/usr \t\t--with-sqlite3=\/usr \t\t\t\t--with-curl \t\t--with-libedit \t\t--with-openssl \t\t--with-zlib \t\t\t\t--with-pear \t\t\t\t$(test \"$gnuArch\" = 's390x-linux-musl' && echo '--without-pcre-jit') \t\t\t\t${PHP_EXTRA_CONFIGURE_ARGS:-} \t; \tmake -j \"$(nproc)\"; \tfind -type f -name '*.a' -delete; \tmake install; \tfind \/usr\/local\/bin \/usr\/local\/sbin -type f -perm +0111 -exec strip --strip-all '{}' + || true; \tmake clean; \t\tcp -v php.ini-* \"$PHP_INI_DIR\/\"; \t\tcd \/; \tdocker-php-source delete; \t\trunDeps=\"$( \t\tscanelf --needed --nobanner --format '%n#p' --recursive \/usr\/local \t\t\t| tr ',' '\\n' \t\t\t| sort -u \t\t\t| awk 'system(\"[ -e \/usr\/local\/lib\/\" $1 \" ]\") == 0 { next } { print \"so:\" $1 }' \t)\"; \tapk add --no-cache $runDeps; \t\tapk del --no-network .build-deps; \t\tpecl update-channels; \trm -rf \/tmp\/pear ~\/.pearrc; \t\tphp --version\n# Thu, 05 Aug 2021 22:39:11 GMT\nCOPY multi:efd917b98407edb5d558edb0edbd8e63c9318f701892aaa449794d019a092f37 in \/usr\/local\/bin\/ \n# Thu, 05 Aug 2021 22:39:13 GMT\nRUN docker-php-ext-enable sodium\n# Thu, 05 Aug 2021 22:39:13 GMT\nENTRYPOINT [\"docker-php-entrypoint\"]\n# Thu, 05 Aug 2021 22:39:14 GMT\nCMD [\"php\" \"-a\"]\n```\n\n-\tLayers:\n\t-\t`sha256:740c950346cf39c85b52576998695c9b909c24a58a8bb1b64cce91fda3ef1d3a` \n\t\tLast Modified: Wed, 14 Apr 
2021 18:50:30 GMT \n\t\tSize: 2.6 MB (2622131 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:5ce98299684db5b4de896c92970b5f539ad451ff2edf01c20626f3651f60ab20` \n\t\tLast Modified: Sat, 31 Jul 2021 02:25:42 GMT \n\t\tSize: 1.7 MB (1696407 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:2ba6f14006bcb7491817d48740e9491f5a6e05085085cf91e4517ed7b6aeb2cc` \n\t\tLast Modified: Sat, 31 Jul 2021 02:25:40 GMT \n\t\tSize: 1.3 KB (1264 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:c777fe9c3549486f6d8f4c18d1c8eb056c98a5c6e74f5ccfd75b282ba26687d0` \n\t\tLast Modified: Sat, 31 Jul 2021 02:25:40 GMT \n\t\tSize: 266.0 B \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:d2e57eda1ff04d47fc0a817f1d6cd004799b81f32c06a33cd80c71624d7d387d` \n\t\tLast Modified: Thu, 05 Aug 2021 22:56:59 GMT \n\t\tSize: 11.6 MB (11619201 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:2b848502558c9263bde3e90c8bc6458e6fe0d51fa5b0c0b6006d399b306d44aa` \n\t\tLast Modified: Thu, 05 Aug 2021 22:56:55 GMT \n\t\tSize: 495.0 B \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:521addff0507d07bfb645cec73dd5584b037b3bf719eafa499da64c521e34f1e` \n\t\tLast Modified: Thu, 05 Aug 2021 22:57:06 GMT \n\t\tSize: 14.4 MB (14363097 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:739ab1f4636f308296c7653e6a4845b3c3962de8103f14753101b19b4962c1ea` \n\t\tLast Modified: Thu, 05 Aug 2021 22:56:55 GMT \n\t\tSize: 2.3 KB (2263 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:4c810ee782f600e64711dc9dad3f37c6010a2f20c66d4282d9cfffc598085450` \n\t\tLast Modified: Thu, 05 Aug 2021 22:56:55 GMT \n\t\tSize: 17.4 KB (17378 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\n### `php:8.1.0beta2-alpine3.13` - linux; arm variant v7\n\n```console\n$ docker 
pull php@sha256:d2e9fb2af9d2844094aaf16db383504ed4aeff25b16dad6d56b9e3cad49850cc\n```\n\n-\tDocker Version: 20.10.7\n-\tManifest MIME: `application\/vnd.docker.distribution.manifest.v2+json`\n-\tTotal Size: **29.1 MB (29067308 bytes)** \n\t(compressed transfer size, not on-disk size)\n-\tImage ID: `sha256:e8225530a7438f29f3d69185a766f440179db02efa30eec0e38d2b49b2a1f235`\n-\tEntrypoint: `[\"docker-php-entrypoint\"]`\n-\tDefault Command: `[\"php\",\"-a\"]`\n\n```dockerfile\n# Fri, 30 Jul 2021 18:36:39 GMT\nADD file:028c5b473d862250586e174c5dd19b37f8fc3bffbc02d888e72df30f32fd6129 in \/ \n# Fri, 30 Jul 2021 18:36:39 GMT\nCMD [\"\/bin\/sh\"]\n# Sat, 31 Jul 2021 06:43:26 GMT\nENV PHPIZE_DEPS=autoconf \t\tdpkg-dev dpkg \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkgconf \t\tre2c\n# Sat, 31 Jul 2021 06:43:29 GMT\nRUN apk add --no-cache \t\tca-certificates \t\tcurl \t\ttar \t\txz \t\topenssl\n# Sat, 31 Jul 2021 06:43:31 GMT\nRUN set -eux; \taddgroup -g 82 -S www-data; \tadduser -u 82 -D -S -G www-data www-data\n# Sat, 31 Jul 2021 06:43:32 GMT\nENV PHP_INI_DIR=\/usr\/local\/etc\/php\n# Sat, 31 Jul 2021 06:43:33 GMT\nRUN set -eux; \tmkdir -p \"$PHP_INI_DIR\/conf.d\"; \t[ ! 
-d \/var\/www\/html ]; \tmkdir -p \/var\/www\/html; \tchown www-data:www-data \/var\/www\/html; \tchmod 777 \/var\/www\/html\n# Sat, 31 Jul 2021 06:43:34 GMT\nENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n# Sat, 31 Jul 2021 06:43:34 GMT\nENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n# Sat, 31 Jul 2021 06:43:35 GMT\nENV PHP_LDFLAGS=-Wl,-O1 -pie\n# Sat, 31 Jul 2021 06:43:35 GMT\nENV GPG_KEYS=528995BFEDFBA7191D46839EF9BA0ADA31CBD89E 39B641343D8C104B2B146DC3F9C39DC0B9698544 F1F692238FBC1666E5A5CCD4199F9DFEF6FFBAFD\n# Thu, 05 Aug 2021 21:30:40 GMT\nENV PHP_VERSION=8.1.0beta2\n# Thu, 05 Aug 2021 21:30:41 GMT\nENV PHP_URL=https:\/\/downloads.php.net\/~ramsey\/php-8.1.0beta2.tar.xz PHP_ASC_URL=https:\/\/downloads.php.net\/~ramsey\/php-8.1.0beta2.tar.xz.asc\n# Thu, 05 Aug 2021 21:30:41 GMT\nENV PHP_SHA256=c0b8d45639f171fcbbaaf6b239a4759126130cf14628eaf74c4916408cb16ce0\n# Thu, 05 Aug 2021 21:30:48 GMT\nRUN set -eux; \t\tapk add --no-cache --virtual .fetch-deps gnupg; \t\tmkdir -p \/usr\/src; \tcd \/usr\/src; \t\tcurl -fsSL -o php.tar.xz \"$PHP_URL\"; \t\tif [ -n \"$PHP_SHA256\" ]; then \t\techo \"$PHP_SHA256 *php.tar.xz\" | sha256sum -c -; \tfi; \t\tif [ -n \"$PHP_ASC_URL\" ]; then \t\tcurl -fsSL -o php.tar.xz.asc \"$PHP_ASC_URL\"; \t\texport GNUPGHOME=\"$(mktemp -d)\"; \t\tfor key in $GPG_KEYS; do \t\t\tgpg --batch --keyserver keyserver.ubuntu.com --recv-keys \"$key\"; \t\tdone; \t\tgpg --batch --verify php.tar.xz.asc php.tar.xz; \t\tgpgconf --kill all; \t\trm -rf \"$GNUPGHOME\"; \tfi; \t\tapk del --no-network .fetch-deps\n# Thu, 05 Aug 2021 21:30:48 GMT\nCOPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in \/usr\/local\/bin\/ \n# Thu, 05 Aug 2021 21:35:00 GMT\nRUN set -eux; \tapk add --no-cache --virtual .build-deps \t\t$PHPIZE_DEPS \t\targon2-dev \t\tcoreutils \t\tcurl-dev \t\tlibedit-dev \t\tlibsodium-dev \t\tlibxml2-dev 
\t\tlinux-headers \t\toniguruma-dev \t\topenssl-dev \t\tsqlite-dev \t; \t\texport CFLAGS=\"$PHP_CFLAGS\" \t\tCPPFLAGS=\"$PHP_CPPFLAGS\" \t\tLDFLAGS=\"$PHP_LDFLAGS\" \t; \tdocker-php-source extract; \tcd \/usr\/src\/php; \tgnuArch=\"$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)\"; \t.\/configure \t\t--build=\"$gnuArch\" \t\t--with-config-file-path=\"$PHP_INI_DIR\" \t\t--with-config-file-scan-dir=\"$PHP_INI_DIR\/conf.d\" \t\t\t\t--enable-option-checking=fatal \t\t\t\t--with-mhash \t\t\t\t--with-pic \t\t\t\t--enable-ftp \t\t--enable-mbstring \t\t--enable-mysqlnd \t\t--with-password-argon2 \t\t--with-sodium=shared \t\t--with-pdo-sqlite=\/usr \t\t--with-sqlite3=\/usr \t\t\t\t--with-curl \t\t--with-libedit \t\t--with-openssl \t\t--with-zlib \t\t\t\t--with-pear \t\t\t\t$(test \"$gnuArch\" = 's390x-linux-musl' && echo '--without-pcre-jit') \t\t\t\t${PHP_EXTRA_CONFIGURE_ARGS:-} \t; \tmake -j \"$(nproc)\"; \tfind -type f -name '*.a' -delete; \tmake install; \tfind \/usr\/local\/bin \/usr\/local\/sbin -type f -perm +0111 -exec strip --strip-all '{}' + || true; \tmake clean; \t\tcp -v php.ini-* \"$PHP_INI_DIR\/\"; \t\tcd \/; \tdocker-php-source delete; \t\trunDeps=\"$( \t\tscanelf --needed --nobanner --format '%n#p' --recursive \/usr\/local \t\t\t| tr ',' '\\n' \t\t\t| sort -u \t\t\t| awk 'system(\"[ -e \/usr\/local\/lib\/\" $1 \" ]\") == 0 { next } { print \"so:\" $1 }' \t)\"; \tapk add --no-cache $runDeps; \t\tapk del --no-network .build-deps; \t\tpecl update-channels; \trm -rf \/tmp\/pear ~\/.pearrc; \t\tphp --version\n# Thu, 05 Aug 2021 21:35:01 GMT\nCOPY multi:efd917b98407edb5d558edb0edbd8e63c9318f701892aaa449794d019a092f37 in \/usr\/local\/bin\/ \n# Thu, 05 Aug 2021 21:35:04 GMT\nRUN docker-php-ext-enable sodium\n# Thu, 05 Aug 2021 21:35:04 GMT\nENTRYPOINT [\"docker-php-entrypoint\"]\n# Thu, 05 Aug 2021 21:35:05 GMT\nCMD [\"php\" \"-a\"]\n```\n\n-\tLayers:\n\t-\t`sha256:e160e00eb35d5bc2373770873fbc9c8f5706045b0b06bfd1c364fcf69f02e9fe` \n\t\tLast Modified: Wed, 14 Apr 
2021 18:58:36 GMT \n\t\tSize: 2.4 MB (2424145 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:ffc27d66ec61bc7281a320ccbeeab49d1b7c3f3526a66c2fb0bae5d574458fc6` \n\t\tLast Modified: Sat, 31 Jul 2021 08:32:20 GMT \n\t\tSize: 1.6 MB (1564336 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:f32c80bad9922938e19e82af2071300f3c3c4bf3b810cfa19be492e7d4bfbe09` \n\t\tLast Modified: Sat, 31 Jul 2021 08:32:19 GMT \n\t\tSize: 1.3 KB (1263 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:4652eed42f881ec06fa175174ddea1340a4f99918638ca14ef643a742d218119` \n\t\tLast Modified: Sat, 31 Jul 2021 08:32:19 GMT \n\t\tSize: 267.0 B \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:967487207982de853c807d38e56294b24d9e3ddb7cc141b1f3f110e75f592361` \n\t\tLast Modified: Thu, 05 Aug 2021 22:06:44 GMT \n\t\tSize: 11.6 MB (11619203 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:a41a83a64fd6878116a6c74dac385cf8d845accbfa1214827751cce323f561f3` \n\t\tLast Modified: Thu, 05 Aug 2021 22:06:41 GMT \n\t\tSize: 498.0 B \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:59d1f4c7bc4e6bca2e6b191831a0e1d1247610de3fc57341f3966ba72d736c3c` \n\t\tLast Modified: Thu, 05 Aug 2021 22:06:50 GMT \n\t\tSize: 13.4 MB (13437952 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:f151008a5d9040307b91414fece8fc20dc29663a93b80dc47ba07c79142c30b0` \n\t\tLast Modified: Thu, 05 Aug 2021 22:06:41 GMT \n\t\tSize: 2.3 KB (2262 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:927a8c98d0e9a6289d062979b4f856f555329f46daca2a4c4498660ee66429ac` \n\t\tLast Modified: Thu, 05 Aug 2021 22:06:41 GMT \n\t\tSize: 17.4 KB (17382 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\n### `php:8.1.0beta2-alpine3.13` - linux; arm64 variant v8\n\n```console\n$ docker 
pull php@sha256:14a02748f9a2c5b315e97e07033a85fcab644bfd63680eb53add844a8dbd8e7a\n```\n\n-\tDocker Version: 20.10.7\n-\tManifest MIME: `application\/vnd.docker.distribution.manifest.v2+json`\n-\tTotal Size: **32.1 MB (32080713 bytes)** \n\t(compressed transfer size, not on-disk size)\n-\tImage ID: `sha256:7b776a661ce13d75475ed3608dadc967d049cba1cac28036d81799dcf9890248`\n-\tEntrypoint: `[\"docker-php-entrypoint\"]`\n-\tDefault Command: `[\"php\",\"-a\"]`\n\n```dockerfile\n# Tue, 15 Jun 2021 21:45:03 GMT\nADD file:ca9d8b5d1cc2f2186983fc6b9507da6ada5eb92f2b518c06af1128d5396c6f34 in \/ \n# Tue, 15 Jun 2021 21:45:04 GMT\nCMD [\"\/bin\/sh\"]\n# Wed, 16 Jun 2021 06:30:49 GMT\nENV PHPIZE_DEPS=autoconf \t\tdpkg-dev dpkg \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkgconf \t\tre2c\n# Wed, 16 Jun 2021 06:30:51 GMT\nRUN apk add --no-cache \t\tca-certificates \t\tcurl \t\ttar \t\txz \t\topenssl\n# Wed, 16 Jun 2021 06:30:51 GMT\nRUN set -eux; \taddgroup -g 82 -S www-data; \tadduser -u 82 -D -S -G www-data www-data\n# Wed, 16 Jun 2021 06:30:52 GMT\nENV PHP_INI_DIR=\/usr\/local\/etc\/php\n# Wed, 16 Jun 2021 06:30:52 GMT\nRUN set -eux; \tmkdir -p \"$PHP_INI_DIR\/conf.d\"; \t[ ! 
-d \/var\/www\/html ]; \tmkdir -p \/var\/www\/html; \tchown www-data:www-data \/var\/www\/html; \tchmod 777 \/var\/www\/html\n# Wed, 16 Jun 2021 06:30:53 GMT\nENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n# Wed, 16 Jun 2021 06:30:53 GMT\nENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n# Wed, 16 Jun 2021 06:30:53 GMT\nENV PHP_LDFLAGS=-Wl,-O1 -pie\n# Wed, 16 Jun 2021 06:30:53 GMT\nENV GPG_KEYS=528995BFEDFBA7191D46839EF9BA0ADA31CBD89E 39B641343D8C104B2B146DC3F9C39DC0B9698544 F1F692238FBC1666E5A5CCD4199F9DFEF6FFBAFD\n# Thu, 05 Aug 2021 22:33:15 GMT\nENV PHP_VERSION=8.1.0beta2\n# Thu, 05 Aug 2021 22:33:15 GMT\nENV PHP_URL=https:\/\/downloads.php.net\/~ramsey\/php-8.1.0beta2.tar.xz PHP_ASC_URL=https:\/\/downloads.php.net\/~ramsey\/php-8.1.0beta2.tar.xz.asc\n# Thu, 05 Aug 2021 22:33:16 GMT\nENV PHP_SHA256=c0b8d45639f171fcbbaaf6b239a4759126130cf14628eaf74c4916408cb16ce0\n# Thu, 05 Aug 2021 22:33:20 GMT\nRUN set -eux; \t\tapk add --no-cache --virtual .fetch-deps gnupg; \t\tmkdir -p \/usr\/src; \tcd \/usr\/src; \t\tcurl -fsSL -o php.tar.xz \"$PHP_URL\"; \t\tif [ -n \"$PHP_SHA256\" ]; then \t\techo \"$PHP_SHA256 *php.tar.xz\" | sha256sum -c -; \tfi; \t\tif [ -n \"$PHP_ASC_URL\" ]; then \t\tcurl -fsSL -o php.tar.xz.asc \"$PHP_ASC_URL\"; \t\texport GNUPGHOME=\"$(mktemp -d)\"; \t\tfor key in $GPG_KEYS; do \t\t\tgpg --batch --keyserver keyserver.ubuntu.com --recv-keys \"$key\"; \t\tdone; \t\tgpg --batch --verify php.tar.xz.asc php.tar.xz; \t\tgpgconf --kill all; \t\trm -rf \"$GNUPGHOME\"; \tfi; \t\tapk del --no-network .fetch-deps\n# Thu, 05 Aug 2021 22:33:21 GMT\nCOPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in \/usr\/local\/bin\/ \n# Thu, 05 Aug 2021 22:40:23 GMT\nRUN set -eux; \tapk add --no-cache --virtual .build-deps \t\t$PHPIZE_DEPS \t\targon2-dev \t\tcoreutils \t\tcurl-dev \t\tlibedit-dev \t\tlibsodium-dev \t\tlibxml2-dev 
\t\tlinux-headers \t\toniguruma-dev \t\topenssl-dev \t\tsqlite-dev \t; \t\texport CFLAGS=\"$PHP_CFLAGS\" \t\tCPPFLAGS=\"$PHP_CPPFLAGS\" \t\tLDFLAGS=\"$PHP_LDFLAGS\" \t; \tdocker-php-source extract; \tcd \/usr\/src\/php; \tgnuArch=\"$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)\"; \t.\/configure \t\t--build=\"$gnuArch\" \t\t--with-config-file-path=\"$PHP_INI_DIR\" \t\t--with-config-file-scan-dir=\"$PHP_INI_DIR\/conf.d\" \t\t\t\t--enable-option-checking=fatal \t\t\t\t--with-mhash \t\t\t\t--with-pic \t\t\t\t--enable-ftp \t\t--enable-mbstring \t\t--enable-mysqlnd \t\t--with-password-argon2 \t\t--with-sodium=shared \t\t--with-pdo-sqlite=\/usr \t\t--with-sqlite3=\/usr \t\t\t\t--with-curl \t\t--with-libedit \t\t--with-openssl \t\t--with-zlib \t\t\t\t--with-pear \t\t\t\t$(test \"$gnuArch\" = 's390x-linux-musl' && echo '--without-pcre-jit') \t\t\t\t${PHP_EXTRA_CONFIGURE_ARGS:-} \t; \tmake -j \"$(nproc)\"; \tfind -type f -name '*.a' -delete; \tmake install; \tfind \/usr\/local\/bin \/usr\/local\/sbin -type f -perm +0111 -exec strip --strip-all '{}' + || true; \tmake clean; \t\tcp -v php.ini-* \"$PHP_INI_DIR\/\"; \t\tcd \/; \tdocker-php-source delete; \t\trunDeps=\"$( \t\tscanelf --needed --nobanner --format '%n#p' --recursive \/usr\/local \t\t\t| tr ',' '\\n' \t\t\t| sort -u \t\t\t| awk 'system(\"[ -e \/usr\/local\/lib\/\" $1 \" ]\") == 0 { next } { print \"so:\" $1 }' \t)\"; \tapk add --no-cache $runDeps; \t\tapk del --no-network .build-deps; \t\tpecl update-channels; \trm -rf \/tmp\/pear ~\/.pearrc; \t\tphp --version\n# Thu, 05 Aug 2021 22:40:24 GMT\nCOPY multi:efd917b98407edb5d558edb0edbd8e63c9318f701892aaa449794d019a092f37 in \/usr\/local\/bin\/ \n# Thu, 05 Aug 2021 22:40:25 GMT\nRUN docker-php-ext-enable sodium\n# Thu, 05 Aug 2021 22:40:25 GMT\nENTRYPOINT [\"docker-php-entrypoint\"]\n# Thu, 05 Aug 2021 22:40:25 GMT\nCMD [\"php\" \"-a\"]\n```\n\n-\tLayers:\n\t-\t`sha256:595b0fe564bb9444ebfe78288079a01ee6d7f666544028d5e96ba610f909ee43` \n\t\tLast Modified: Wed, 14 Apr 
2021 18:43:41 GMT \n\t\tSize: 2.7 MB (2712026 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:4fc18e501d273a5dcc26bc1081722d23265e91017730cc8d28538506985fbfae` \n\t\tLast Modified: Wed, 16 Jun 2021 08:22:48 GMT \n\t\tSize: 1.7 MB (1710127 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:85cae51f9170182c26896b8887cec72ef02fc7a6ee745b63f2d5205fd4d2d1a9` \n\t\tLast Modified: Wed, 16 Jun 2021 08:22:46 GMT \n\t\tSize: 1.3 KB (1262 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:53fa7f9524184404473a9920618e9aa956decfca08f5ccdd2a9b83aa9c564a03` \n\t\tLast Modified: Wed, 16 Jun 2021 08:22:47 GMT \n\t\tSize: 268.0 B \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:147e40ce9b43aa35fbbec9035af6e0dbbdfab444d9bab3a7a5ce825d9004178e` \n\t\tLast Modified: Thu, 05 Aug 2021 23:01:49 GMT \n\t\tSize: 11.6 MB (11619224 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:2f3bc779eab0aee8db705ace829c72ddf82e16f880e304012bc2abd11f507fec` \n\t\tLast Modified: Thu, 05 Aug 2021 23:01:47 GMT \n\t\tSize: 500.0 B \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:41c131e39795d35a7852621409d49dd40f1b3b39a3ce226bda10c74f0b824a75` \n\t\tLast Modified: Thu, 05 Aug 2021 23:01:50 GMT \n\t\tSize: 16.0 MB (16017619 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:a9e9cde666bb38ce0351b992203384057ec4ad97d780935a51eb7f2a702ddb16` \n\t\tLast Modified: Thu, 05 Aug 2021 23:01:48 GMT \n\t\tSize: 2.3 KB (2267 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:81cceb7690ac32e70055ffade8aa88e1b7e0e6783b6ec471a49a6414e3f9d673` \n\t\tLast Modified: Thu, 05 Aug 2021 23:01:48 GMT \n\t\tSize: 17.4 KB (17420 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\n### `php:8.1.0beta2-alpine3.13` - linux; 386\n\n```console\n$ docker pull 
php@sha256:c6c7f3ce20d19472b7e7f768d2ee250211b49ffa40346a889d30a7b5b9a40de5\n```\n\n-\tDocker Version: 20.10.7\n-\tManifest MIME: `application\/vnd.docker.distribution.manifest.v2+json`\n-\tTotal Size: **32.7 MB (32658309 bytes)** \n\t(compressed transfer size, not on-disk size)\n-\tImage ID: `sha256:8712197fc29f3e2ebb59f4aba70a6f9294a485a593d5159e0dc8fefae9ac968f`\n-\tEntrypoint: `[\"docker-php-entrypoint\"]`\n-\tDefault Command: `[\"php\",\"-a\"]`\n\n```dockerfile\n# Wed, 14 Apr 2021 18:38:29 GMT\nADD file:36634145ad6ec95a6b1a4f8d875f48719357c7a90f9b4906f8ce74f71b28c19d in \/ \n# Wed, 14 Apr 2021 18:38:29 GMT\nCMD [\"\/bin\/sh\"]\n# Thu, 15 Apr 2021 03:38:44 GMT\nENV PHPIZE_DEPS=autoconf \t\tdpkg-dev dpkg \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkgconf \t\tre2c\n# Thu, 15 Apr 2021 03:38:46 GMT\nRUN apk add --no-cache \t\tca-certificates \t\tcurl \t\ttar \t\txz \t\topenssl\n# Thu, 15 Apr 2021 03:38:47 GMT\nRUN set -eux; \taddgroup -g 82 -S www-data; \tadduser -u 82 -D -S -G www-data www-data\n# Thu, 15 Apr 2021 03:38:47 GMT\nENV PHP_INI_DIR=\/usr\/local\/etc\/php\n# Thu, 15 Apr 2021 03:38:48 GMT\nRUN set -eux; \tmkdir -p \"$PHP_INI_DIR\/conf.d\"; \t[ ! 
-d \/var\/www\/html ]; \tmkdir -p \/var\/www\/html; \tchown www-data:www-data \/var\/www\/html; \tchmod 777 \/var\/www\/html\n# Thu, 15 Apr 2021 03:38:48 GMT\nENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n# Thu, 15 Apr 2021 03:38:48 GMT\nENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n# Thu, 15 Apr 2021 03:38:48 GMT\nENV PHP_LDFLAGS=-Wl,-O1 -pie\n# Thu, 10 Jun 2021 23:01:16 GMT\nENV GPG_KEYS=528995BFEDFBA7191D46839EF9BA0ADA31CBD89E 39B641343D8C104B2B146DC3F9C39DC0B9698544 F1F692238FBC1666E5A5CCD4199F9DFEF6FFBAFD\n# Thu, 05 Aug 2021 23:01:26 GMT\nENV PHP_VERSION=8.1.0beta2\n# Thu, 05 Aug 2021 23:01:26 GMT\nENV PHP_URL=https:\/\/downloads.php.net\/~ramsey\/php-8.1.0beta2.tar.xz PHP_ASC_URL=https:\/\/downloads.php.net\/~ramsey\/php-8.1.0beta2.tar.xz.asc\n# Thu, 05 Aug 2021 23:01:26 GMT\nENV PHP_SHA256=c0b8d45639f171fcbbaaf6b239a4759126130cf14628eaf74c4916408cb16ce0\n# Thu, 05 Aug 2021 23:01:32 GMT\nRUN set -eux; \t\tapk add --no-cache --virtual .fetch-deps gnupg; \t\tmkdir -p \/usr\/src; \tcd \/usr\/src; \t\tcurl -fsSL -o php.tar.xz \"$PHP_URL\"; \t\tif [ -n \"$PHP_SHA256\" ]; then \t\techo \"$PHP_SHA256 *php.tar.xz\" | sha256sum -c -; \tfi; \t\tif [ -n \"$PHP_ASC_URL\" ]; then \t\tcurl -fsSL -o php.tar.xz.asc \"$PHP_ASC_URL\"; \t\texport GNUPGHOME=\"$(mktemp -d)\"; \t\tfor key in $GPG_KEYS; do \t\t\tgpg --batch --keyserver keyserver.ubuntu.com --recv-keys \"$key\"; \t\tdone; \t\tgpg --batch --verify php.tar.xz.asc php.tar.xz; \t\tgpgconf --kill all; \t\trm -rf \"$GNUPGHOME\"; \tfi; \t\tapk del --no-network .fetch-deps\n# Thu, 05 Aug 2021 23:01:33 GMT\nCOPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in \/usr\/local\/bin\/ \n# Thu, 05 Aug 2021 23:08:49 GMT\nRUN set -eux; \tapk add --no-cache --virtual .build-deps \t\t$PHPIZE_DEPS \t\targon2-dev \t\tcoreutils \t\tcurl-dev \t\tlibedit-dev \t\tlibsodium-dev \t\tlibxml2-dev 
\t\tlinux-headers \t\toniguruma-dev \t\topenssl-dev \t\tsqlite-dev \t; \t\texport CFLAGS=\"$PHP_CFLAGS\" \t\tCPPFLAGS=\"$PHP_CPPFLAGS\" \t\tLDFLAGS=\"$PHP_LDFLAGS\" \t; \tdocker-php-source extract; \tcd \/usr\/src\/php; \tgnuArch=\"$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)\"; \t.\/configure \t\t--build=\"$gnuArch\" \t\t--with-config-file-path=\"$PHP_INI_DIR\" \t\t--with-config-file-scan-dir=\"$PHP_INI_DIR\/conf.d\" \t\t\t\t--enable-option-checking=fatal \t\t\t\t--with-mhash \t\t\t\t--with-pic \t\t\t\t--enable-ftp \t\t--enable-mbstring \t\t--enable-mysqlnd \t\t--with-password-argon2 \t\t--with-sodium=shared \t\t--with-pdo-sqlite=\/usr \t\t--with-sqlite3=\/usr \t\t\t\t--with-curl \t\t--with-libedit \t\t--with-openssl \t\t--with-zlib \t\t\t\t--with-pear \t\t\t\t$(test \"$gnuArch\" = 's390x-linux-musl' && echo '--without-pcre-jit') \t\t\t\t${PHP_EXTRA_CONFIGURE_ARGS:-} \t; \tmake -j \"$(nproc)\"; \tfind -type f -name '*.a' -delete; \tmake install; \tfind \/usr\/local\/bin \/usr\/local\/sbin -type f -perm +0111 -exec strip --strip-all '{}' + || true; \tmake clean; \t\tcp -v php.ini-* \"$PHP_INI_DIR\/\"; \t\tcd \/; \tdocker-php-source delete; \t\trunDeps=\"$( \t\tscanelf --needed --nobanner --format '%n#p' --recursive \/usr\/local \t\t\t| tr ',' '\\n' \t\t\t| sort -u \t\t\t| awk 'system(\"[ -e \/usr\/local\/lib\/\" $1 \" ]\") == 0 { next } { print \"so:\" $1 }' \t)\"; \tapk add --no-cache $runDeps; \t\tapk del --no-network .build-deps; \t\tpecl update-channels; \trm -rf \/tmp\/pear ~\/.pearrc; \t\tphp --version\n# Thu, 05 Aug 2021 23:08:50 GMT\nCOPY multi:efd917b98407edb5d558edb0edbd8e63c9318f701892aaa449794d019a092f37 in \/usr\/local\/bin\/ \n# Thu, 05 Aug 2021 23:08:51 GMT\nRUN docker-php-ext-enable sodium\n# Thu, 05 Aug 2021 23:08:52 GMT\nENTRYPOINT [\"docker-php-entrypoint\"]\n# Thu, 05 Aug 2021 23:08:52 GMT\nCMD [\"php\" \"-a\"]\n```\n\n-\tLayers:\n\t-\t`sha256:31b7e7ccca9e17fd08b39c9a4ffd3ded380b62816c489d6c3758c9bb5a632430` \n\t\tLast Modified: Wed, 14 Apr 
2021 18:39:24 GMT \n\t\tSize: 2.8 MB (2818900 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:b3818e7e8eb8d9c0ab0e377e89ed7df3037ec2c0d4c4c89bd3a2abedc459366c` \n\t\tLast Modified: Thu, 15 Apr 2021 05:57:45 GMT \n\t\tSize: 1.8 MB (1799274 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:274cbc3732aca6ca32c0ba51f7504eb2d19d4fd52d0f89f9ff2597a673484be8` \n\t\tLast Modified: Thu, 15 Apr 2021 05:57:43 GMT \n\t\tSize: 1.3 KB (1260 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:32aa0b7923dc47bd54c1e8cc895686f0638c3a28a9a40f8ed831545df00ce891` \n\t\tLast Modified: Thu, 15 Apr 2021 05:57:46 GMT \n\t\tSize: 268.0 B \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:573588730a9ece268023bf20a689b0e124ebbe02b298d819e43bfe4046a48af7` \n\t\tLast Modified: Thu, 05 Aug 2021 23:32:00 GMT \n\t\tSize: 11.6 MB (11619186 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:8ec4fad9f5fd041cc4de9d33090a545d6773e1b16630693fc9ed9bf46bba02bf` \n\t\tLast Modified: Thu, 05 Aug 2021 23:31:58 GMT \n\t\tSize: 498.0 B \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:656d15b748884a20f5e5ed901ab3a0efa043d9cdc5e3949e56398b7f1acf18cf` \n\t\tLast Modified: Thu, 05 Aug 2021 23:32:03 GMT \n\t\tSize: 16.4 MB (16399061 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:ca075b4e48d1b4205a65087ed5bc49093f840d090aa35dcb8de4fc22e236a618` \n\t\tLast Modified: Thu, 05 Aug 2021 23:31:58 GMT \n\t\tSize: 2.3 KB (2264 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:844731e64ce5abe094a17e24dddc26fe67a9370257e1985beb3a7a0896e19b99` \n\t\tLast Modified: Thu, 05 Aug 2021 23:31:58 GMT \n\t\tSize: 17.6 KB (17598 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\n### `php:8.1.0beta2-alpine3.13` - linux; ppc64le\n\n```console\n$ docker pull 
php@sha256:9328e68c1f284ede3d4b95305fb001817595af0304a4d92e1d196ef269dfa4ff\n```\n\n-\tDocker Version: 20.10.7\n-\tManifest MIME: `application\/vnd.docker.distribution.manifest.v2+json`\n-\tTotal Size: **32.7 MB (32704990 bytes)** \n\t(compressed transfer size, not on-disk size)\n-\tImage ID: `sha256:a7562b165374269437755bc03372502a3b5a5b268e59feffa7bb3800de3561fa`\n-\tEntrypoint: `[\"docker-php-entrypoint\"]`\n-\tDefault Command: `[\"php\",\"-a\"]`\n\n```dockerfile\n# Fri, 30 Jul 2021 17:24:49 GMT\nADD file:52162c4413e3597dad4ccb790c379b67ef40d50c0d0659e8b6c65d833886b3af in \/ \n# Fri, 30 Jul 2021 17:24:54 GMT\nCMD [\"\/bin\/sh\"]\n# Sat, 31 Jul 2021 02:20:00 GMT\nENV PHPIZE_DEPS=autoconf \t\tdpkg-dev dpkg \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkgconf \t\tre2c\n# Sat, 31 Jul 2021 02:20:08 GMT\nRUN apk add --no-cache \t\tca-certificates \t\tcurl \t\ttar \t\txz \t\topenssl\n# Sat, 31 Jul 2021 02:20:21 GMT\nRUN set -eux; \taddgroup -g 82 -S www-data; \tadduser -u 82 -D -S -G www-data www-data\n# Sat, 31 Jul 2021 02:20:24 GMT\nENV PHP_INI_DIR=\/usr\/local\/etc\/php\n# Sat, 31 Jul 2021 02:20:31 GMT\nRUN set -eux; \tmkdir -p \"$PHP_INI_DIR\/conf.d\"; \t[ ! 
-d \/var\/www\/html ]; \tmkdir -p \/var\/www\/html; \tchown www-data:www-data \/var\/www\/html; \tchmod 777 \/var\/www\/html\n# Sat, 31 Jul 2021 02:20:33 GMT\nENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n# Sat, 31 Jul 2021 02:20:35 GMT\nENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n# Sat, 31 Jul 2021 02:20:37 GMT\nENV PHP_LDFLAGS=-Wl,-O1 -pie\n# Sat, 31 Jul 2021 02:20:39 GMT\nENV GPG_KEYS=528995BFEDFBA7191D46839EF9BA0ADA31CBD89E 39B641343D8C104B2B146DC3F9C39DC0B9698544 F1F692238FBC1666E5A5CCD4199F9DFEF6FFBAFD\n# Thu, 05 Aug 2021 22:21:01 GMT\nENV PHP_VERSION=8.1.0beta2\n# Thu, 05 Aug 2021 22:21:06 GMT\nENV PHP_URL=https:\/\/downloads.php.net\/~ramsey\/php-8.1.0beta2.tar.xz PHP_ASC_URL=https:\/\/downloads.php.net\/~ramsey\/php-8.1.0beta2.tar.xz.asc\n# Thu, 05 Aug 2021 22:21:11 GMT\nENV PHP_SHA256=c0b8d45639f171fcbbaaf6b239a4759126130cf14628eaf74c4916408cb16ce0\n# Thu, 05 Aug 2021 22:21:31 GMT\nRUN set -eux; \t\tapk add --no-cache --virtual .fetch-deps gnupg; \t\tmkdir -p \/usr\/src; \tcd \/usr\/src; \t\tcurl -fsSL -o php.tar.xz \"$PHP_URL\"; \t\tif [ -n \"$PHP_SHA256\" ]; then \t\techo \"$PHP_SHA256 *php.tar.xz\" | sha256sum -c -; \tfi; \t\tif [ -n \"$PHP_ASC_URL\" ]; then \t\tcurl -fsSL -o php.tar.xz.asc \"$PHP_ASC_URL\"; \t\texport GNUPGHOME=\"$(mktemp -d)\"; \t\tfor key in $GPG_KEYS; do \t\t\tgpg --batch --keyserver keyserver.ubuntu.com --recv-keys \"$key\"; \t\tdone; \t\tgpg --batch --verify php.tar.xz.asc php.tar.xz; \t\tgpgconf --kill all; \t\trm -rf \"$GNUPGHOME\"; \tfi; \t\tapk del --no-network .fetch-deps\n# Thu, 05 Aug 2021 22:21:33 GMT\nCOPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in \/usr\/local\/bin\/ \n# Thu, 05 Aug 2021 22:26:27 GMT\nRUN set -eux; \tapk add --no-cache --virtual .build-deps \t\t$PHPIZE_DEPS \t\targon2-dev \t\tcoreutils \t\tcurl-dev \t\tlibedit-dev \t\tlibsodium-dev \t\tlibxml2-dev 
\t\tlinux-headers \t\toniguruma-dev \t\topenssl-dev \t\tsqlite-dev \t; \t\texport CFLAGS=\"$PHP_CFLAGS\" \t\tCPPFLAGS=\"$PHP_CPPFLAGS\" \t\tLDFLAGS=\"$PHP_LDFLAGS\" \t; \tdocker-php-source extract; \tcd \/usr\/src\/php; \tgnuArch=\"$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)\"; \t.\/configure \t\t--build=\"$gnuArch\" \t\t--with-config-file-path=\"$PHP_INI_DIR\" \t\t--with-config-file-scan-dir=\"$PHP_INI_DIR\/conf.d\" \t\t\t\t--enable-option-checking=fatal \t\t\t\t--with-mhash \t\t\t\t--with-pic \t\t\t\t--enable-ftp \t\t--enable-mbstring \t\t--enable-mysqlnd \t\t--with-password-argon2 \t\t--with-sodium=shared \t\t--with-pdo-sqlite=\/usr \t\t--with-sqlite3=\/usr \t\t\t\t--with-curl \t\t--with-libedit \t\t--with-openssl \t\t--with-zlib \t\t\t\t--with-pear \t\t\t\t$(test \"$gnuArch\" = 's390x-linux-musl' && echo '--without-pcre-jit') \t\t\t\t${PHP_EXTRA_CONFIGURE_ARGS:-} \t; \tmake -j \"$(nproc)\"; \tfind -type f -name '*.a' -delete; \tmake install; \tfind \/usr\/local\/bin \/usr\/local\/sbin -type f -perm +0111 -exec strip --strip-all '{}' + || true; \tmake clean; \t\tcp -v php.ini-* \"$PHP_INI_DIR\/\"; \t\tcd \/; \tdocker-php-source delete; \t\trunDeps=\"$( \t\tscanelf --needed --nobanner --format '%n#p' --recursive \/usr\/local \t\t\t| tr ',' '\\n' \t\t\t| sort -u \t\t\t| awk 'system(\"[ -e \/usr\/local\/lib\/\" $1 \" ]\") == 0 { next } { print \"so:\" $1 }' \t)\"; \tapk add --no-cache $runDeps; \t\tapk del --no-network .build-deps; \t\tpecl update-channels; \trm -rf \/tmp\/pear ~\/.pearrc; \t\tphp --version\n# Thu, 05 Aug 2021 22:26:29 GMT\nCOPY multi:efd917b98407edb5d558edb0edbd8e63c9318f701892aaa449794d019a092f37 in \/usr\/local\/bin\/ \n# Thu, 05 Aug 2021 22:26:41 GMT\nRUN docker-php-ext-enable sodium\n# Thu, 05 Aug 2021 22:26:42 GMT\nENTRYPOINT [\"docker-php-entrypoint\"]\n# Thu, 05 Aug 2021 22:26:45 GMT\nCMD [\"php\" \"-a\"]\n```\n\n-\tLayers:\n\t-\t`sha256:771d2590aa602a0d4a922e322f02b22cc9d193f8cd159d9d1a140cadf1f8b4d4` \n\t\tLast Modified: Wed, 14 Apr 
2021 19:32:33 GMT \n\t\tSize: 2.8 MB (2813141 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:77a2890170d37448eaefd3de6e15c0fc1b6e2b6600fbbfbe3dea15caf1087650` \n\t\tLast Modified: Sat, 31 Jul 2021 04:33:57 GMT \n\t\tSize: 1.8 MB (1753292 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:14b573fd29f39051468d7f22963bcfabdf83e9880c2c4f44fa9db0eaaddb71d0` \n\t\tLast Modified: Sat, 31 Jul 2021 04:33:57 GMT \n\t\tSize: 1.3 KB (1264 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:f3130c0cff8be9725a62dc79e7127238cd0c2bde33f734fac3b97827e203c951` \n\t\tLast Modified: Sat, 31 Jul 2021 04:33:56 GMT \n\t\tSize: 268.0 B \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:a917c1ff386dca3f9bcd86be5023f29e7e9c0db4dcbe934f4ced1f0d4c9da16d` \n\t\tLast Modified: Thu, 05 Aug 2021 22:45:45 GMT \n\t\tSize: 11.6 MB (11619211 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:a5803f230580e8b6ebfe454116f8dc2bda0c1db91c6656d6a5d253a18238bedb` \n\t\tLast Modified: Thu, 05 Aug 2021 22:45:44 GMT \n\t\tSize: 498.0 B \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:b68d451bdf050c9d5d2667bab0dc182cb4a7181acccecdfb31e62df4ba8b8a33` \n\t\tLast Modified: Thu, 05 Aug 2021 22:45:48 GMT \n\t\tSize: 16.5 MB (16497651 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:d2d5b769e0b5067685224d12d80b6681ff476ea4acff84dbc85332c7b340b6e3` \n\t\tLast Modified: Thu, 05 Aug 2021 22:45:44 GMT \n\t\tSize: 2.3 KB (2267 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:216ee65cb445efb76980885276eb40311cd9c0f2ce64adc1db17429439b5c9e5` \n\t\tLast Modified: Thu, 05 Aug 2021 22:45:44 GMT \n\t\tSize: 17.4 KB (17398 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\n### `php:8.1.0beta2-alpine3.13` - linux; s390x\n\n```console\n$ docker pull 
php@sha256:6acf2534aeffd0cafad0aaec15fca83b26d3d5a805214c950e6a22add0e22160\n```\n\n-\tDocker Version: 20.10.7\n-\tManifest MIME: `application\/vnd.docker.distribution.manifest.v2+json`\n-\tTotal Size: **30.7 MB (30732332 bytes)** \n\t(compressed transfer size, not on-disk size)\n-\tImage ID: `sha256:3a814fce474d786ada0ad3f33796554a8f6b33b3ec729b257ff93deb7174f430`\n-\tEntrypoint: `[\"docker-php-entrypoint\"]`\n-\tDefault Command: `[\"php\",\"-a\"]`\n\n```dockerfile\n# Fri, 30 Jul 2021 17:41:41 GMT\nADD file:c715fef757fe2b022ae1bbff71dbc58bddf5a858deb0aac5a6fbcf10d5f3111c in \/ \n# Fri, 30 Jul 2021 17:41:42 GMT\nCMD [\"\/bin\/sh\"]\n# Sat, 31 Jul 2021 00:12:48 GMT\nENV PHPIZE_DEPS=autoconf \t\tdpkg-dev dpkg \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkgconf \t\tre2c\n# Sat, 31 Jul 2021 00:12:49 GMT\nRUN apk add --no-cache \t\tca-certificates \t\tcurl \t\ttar \t\txz \t\topenssl\n# Sat, 31 Jul 2021 00:12:50 GMT\nRUN set -eux; \taddgroup -g 82 -S www-data; \tadduser -u 82 -D -S -G www-data www-data\n# Sat, 31 Jul 2021 00:12:50 GMT\nENV PHP_INI_DIR=\/usr\/local\/etc\/php\n# Sat, 31 Jul 2021 00:12:51 GMT\nRUN set -eux; \tmkdir -p \"$PHP_INI_DIR\/conf.d\"; \t[ ! 
-d \/var\/www\/html ]; \tmkdir -p \/var\/www\/html; \tchown www-data:www-data \/var\/www\/html; \tchmod 777 \/var\/www\/html\n# Sat, 31 Jul 2021 00:12:51 GMT\nENV PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n# Sat, 31 Jul 2021 00:12:51 GMT\nENV PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64\n# Sat, 31 Jul 2021 00:12:52 GMT\nENV PHP_LDFLAGS=-Wl,-O1 -pie\n# Sat, 31 Jul 2021 00:12:52 GMT\nENV GPG_KEYS=528995BFEDFBA7191D46839EF9BA0ADA31CBD89E 39B641343D8C104B2B146DC3F9C39DC0B9698544 F1F692238FBC1666E5A5CCD4199F9DFEF6FFBAFD\n# Thu, 05 Aug 2021 22:23:12 GMT\nENV PHP_VERSION=8.1.0beta2\n# Thu, 05 Aug 2021 22:23:13 GMT\nENV PHP_URL=https:\/\/downloads.php.net\/~ramsey\/php-8.1.0beta2.tar.xz PHP_ASC_URL=https:\/\/downloads.php.net\/~ramsey\/php-8.1.0beta2.tar.xz.asc\n# Thu, 05 Aug 2021 22:23:14 GMT\nENV PHP_SHA256=c0b8d45639f171fcbbaaf6b239a4759126130cf14628eaf74c4916408cb16ce0\n# Thu, 05 Aug 2021 22:23:19 GMT\nRUN set -eux; \t\tapk add --no-cache --virtual .fetch-deps gnupg; \t\tmkdir -p \/usr\/src; \tcd \/usr\/src; \t\tcurl -fsSL -o php.tar.xz \"$PHP_URL\"; \t\tif [ -n \"$PHP_SHA256\" ]; then \t\techo \"$PHP_SHA256 *php.tar.xz\" | sha256sum -c -; \tfi; \t\tif [ -n \"$PHP_ASC_URL\" ]; then \t\tcurl -fsSL -o php.tar.xz.asc \"$PHP_ASC_URL\"; \t\texport GNUPGHOME=\"$(mktemp -d)\"; \t\tfor key in $GPG_KEYS; do \t\t\tgpg --batch --keyserver keyserver.ubuntu.com --recv-keys \"$key\"; \t\tdone; \t\tgpg --batch --verify php.tar.xz.asc php.tar.xz; \t\tgpgconf --kill all; \t\trm -rf \"$GNUPGHOME\"; \tfi; \t\tapk del --no-network .fetch-deps\n# Thu, 05 Aug 2021 22:23:20 GMT\nCOPY file:ce57c04b70896f77cc11eb2766417d8a1240fcffe5bba92179ec78c458844110 in \/usr\/local\/bin\/ \n# Thu, 05 Aug 2021 22:28:27 GMT\nRUN set -eux; \tapk add --no-cache --virtual .build-deps \t\t$PHPIZE_DEPS \t\targon2-dev \t\tcoreutils \t\tcurl-dev \t\tlibedit-dev \t\tlibsodium-dev \t\tlibxml2-dev 
\t\tlinux-headers \t\toniguruma-dev \t\topenssl-dev \t\tsqlite-dev \t; \t\texport CFLAGS=\"$PHP_CFLAGS\" \t\tCPPFLAGS=\"$PHP_CPPFLAGS\" \t\tLDFLAGS=\"$PHP_LDFLAGS\" \t; \tdocker-php-source extract; \tcd \/usr\/src\/php; \tgnuArch=\"$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)\"; \t.\/configure \t\t--build=\"$gnuArch\" \t\t--with-config-file-path=\"$PHP_INI_DIR\" \t\t--with-config-file-scan-dir=\"$PHP_INI_DIR\/conf.d\" \t\t\t\t--enable-option-checking=fatal \t\t\t\t--with-mhash \t\t\t\t--with-pic \t\t\t\t--enable-ftp \t\t--enable-mbstring \t\t--enable-mysqlnd \t\t--with-password-argon2 \t\t--with-sodium=shared \t\t--with-pdo-sqlite=\/usr \t\t--with-sqlite3=\/usr \t\t\t\t--with-curl \t\t--with-libedit \t\t--with-openssl \t\t--with-zlib \t\t\t\t--with-pear \t\t\t\t$(test \"$gnuArch\" = 's390x-linux-musl' && echo '--without-pcre-jit') \t\t\t\t${PHP_EXTRA_CONFIGURE_ARGS:-} \t; \tmake -j \"$(nproc)\"; \tfind -type f -name '*.a' -delete; \tmake install; \tfind \/usr\/local\/bin \/usr\/local\/sbin -type f -perm +0111 -exec strip --strip-all '{}' + || true; \tmake clean; \t\tcp -v php.ini-* \"$PHP_INI_DIR\/\"; \t\tcd \/; \tdocker-php-source delete; \t\trunDeps=\"$( \t\tscanelf --needed --nobanner --format '%n#p' --recursive \/usr\/local \t\t\t| tr ',' '\\n' \t\t\t| sort -u \t\t\t| awk 'system(\"[ -e \/usr\/local\/lib\/\" $1 \" ]\") == 0 { next } { print \"so:\" $1 }' \t)\"; \tapk add --no-cache $runDeps; \t\tapk del --no-network .build-deps; \t\tpecl update-channels; \trm -rf \/tmp\/pear ~\/.pearrc; \t\tphp --version\n# Thu, 05 Aug 2021 22:28:30 GMT\nCOPY multi:efd917b98407edb5d558edb0edbd8e63c9318f701892aaa449794d019a092f37 in \/usr\/local\/bin\/ \n# Thu, 05 Aug 2021 22:28:33 GMT\nRUN docker-php-ext-enable sodium\n# Thu, 05 Aug 2021 22:28:33 GMT\nENTRYPOINT [\"docker-php-entrypoint\"]\n# Thu, 05 Aug 2021 22:28:34 GMT\nCMD [\"php\" \"-a\"]\n```\n\n-\tLayers:\n\t-\t`sha256:afadee6ad6a38d3172beeeca818219604c782efbe93201ef4d39512f289b05ae` \n\t\tLast Modified: Wed, 14 Apr 
2021 18:42:16 GMT \n\t\tSize: 2.6 MB (2602650 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:848262f2210b3624468cbcaac4a4ce187700740d41f537609fd08bffb68b24a9` \n\t\tLast Modified: Sat, 31 Jul 2021 01:15:42 GMT \n\t\tSize: 1.8 MB (1767520 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:4e5bbc32c9810d15969147a9201e6b1b6b38420decf3ea6b03f42c6f5782589e` \n\t\tLast Modified: Sat, 31 Jul 2021 01:15:42 GMT \n\t\tSize: 1.3 KB (1261 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:8e978624a1dc1bc4168e2327cf9eac7cdd34cf4e1761659a19abd1b991957849` \n\t\tLast Modified: Sat, 31 Jul 2021 01:15:42 GMT \n\t\tSize: 267.0 B \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:48e51b93957b8e836cb714553e3650bd17dd0604b8f628263f94716f8b380b5d` \n\t\tLast Modified: Thu, 05 Aug 2021 23:05:35 GMT \n\t\tSize: 11.6 MB (11619209 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:448040f16a97ec9cf33658d1501b4e213c70b2a637fb981778d80177646e26ab` \n\t\tLast Modified: Thu, 05 Aug 2021 23:05:34 GMT \n\t\tSize: 496.0 B \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:a2102aa5dcefabb254a647aca8cd9be6d21159f0eb91fef66a40ed43ca285f0c` \n\t\tLast Modified: Thu, 05 Aug 2021 23:05:36 GMT \n\t\tSize: 14.7 MB (14721276 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:ed487a141440b8baa46300a44cb58ed39e4fd1f04291621a16177471164dc347` \n\t\tLast Modified: Thu, 05 Aug 2021 23:05:34 GMT \n\t\tSize: 2.3 KB (2262 bytes) \n\t\tMIME: application\/vnd.docker.image.rootfs.diff.tar.gzip\n\t-\t`sha256:a94b4f016ea8da3a1fd794c05430980d71bf5acb87a12a0e7b978c4bcd451197` \n\t\tLast Modified: Thu, 05 Aug 2021 23:05:34 GMT \n\t\tSize: 17.4 KB (17391 bytes) \n\t\tMIME: 
application\/vnd.docker.image.rootfs.diff.tar.gzip\n","avg_line_length":67.8863309353,"max_line_length":1525,"alphanum_fraction":0.718912274} +{"size":176,"ext":"md","lang":"Markdown","max_stars_count":24.0,"content":"---\nmoneySpent: 0\nexercise: 30 minutes\n---\n## day no 8\ntoday is a good day!\n \n\ni did some writing [writing:: true]\n\nlorem ipsum lorem ipsum lorem ipsum lorem ipsum lorem ipsum\n","avg_line_length":14.6666666667,"max_line_length":59,"alphanum_fraction":0.7102272727} +{"size":3850,"ext":"md","lang":"Markdown","max_stars_count":1.0,"content":"---\ntitle: AppId\ndescription: AppId\nMSHAttr:\n- 'PreferredSiteName:MSDN'\n- 'PreferredLib:\/library\/windows\/hardware'\nms.assetid: a1e11625-e37f-484f-9613-efe325c09dd3\nms.prod: W10\nms.mktglfcycl: deploy\nms.sitesec: msdn\nms.author: alhopper\nms.date: 05\/02\/2017\nms.topic: article\nms.prod: windows-hardware\nms.technology: windows-oem\n---\n\n# AppId\n\n\n`AppId` specifies the `AppID` of the Windows Store apps that appear as square tiles on the **Start** screen. 
The `AppId` must be the `AppUserModelID` found in the application's AUMIDs.txt file, which is located in the app package downloaded from the OEM channel partner portal of the Windows Store.\n\n## Values\n\n\nThe `AppId` is a string, with a maximum value of 256 characters.\n\n## Valid Configuration Passes\n\n\nspecialize\n\nauditUser\n\noobeSystem\n\n## Parent Hierarchy\n\n\n[Microsoft-Windows-Shell-Setup](microsoft-windows-shell-setup.md) | [StartTiles](microsoft-windows-shell-setup-starttiles.md) | [SquareTiles](microsoft-windows-shell-setup-starttiles-squaretiles.md) | [SquareTile3](microsoft-windows-shell-setup-starttiles-squaretiles-squaretile3.md) | **AppId**\n\n## Applies To\n\n\nFor a list of the Windows editions and architectures that this component supports, see [Microsoft-Windows-Shell-Setup](microsoft-windows-shell-setup.md).\n\n## XML Example\n\n\nThe following XML output shows how to use the `` component.\n\n``` syntax\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 C:\\programdata\\microsoft\\windows\\start menu\\programs\\desktoptile1.lnk<\/AppIdOrPath>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 backgroundtask.js<\/FirstRunTask>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 <\/SquareOrDesktopTile1>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 67890ChannelFabrikam.channel-JKL_mnop1234789!App<\/AppIdOrPath>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Fabrikam.FirstRunTask<\/FirstRunTask>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 <\/SquareOrDesktopTile2>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 C:\\programdata\\microsoft\\windows\\start 
menu\\programs\\desktoptile3.lnk<\/AppIdOrPath>\n\u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0<\/SquareOrDesktopTile3>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 12345ChannelFabrikam.channel-ABC_defghij6789!App<\/AppId>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 backgroundtask.js<\/FirstRunTask>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 <\/SquareTile1>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \u00a034567ChannelFabrikam.channel-DEF_012ghijk345!App<\/AppId>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Fabrikam.FirstRunTask<\/FirstRunTask>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 <\/SquareTile2>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 56789ChannelFabrikam.channel-GHI_67890jklmno!App<\/AppId>\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 <\/SquareTile3>\n\u00a0\u00a0\u00a0\u00a0 <\/SquareTiles> \n```\n\n## Related topics\n\n\n[StartTiles](microsoft-windows-shell-setup-starttiles.md)\n\n[RegionalOverrides](microsoft-windows-shell-setup-starttiles-regionaloverrides.md)\n\n[SquareTiles](microsoft-windows-shell-setup-starttiles-regionaloverrides-regionaloverride-squaretiles.md)\n\n[SquareTiles](microsoft-windows-shell-setup-starttiles-squaretiles.md)\n\n\u00a0\n\n\u00a0\n\n[Send comments about this topic to 
Microsoft](mailto:wsddocfb@microsoft.com?subject=Documentation%20feedback%20%5Bp_unattend\\p_unattend%5D:%20AppId%20%20RELEASE:%20%2810\/3\/2016%29&body=%0A%0APRIVACY%20STATEMENT%0A%0AWe%20use%20your%20feedback%20to%20improve%20the%20documentation.%20We%20don't%20use%20your%20email%20address%20for%20any%20other%20purpose,%20and%20we'll%20remove%20your%20email%20address%20from%20our%20system%20after%20the%20issue%20that%20you're%20reporting%20is%20fixed.%20While%20we're%20working%20to%20fix%20this%20issue,%20we%20might%20send%20you%20an%20email%20message%20to%20ask%20for%20more%20info.%20Later,%20we%20might%20also%20send%20you%20an%20email%20message%20to%20let%20you%20know%20that%20we've%20addressed%20your%20feedback.%0A%0AFor%20more%20info%20about%20Microsoft's%20privacy%20policy,%20see%20http:\/\/privacy.microsoft.com\/default.aspx. \"Send comments about this topic to Microsoft\")\n\n\n\n\n\n","avg_line_length":38.5,"max_line_length":921,"alphanum_fraction":0.7516883117} +{"size":4357,"ext":"md","lang":"Markdown","max_stars_count":23.0,"content":"---\nlayout: weekly-metrics-v0.1\ntitle: Metrics report for twitter\/scoot | WEEKLY-REPORT-2018-10-05\npermalink: \/twitter\/scoot\/WEEKLY-REPORT-2018-10-05\/\n\nowner: twitter\nrepo: scoot\nreportID: WEEKLY-REPORT-2018-10-05\ndatestampThisWeek: 2018-10-05\ndatestampLastWeek: 2018-09-28\n---\n\n\n\n\n \n \n \n\n \n \n \n \n \n \n \n \n \n \n \n
Metric<\/th>\n Latest<\/th>\n Previous<\/th>\n Difference<\/th>\n <\/tr>\n <\/thead>\n
Commits<\/td>\n 404<\/td>\n 403<\/td>\n 1<\/td>\n 0.25%<\/td>\n <\/tr>\n \n
Issues<\/td>\n 6<\/td>\n 6<\/td>\n 0<\/td>\n 0.0%<\/td>\n <\/tr>\n \n
Open Issues<\/td>\n 0<\/td>\n 0<\/td>\n 0<\/td>\n 0.0%<\/td>\n <\/tr>\n \n
Closed Issues<\/td>\n 6<\/td>\n 6<\/td>\n 0<\/td>\n 0.0%<\/td>\n <\/tr>\n \n
Pull Requests<\/td>\n 391<\/td>\n 391<\/td>\n 0<\/td>\n 0.0%<\/td>\n <\/tr>\n \n
Open Pull Requests<\/td>\n 0<\/td>\n 1<\/td>\n -1<\/td>\n -100.0%<\/td>\n <\/tr>\n \n
Merged Pull Requests<\/td>\n 358<\/td>\n 357<\/td>\n 1<\/td>\n 0.28%<\/td>\n <\/tr>\n \n
Closed Pull Requests<\/td>\n 33<\/td>\n 33<\/td>\n 0<\/td>\n 0.0%<\/td>\n <\/tr>\n \n
Forks<\/td>\n 11<\/td>\n 11<\/td>\n 0<\/td>\n 0.0%<\/td>\n <\/tr>\n \n
Stars<\/td>\n 32<\/td>\n 32<\/td>\n 0<\/td>\n 0.0%<\/td>\n <\/tr>\n \n
Watchers<\/td>\n 15<\/td>\n 15<\/td>\n 0<\/td>\n 0.0%<\/td>\n <\/tr>\n \n <\/tbody>\n<\/table>\n
\n

CHAOSS<\/a> Metrics<\/h4>\n\n\n \n
Bus Factor<\/td>\n Best: 3<\/td>\n Worst: 1<\/td>\n <\/tbody>\n<\/table>\n","avg_line_length":34.5793650794,"max_line_length":123,"alphanum_fraction":0.5120495754} +{"size":1083,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"The MIT License (MIT)\n\nCopyright (c) 2019 Karl Schellenberg\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and\/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.","avg_line_length":120.3333333333,"max_line_length":460,"alphanum_fraction":0.8051708218} +{"size":12769,"ext":"md","lang":"Markdown","max_stars_count":1.0,"content":"---\n# required metadata\n\ntitle: Create custom localized error messages for payment terminal extensions\ndescription: This topic explains how to create custom error messages for payment terminal extensions.\nauthor: Reza-Assadi\nmanager: AnnBe\nms.date: 07\/20\/2018\nms.topic: article\nms.prod: \nms.service: dynamics-365-retail\nms.technology: \n\n# optional metadata\n\n# ms.search.form: \n# ROBOTS: \naudience: Developer\n# ms.devlang: \nms.reviewer: rhaertle\nms.search.scope: Operations, Retail\n# ms.tgt_pltfrm: \nms.custom: \nms.search.region: Global\nms.search.industry: Retail\nms.author: rassadi\nms.search.validFrom: 2018-07-20\nms.dyn365.ops.version: AX 7.0.0, Retail July 2017 update\n\n---\n\n# Create custom localized error messages for payment terminal extensions\n\n[!include[banner](..\/includes\/banner.md)]\n\nThis topic explains how to create custom localized error messages for payment terminal extensions. These custom error messages are most often used so that the payment terminal can give the cashier who is using the point of sale (POS) terminal relevant information about why a specific payment was unsuccessful. For example, the external payment terminal or gateway might return unique identifiers (such as reference numbers or transaction identifiers) that are relevant for troubleshooting with the payment provider.\n\n## Key terms\n\n| Term | Description |\n|---|---|\n| Payment connector | An extension library that is written to integrate the POS with a payment terminal. 
|\n\n## Overview\nThe remaining sections in this topic describe the following steps for creating custom localized error messages for payment terminal extensions:\n\n- **[Create custom error messages](#create-custom-error-messages)** \u2013 This section explains how to create custom error messages in the payment connector that can be returned and shown in the POS.\n- **[Create localized error messages](#create-localized-error-messages)** \u2013 This section explains how to localize the error messages in the payment connector that are returned and shown in the POS.\n\n## Create custom error messages\nTo trigger a custom error message in the POS, you must set the appropriate error in the **Errors** property of the **paymentInfo** object that is passed to the **AuthorizePaymentTerminalDeviceResponse** object. Specifically, you must set the **isLocalized** parameter on the constructor of the **PaymentError** object to **true** to force the POS to use the custom error message instead of the built-in error message for a declined payment.\n\n``` csharp\nnamespace Contoso.Commerce.HardwareStation.PaymentSample \n{ \n \/\/\/ <summary>\n \/\/\/ <c>Simulator<\/c> manager payment device class.\n \/\/\/ <\/summary>\n public class PaymentDeviceSample : INamedRequestHandler\n {\n \/\/\/ <summary>\n \/\/\/ Gets the collection of supported request types by this handler.\n \/\/\/ <\/summary>\n public IEnumerable<Type> SupportedRequestTypes\n {\n get\n {\n return new[]\n {\n typeof(AuthorizePaymentTerminalDeviceRequest),\n ...\n };\n }\n }\n\n \/\/\/ <summary>\n \/\/\/ Executes the payment device simulator operation based on the incoming request type.\n \/\/\/ <\/summary>\n \/\/\/ <param name=\"request\">The payment terminal device simulator request message.<\/param>\n \/\/\/ <returns>Returns the payment terminal device simulator response.<\/returns>\n public Response Execute(Microsoft.Dynamics.Commerce.Runtime.Messages.Request request)\n {\n ThrowIf.Null(request, nameof(request));\n Type requestType = request.GetType();\n Response response;\n if 
(requestType == typeof(AuthorizePaymentTerminalDeviceRequest))\n {\n response = this.AuthorizePayment((AuthorizePaymentTerminalDeviceRequest)request);\n }\n else if (...)\n {\n ...\n }\n else\n {\n throw new NotSupportedException(string.Format(CultureInfo.InvariantCulture, \"Request '{0}' is not supported.\", request));\n }\n return response;\n }\n\n \/\/\/ <summary>\n \/\/\/ Authorize payment.\n \/\/\/ <\/summary>\n \/\/\/ <param name=\"request\">The authorize payment request.<\/param>\n \/\/\/ <returns>The authorize payment response.<\/returns>\n public AuthorizePaymentTerminalDeviceResponse AuthorizePayment(AuthorizePaymentTerminalDeviceRequest request)\n {\n ThrowIf.Null(request, \"request\");\n ...\n\n \/\/ Assuming the external payment terminal\/gateway returned a decline and a reference number.\n \/\/ Construct the custom error message and set the payment error on the 'paymentInfo' object set\n \/\/ on the response.\n PaymentInfo paymentInfo = new PaymentInfo();\n bool isLocalized = true;\n string errorMessage = string.Format(\"The payment was declined. Reference number '{0}'.\", referenceNumber);\n PaymentError paymentError = new PaymentError(ErrorCode.Decline, errorMessage, isLocalized);\n paymentInfo.Errors = new PaymentError[] { paymentError };\n return new AuthorizePaymentTerminalDeviceResponse(paymentInfo);\n }\n }\n}\n```\n\nThe following illustration shows how the custom error message appears in the POS.\n\n![Custom payment error message in the POS](media\/PAYMENTS\/CUSTOM-ERRORS\/POS-Custom-Payment-Error.jpg)\n\n## Create localized error messages\n\n### Create resource files for each locale\nTo return localized error messages from the payment connector to the POS, you must create localized resource files for each locale that you plan to support. To create a resource file, follow these steps.\n\n1. In Microsoft Visual Studio, right-click the connector project (or a subfolder, as required), and then select **Add \\> New Item**.\n2. 
In the new **Add New Item** dialog box, select **Visual C# Items** in the left pane and **Resource File** in the center pane.\n\n ![Create new resource file in Visual Studio](media\/PAYMENTS\/CUSTOM-ERRORS\/VisualStudio-New-Resource-File.jpg)\n\nNote that a culture-specific postfix (for example, **en-us**) is required in the file name of every resource file that you create, so that localized satellite assemblies can be generated.\n\nWhen you've finished, the following resource files should be present in your project. Although the following illustration shows only one extra locale (**en-us**), you can add support for as many locales as you require.\n\nMake sure that a culture-neutral resources file (**Messages.resx** in this example) is defined. This file is used as a fallback if the file for a specific culture is missing.\n\n![Resource files in Visual Studio](media\/PAYMENTS\/CUSTOM-ERRORS\/VisualStudio-Layout-Resource-File.jpg)\n\nYou must also make sure that the correct properties are set for the resource files in Visual Studio, as shown in the following illustration.\n\n![Properties of a new resource file in Visual Studio](media\/PAYMENTS\/CUSTOM-ERRORS\/VisualStudio-Properties-Resource-File.jpg)\n\n### Create custom localized error messages\nEvery resource file must contain every error message that you want to customize and localize. The following illustration shows an example of a resource file. Notice that the **CustomPaymentConnector_Decline** entry is referenced in the code to retrieve the appropriate message for a specific locale. Every resource file for every locale should have an identical set of localized messages.\n\n![Content of a resource file in Visual Studio](media\/PAYMENTS\/CUSTOM-ERRORS\/VisualStudio-Content-Resource-File.jpg)\n\n### Load the localized message in the connector code\nThe following example shows how you can use the resource files that you created earlier in your payment connector code to load a localized message. 
The process consists of two steps:\n\n1. Make sure that **terminalSettings** is retrieved during the **OpenPaymentTerminalDeviceRequest** request, to access the locale for the request.\n2. During the **AuthorizePaymentTerminalDeviceRequest** call (or equivalent calls), use the **Locale** property on **terminalSettings** to retrieve the correct resource file for the localized message.\n\n> [!NOTE]\n> The following example has been significantly simplified to show the mechanics of loading localized messages during the runtime of your payment connector code. However, we recommend that you introduce a new set of classes to manage loading of the appropriate resource file.\n\n``` csharp\nnamespace Contoso.Commerce.HardwareStation.PaymentSample \n{ \n \/\/\/ <summary>\n \/\/\/ <c>Simulator<\/c> manager payment device class.\n \/\/\/ <\/summary>\n public class PaymentDeviceSample : INamedRequestHandler\n {\n \/\/ Cached version of the terminal settings retrieved during the OpenPaymentTerminalDeviceRequest call.\n private SettingsInfo terminalSettings;\n\n \/\/ Resource manager to retrieve localized messages.\n private ResourceManager messagesResourceManager;\n\n \/\/\/ <summary>\n \/\/\/ Initializes a new instance of the <see cref=\"PaymentDeviceSample\"\/> class.\n \/\/\/ <\/summary>\n public PaymentDeviceSample()\n {\n this.messagesResourceManager = new ResourceManager(\"Contoso.Commerce.HardwareStation.PaymentSample.PaymentDeviceSample.Resources.Messages\", typeof(PaymentDeviceSample).GetTypeInfo().Assembly);\n }\n\n \/\/\/ <summary>\n \/\/\/ Gets the collection of supported request types by this handler.\n \/\/\/ <\/summary>\n public IEnumerable<Type> SupportedRequestTypes\n {\n get\n {\n return new[]\n {\n typeof(OpenPaymentTerminalDeviceRequest),\n typeof(AuthorizePaymentTerminalDeviceRequest),\n ...\n };\n }\n }\n\n \/\/\/ <summary>\n \/\/\/ Executes the payment device simulator operation based on the incoming request type.\n \/\/\/ <\/summary>\n \/\/\/ <param name=\"request\">The payment terminal device simulator request message.<\/param>\n \/\/\/ <returns>Returns the payment 
terminal device simulator response.<\/returns>\n public Response Execute(Microsoft.Dynamics.Commerce.Runtime.Messages.Request request)\n {\n ThrowIf.Null(request, nameof(request));\n Type requestType = request.GetType();\n Response response;\n if (requestType == typeof(OpenPaymentTerminalDeviceRequest))\n {\n response = this.Open((OpenPaymentTerminalDeviceRequest)request);\n }\n else if (requestType == typeof(AuthorizePaymentTerminalDeviceRequest))\n {\n response = this.AuthorizePayment((AuthorizePaymentTerminalDeviceRequest)request);\n }\n else if (...)\n {\n ...\n }\n else\n {\n throw new NotSupportedException(string.Format(CultureInfo.InvariantCulture, \"Request '{0}' is not supported.\", request));\n }\n return response;\n }\n\n \/\/\/ <summary>\n \/\/\/ Open the payment terminal.\n \/\/\/ <\/summary>\n \/\/\/ <param name=\"request\">The open request.<\/param>\n \/\/\/ <returns>The open response.<\/returns>\n private Response Open(OpenPaymentTerminalDeviceRequest request)\n {\n this.terminalSettings = request.TerminalSettings;\n ...\n }\n\n \/\/\/ <summary>\n \/\/\/ Authorize payment.\n \/\/\/ <\/summary>\n \/\/\/ <param name=\"request\">The authorize payment request.<\/param>\n \/\/\/ <returns>The authorize payment response.<\/returns>\n public AuthorizePaymentTerminalDeviceResponse AuthorizePayment(AuthorizePaymentTerminalDeviceRequest request)\n {\n ...\n\n \/\/ Assuming the external payment terminal\/gateway returned a decline and a reference number. 
Construct \n \/\/ the custom error message and set the payment error on the 'paymentInfo' object set on the response.\n PaymentInfo paymentInfo = new PaymentInfo();\n CultureInfo cultureInfo = new CultureInfo(this.terminalSettings.Locale);\n string localizedString = this.messagesResourceManager.GetString(\"CustomPaymentConnector_Decline\", cultureInfo);\n string errorMessage = string.Format(localizedString, referenceNumber);\n bool isLocalized = true;\n PaymentError paymentError = new PaymentError(ErrorCode.Decline, errorMessage, isLocalized);\n paymentInfo.Errors = new PaymentError[] { paymentError };\n return new AuthorizePaymentTerminalDeviceResponse(paymentInfo);\n }\n }\n}\n```\n","avg_line_length":48.0037593985,"max_line_length":516,"alphanum_fraction":0.6890907667} +{"size":710,"ext":"md","lang":"Markdown","max_stars_count":584.0,"content":"---\ntitle: EndStyle Property\nms.prod: excel\napi_name:\n- Excel.EndStyle\nms.assetid: 2d12c0c5-7c48-41c0-b270-d5cf70eb7d47\nms.date: 06\/08\/2017\n---\n\n\n# EndStyle Property\n\nReturns or sets the end style for the error bars. Read\/write **XlEndStyleCap**.\n\n\n## \n\n\n\n|XlEndStyleCap can be one of these XlEndStyleCap constants.|\n| **xlCap**|\n| **xlNoCap**|\n _expression_. **EndStyle**\n\n _expression_ Required. An expression that returns one of the objects in the Applies To list.\n\n\n## Example\n\nThis example sets the end style for the error bars for series one. The example should be run on a 2-D line chart that has Y error bars for the first series.\n\n\n```\nmyChart.SeriesCollection(1).ErrorBars. 
EndStyle = xlCap\n\n```\n\n\n","avg_line_length":18.2051282051,"max_line_length":156,"alphanum_fraction":0.7295774648} +{"size":7197,"ext":"md","lang":"Markdown","max_stars_count":1.0,"content":"# Wire1 (1-wire) Level II driver\n\n**Available for**: Linux (Kernel v2.6.29+)\n\n1-Wire sensors, especially the [DS18B20](https:\/\/datasheets.maximintegrated.com\/en\/ds\/DS18B20.pdf) temperature sensor, are extremely popular in the hobby community for their ease of use. The possibility to have many sensors connected on one 1-wire bus is also a big plus for this sensor.\n\nThe wire1 driver is built to take advantage of the 1-wire support that is available in the Linux kernel from version 2.6.29.\n\nThis driver reports temperature from any number of DS18B20 temperature sensors connected to a Linux system. The reported event is [CLASS1.MEASUREMENT, Type=6 temperature](http:\/\/docs.vscp.org\/spec\/latest\/#\/.\/class1.measurement#type6).\n\n**Driver Linux**: vscpl2_wire1.so\n\nThe configuration string has the following format\n\n NumberOfSensors\n\n##### NumberOfSensors\n\nThe parameter *NumberOfSensors* (which is optional) is the number of sensors the driver should report data from. This value can also be supplied as a VSCP daemon variable, and if both are present the VSCP daemon variable will be used. Default is 1. \n\n | Variable name | Type | Description | \n | ------------- | ---- | ----------- | \n | _numberofsensors | integer | NumberOfSensors is the number of sensors the driver should report data from. | \n | _path[0..n] | string | Path to the lm-sensor data file. | \n | _guid[0..n] | guid | GUID to use when the data for the sensor is reported. If no GUID is used the reserved GUID for 1-wire with the unique id for the sensor will be used. | \n | _interval[0..n] | integer | Sample interval in seconds for events. Default is 30 seconds. | \n | _unit[0..n] | integer | Unit to use. Allowed values: 0=Kelvin, 1=Celsius, 2=Fahrenheit. Default is 1 (Celsius). 
| \n | _index[0..n] | integer | Measurement index 0-7. Default is 0. | \n | _coding[0..n] | integer | Message coding 0-2. Default is 0. Allowed values 0=Normalized integer, 1=string, 2=floating point. | \n\nThe full variable name is built from the name you give the driver (prefix before _variablename) in vscpd.conf. So in the examples below the driver has the name **wire1** and the full variable name for the **_numberofsensors** will thus be\n\n wire1_numberofsensors\n\nIf you have another driver and name it **wire2** it will therefore instead request the variable **wire2_numberofsensors**\n\nIf your driver name contains spaces, for example \u201cname of driver\u201d it will get a prefix that is \u201cname_of_driver\u201d. Leading and trailing spaces will be removed. \n\n##### Example of vscpd.conf entry for the wire1 driver.\n\n```xml\n<driver>\n <name>wire1<\/name>\n <path>\/usr\/local\/lib\/vscpl2_wire1.so<\/path>\n <config>2<\/config>\n <guid>00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00<\/guid>\n<\/driver>\n```\n\nExample for variable configuration. In this sample the temperature from two DS18B20 temperature sensors is reported every ten seconds.\n\n```xml\n<variable>\n\t<name>wire1_numberofsensors<\/name>\n\t<value>2<\/value>\n<\/variable>\n\n<variable>\n\t<name>wire1_interval0<\/name>\n\t<value>10<\/value>\n<\/variable>\n\n<variable>\n\t<name>wire1_path0<\/name>\n\t<value>\/sys\/bus\/w1\/devices\/28-00000548476b\/w1_slave<\/value>\n<\/variable>\n\n<variable>\n\t<name>wire1_interval1<\/name>\n\t<value>10<\/value>\n<\/variable>\n\n<variable>\n\t<name>wire1_path1<\/name>\n\t<value>\/sys\/bus\/w1\/devices\/28-000003e71198\/w1_slave<\/value>\n<\/variable>\n```\n\n\n----\n\n# Using the Level II wire1 driver\n\nThe 1-wire support is still quite limited on Linux systems and at the moment only memory and temperature devices are really supported. 
The wire1 driver currently only supports temperature readings.\n\n## Loading the kernel driver\n\n**Info**\n\n* https:\/\/www.kernel.org\/doc\/Documentation\/w1\/w1.generic\n* How to configure - https:\/\/how-to.wikia.com\/wiki\/How_to_configure_the_Linux_kernel\/drivers\/w1\n\nFor the 1-wire subsystem to work you first need to add a master driver for the 1-wire bus. A popular choice is the now discontinued DS2490 USB adapter. If you have this device, load the required modules to get a working system that reads temperatures\n\n modprobe ds2490\n modprobe w1_therm\n\nIf you have an interface other than the DS2490, search for the appropriate driver to use. \n\nIf you have a Raspberry Pi you can instead use GPIO4 (pin 7) which has support for 1-Wire. You have to add a 4k7 pull-up resistor to 3.3V yourself.\n\nOn the latest kernel builds on Raspberry Pi you need to add \"dtoverlay=w1-gpio\" to \/boot\/config.txt for any of this to work! \n\nTo get it all working\n\n modprobe w1-gpio\n modprobe w1_therm\n\nAfter this is done the temperature sensors will be visible in the folder *\/sys\/bus\/w1\/devices* \n\nThe id will start with the [1-wire family code](https:\/\/github.com\/owfs\/owfs-doc\/wiki\/1Wire-Device-List), which is **10** for the DS18S20 and **28** for the DS18B20.\n\nFor example\n\n \/sys\/bus\/w1\/devices\/10-00080192afa8\/w1_slave\n\nwill have content like the following, where t=4937 is the temperature multiplied by 1000\n\n 0a 00 4b 46 ff ff 0d 10 79 : crc=79 YES\n 0a 00 4b 46 ff ff 0d 10 79 t=4937 \n\nThis is the file that is read by the wire1 driver and sent to the VSCP subsystem.\n\nYou may want to add the modules to **\/etc\/modules** so that they load automatically the next time you start your system.\n\n# Hooking up a DS18B20 to Raspberry Pi\n\n![](.\/images\/drivers\/level2-drivers\/006_small.png)\n\nIn this picture a standard DS18B20 and a waterproof sensor with a DS18B20 are connected to the Raspberry Pi. 
The connection is very easy.\n\n\n* Fetch power from pin 1 - 3.3V\n\n* Fetch ground from pin 6.\n\n* The reserved 1-Wire pin is GPIO4, which is on pin 7.\n\nA good pinout diagram can be found here ![](https:\/\/howto8165.files.wordpress.com\/2014\/08\/rpi-pinout.png)\n\nClick to see in full size.\n\nA wiring diagram is here ![](https:\/\/www.sbprojects.com\/projects\/raspberrypi\/ds1820connect.png)\n\nNote the 4k7 pull-up that should be connected from 3.3V to the data line of the sensor.\n\nTo be complete we include the DS18B20 pinout also\n\n![](https:\/\/www.modmypi.com\/image\/data\/tutorials\/DS18B20\/DS18B20+.png)\n\n[filename](.\/bottom_copyright.md ':include')\n","avg_line_length":40.4325842697,"max_line_length":283,"alphanum_fraction":0.6733361123} +{"size":450,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"+++\ntitle = \"A roadmap for global synthesis of the plant tree of life\"\ndate = 2018-01-01\nauthors = [\"Wolf L Eiserhardt\", \"Alexandre Antonelli\", \"Dominic J Bennett\", \"Laura R Botigu\u00e9\", \"J Gordon Burleigh\", \"Steven Dodsworth\", \"Brian J Enquist\", \"F\u00e9lix Forest\", \"Jan T Kim\", \"Alexey M Kozlov\", \" others\"]\npublication_types = [\"2\"]\nabstract = \"\"\nfeatured = false\npublication = \"*American journal of botany*\"\ntags = [\"phylogenetics\", \"supersmartr\"]\n+++\n\n","avg_line_length":37.5,"max_line_length":213,"alphanum_fraction":0.6933333333} +{"size":12700,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"---\ntitle: \"<msmqTransport> | Microsoft Docs\"\nms.custom: \"\"\nms.date: \"03\/30\/2017\"\nms.prod: \".net-framework\"\nms.reviewer: \"\"\nms.suite: \"\"\nms.technology: \n - \"dotnet-clr\"\nms.tgt_pltfrm: \"\"\nms.topic: \"article\"\nms.assetid: 19d89f35-76ac-49dc-832b-e8bec2d5e33b\ncaps.latest.revision: 14\nauthor: \"Erikre\"\nms.author: \"erikre\"\nmanager: \"erikre\"\n---\n# <msmqTransport>\nCauses a channel to transfer messages on the MSMQ transport when it is included in a 
custom binding. \n \n## Syntax \n \n```xml \n<msmqTransport customDeadLetterQueue=\"Uri\" \n deadLetterQueue=\"Custom\/None\/System\" \n durable=\"Boolean\" \n exactlyOnce=\"Boolean\" \n manualAddressing=\"Boolean\" \n maxBufferPoolSize=\"Integer\" \n maxImmediateRetries=\"Integer\" \n maxPoolSize=\"Integer\" \n maxReceivedMessageSize=\"Integer\" \n maxRetryCycles=\"Integer\" \n queueTransferProtocol=\"Native\/Srmp\/SrmpSecure\" \n rejectAfterLastRetry=\"Boolean\" \n retryCycleDelay=\"TimeSpan\" \n timeToLive=\"TimeSpan\" \n useActiveDirectory=\"Boolean\" \n useSourceJournal=\"Boolean\" \n useMsmqTracing=\"Boolean\"> \n <msmqTransportSecurity> \n <\/msmqTransportSecurity> \n<\/msmqTransport> \n``` \n \n## Attributes and Elements \n The following sections describe attributes, child elements, and parent elements. \n \n### Attributes \n \n|Attribute|Description| \n|---------------|-----------------| \n|customDeadLetterQueue|A URI that indicates the location of the per-application dead letter queue, where messages that have expired or failed to be delivered to the application are transferred.&#13;

For messages that require ExactlyOnce assurances (that is, `exactlyOnce` is set to `true`), this attribute defaults to the system-wide transactional dead-letter queue in MSMQ.

For messages that require no assurances (that is, `exactlyOnce` is set to `false`), this attribute defaults to `null`.

The value must use the net.msmq scheme. The default is `null`.

If `deadLetterQueue` is set to `None` or `System`, then this attribute must be set to `null`. If this attribute is not `null`, then `deadLetterQueue` must be set to `Custom`.| \n|deadLetterQueue|Specifies the type of dead letter queue to use.

Valid values include

- Custom: Custom deadletter queue.
- None: No deadletter queue is to be used.
- System: Use the system deadletter queue.

This attribute is of type DeadLetterQueue.| \n|durable|A Boolean value that specifies whether the messages processed by this binding are durable or volatile. The default is `true`.

A durable message survives a queue manager crash, while a volatile message does not. Volatile messages are useful when applications require lower latency and can tolerate occasional lost messages.

If `exactlyOnce` is set to `true`, the messages must be durable.| \n|exactlyOnce|A Boolean that specifies whether messages processed by this binding will be received exactly once. The default is `true`.

A message can be sent with or without assurances. An assurance enables an application to ensure that a sent message reached the receiving message queue, or if it did not, the application can determine this by reading the dead letter queue.

`exactlyOnce`, when set to `true`, indicates that MSMQ will ensure that a sent message is delivered to the receiving message queue once and only once, and if delivery fails, the message is sent to the dead letter queue.

Messages sent with `exactlyOnce` set to `true` must be sent to a transactional queue only.| \n|manualAddressing|A Boolean value that enables the user to take control of message addressing. This property is usually used in router scenarios, where the application determines which one of several destinations to send a message to.

When set to `true`, the channel assumes the message has already been addressed and does not add any additional information to it. The user can then address every message individually.

When set to `false`, the default Windows Communication Foundation (WCF) addressing mechanism automatically creates addresses for all messages.

The default is `false`.| \n|maxBufferPoolSize|A positive integer that specifies the maximum size of the buffer pool. The default is 524288.

Many parts of WCF use buffers. Creating and destroying buffers each time they are used is expensive, and garbage collection for buffers is also expensive. With buffer pools, you can take a buffer from the pool, use it, and return it to the pool once you are done. Thus the overhead in creating and destroying buffers is avoided.| \n|maxImmediateRetries|An integer that specifies the maximum number of immediate retry attempts on a message that is read from the application queue. The default is 5.&#13;

If the maximum number of immediate retries for the message is attempted and the message is not consumed by the application, then the message is sent to a retry queue for retrying at some later point in time. If no retry cycles are specified, then the message is either sent to the poison message queue, or a negative acknowledgment is sent back to the sender.| \n|maxPoolSize|A positive integer that specifies the maximum size of the pool. The default is 524288.| \n|maxReceivedMessageSize|A positive integer that specifies the maximum message size in bytes including headers. The sender of a message receives a SOAP fault when the message is too large for the receiver. The receiver drops the message and creates an entry of the event in the trace log. The default is 65536.| \n|maxRetryCycles|An integer that specifies the maximum number of retry cycles to attempt delivery of messages to the receiving application. The default is .&#13;

A single retry cycle attempts to deliver a message to an application a specified number of times. The number of attempts made is set by the `maxImmediateRetries` attribute. If the application fails to consume the message after the attempts at delivery have been exhausted, the message is sent to a retry queue. Subsequent retry cycles consist of the message being returned from the retry queue to the application queue to attempt delivery to the application again, after a delay specified by the `retryCycleDelay` attribute. The `maxRetryCycles` attribute specifies the number of retry cycles the application uses to attempt to deliver the message.| \n|queueTransferProtocol|Specifies the queued communication channel transport that this binding uses. Valid values are

- Native: Use the native MSMQ protocol.
- Srmp: Use the Soap Reliable Messaging Protocol (SRMP).
- SrmpSecure: Use the Soap Reliable Messaging Protocol Secure (SRMPS) transport.

This attribute is of type QueueTransferProtocol.&#13;

Since MSMQ does not support Active Directory addressing when using the SOAP Reliable Messaging Protocol, you should not set this attribute to Srmp or SrmpSecure when `useActiveDirectory` is set to `true`.| \n|rejectAfterLastRetry|A Boolean value that specifies what action to take for a message that has failed delivery after the maximum number of retries has been attempted.&#13;

`true` means that a negative acknowledgment is returned to the sender and the message is dropped; `false` means that the message is sent to the poison message queue. The default is `false`.

If the value is `false`, the receiving application can read the poison message queue to process poison messages (that is, messages that have failed delivery).

MSMQ 3.0 does not support returning a negative acknowledgment to the sender, so this attribute will be ignored in MSMQ 3.0.| \n|retryCycleDelay|A TimeSpan value that specifies the time delay between retry cycles when attempting to deliver a message that could not be delivered immediately. The default is 00:10:00.&#13;

A single retry cycle attempts to deliver a message to a receiving application a specified number of times. The number of attempts made is specified by the `maxImmediateRetries` attribute. If the application fails to consume the message after the specified number of immediate retries, the message is sent to a retry queue. Subsequent retry cycles consist of the message being returned from the retry queue to the application queue to attempt delivery to the application again, after a delay specified by the `retryCycleDelay` attribute. The number of retry cycles is specified by the `maxRetryCycles` attribute.| \n|timeToLive|A TimeSpan value that specifies how long the messages are valid before they expire and are put in the dead-letter queue. The default is 1.00:00:00, which means 1 day.&#13;

This attribute is set to ensure that time-sensitive messages do not become stale before they are processed by the receiving applications. A message in a queue that is not consumed by the receiving application within the time interval specified is said to be expired. Expired messages are sent to a special queue called the dead letter queue. The location of the dead letter queue is set with the `customDeadLetterQueue` attribute or to the appropriate default, based on assurances.| \n|useActiveDirectory|A Boolean value that specifies whether queue addresses should be converted using Active Directory.&#13;

MSMQ queue addresses can consist of path names or direct format names. With a direct format name, MSMQ resolves the computer name using DNS, NetBIOS or IP. With a path name, MSMQ resolves the computer name using Active Directory. By default, the Windows Communication Foundation (WCF) queued transport converts the URI of a message queue to a direct format name. By setting this attribute to `true`, an application can specify that the queued transport should resolve the computer name using Active Directory rather than DNS, NetBIOS, or IP.| \n|useMsmqTracing|A Boolean value that specifies whether messages processed by this binding should be traced. The default is `false`.&#13;

When tracing is enabled, report messages are created and sent to the report queue each time the message leaves or arrives at a Message Queuing computer.| \n|useSourceJournal|A Boolean value that specifies whether copies of messages processed by this binding should be stored in the source journal queue. The default is `false`.

Queued applications that want to keep a record of messages that have left the computer's outgoing queue can copy the messages to a journal queue. Once a message leaves the outgoing queue and an acknowledgment is received that the message was received on the destination computer, a copy of the message is kept in the sending computer's system journal queue.| \n \n### Child Elements \n \n|Element|Description| \n|-------------|-----------------| \n|[\\](..\/..\/..\/..\/..\/docs\/framework\/configure-apps\/file-schema\/wcf\/msmqtransportsecurity.md)|Specifies transport security settings for this binding. This element is of type .| \n \n### Parent Elements \n \n|Element|Description| \n|-------------|-----------------| \n|[\\](..\/..\/..\/..\/..\/docs\/framework\/misc\/binding.md)|Defines all binding capabilities of the custom binding.| \n \n## Remarks \n The `msmqTransport` element enables the user to set the properties of the queued communication channel. The queued communication channel uses Message Queuing for its transport. \n \n This binding element is the default binding element used by the Message Queuing standard binding (`netMsmqBinding`). 
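## Example 
 The following is an illustrative configuration fragment only (the binding name `myQueuedBinding` and the use of `binaryMessageEncoding` are chosen for the example and are not mandated by this element). It shows the `msmqTransport` element inside a custom binding:

```xml
<bindings>
  <customBinding>
    <!-- "myQueuedBinding" is an example name -->
    <binding name="myQueuedBinding">
      <binaryMessageEncoding />
      <msmqTransport exactlyOnce="true"
                     durable="true"
                     maxRetryCycles="3"
                     retryCycleDelay="00:05:00" />
    </binding>
  </customBinding>
</bindings>
```

 Because `exactlyOnce` is `true` here, messages sent with this binding must go to a transactional queue.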
\n \n## See Also \n \n [Queues in WCF](..\/..\/..\/..\/..\/docs\/framework\/wcf\/feature-details\/queues-in-wcf.md) \n [Transports](..\/..\/..\/..\/..\/docs\/framework\/wcf\/feature-details\/transports.md) \n [Choosing a Transport](..\/..\/..\/..\/..\/docs\/framework\/wcf\/feature-details\/choosing-a-transport.md) \n [Bindings](..\/..\/..\/..\/..\/docs\/framework\/wcf\/bindings.md) \n [Extending Bindings](..\/..\/..\/..\/..\/docs\/framework\/wcf\/extending\/extending-bindings.md) \n [Custom Bindings](..\/..\/..\/..\/..\/docs\/framework\/wcf\/extending\/custom-bindings.md) \n [\\](..\/..\/..\/..\/..\/docs\/framework\/configure-apps\/file-schema\/wcf\/custombinding.md)\n","avg_line_length":118.691588785,"max_line_length":847,"alphanum_fraction":0.7598425197} +{"size":1519,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"200 students hid in an abandoned factory building for secret supplementary classes, and the security guard at the gate claimed they were \"all logistics company workers\". What penalties might the training institution face? \nOn August 20 Game Science released a UE5 in-engine gameplay demonstration of \"Black Myth: Wukong\". What new information does it reveal? \nThe PLA Eastern Theater Command organized live-troop exercises in the sea and airspace southwest and southeast of Taiwan Island on the 17th. What signals does this release? \nHow should we view Duan Yu's loss of composure upon seeing Wang Yuyan in the new (2021) version of \"Demi-Gods and Semi-Devils\"? \nThe Shenzhou-12 astronauts successfully completed their second spacewalk. What considerations went into the crew arrangement, and what are the highlights of this mission? 
\nHow should we view the \"high-quality human male\" fan group charging 25,000 yuan a month for financial consulting? Is it reliable, and is such financial consulting compliant? \nActor Sonny Chiba, who played Xiongba, has passed away. Which of his works have you seen? \nIs the \"five planets in alignment\" celestial event on the evening of August 19, 2021 real? How does it form, and how can it be observed? \nWhy do the people of the heavenly court still call Sun Wukong \"Great Sage\" when they meet him after the Journey to the West, even though his havoc in Heaven clearly failed? \nBuy a Yan Xuan monthly membership card and get a 60 yuan coffee voucher! First month from only 9 yuan~ \nThe \"three-child\" birth policy has been formally written into law, and the state is taking multiple measures to ease the burden of raising children. What other information is worth noting? \nA fresh graduate was dismissed after only two days of an internship. Can a company really judge a person in just two days? \nIn the 2021 LPL Summer Playoffs RNG lost 1:3 to LNG and exited at the playoff top six. How would you evaluate this match? \nWhat do you think of the developer of the game \"Carrot Fantasy\" saying \"a gacha in a tower defense game, are you kidding\"? \n\"Black Myth: Wukong\", everyone's dream \nThe retrial of the case in which a father-in-law killed his son-in-law's family of three opened on August 20; the first trial imposed the death penalty and the second trial commuted it to a suspended death sentence. What other information is worth noting? \nMarket reports say Evergrande is negotiating with Xiaomi to sell shares of its electric vehicle unit. How credible is this, and what else is worth noting? \nDoes a one-month break in social insurance contributions have any effect? \nNames of athletes such as Yang Qian, Chen Meng and Quan Hongchan have been preemptively registered as trademarks; the Chinese Olympic Committee responded that \"Olympic athletes' names must not be maliciously registered as trademarks\". How should this behavior be viewed? \nHow should we view the first drop in ten years in the sales of Japan's anime industry? \nChen Lan said of Huo Zun in a livestream that \"Huo Zun is a simple big boy, it's just that he is too soft-hearted\". Is this evaluation objective? \nUnilever admitted that Magnum uses different ingredients in and outside China: concentrated milk in Europe, but milk powder and water in China. What are the differences between the two? \nIs the game \"Genshin Impact\" fun? \nWhy do my parents still oppose me wearing makeup and scold me harshly even though I am already eighteen? 
\nWhat do you think of \"Honor of Kings\" planning to introduce hero-exclusive equipment? What effects will it have on balance and the game experience? \nIs it feasible for a junior college student to take the postgraduate entrance exam? How should one prepare? \nHow do you build your own knowledge system and viewpoints? \nWhat kind of outfit makes people remember you in a second? \nA woman in Wuhan who objected to a 5-year-old boy entering the women's restroom was criticized for it, and experts suggest \"not escalating the matter\". How exactly should such situations be handled? \nHow do you play \"Honor of Kings\" well? \nHow do you read an annual report? \nHow do you edit a Baidu Baike entry so that it is easily approved? \nWhat gift should you bring when visiting a teacher? \nWhat advice is there for novice designers? \nHow does excess fat around the waist form? How can it be lost? \nWhy do some girls like to wear their hair down? \nThe Taliban say they will govern according to Islamic law, and religious scholars will decide women's rights such as schooling and dress. What does this mean for Afghan women? \nWhat are the harms of using an old T-shirt as pajamas? \nIn senior high school, is memorizing vocabulary or grammar more important? 
\nHow would you evaluate the animated film \"Josee, the Tiger and the Fish\"? Which scene did you find unforgettable? \nIs CET-4 equivalent to an IELTS 6.5 level? \nThe Taliban have announced the establishment of the \"Islamic Emirate of Afghanistan\". What information is worth noting? \nShould girls in junior and senior high school serve as class monitors? \nDoes reading a lot of novels make you pickier about writing? \nWhich delicacies in your hometown can you \"not afford\" to eat? \nOn the fourth Chinese Doctors' Day, against this special backdrop, what would you like to say to doctors? \nHow will the leading internet companies develop in 2022, and how should fresh graduates choose? \nWhat are some highly satirical pieces of copywriting? \nThose of you taking the 2022 postgraduate entrance exam, how far along is your review? \nShould you go to the key class of an ordinary junior high school or the ordinary class of a key junior high school? \n","avg_line_length":29.7843137255,"max_line_length":53,"alphanum_fraction":0.7814351547} +{"size":9240,"ext":"md","lang":"Markdown","max_stars_count":17.0,"content":"---\ntitle: UTXOs\nlayout: wiki\ndescription: Information about what UTXOs are, how they work, and why they matter\n---\n\n\n# Introduction\n\nAn unspent transaction output (UTXO) is what's left over from a transaction.\nUnder the hood of most cryptocurrencies, Gridcoin included, everything runs\non this concept.\n\nUTXOs are like bills.[^1] When you receive a transaction, you get a \"bill\" of a\nspecific size. When you send a transaction you make \"bills\" of other sizes. 
\nTo make a transaction you must use up those \"bills\" exactly\n\nThe network doesn't think in terms of balances. Everything is checked\nthrough transaction inputs and outputs. Balances are simply a nicer way for \npeople to look at thing --- not how the network thinks\n\n# Sending Transactions\nWhen you make a transaction it has to use up existing UTXOs (as inputs) and create new ones (as outputs). \nThe network doesn't look at balances --- just what came out of one transaction. \nAfter an output is used up, they are no longer UTXOs and become spent transaction outputs.\n\n## Example Diagram\n![Diagram showing a \"wallet\" with different UTXOs that looks like \"bills\". Shows two being selected as inputs(and becoming spent) that are 15 and 45 Gridcoin. Shows and arrow represent a transaction of 60 Gridcoin and three new UTXOs as outputs of 20 Gridcoin, 35 Gridcoin, and 15 Gridcoin](\/assets\/img\/wiki\/utxo-diagram.svg){: class=\"img-fluid mx-auto\"}\n\n## Verbal Examples\n\n### Example 1\n\nYou only ever received 50 Gridcoin in a transaction from Bob and you want to send \n25 Gridcoin to Alice.\n\nYou would use up that 50 Gridcoin UTXO to in a transaction as an input and have \ntwo outputs: one to Alice with the 25 Gridcoin and another to yourself \nwith the leftover 25 Gridcoin (you must use up all of a UTXO). \n\nYou went from having 1 UTXO that was 50 Gridcoin large to having 1 new UTXO that\nis 25 Gridcoin large. The 50 Gridcoin output is now spent and cannot be used again\n\n\n### Example 2\nSuppose you wanted to send 400 Gridcoin of your balance of 500 Gridcoin \nto Alice and Bob. You have only ever received 5 transactions each \nmade of 100 Gridcoin\n\nTo make this transaction you would use 4 UTXOs (totals 400 Gridcoin) \nas an input and you would have two outputs: one to Bob of 200 Gridcoin and one \nto Alice with 200 Gridcoin. 
No output to yourself is needed here\n\nYou went from having 5 UTXOs that were 100 Gridcoin large to having 1 UTXO that\nis 100 Gridcoin large. Four of the five 100 Gridcoin outputs are now spent \nand cannot be used again\n\n## Example Looking at a Real Transaction\n(Randomly selected transaction)\n\nTransaction ID: \n`c9ac9f4e771a2c8510411fa007cd0ac501d10c74dfdfa225eab1be98108bb12a`\n\n* Involves 6 006.0311 GRC because it uses up an output from the transaction\n `a1a2050495ac7e44f0c8727f2c430520fb1eea6724a992b8801e7095e116ac17` that is\n that exact size\n\n* Has only one input\n\n* Has two outputs\n * One with `1.1705 GRC` \n * One with `6 004.8596 GRC` (likely sent back to the original sender)\n \n* {% include _start_dropdown.htm \n dropdown-header=\"More technical transaction details (click to expand)\"\n %}\n \n Below is the JSON output from `gettransaction` but annotated and simplified\n (irrelevant details & alternative formats removed). Don't worry if you\n don't understand what's below. It's not something you need to know to\n understand UTXOs \n\n Everything with `-----` around it is annotation. 
You may have to\n scroll to the right to read all of them\n\n ```\n {\n \"vin\": [ ----- (INPUTS) -----\n {\n \"txid\": \"a1a2050495ac7e44f0c8727f2c430520fb1eea6724a992b8801e7095e116ac17\", ----- (INPUT TRANSACTION) -----\n \"vout\": 0, ----- (FIRST OUTPUT FROM THAT TX IS USED UP) -----\n \"scriptSig\": { ----- (PROVE OWNERSHIP OF INPUT) -----\n \"asm\": \"3044022065fb8633839d5188f2545a71fb6a116dbf362388848d0bca5f9dad2f42ef616c02204e1f871b31f72a3b4772fc404bbf8c7681862fff1a9629da082b194c21d3becc01 0348e0c550c94114f3874c02769b748f167177f1786b0d6e269f26183af8f6e9a1\",\n },\n \"sequence\": -1\n }\n ],\n \"vout\": [ ----- (OUTPUTS) -----\n { ----- (FIRST UTXO) -----\n \"value\": 5904.04678755,\n \"n\": 0,\n \"scriptPubKey\": { ----- (REQUIREMENTS TO SPEND NEW UTXO) -----\n \"asm\": \"OP_DUP OP_HASH160 c4602570098f0a40cb7d44a05d896bb021cc278d OP_EQUALVERIFY OP_CHECKSIG\", ---- (REQUIRE KEY TO SEND) --------\n \"addresses\": [\n \"SFrjfgyAJXmAgeasfNbYGEe7yYPmaMKKhG\" ----- (RECIPIENT OR OTHER ADDRESS OF SENDER ) -----\n ]\n }\n },\n { ----- (SECOND UTXO) -----\n \"value\": 100.81185269,\n \"n\": 1,\n \"scriptPubKey\": { ----- (REQUIREMENTS TO SPEND NEW UTXO) -----\n \"asm\": \"OP_DUP OP_HASH160 bf39722f50a3ec44c057a86fc1541e76bcabb860 OP_EQUALVERIFY OP_CHECKSIG\", ----- (REQUIRE KEY TO SEND) -----\n \"addresses\": [\n \"SFPVvnxuL9vr4ojRtuoVKbyWZvuhu8wvgn\" ----- (RECIPIENT OR OTHER ADDRESS OF SENDER ) -----\n ]\n }\n }\n ]\n }\n ```\n {% include _end_dropdown.htm %}\n\n# Staking\n\n[Staking](staking \"wikilink\") similarly works on a UTXO level. When you stake, it is actually an\nindividual UTXO that stakes.\n\n\n## Probability\n\nStaking working with UTXOs may be confusing at first if you have read about\nhow total balance is the only factor (excluding cooldowns) that changes odds of staking.\nThe odds are just designed so that 1 UTXO or any number of UTXOs of the same \ntotal will have the same *total* odds to stake. 
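This design can be illustrated with a short sketch (an illustration only: it assumes each UTXO's per-block chance to stake is simply proportional to its value, with a made-up `per_coin_chance` constant; it is not Gridcoin's actual staking code):

```python
# Illustration only: assume each UTXO's chance to stake in a given block
# is proportional to its value. "per_coin_chance" is a made-up constant.
def chance_any_stakes(utxo_values, per_coin_chance=1e-6):
    """Probability that at least one of the given UTXOs stakes this block."""
    p_none = 1.0  # probability that none of the UTXOs stakes
    for value in utxo_values:
        p_none *= 1.0 - value * per_coin_chance
    return 1.0 - p_none

# One 1000 GRC UTXO vs. the same balance split into four 250 GRC UTXOs:
single = chance_any_stakes([1000.0])
split = chance_any_stakes([250.0] * 4)
print(single, split)  # nearly identical
```

However you segment the "tickets", the total odds stay essentially the same; the negligible difference comes from the tiny chance of more than one of your own UTXOs being eligible in the same block.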
\n\nThink of it as if your coins were part of a \"raffle\" and your UTXOs were a chunk\nof \"tickets\". No matter how you segment your \"tickets\" (total GRC), you still \nhave the same odds of \"winning\" (staking).\n\n## Reward Coinstake Transaction \n\nThe UTXO that staked is used up and a new special transaction \nis formed to send rewards. This special transaction is called a coinstake. A coinstake uses\nyour staked UTXO as an input, but it's allowed to send more coins than the input[^2] to \nsend you your rewards for staking. Coinstakes are where the new coins come from. They include the new\ncoins and fees from other transactions in the block.\nCoinstakes (and coinbases) are also how the very first UTXOs on the network were made.\n\nNote that in Proof of Work cryptocurrencies, this is called a coinbase[^3] instead \nof a coinstake. Gridcoin technically does still have a coinbase transaction[^4] in \nevery block, but it is not used any more (since block 2049 --- the last PoW block). The coinstake\nis where the actual reward comes from after that point.\n\nThe new UTXO from staking is unable to be used as an input for the next 100 blocks,\nand like any UTXO, it will also undergo a cooldown for staking.\n\n\n## Relation to Cooldown & Efficiency\n\nWhen a new UTXO is created, it is unable to stake for the first 16 hours.[^5] This time \nis called a cooldown. The aim of the cooldown is to make a 51% attack more difficult.\n\nSince staking creates a new UTXO, that UTXO goes on cooldown. This means that\nwhen you stake, part of your balance will be offline and unable to stake. Thus\nhaving your balance split across smaller UTXOs will make staking more efficient \nsince less of your total balance will be offline after a stake. \n\nMore efficient staking can be achieved by adding `stakesplit=1` to your\n[config file](config-file \"wikilink\"). 
This will make the reward transaction\nhave multiple outputs to yourself instead of one, thus splitting your \nUTXO into multiple (if it helps efficiency by a reasonable amount).\n\n# Rationale Behind UTXOs\n\nThe network uses UTXOs instead of storing balances because\nit is much easier and quicker to look up and validate a UTXO. Using total balances\nrequires looking at every transaction that ever occurred on an address and \nmakes transaction validation much more complex.\n\n\n# Other Miscellaneous Notes\n\n* You can control which exact UTXOs are used as inputs and created as outputs\nusing the coin control feature in the GUI. If you are on Ingrid (5.3.1.0) or higher, this\nis the dropdown that shows when you send a transaction.\n\n\n* The size in bytes (and thus fees) of a transaction is determined largely \nby the number of inputs and outputs, not directly by the number of Gridcoin moved.\n\n* Trying to use too many UTXOs as either an input or an output in one transaction \ncan make a transaction too big in bytes to send. This can be fixed\nby either making multiple transactions or consolidating your UTXOs if inputs are \nthe problem. See the [troubleshooting section of the FAQ](faq#troubleshooting \"wikilink\") for\nhow to consolidate your UTXOs.\n\n\n# Footnotes\n[^1]: Not literally --- there's no physical \"bill\" for them \n[^2]: It's not allowed to send as many coins as you want --- just what you are owed for staking (new coins) and the fees from the transactions in the block (not new coins)\n[^3]: A coinbase transaction is what the company Coinbase is named after, but they are otherwise unrelated to the actual coinbase transaction\n[^4]: The coinbase transaction is the 0th (index 0) transaction in every block and the coinstake is the 1st (index 1) transaction in the block since 2049\n[^5]: This is not an approximation of the number of blocks. 
It is defined in the code in terms of time\n","avg_line_length":45.0731707317,"max_line_length":355,"alphanum_fraction":0.7351731602} +{"size":10814,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"---\ntitle: '\ube60\ub978 \uc2dc\uc791: Python\uc744 \uc0ac\uc6a9\ud558\uc5ec \ud14d\uc2a4\ud2b8 \ubd84\uc11d API \ud638\ucd9c'\ntitleSuffix: Azure Cognitive Services\ndescription: Azure Cognitive Services\uc5d0\uc11c \ud14d\uc2a4\ud2b8 \ubd84\uc11d API \uc0ac\uc6a9\uc744 \ube60\ub974\uac8c \uc2dc\uc791\ud558\ub294 \ub370 \ub3c4\uc6c0\uc774 \ub418\ub294 \uc815\ubcf4 \ubc0f \ucf54\ub4dc \uc0d8\ud50c\uc744 \ud655\uc778\ud569\ub2c8\ub2e4.\nservices: cognitive-services\nauthor: aahill\nmanager: nitinme\nms.service: cognitive-services\nms.subservice: text-analytics\nms.topic: quickstart\nms.date: 05\/09\/2019\nms.author: aahi\nms.openlocfilehash: 9ae894bee803c60b56a1bfacd5667f355aa44d2b\nms.sourcegitcommit: 36c50860e75d86f0d0e2be9e3213ffa9a06f4150\nms.translationtype: HT\nms.contentlocale: ko-KR\nms.lasthandoff: 05\/16\/2019\nms.locfileid: \"65799994\"\n---\n# <\/a>\ube60\ub978 \uc2dc\uc791: Python REST API\ub97c \uc0ac\uc6a9\ud558\uc5ec Text Analytics Cognitive Service \ud638\ucd9c \n<\/a>\n\n\uc774 \ube60\ub978 \uc2dc\uc791\uc744 \uc0ac\uc6a9\ud558\uc5ec Text Analytics REST API \ubc0f Python\uc744 \ud1b5\ud574 \uc5b8\uc5b4 \ubd84\uc11d\uc744 \uc2dc\uc791\ud569\ub2c8\ub2e4. 
\uc774 \ubb38\uc11c\uc5d0\uc11c\ub294 [\uc5b8\uc5b4 \uac80\uc0c9](#Detect), [\uac10\uc815 \ubd84\uc11d](#SentimentAnalysis), [\ud575\uc2ec \uad6c \ucd94\ucd9c](#KeyPhraseExtraction) \ubc0f [\uc5f0\uacb0\ub41c \uc5d4\ud130\ud2f0 \uc2dd\ubcc4](#Entities)\uc744 \uc218\ud589\ud558\ub294 \ubc29\ubc95\uc744 \ubcf4\uc5ec\uc90d\ub2c8\ub2e4.\n\nAPI \uae30\uc220 \ubb38\uc11c\ub294 [API \uc815\uc758](\/\/go.microsoft.com\/fwlink\/?LinkID=759346)\ub97c \ucc38\uc870\ud558\uc138\uc694.\n\n## <\/a>\ud544\uc218 \uc870\uac74\n\n* [Python 3.x](https:\/\/python.org)\n\n* \ub4f1\ub85d\ud558\ub294 \ub3d9\uc548 \uc0dd\uc131\ub41c [\uc5d4\ub4dc\ud3ec\uc778\ud2b8 \ubc0f \uc561\uc138\uc2a4 \ud0a4](..\/How-tos\/text-analytics-how-to-access-key.md).\n\n* Python \uc694\uccad \ub77c\uc774\ube0c\ub7ec\ub9ac\n \n \uc774 \uba85\ub839\uc744 \uc0ac\uc6a9\ud558\uc5ec \ub77c\uc774\ube0c\ub7ec\ub9ac\ub97c \uc124\uce58\ud560 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\n\n ```console\n pip install --upgrade requests\n ```\n\n[!INCLUDE [cognitive-services-text-analytics-signup-requirements](..\/..\/..\/..\/includes\/cognitive-services-text-analytics-signup-requirements.md)]\n\n\n## <\/a>\uc0c8 Python \uc560\ud50c\ub9ac\ucf00\uc774\uc158 \ub9cc\ub4e4\uae30\n\n\uc990\uaca8 \ucc3e\ub294 \ud3b8\uc9d1\uae30 \ub610\ub294 IDE\uc5d0\uc11c \uc0c8 Python \uc560\ud50c\ub9ac\ucf00\uc774\uc158\uc744 \ub9cc\ub4ed\ub2c8\ub2e4. \ub2e4\uc74c \uac00\uc838\uc624\uae30\ub97c \ud30c\uc77c\uc5d0 \ucd94\uac00\ud569\ub2c8\ub2e4.\n\n```python\nimport requests\n# pprint is used to format the JSON response\nfrom pprint import pprint\nfrom IPython.display import HTML\n```\n\n\uad6c\ub3c5 \ud0a4\uc5d0 \ub300\ud55c \ubcc0\uc218 \ubc0f Text Analytics REST API\uc5d0 \ub300\ud55c \uc5d4\ub4dc\ud3ec\uc778\ud2b8\ub97c \ub9cc\ub4ed\ub2c8\ub2e4. 
\uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc758 \ud558\uc704 \uc9c0\uc5ed\uc774 \uac00\uc785\ud560 \ub54c \uc0ac\uc6a9\ud55c \uac83\uacfc \uc77c\uce58\ud558\ub294\uc9c0 \ud655\uc778\ud569\ub2c8\ub2e4(\uc608: `westcentralus`). \ud3c9\uac00\ud310 \ud0a4\ub97c \uc0ac\uc6a9\ud558\ub294 \uacbd\uc6b0\uc5d0\ub294 \uc544\ubb34\uac83\ub3c4 \ubcc0\uacbd\ud560 \ud544\uc694\uac00 \uc5c6\uc2b5\ub2c8\ub2e4.\n \n```python\nsubscription_key = \"\"\ntext_analytics_base_url = \"https:\/\/westcentralus.api.cognitive.microsoft.com\/text\/analytics\/v2.1\/\"\n```\n\n\ub2e4\uc74c \uc139\uc158\uc5d0\uc11c\ub294 \uac01 API \uae30\ub2a5\uc744 \ud638\ucd9c\ud558\ub294 \ubc29\ubc95\uc744 \uc124\uba85\ud569\ub2c8\ub2e4.\n\n<\/a>\n\n## <\/a>\uc5b8\uc5b4 \uac10\uc9c0\n\nText Analytics \uae30\ubcf8 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc5d0 `languages`\ub97c \ucd94\uac00\ud558\uc5ec \uc5b8\uc5b4 \uac80\uc0c9 URL\uc744 \ud615\uc131\ud569\ub2c8\ub2e4. \uc608: `https:\/\/westcentralus.api.cognitive.microsoft.com\/text\/analytics\/v2.1\/languages`\n \n```python\nlanguage_api_url = text_analytics_base_url + \"languages\"\n```\n\nAPI\uc5d0 \ub300\ud55c \ud398\uc774\ub85c\ub4dc\ub294 \uac01\uac01 `id` \ubc0f `text` \ud2b9\uc131\uc774 \ud3ec\ud568\ub41c \ud29c\ud50c\uc778 `documents`\uc758 \ubaa9\ub85d\uc73c\ub85c \uad6c\uc131\ub429\ub2c8\ub2e4. `text` \ud2b9\uc131\uc740 \ubd84\uc11d\ud560 \ud14d\uc2a4\ud2b8\ub97c \uc800\uc7a5\ud558\uba70, `id`\ub294 \uc784\uc758\uc758 \uac12\uc774 \ub420 \uc218 \uc788\uc2b5\ub2c8\ub2e4. \n\n```python\ndocuments = { \"documents\": [\n { \"id\": \"1\", \"text\": \"This is a document written in English.\" },\n { \"id\": \"2\", \"text\": \"Este es un document escrito en Espa\u00f1ol.\" },\n { \"id\": \"3\", \"text\": \"\u8fd9\u662f\u4e00\u4e2a\u7528\u4e2d\u6587\u5199\u7684\u6587\u4ef6\" }\n]}\n```\n\n\uc694\uccad \ub77c\uc774\ube0c\ub7ec\ub9ac\ub97c \uc0ac\uc6a9\ud558\uc5ec API\uc5d0 \ubb38\uc11c\ub97c \ubcf4\ub0c5\ub2c8\ub2e4. 
`Ocp-Apim-Subscription-Key` \ud5e4\ub354\uc5d0 \uad6c\ub3c5 \ud0a4\ub97c \ucd94\uac00\ud558\uace0 `requests.post()`\ub97c \uc0ac\uc6a9\ud558\uc5ec \uc694\uccad\uc744 \ubcf4\ub0c5\ub2c8\ub2e4. \n\n```python\nheaders = {\"Ocp-Apim-Subscription-Key\": subscription_key}\nresponse = requests.post(language_api_url, headers=headers, json=documents)\nlanguages = response.json()\npprint(languages)\n```\n\n### <\/a>\ucd9c\ub825\n\n```json\n{\n\"documents\":[\n {\n \"detectedLanguages\":[\n {\n \"iso6391Name\":\"en\",\n \"name\":\"English\",\n \"score\":1.0\n }\n ],\n \"id\":\"1\"\n },\n {\n \"detectedLanguages\":[\n {\n \"iso6391Name\":\"es\",\n \"name\":\"Spanish\",\n \"score\":1.0\n }\n ],\n \"id\":\"2\"\n },\n {\n \"detectedLanguages\":[\n {\n \"iso6391Name\":\"zh_chs\",\n \"name\":\"Chinese_Simplified\",\n \"score\":1.0\n }\n ],\n \"id\":\"3\"\n }\n],\n\"errors\":[]\n}\n```\n\n<\/a>\n\n## <\/a>\uac10\uc815 \ubd84\uc11d\n\n\ubb38\uc11c \uc138\ud2b8\uc758 \uac10\uc815(\uc591\uc218 \ub610\ub294 \uc74c\uc218 \uc0ac\uc774\uc758 \ubc94\uc704)\uc744 \uac80\uc0c9\ud558\ub824\uba74 Text Analytics \uae30\ubcf8 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc5d0 `sentiment`\ub97c \ucd94\uac00\ud558\uc5ec \uc5b8\uc5b4 \uac80\uc0c9 URL\uc744 \ud615\uc131\ud569\ub2c8\ub2e4. \uc608: `https:\/\/westcentralus.api.cognitive.microsoft.com\/text\/analytics\/v2.1\/sentiment`\n \n```python\nsentiment_url = text_analytics_base_url + \"sentiment\"\n```\n\n\uc5b8\uc5b4 \uac80\uc0c9 \uc608\uc81c\uc640 \ub9c8\ucc2c\uac00\uc9c0\ub85c, \ubb38\uc11c \ubaa9\ub85d\uc73c\ub85c \uad6c\uc131\ub41c `documents` \ud0a4\uac00 \uc788\ub294 \uc0ac\uc804\uc744 \ub9cc\ub4ed\ub2c8\ub2e4. \uac01 \ubb38\uc11c\ub294 `id`, \ubd84\uc11d\ud560 `text` \ubc0f \ud14d\uc2a4\ud2b8\uc758 `language`\ub85c \uad6c\uc131\ub41c \ud29c\ud50c\uc785\ub2c8\ub2e4. \n\n```python\ndocuments = {\"documents\" : [\n {\"id\": \"1\", \"language\": \"en\", \"text\": \"I had a wonderful experience! 
The rooms were wonderful and the staff was helpful.\"},\n {\"id\": \"2\", \"language\": \"en\", \"text\": \"I had a terrible time at the hotel. The staff was rude and the food was awful.\"}, \n {\"id\": \"3\", \"language\": \"es\", \"text\": \"Los caminos que llevan hasta Monte Rainier son espectaculares y hermosos.\"}, \n {\"id\": \"4\", \"language\": \"es\", \"text\": \"La carretera estaba atascada. Hab\u00eda mucho tr\u00e1fico el d\u00eda de ayer.\"}\n]}\n```\n\n\uc694\uccad \ub77c\uc774\ube0c\ub7ec\ub9ac\ub97c \uc0ac\uc6a9\ud558\uc5ec API\uc5d0 \ubb38\uc11c\ub97c \ubcf4\ub0c5\ub2c8\ub2e4. `Ocp-Apim-Subscription-Key` \ud5e4\ub354\uc5d0 \uad6c\ub3c5 \ud0a4\ub97c \ucd94\uac00\ud558\uace0 `requests.post()`\ub97c \uc0ac\uc6a9\ud558\uc5ec \uc694\uccad\uc744 \ubcf4\ub0c5\ub2c8\ub2e4. \n\n```python\nheaders = {\"Ocp-Apim-Subscription-Key\": subscription_key}\nresponse = requests.post(sentiment_url, headers=headers, json=documents)\nsentiments = response.json()\npprint(sentiments)\n```\n\n### <\/a>\ucd9c\ub825\n\n\ubb38\uc11c\uc758 \uac10\uc815 \uc810\uc218\ub294 0.0\uc5d0\uc11c 1.0 \uc0ac\uc774\uc774\uba70, \uc810\uc218\uac00 \ub192\uc744\uc218\ub85d \ubcf4\ub2e4 \uae0d\uc815\uc801\uc778 \uac10\uc815\uc744 \ub098\ud0c0\ub0c5\ub2c8\ub2e4.\n\n```json\n{\n \"documents\":[\n {\n \"id\":\"1\",\n \"score\":0.9708490371704102\n },\n {\n \"id\":\"2\",\n \"score\":0.0019068121910095215\n },\n {\n \"id\":\"3\",\n \"score\":0.7456425428390503\n },\n {\n \"id\":\"4\",\n \"score\":0.334433376789093\n }\n ],\n \"errors\":[\n\n ]\n}\n```\n\n<\/a>\n\n## <\/a>\ud575\uc2ec \uad6c \ucd94\ucd9c\n \n\ubb38\uc11c \uc138\ud2b8\uc5d0\uc11c \ud575\uc2ec \uad6c\ub97c \ucd94\ucd9c\ud558\ub824\uba74 Text Analytics \uae30\ubcf8 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc5d0 `keyPhrases`\ub97c \ucd94\uac00\ud558\uc5ec \uc5b8\uc5b4 \uac80\uc0c9 URL\uc744 \ud615\uc131\ud569\ub2c8\ub2e4. 
\uc608: `https:\/\/westcentralus.api.cognitive.microsoft.com\/text\/analytics\/v2.1\/keyPhrases`\n \n```python\nkeyphrase_url = text_analytics_base_url + \"keyPhrases\"\n```\n\n\uc774 \ubb38\uc11c \uceec\ub809\uc158\uc740 \uac10\uc815 \ubd84\uc11d \uc608\uc81c\uc5d0 \uc0ac\uc6a9\ub41c \uac83\uacfc \ub3d9\uc77c\ud569\ub2c8\ub2e4.\n\n```python\ndocuments = {\"documents\" : [\n {\"id\": \"1\", \"language\": \"en\", \"text\": \"I had a wonderful experience! The rooms were wonderful and the staff was helpful.\"},\n {\"id\": \"2\", \"language\": \"en\", \"text\": \"I had a terrible time at the hotel. The staff was rude and the food was awful.\"}, \n {\"id\": \"3\", \"language\": \"es\", \"text\": \"Los caminos que llevan hasta Monte Rainier son espectaculares y hermosos.\"}, \n {\"id\": \"4\", \"language\": \"es\", \"text\": \"La carretera estaba atascada. Hab\u00eda mucho tr\u00e1fico el d\u00eda de ayer.\"}\n]}\n```\n\n\uc694\uccad \ub77c\uc774\ube0c\ub7ec\ub9ac\ub97c \uc0ac\uc6a9\ud558\uc5ec API\uc5d0 \ubb38\uc11c\ub97c \ubcf4\ub0c5\ub2c8\ub2e4. `Ocp-Apim-Subscription-Key` \ud5e4\ub354\uc5d0 \uad6c\ub3c5 \ud0a4\ub97c \ucd94\uac00\ud558\uace0 `requests.post()`\ub97c \uc0ac\uc6a9\ud558\uc5ec \uc694\uccad\uc744 \ubcf4\ub0c5\ub2c8\ub2e4. 
\n\n```python\nheaders = {\"Ocp-Apim-Subscription-Key\": subscription_key}\nresponse = requests.post(keyphrase_url, headers=headers, json=documents)\nkey_phrases = response.json()\npprint(key_phrases)\n```\n\n### <\/a>\ucd9c\ub825\n\n```json\n{\n \"documents\":[\n {\n \"keyPhrases\":[\n \"wonderful experience\",\n \"staff\",\n \"rooms\"\n ],\n \"id\":\"1\"\n },\n {\n \"keyPhrases\":[\n \"food\",\n \"terrible time\",\n \"hotel\",\n \"staff\"\n ],\n \"id\":\"2\"\n },\n {\n \"keyPhrases\":[\n \"Monte Rainier\",\n \"caminos\"\n ],\n \"id\":\"3\"\n },\n {\n \"keyPhrases\":[\n \"carretera\",\n \"tr\u00e1fico\",\n \"d\u00eda\"\n ],\n \"id\":\"4\"\n }\n ],\n \"errors\":[\n\n ]\n}\n```\n\n<\/a>\n\n## <\/a>\uc5d4\ud130\ud2f0 \uc2dd\ubcc4\n\n\ud14d\uc2a4\ud2b8 \ubb38\uc11c\uc5d0\uc11c \uc798 \uc54c\ub824\uc9c4 \uc5d4\ud130\ud2f0(\uc0ac\ub78c, \uc7a5\uc18c \ubc0f \uc0ac\ubb3c)\ub97c \uc2dd\ubcc4\ud558\ub824\uba74 Text Analytics \uae30\ubcf8 \uc5d4\ub4dc\ud3ec\uc778\ud2b8\uc5d0 `entities`\ub97c \ucd94\uac00\ud558\uc5ec \uc5b8\uc5b4 \uac80\uc0c9 URL\uc744 \ud615\uc131\ud569\ub2c8\ub2e4. \uc608: `https:\/\/westcentralus.api.cognitive.microsoft.com\/text\/analytics\/v2.1\/entities`\n \n```python\nentities_url = text_analytics_base_url + \"entities\"\n```\n\n\uc774\uc804 \uc608\uc81c\uc5d0\uc11c\uc640 \uac19\uc740 \ubb38\uc11c \uceec\ub809\uc158\uc744 \ub9cc\ub4ed\ub2c8\ub2e4. \n\n```python\ndocuments = {\"documents\" : [\n {\"id\": \"1\", \"text\": \"Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800.\"}\n]}\n```\n\n\uc694\uccad \ub77c\uc774\ube0c\ub7ec\ub9ac\ub97c \uc0ac\uc6a9\ud558\uc5ec API\uc5d0 \ubb38\uc11c\ub97c \ubcf4\ub0c5\ub2c8\ub2e4. 
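The `documents` payload has the same shape for every endpoint in this quickstart, so it can be built with a small helper. This is an illustrative sketch only; `build_documents` and its defaults are my own assumptions, not part of the quickstart or the Text Analytics API:

```python
# Hypothetical helper (not from the quickstart): builds the {"documents": [...]}
# payload accepted by the languages, sentiment, keyPhrases, and entities calls above.
def build_documents(texts, language=None):
    documents = []
    for i, text in enumerate(texts, start=1):
        doc = {"id": str(i), "text": text}
        if language is not None:
            # "language" is optional here; the sentiment/keyPhrases examples set it.
            doc["language"] = language
        documents.append(doc)
    return {"documents": documents}

payload = build_documents(
    ["Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975."]
)
# `payload` can then be passed as the json= argument to requests.post().
```
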
`Ocp-Apim-Subscription-Key` \ud5e4\ub354\uc5d0 \uad6c\ub3c5 \ud0a4\ub97c \ucd94\uac00\ud558\uace0 `requests.post()`\ub97c \uc0ac\uc6a9\ud558\uc5ec \uc694\uccad\uc744 \ubcf4\ub0c5\ub2c8\ub2e4.\n\n```python\nheaders = {\"Ocp-Apim-Subscription-Key\": subscription_key}\nresponse = requests.post(entities_url, headers=headers, json=documents)\nentities = response.json()\n```\n\n### <\/a>\ucd9c\ub825\n\n```json\n{'documents': [{'id': '1',\n 'entities': [{'name': 'Microsoft',\n 'matches': [{'wikipediaScore': 0.502357972145024,\n 'entityTypeScore': 1.0,\n 'text': 'Microsoft',\n 'offset': 0,\n 'length': 9}],\n 'wikipediaLanguage': 'en',\n 'wikipediaId': 'Microsoft',\n 'wikipediaUrl': 'https:\/\/en.wikipedia.org\/wiki\/Microsoft',\n 'bingId': 'a093e9b9-90f5-a3d5-c4b8-5855e1b01f85',\n 'type': 'Organization'},\n {'name': 'Bill Gates',\n 'matches': [{'wikipediaScore': 0.5849375085784292,\n 'entityTypeScore': 0.999847412109375,\n 'text': 'Bill Gates',\n 'offset': 25,\n 'length': 10}],\n 'wikipediaLanguage': 'en',\n 'wikipediaId': 'Bill Gates',\n 'wikipediaUrl': 'https:\/\/en.wikipedia.org\/wiki\/Bill_Gates',\n 'bingId': '0d47c987-0042-5576-15e8-97af601614fa',\n 'type': 'Person'},\n {'name': 'Paul Allen',\n 'matches': [{'wikipediaScore': 0.5314163053043621,\n 'entityTypeScore': 0.9988409876823425,\n 'text': 'Paul Allen',\n 'offset': 40,\n 'length': 10}],\n 'wikipediaLanguage': 'en',\n 'wikipediaId': 'Paul Allen',\n 'wikipediaUrl': 'https:\/\/en.wikipedia.org\/wiki\/Paul_Allen',\n 'bingId': 'df2c4376-9923-6a54-893f-2ee5a5badbc7',\n 'type': 'Person'},\n {'name': 'April 4',\n 'matches': [{'wikipediaScore': 0.37312706493069636,\n 'entityTypeScore': 0.8,\n 'text': 'April 4',\n 'offset': 54,\n 'length': 7}],\n 'wikipediaLanguage': 'en',\n 'wikipediaId': 'April 4',\n 'wikipediaUrl': 'https:\/\/en.wikipedia.org\/wiki\/April_4',\n 'bingId': '52535f87-235e-b513-54fe-c03e4233ac6e',\n 'type': 'Other'},\n {'name': 'April 4, 1975',\n 'matches': [{'entityTypeScore': 0.8,\n 'text': 'April 4, 
1975',\n 'offset': 54,\n 'length': 13}],\n 'type': 'DateTime',\n 'subType': 'Date'},\n {'name': 'BASIC',\n 'matches': [{'wikipediaScore': 0.35916049097766867,\n 'entityTypeScore': 0.8,\n 'text': 'BASIC',\n 'offset': 89,\n 'length': 5}],\n 'wikipediaLanguage': 'en',\n 'wikipediaId': 'BASIC',\n 'wikipediaUrl': 'https:\/\/en.wikipedia.org\/wiki\/BASIC',\n 'bingId': '5b16443d-501c-58f3-352e-611bbe75aa6e',\n 'type': 'Other'},\n {'name': 'Altair 8800',\n 'matches': [{'wikipediaScore': 0.8697256853652899,\n 'entityTypeScore': 0.8,\n 'text': 'Altair 8800',\n 'offset': 116,\n 'length': 11}],\n 'wikipediaLanguage': 'en',\n 'wikipediaId': 'Altair 8800',\n 'wikipediaUrl': 'https:\/\/en.wikipedia.org\/wiki\/Altair_8800',\n 'bingId': '7216c654-3779-68a2-c7b7-12ff3dad5606',\n 'type': 'Other'}]}],\n 'errors': []}\n```\n\n## <\/a>\ub2e4\uc74c \ub2e8\uacc4\n\n> [!div class=\"nextstepaction\"]\n> [\ud14d\uc2a4\ud2b8 \ubd84\uc11d \ubc0f Power BI](..\/tutorials\/tutorial-power-bi-key-phrases.md)\n\n## <\/a>\ucc38\uace0 \ud56d\ubaa9 \n\n [Text Analytics \uac1c\uc694](..\/overview.md) \n [FAQ(\uc9c8\ubb38\uacfc \ub300\ub2f5)](..\/text-analytics-resource-faq.md)\n","avg_line_length":28.3089005236,"max_line_length":194,"alphanum_fraction":0.620122064} +{"size":2430,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"---\nlayout: post\ntitle: Week Five \n---\n\nThis week in Software Engineering, my partner Maurya and I submitted our Netflix project. Early in the week, we finished up writing our prediction algorithm and fixing pylint and stylistic issues. We ran into an (almost) big problem when finishing up our prediction algorithm. We have our caches set up to be read as a byte stream through a series of web requests, which means we don't need to have local copies of any of the caches, they're simply read straight from the \/u\/fares\/...\/ directory. One morning, our request code started giving us errors that we hadn't seen before. 
It turned out that the cache our algorithm most heavily depended upon was deleted. This was a problem because there was basically no way for us to recover that specific cache. Luckily enough, I still had all the local copies I had made before we started reading the caches remotely. Without that cache, we would have largely had to come up with a different algorithm that did not depend on the missing cache. Sidestepping this potential disaster, we simply had to re-upload the missing cache.\n\nAlong with finishing up Netflix, we also had a guest lecturer from Bloomberg talk to us about his contributions and project at Bloomberg, as well as tell us about his company. Amongst other things, the lecturer worked on a search engine of sorts at the California site of Bloomberg. One of the main challenges of this search engine is that there is a very large dataset that must be searched through in order to find the results of a query. Not only does this make indexing the data difficult, but the data is also distributed across multiple machines, which makes creating an index even more difficult. I didn't quite fully understand how Bloomberg goes about successfully querying against a distributed index, but the general idea was minimizing the amount of data that needs to be shared between different nodes in the distributed system ("shards" was the word they used to describe the different partitions of data on the different nodes, I believe).\n\nMy tip of the week is this interesting article on Distributed Search Engines. 
This article isn't talking about the difficulties in their implementation, but instead analyzes their importance from a security point of view.\nhttps:\/\/www.techdirt.com\/articles\/20140701\/03143327738\/distributed-search-engines-why-we-need-them-post-snowden-world.shtml\n\n\n","avg_line_length":173.5714285714,"max_line_length":1072,"alphanum_fraction":0.8049382716} +{"size":9358,"ext":"md","lang":"Markdown","max_stars_count":3157.0,"content":"---\nreviewers:\n- edithturn\n- raelga\n- electrocucaracha\ntitle: Snapshots de Vol\u00famenes\ncontent_type: concept\nweight: 20\n---\n\n\n\nEn Kubernetes, un _VolumeSnapshot_ representa un Snapshot de un volumen en un sistema de almacenamiento. Este documento asume que est\u00e1 familiarizado con [vol\u00famenes persistentes](\/docs\/concepts\/storage\/persistent-volumes\/) de Kubernetes.\n\n\n\n\n\n\n## Introducci\u00f3n\n\nAl igual que los recursos de API `PersistentVolume` y `PersistentVolumeClaim` se utilizan para aprovisionar vol\u00famenes para usuarios y administradores, `VolumeSnapshotContent` y `VolumeSnapshot` se proporcionan para crear Snapshots de volumen para usuarios y administradores.\n\nUn `VolumeSnapshotContent` es un Snapshot tomado de un volumen en el cl\u00faster que ha sido aprovisionado por un administrador. Es un recurso en el cl\u00faster al igual que un PersistentVolume es un recurso de cl\u00faster.\n\nUn `VolumeSnapshot` es una solicitud de Snapshot de un volumen por parte del usuario. Es similar a un PersistentVolumeClaim.\n\n`VolumeSnapshotClass` permite especificar diferentes atributos que pertenecen a un `VolumeSnapshot`. 
Estos atributos pueden diferir entre Snapshots tomados del mismo volumen en el sistema de almacenamiento y, por lo tanto, no se pueden expresar mediante el mismo `StorageClass` de un `PersistentVolumeClaim`.\n\nLos Snapshots de volumen brindan a los usuarios de Kubernetes una forma estandarizada de copiar el contenido de un volumen en un momento determinado, sin crear uno completamente nuevo. Esta funcionalidad permite, por ejemplo, a los administradores de bases de datos realizar copias de seguridad de las bases de datos antes de realizar una edici\u00f3n o eliminar modificaciones.\n\nCuando utilicen esta funci\u00f3n los usuarios deben tener en cuenta lo siguiente:\n\n* Los objetos de API `VolumeSnapshot`, `VolumeSnapshotContent`, y `VolumeSnapshotClass` son {{< glossary_tooltip term_id=\"CustomResourceDefinition\" text=\"CRDs\" >}}, y no forman parte de la API principal.\n* La compatibilidad con `VolumeSnapshot` solo est\u00e1 disponible para controladores CSI.\n* Como parte del proceso de implementaci\u00f3n de `VolumeSnapshot`, el equipo de Kubernetes proporciona un controlador de Snapshot para implementar en el plano de control y un sidecar auxiliar llamado csi-snapshotter para implementar junto con el controlador CSI. El controlador de Snapshot observa los objetos `VolumeSnapshot` y `VolumeSnapshotContent` y es responsable de la creaci\u00f3n y eliminaci\u00f3n del objeto `VolumeSnapshotContent`. El sidecar csi-snapshotter observa los objetos `VolumeSnapshotContent` y activa las operaciones `CreateSnapshot` y `DeleteSnapshot` en un punto final CSI.\n* Tambi\u00e9n hay un servidor webhook de validaci\u00f3n que proporciona una validaci\u00f3n m\u00e1s estricta en los objetos Snapshot. Esto debe ser instalado por las distribuciones de Kubernetes junto con el controlador de Snapshots y los CRDs, no los controladores CSI. 
Debe instalarse en todos los cl\u00fasteres de Kubernetes que tengan habilitada la funci\u00f3n de Snapshot.\n* Los controladores CSI pueden haber implementado o no la funcionalidad de Snapshot de volumen. Los controladores CSI que han proporcionado soporte para Snapshot de volumen probablemente usar\u00e1n csi-snapshotter. Consulte [CSI Driver documentation](https:\/\/kubernetes-csi.github.io\/docs\/) para obtener m\u00e1s detalles.\n* Los CRDs y las instalaciones del controlador de Snapshot son responsabilidad de la distribuci\u00f3n de Kubernetes.\n\n## Ciclo de vida de un Snapshot de volumen y el contenido de un Snapshot de volumen.\n\n`VolumeSnapshotContents` son recursos en el cl\u00faster. `VolumeSnapshots` son solicitudes de esos recursos. La interacci\u00f3n entre `VolumeSnapshotContents` y `VolumeSnapshots` sigue este ciclo de vida:\n\n### Snapshot del volumen de aprovisionamiento\n\nHay dos formas de aprovisionar los Snapshots: aprovisionadas previamente o aprovisionadas din\u00e1micamente.\n\n#### Pre-aprovisionado {#static}\nUn administrador de cl\u00faster crea una serie de `VolumeSnapshotContents`. Llevan los detalles del Snapshot del volumen real en el sistema de almacenamiento que est\u00e1 disponible para que lo utilicen los usuarios del cl\u00faster. Existen en la API de Kubernetes y est\u00e1n disponibles para su consumo.\n\n#### Din\u00e1mica\nEn lugar de utilizar un Snapshot preexistente, puede solicitar que se tome una Snapshot din\u00e1micamente de un PersistentVolumeClaim. El [VolumeSnapshotClass](\/docs\/concepts\/storage\/volume-snapshot-classes\/) especifica los par\u00e1metros espec\u00edficos del proveedor de almacenamiento para usar al tomar una Snapshot.\n\n### Vinculante\n\nEl controlador de Snapshots maneja el enlace de un objeto `VolumeSnapshot` con un objeto `VolumeSnapshotContent` apropiado, tanto en escenarios de aprovisionamiento previo como de aprovisionamiento din\u00e1mico. 
El enlace es un mapeo uno a uno.\n\nEn el caso de un enlace aprovisionado previamente, el VolumeSnapshot permanecer\u00e1 sin enlazar hasta que se cree el objeto VolumeSnapshotContent solicitado.\n\n### Persistent Volume Claim como Snapshot Source Protection\n\nEl prop\u00f3sito de esta protecci\u00f3n es garantizar que los objetos de la API\n{{< glossary_tooltip text=\"PersistentVolumeClaim\" term_id=\"persistent-volume-claim\" >}}\nen uso, no se eliminen del sistema mientras se toma un Snapshot (ya que esto puede resultar en la p\u00e9rdida de datos).\n\nMientras se toma un Snapshot de un PersistentVolumeClaim, ese PersistentVolumeClaim est\u00e1 en uso. Si elimina un objeto de la API PersistentVolumeClaim en uso activo como fuente de Snapshot, el objeto PersistentVolumeClaim no se elimina de inmediato. En cambio, la eliminaci\u00f3n del objeto PersistentVolumeClaim se pospone hasta que el Snapshot est\u00e9 readyToUse o se cancele.\n\n### Borrar\n\nLa eliminaci\u00f3n se activa al eliminar el objeto `VolumeSnapshot`, y se seguir\u00e1 la `DeletionPolicy`. S\u00ed `DeletionPolicy` es `Delete`, entonces el Snapshot de almacenamiento subyacente se eliminar\u00e1 junto con el objeto `VolumeSnapshotContent`. S\u00ed `DeletionPolicy` es `Retain`, tanto el Snapshot subyacente como el `VolumeSnapshotContent` permanecen.\n\n## VolumeSnapshots\n\nCada VolumeSnapshot contiene una especificaci\u00f3n y un estado.\n\n```yaml\napiVersion: snapshot.storage.k8s.io\/v1\nkind: VolumeSnapshot\nmetadata:\n name: new-snapshot-test\nspec:\n volumeSnapshotClassName: csi-hostpath-snapclass\n source:\n persistentVolumeClaimName: pvc-test\n```\n\n`persistentVolumeClaimName` es el nombre de la fuente de datos PersistentVolumeClaim para el Snapshot. 
Este campo es obligatorio para aprovisionar din\u00e1micamente un Snapshot.\n\nUn Snapshot de volumen puede solicitar una clase particular especificando el nombre de un [VolumeSnapshotClass](\/docs\/concepts\/storage\/volume-snapshot-classes\/)\nutilizando el atributo `volumeSnapshotClassName`. Si no se establece nada, se usa la clase predeterminada si est\u00e1 disponible.\n\nPara los Snapshots aprovisionadas previamente, debe especificar un `volumeSnapshotContentName` como el origen del Snapshot, como se muestra en el siguiente ejemplo. El campo de origen `volumeSnapshotContentName` es obligatorio para los Snapshots aprovisionados previamente.\n\n```yaml\napiVersion: snapshot.storage.k8s.io\/v1\nkind: VolumeSnapshot\nmetadata:\n name: test-snapshot\nspec:\n source:\n volumeSnapshotContentName: test-content\n```\n\n## Contenido del Snapshot de volumen\n\nCada VolumeSnapshotContent contiene una especificaci\u00f3n y un estado. En el aprovisionamiento din\u00e1mico, el controlador com\u00fan de Snapshots crea objetos `VolumeSnapshotContent`. Aqu\u00ed hay un ejemplo:\n\n```yaml\napiVersion: snapshot.storage.k8s.io\/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: snapcontent-72d9a349-aacd-42d2-a240-d775650d2455\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n volumeHandle: ee0cfb94-f8d4-11e9-b2d8-0242ac110002\n volumeSnapshotClassName: csi-hostpath-snapclass\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n uid: 72d9a349-aacd-42d2-a240-d775650d2455\n```\n\n`volumeHandle` es el identificador \u00fanico del volumen creado en el backend de almacenamiento y devuelto por el controlador CSI durante la creaci\u00f3n del volumen. Este campo es obligatorio para aprovisionar din\u00e1micamente un Snapshot. 
Especifica el origen del volumen del Snapshot.\n\nPara los Snapshots aprovisionados previamente, usted (como administrador del cl\u00faster) es responsable de crear el objeto `VolumeSnapshotContent` de la siguiente manera.\n\n```yaml\napiVersion: snapshot.storage.k8s.io\/v1\nkind: VolumeSnapshotContent\nmetadata:\n name: new-snapshot-content-test\nspec:\n deletionPolicy: Delete\n driver: hostpath.csi.k8s.io\n source:\n snapshotHandle: 7bdd0de3-aaeb-11e8-9aae-0242ac110002\n volumeSnapshotRef:\n name: new-snapshot-test\n namespace: default\n```\n\n`snapshotHandle` es el identificador \u00fanico del Snapshot de volumen creado en el backend de almacenamiento. Este campo es obligatorio para las Snapshots aprovisionadas previamente. Especifica el ID del Snapshot CSI en el sistema de almacenamiento que representa el `VolumeSnapshotContent`.\n\n## Aprovisionamiento de Vol\u00famenes a partir de Snapshots\n\nPuede aprovisionar un nuevo volumen, rellenado previamente con datos de una Snapshot, mediante el campo *dataSource* en el objeto `PersistentVolumeClaim`.\n\nPara obtener m\u00e1s detalles, consulte\n[Volume Snapshot and Restore Volume from Snapshot](\/docs\/concepts\/storage\/persistent-volumes\/#volume-snapshot-and-restore-volume-from-snapshot-support).\n","avg_line_length":61.1633986928,"max_line_length":586,"alphanum_fraction":0.8139559735} +{"size":63,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"# meta-alaric\nYocto BSP layer for the REFLEX CES Alaric board \n","avg_line_length":21.0,"max_line_length":48,"alphanum_fraction":0.7777777778} +{"size":304,"ext":"md","lang":"Markdown","max_stars_count":2186.0,"content":"Please complete all sections.\n\n### Configuration\n\n- Provider Gem: `omniauth-*`\n- Ruby Version: ``\n- Framework: ``\n- Platform: ``\n\n### Expected Behavior\n\nTell us what should happen.\n\n### Actual Behavior\n\nTell us what happens instead.\n\n### Steps to Reproduce\n\nPlease list all steps to reproduce the 
issue.\n","avg_line_length":14.4761904762,"max_line_length":45,"alphanum_fraction":0.7039473684} +{"size":2564,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"Planetary Input \/ Output [![Join the chat at https:\/\/gitter.im\/USGS-Astrogeology\/plio](https:\/\/badges.gitter.im\/Join%20Chat.svg)](https:\/\/gitter.im\/USGS-Astrogeology\/plio?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)\n===============================\n\nA planetary surface data input\/output library written in Python. The release version of `plio` is available via conda-forge. \n\nCurrent build status\n====================\n\n[![Linux](https:\/\/img.shields.io\/circleci\/project\/github\/conda-forge\/plio-feedstock\/master.svg?label=Linux)](https:\/\/circleci.com\/gh\/conda-forge\/plio-feedstock)\n[![OSX](https:\/\/img.shields.io\/travis\/conda-forge\/plio-feedstock\/master.svg?label=macOS)](https:\/\/travis-ci.org\/conda-forge\/plio-feedstock)\n[![Windows](https:\/\/img.shields.io\/appveyor\/ci\/conda-forge\/plio-feedstock\/master.svg?label=Windows)](https:\/\/ci.appveyor.com\/project\/conda-forge\/plio-feedstock)\n\nCurrent release info\n====================\n\n| Name | Downloads | Version | Platforms |\n| --- | --- | --- | --- |\n| [![Conda Recipe](https:\/\/img.shields.io\/badge\/recipe-plio-green.svg)](https:\/\/anaconda.org\/conda-forge\/plio) | [![Conda Downloads](https:\/\/img.shields.io\/conda\/dn\/conda-forge\/plio.svg)](https:\/\/anaconda.org\/conda-forge\/plio) | [![Conda Version](https:\/\/img.shields.io\/conda\/vn\/conda-forge\/plio.svg)](https:\/\/anaconda.org\/conda-forge\/plio) | [![Conda Platforms](https:\/\/img.shields.io\/conda\/pn\/conda-forge\/plio.svg)](https:\/\/anaconda.org\/conda-forge\/plio) |\n\nInstalling plio\n===============\n\nInstalling `plio` from the `conda-forge` channel can be achieved by adding `conda-forge` to your channels with:\n\n```\nconda config --add channels conda-forge\n```\n\nOnce the 
`conda-forge` channel has been enabled, `plio` can be installed with:\n\n```\nconda install plio\n```\n\nIt is possible to list all of the versions of `plio` available on your platform with:\n\n```\nconda search plio --channel conda-forge\n```\n\nInstalling development branch of plio\n=====================================\n\nWe maintain a development branch of plio that is used as a staging area for our releases. The badges and information below describe the bleeding edge builds.\n\n[![Build Status](https:\/\/travis-ci.org\/USGS-Astrogeology\/plio.svg?branch=dev)](https:\/\/travis-ci.org\/USGS-Astrogeology\/plio)\n\n[![Coverage Status](https:\/\/coveralls.io\/repos\/github\/USGS-Astrogeology\/plio\/badge.svg?branch=master)](https:\/\/coveralls.io\/github\/USGS-Astrogeology\/plio?branch=master)\n\nTo install the development version: \n\n```\nconda install -c usgs-astrogeology\/label\/dev plio\n```\n","avg_line_length":46.6181818182,"max_line_length":458,"alphanum_fraction":0.7164586583} +{"size":5950,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"Contributing Code to react-server\n---------------------------------\n\n## Want to help?\n\nGreat! There's a lot to do!\n\nYou could:\n\n- [Improve documentation][improve-documentation]\n- [Fix bugs][fix-bugs]\n- [Build new features][build-new-features]\n- [Help triage issues][help-triage-issues]\n\nNot sure where to start? Join us [on slack](https:\/\/slack.react-server.io\/) and ask!\n\n## Getting started\n\nReact Server uses a tool called [Lerna](https:\/\/www.npmjs.com\/package\/lerna) to\nmanage the entire React Server ecosystem in a single repository. This allows\nmaintainers to know that everything works together well, without having to know\nall of the inner workings of each package in the monorepo. It does present some\nchallenges to new developers, since the idea is relatively new. 
If you want to\nknow more about why we chose a monorepo, check out the [babel monorepo design\ndoc](https:\/\/github.com\/babel\/babel\/blob\/master\/doc\/design\/monorepo.md),\nespecially the [previous discussion](https:\/\/github.com\/babel\/babel\/blob\/master\/doc\/design\/monorepo.md#previous-discussion).\n\nTo get your local clone into working order, run:\n\n```\nnpm run bootstrap\n```\n\nIf things get hairy and you have errors that you don't understand, you can get\na clean install of all the dependencies:\n\n```\nnpm run nuke\nnpm run bootstrap\n```\n\nIf you've been running `bootstrap` to track down build errors a lot, and have a\nlot of debug files lying around, you can clean them up:\n\n```\nnpm run clean\n```\n\nMost commands have a corresponding `lerna` command; if you want to find out more,\nyou can look at the `scripts` hash in the root `package.json`, check out the\n[lerna docs](https:\/\/github.com\/lerna\/lerna), and run the lerna commands yourself:\n\n```\nnpm i -g lerna david\nlerna clean\nlerna bootstrap\nlerna run lint\nlerna exec -- david u\n```\n\nYou can also work on a single package by `cd`-ing into that module, and using\nnormal `npm` scripts:\n\n```\ncd packages\/generator-react-server\nnpm i\nnpm test\n```\n\nbut you should still run a full monorepo build and test before submitting a PR.\n\n## Testing\n\nYeah! 
Do it!\n\nHead over to [react-server-test-pages](\/packages\/react-server-test-pages) and\ncheck out the README to get a test server set up.\n\nAdd some automated [integration\ntests](\/packages\/react-server-integration-tests) if you're up for it.\n\nIf nothing else, check for regressions:\n\n```bash\nnpm test\n```\n\nThat will, among other things, run [`eslint`](\/.eslintrc).\n\nIf you would like to test your changes in [React Server\ncore](\/packages\/react-server) in a project from outside of the monorepo, you'll\nneed to use `npm install` with a local file path to do it.\n\n```bash\ncd \/path\/to\/my\/project\nnpm install \/path\/to\/react-server\/packages\/react-server\n```\n\nMake sure you install the path to [React Server core](\/packages\/react-server),\ninstead of the monorepo root, or else you'll get the following error\n\n```\nnpm ERR! addLocal Could not install \/Users\/vince.chang\/code\/react-server\nnpm ERR! Darwin 15.2.0\nnpm ERR! argv \"\/path\/to\/node\/v4.3.1\/bin\/node\" \"\/path\/to\/node\/v4.3.1\/bin\/npm\" \"i\" \"\/path\/to\/react-server\"\nnpm ERR! node v4.3.1\nnpm ERR! npm v2.14.12\n\nnpm ERR! No name provided in package.json\nnpm ERR!\nnpm ERR! If you need help, you may report this error at:\nnpm ERR! \u00a0 \u00a0 \n\nnpm ERR! Please include the following file with any support request:\nnpm ERR! \u00a0 \u00a0 \/path\/to\/my\/project\/npm-debug.log\n```\n\nYou can't use `npm link`, since when you `npm link \/path\/to\/react-server`,\nReact Server and your instance will use separate versions of `react`,\n`request-local-storage`, `q`, `superagent` &c, which introduces bugs in React\nServer. 
Even if you were to remove all those singleton modules from your client,\nor from React Server code, then you\u2019d still have problems, because\n`react-server` doesn\u2019t have access to your instance's dependencies, and vice\nversa.\n\nNote that if you would like to test changes in a fork of React Server, you can't\ninstall from your github fork because it will attempt to install the monorepo\ninstead of React Server core, and the monorepo is not a valid node module.\nInstead, we recommend you publish a test version into an npm private repository,\nlike [npm Enterprise](https:\/\/docs.npmjs.com\/enterprise\/index) or\n[Sinopia](https:\/\/github.com\/rlidwka\/sinopia).\n\nIf you add a new test that starts a server, make sure to update the\ntesting port registry to make sure there aren't any conflicts. Our\nCI test target runs many packages' tests simultaneously, so it's\nimportant that every server starts on a unique port. You can find\nthe manifest in the docs.\n\nIf you're making changes to the website, you'll want it to use the docs that are\nchecked out on disk, rather than on github (we get the markdown directly from\ngithub so we don't have to redeploy the site every time we update the docs). You\ncan do this by setting the `LOCAL_DOCS` environment variable.\n\n```sh\nLOCAL_DOCS=1 npm start\n```\n\n## Contributor License Agreement\n\nTo get started, please [sign the Contributor License\nAgreement](https:\/\/cla-assistant.io\/redfin\/react-server). The purpose\nof this license is to protect contributors, Redfin, as well as users\nof this software project. 
Signing this agreement does not affect your\nrights to use your contributions for any other purpose.\n\n## Code of Conduct\n\nPlease note that this project is released with a [Contributor Code of\nConduct](\/CODE_OF_CONDUCT.md).\nBy participating in this project you agree to abide by its terms.\n\n## Thanks!\n\nThanks for contributing!\n\n\n[improve-documentation]: https:\/\/github.com\/redfin\/react-server\/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22+label%3Adocumentation\n[fix-bugs]: https:\/\/github.com\/redfin\/react-server\/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22+label%3Abug\n[build-new-features]: https:\/\/github.com\/redfin\/react-server\/issues?q=is%3Aopen+is%3Aissue+label%3A\"help+wanted\"+label%3Aenhancement\n[help-triage-issues]: https:\/\/github.com\/redfin\/react-server\/issues\n","avg_line_length":34.7953216374,"max_line_length":141,"alphanum_fraction":0.7566386555} +{"size":1833,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"---\ntitle: El formato autom\u00e1tico en un servicio de lenguaje heredado | Microsoft Docs\nms.custom: ''\nms.date: 11\/15\/2016\nms.prod: visual-studio-dev14\nms.reviewer: ''\nms.suite: ''\nms.technology:\n- vs-ide-sdk\nms.tgt_pltfrm: ''\nms.topic: article\nhelpviewer_keywords:\n- language services, automatic formatting\nms.assetid: c210fc94-77bd-4694-b312-045087d8a549\ncaps.latest.revision: 11\nms.author: gregvanl\nmanager: ghogen\nms.openlocfilehash: c1e9be96038334edeb9163c15d16a98999bd0c2e\nms.sourcegitcommit: af428c7ccd007e668ec0dd8697c88fc5d8bca1e2\nms.translationtype: MT\nms.contentlocale: es-ES\nms.lasthandoff: 11\/16\/2018\nms.locfileid: \"51795497\"\n---\n# <\/a>Formato autom\u00e1tico en un servicio de lenguaje heredado\n[!INCLUDE[vs2017banner](..\/..\/includes\/vs2017banner.md)]\n\nCon el formato autom\u00e1tico, un servicio de lenguaje inserta autom\u00e1ticamente un fragmento de c\u00f3digo cuando un usuario empieza a escribir una construcci\u00f3n de c\u00f3digo 
conocidos. \n \n## <\/a>Comportamiento de formato autom\u00e1tico \n Por ejemplo, si escribe `if`, el servicio de lenguaje inserta autom\u00e1ticamente las llaves coincidentes, o si presiona la tecla ENTRAR, el servicio de lenguaje fuerza el punto de inserci\u00f3n en la l\u00ednea nueva para el nivel de sangr\u00eda adecuado, dependiendo de si la anterior Abre un nuevo \u00e1mbito de l\u00ednea. \n \n Tambi\u00e9n se puede usar el filtro de comando usado para el resto del servicio de lenguaje para el formato autom\u00e1tico. Tambi\u00e9n puede resaltar las llaves coincidentes mediante una llamada a . \n \n## <\/a>Vea tambi\u00e9n \n [Desarrollo de un servicio de lenguaje heredado](..\/..\/extensibility\/internals\/developing-a-legacy-language-service.md)\n\n","avg_line_length":48.2368421053,"max_line_length":303,"alphanum_fraction":0.7937806874} +{"size":575,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"> \u6b32\u6234\u7687\u51a0\uff0c\u5fc5\u5148\u627f\u8d77\u91cd \n\n\u5927\u5bb6\u597d\uff0c\u6211\u662f\u8bb8\u632f\u96ea\uff0c\u73b0\u5c31\u8bfb\u4e8e\u8d35\u5dde\u5927\u5b66\u8ba1\u7b97\u673a\u5b66\u9662\uff0c\u8d35\u5dde\u7701\u5148\u8fdb\u8ba1\u7b97\u4e0e\u533b\u7597\u5de5\u7a0b&&\u6570\u636e\u5e93\u5b9e\u9a8c\u5ba4\uff0c\u5b66\u4e60\u548c\u4ece\u4e8b\u4e00\u4e9bJava Web\u548c\u6570\u636e\u76f8\u5173\u7684\u4e1c\u897f\u3002\u76ee\u524d\u7814\u7a76\u65b9\u5411\u5927\u81f4\u4e3aETL\u4efb\u52a1\u8c03\u5ea6\u76f8\u5173\uff0c\u5e0c\u671b\u53ef\u4ee5\u548c\u5927\u5bb6\u591a\u4ea4\u6d41\u3002\n\n##### Education\n###### 2013\u5e749\u6708-2017\u5e747\u6708\n- [\u534e\u4fa8\u5927\u5b66][4]\u00b7\u672c\u79d1\u00b7\u4fe1\u606f\u7ba1\u7406\u4e0e\u4fe1\u606f\u7cfb\u7edf \u00b7 [HQU][1] \u6cc9\u5dde 2013\n###### 2017\u5e749\u6708-\u81f3\u4eca\n- [\u8d35\u5dde\u5927\u5b66][3]\u00b7\u7855\u58eb\u00b7\u8ba1\u7b97\u673a\u79d1\u5b66\u4e0e\u6280\u672f \u00b7 [GZU][2] \u8d35\u9633 2017\n\n#### Interest\n\n- Java Web\u5168\u6808\u5f00\u53d1\n- 
\u5927\u89c4\u6a21\u6570\u636e\u5904\u7406\n- \u6570\u636e\u6316\u6398\u4e0e\u5206\u6790\n- \u6570\u636e\u53ef\u89c6\u5316\n\n#### Experience\n\n- \u8d35\u9633\u5e02\u516c\u5b89\u4ea4\u901a\u7ba1\u7406\u5c40\u6570\u636e\u8d28\u91cf\u76d1\u6d4b\u5e73\u53f0--\u524d\u540e\u7aef\u5f00\u53d1 \u00b7\u8d35\u9633 2018\n\n[1]: http:\/\/www.hqu.edu.cn\/\n[2]: http:\/\/www.gzu.edu.cn\/\n[3]: https:\/\/baike.baidu.com\/item\/%E8%B4%B5%E5%B7%9E%E5%A4%A7%E5%AD%A6\n[4]: https:\/\/baike.baidu.com\/item\/%E5%8D%8E%E4%BE%A8%E5%A4%A7%E5%AD%A6","avg_line_length":23.0,"max_line_length":99,"alphanum_fraction":0.667826087} +{"size":5965,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"---\ntitle: \"Step 5: Add Enter Event Handlers for the NumericUpDown Controls | Microsoft Docs\"\nms.custom: \"\"\nms.date: \"11\/04\/2016\"\nms.reviewer: \"\"\nms.suite: \"\"\nms.technology: \n - \"vs-acquisition\"\nms.tgt_pltfrm: \"\"\nms.topic: \"article\"\nms.assetid: 45a99a5d-c881-4298-b74d-adb481dec5ee\ncaps.latest.revision: 18\nauthor: \"TerryGLee\"\nms.author: \"tglee\"\nmanager: ghogen\nms.workload: \n - \"multiple\"\n---\n# Step 5: Add Enter Event Handlers for the NumericUpDown Controls\nIn the fifth part of this tutorial, you'll add Enter event handlers to make entering answers for quiz problems a little easier. This code will select and clear the current value in each NumericUpDown control as soon as the quiz taker chooses it and starts to enter a different value. \n \n> [!NOTE]\n> This topic is part of a tutorial series about basic coding concepts. For an overview of the tutorial, see [Tutorial 2: Create a Timed Math Quiz](..\/ide\/tutorial-2-create-a-timed-math-quiz.md). \n \n### To verify the default behavior \n \n1. Run your program, and start the quiz. \n \n In the NumericUpDown control for the addition problem, the cursor flashes next to **0** (zero). \n \n2. Enter `3`, and note that the control shows **30**. \n \n3. 
Enter `5`, and note that **350** appears but changes to **100** after a second. \n \n Before you fix this problem, think about what's happening. Consider why the **0** didn't disappear when you entered `3` and why **350** changed to **100** but not immediately. \n \n This behavior may seem odd, but it makes sense given the logic of the code. When you choose the **Start** button, its **Enabled** property is set to **False**, and the button appears dimmed and is unavailable. Your program changes the current selection (focus) to the control that has the next lowest TabIndex value, which is the NumericUpDown control for the addition problem. When you use the Tab key to go to a NumericUpDown control, the cursor is automatically positioned at the start of the control, which is why the numbers that you enter appear from the left side and not the right side. When you specify a number that's higher than the value of the **MaximumValue** property, which is set to 100, the number that you enter is replaced with the value of that property. \n \n### To add an Enter event handler for a NumericUpDown control \n \n1. Choose the first NumericUpDown control (named \"sum\") on the form, and then, in the **Properties** dialog box, choose the **Events** icon on the toolbar. \n \n The **Events** tab in the **Properties** dialog box displays all of the events that you can respond to (handle) for the item that you choose on the form. Because you chose the NumericUpDown control, all of the events listed pertain to it. \n \n2. Choose the **Enter** event, enter `answer_Enter`, and then choose the Enter key. \n \n ![Properties dialog box](..\/ide\/media\/express_answerenter.png \"Express_AnswerEnter\") \nProperties dialog box \n \n You've just added an Enter event handler for the sum NumericUpDown control, and you've named the handler **answer_Enter**. \n \n3. In the method for the **answer_Enter** event handler, add the following code. 
\n \n [!code-vb[VbExpressTutorial3Step5_6#11](..\/ide\/codesnippet\/VisualBasic\/step-5-add-enter-event-handlers-for-the-numericupdown-controls_1.vb)]\n [!code-csharp[VbExpressTutorial3Step5_6#11](..\/ide\/codesnippet\/CSharp\/step-5-add-enter-event-handlers-for-the-numericupdown-controls_1.cs)] \n \n This code may look complex, but you can understand it if you look at it step by step. First, look at the top of the method: `object sender` in C# or `sender As System.Object` in Visual Basic. This parameter refers to the object whose event is firing, which is known as the sender. In this case, the sender object is the NumericUpDown control. So, in the first line of the method, you specify that the sender isn't just any generic object but specifically a NumericUpDown control. (Every NumericUpDown control is an object, but not every object is a NumericUpDown control.) The NumericUpDown control is named **answerBox** in this method, because it will be used for all of the NumericUpDown controls on the form, not just the sum NumericUpDown control. Because you declare the answerBox variable in this method, its scope applies only to this method. In other words, the variable can be used only within this method. \n \n The next line verifies whether answerBox was successfully converted (cast) from an object to a NumericUpDown control. If the conversion was unsuccessful, the variable would have a value of `null` (C#) or `Nothing` (Visual Basic). The third line gets the length of the answer that appears in the NumericUpDown control, and the fourth line selects the current value in the control based on this length. Now, when the quiz taker chooses the control, Visual Studio fires this event, which causes the current answer to be selected. As soon as the quiz taker starts to enter a different answer, the previous answer is cleared and replaced with the new answer. \n \n4. In Windows Forms Designer, choose the difference NumericUpDown control. \n \n5. 
In the **Events** page of the **Properties** dialog box, scroll down to the **Enter** event, choose the drop-down arrow at the end of the row, and then choose the `answer_Enter` event handler that you just added. \n \n6. Repeat the previous step for the product and quotient NumericUpDown controls. \n \n7. Save your program, and then run it. \n \n When you choose a NumericUpDown control, the existing value is automatically selected and then cleared when you start to enter a different value. \n \n### To continue or review \n \n- To go to the next tutorial step, see [Step 6: Add a Subtraction Problem](..\/ide\/step-6-add-a-subtraction-problem.md). \n \n- To return to the previous tutorial step, see [Step 4: Add the CheckTheAnswer() Method](..\/ide\/step-4-add-the-checktheanswer-parens-method.md).","avg_line_length":79.5333333333,"max_line_length":923,"alphanum_fraction":0.7466890193} +{"size":30,"ext":"md","lang":"Markdown","max_stars_count":3.0,"content":"---\ntitle: Philip Fleming\n---\n","avg_line_length":7.5,"max_line_length":21,"alphanum_fraction":0.6} +{"size":225,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"---\nlayout: archive\ntitle: \"Contact\"\npermalink: \/contact\/\nauthor_profile: true\n---\nCollege of Engineering and Computer Science, Australian National University
\nB345 Brian Anderson Building
\nEmail: zjnwpu [at] gmail.com\n\n","avg_line_length":20.4545454545,"max_line_length":79,"alphanum_fraction":0.7733333333} +{"size":8710,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"---\ntitle: Informacje o autoryzacji w mikrous\u0142ugach .NET i aplikacjach internetowych\ndescription: Zabezpieczenia w mikrous\u0142ugach platformy .NET i aplikacjach sieci Web \u2014 zapoznaj si\u0119 z om\u00f3wieniem g\u0142\u00f3wnych opcji autoryzacji w aplikacjach ASP.NET Core \u2014 opartych na rolach i opartych na zasadach.\nauthor: mjrousos\nms.date: 01\/30\/2020\nms.openlocfilehash: 27936a33ea2bb46cedb9d10ee47a2117e1843e14\nms.sourcegitcommit: e3cbf26d67f7e9286c7108a2752804050762d02d\nms.translationtype: MT\nms.contentlocale: pl-PL\nms.lasthandoff: 04\/09\/2020\nms.locfileid: \"80988209\"\n---\n#
<\/a>Informacje o autoryzacji w mikrous\u0142ugach .NET i aplikacjach internetowych\n\nPo uwierzytelnieniu ASP.NET core interfejs\u00f3w API sieci Web musz\u0105 autoryzowa\u0107 dost\u0119p. Ten proces umo\u017cliwia us\u0142udze udost\u0119pnianie interfejs\u00f3w API niekt\u00f3rym uwierzytelnionym u\u017cytkownikom, ale nie wszystkim. [Autoryzacja](\/aspnet\/core\/security\/authorization\/introduction) mo\u017ce by\u0107 wykonywana na podstawie r\u00f3l u\u017cytkownik\u00f3w lub na podstawie zasad niestandardowych, kt\u00f3re mog\u0105 obejmowa\u0107 inspekcj\u0119 o\u015bwiadcze\u0144 lub innych heurystyki.\n\nOgraniczenie dost\u0119pu do ASP.NET trasy Core MVC jest tak proste, jak zastosowanie atrybutu Authorize do metody akcji (lub do klasy kontrolera, je\u015bli wszystkie akcje kontrolera wymagaj\u0105 autoryzacji), jak pokazano na poni\u017cszym przyk\u0142adzie:\n\n```csharp\npublic class AccountController : Controller\n{\n public ActionResult Login()\n {\n }\n\n [Authorize]\n public ActionResult Logout()\n {\n }\n}\n```\n\nDomy\u015blnie dodanie atrybutu Authorize bez parametr\u00f3w ograniczy dost\u0119p do uwierzytelnionych u\u017cytkownik\u00f3w dla tego kontrolera lub akcji. Aby dodatkowo ograniczy\u0107 interfejs API, kt\u00f3ry ma by\u0107 dost\u0119pny tylko dla okre\u015blonych u\u017cytkownik\u00f3w, atrybut mo\u017cna rozwin\u0105\u0107, aby okre\u015bli\u0107 wymagane role lub zasady, kt\u00f3re u\u017cytkownicy musz\u0105 spe\u0142ni\u0107.\n\n## <\/a>Wdra\u017canie autoryzacji opartej na rolach\n\nASP.NET Podstawowa to\u017csamo\u015b\u0107 ma wbudowan\u0105 koncepcj\u0119 r\u00f3l. Opr\u00f3cz u\u017cytkownik\u00f3w ASP.NET To\u017csamo\u015b\u0107 podstawowa przechowuje informacje o r\u00f3\u017cnych rolach u\u017cywanych przez aplikacj\u0119 i \u015bledzi, kt\u00f3rzy u\u017cytkownicy s\u0105 przypisani do r\u00f3l. 
Te przypisania mo\u017cna zmieni\u0107 programowo z typem, `RoleManager` kt\u00f3ry aktualizuje role w utrwalone magazynu i `UserManager` typu, kt\u00f3ry mo\u017ce przyzna\u0107 lub odwo\u0142a\u0107 role od u\u017cytkownik\u00f3w.\n\nJe\u015bli uwierzytelniasz si\u0119 za pomoc\u0105 token\u00f3w no\u015bnych JWT, ASP.NET core JWT oprogramowanie po\u015brednicz\u0105ce uwierzytelniania na okaziciela wype\u0142ni role u\u017cytkownika na podstawie o\u015bwiadcze\u0144 roli znalezionych w tokenie. Aby ograniczy\u0107 dost\u0119p do akcji lub kontrolera MVC do u\u017cytkownik\u00f3w w okre\u015blonych rolach, mo\u017cna do\u0142\u0105czy\u0107 parametr Roles w adnotacji Autoryzacyjnej (atrybut), jak pokazano w nast\u0119puj\u0105cym fragmencie kodu:\n\n```csharp\n[Authorize(Roles = \"Administrator, PowerUser\")]\npublic class ControlPanelController : Controller\n{\n public ActionResult SetTime()\n {\n }\n\n [Authorize(Roles = \"Administrator\")]\n public ActionResult ShutDown()\n {\n }\n}\n```\n\nW tym przyk\u0142adzie tylko u\u017cytkownicy w rolach administratora lub PowerUser mog\u0105 uzyskiwa\u0107 dost\u0119p do interfejs\u00f3w API w kontrolerze ControlPanel (na przyk\u0142ad wykonywania akcji SetTime). 
Interfejs API ShutDown jest dodatkowo ograniczony, aby zezwoli\u0107 na dost\u0119p tylko do u\u017cytkownik\u00f3w w roli administratora.\n\nAby wymaga\u0107, aby u\u017cytkownik by\u0142 w wielu rolach, nale\u017cy u\u017cy\u0107 wielu atrybut\u00f3w Autoryzuj, jak pokazano w poni\u017cszym przyk\u0142adzie:\n\n```csharp\n[Authorize(Roles = \"Administrator, PowerUser\")]\n[Authorize(Roles = \"RemoteEmployee \")]\n[Authorize(Policy = \"CustomPolicy\")]\npublic ActionResult API1 ()\n{\n}\n```\n\nW tym przyk\u0142adzie, aby wywo\u0142a\u0107 API1, u\u017cytkownik musi:\n\n- By\u0107 w roli administratora *lub* PowerUser, *i*\n\n- Wcieli\u0107 si\u0119 w rol\u0119 RemoteEmployee *i*\n\n- Spe\u0142nij niestandardowy program obs\u0142ugi autoryzacji CustomPolicy.\n\n## <\/a>Wdra\u017canie autoryzacji opartej na zasadach\n\nRegu\u0142y autoryzacji niestandardowej mo\u017cna r\u00f3wnie\u017c zapisywa\u0107 przy u\u017cyciu [zasad autoryzacji](https:\/\/docs.asp.net\/en\/latest\/security\/authorization\/policies.html). Ta sekcja zawiera om\u00f3wienie. Aby uzyska\u0107 wi\u0119cej informacji, zobacz [warsztaty autoryzacji ASP.NET](https:\/\/github.com\/blowdart\/AspNetAuthorizationWorkshop).\n\nZasady autoryzacji niestandardowej s\u0105 rejestrowane w metodzie Startup.ConfigureServices przy u\u017cyciu us\u0142ugi. AddAuthorization metody. Ta metoda przyjmuje pe\u0142nomocnika, kt\u00f3ry konfiguruje argument AuthorizationOptions.\n\n```csharp\nservices.AddAuthorization(options =>\n{\n options.AddPolicy(\"AdministratorsOnly\", policy =>\n policy.RequireRole(\"Administrator\"));\n\n options.AddPolicy(\"EmployeesOnly\", policy =>\n policy.RequireClaim(\"EmployeeNumber\"));\n\n options.AddPolicy(\"Over21\", policy =>\n policy.Requirements.Add(new MinimumAgeRequirement(21)));\n});\n```\n\nJak pokazano w przyk\u0142adzie, zasady mog\u0105 by\u0107 skojarzone z r\u00f3\u017cnymi typami wymaga\u0144. 
Po zarejestrowaniu zasad mo\u017cna je zastosowa\u0107 do akcji lub kontrolera, przekazuj\u0105c nazw\u0119 zasad jako argument Zasad atrybutu Authorize (na przyk\u0142ad `[Authorize(Policy=\"EmployeesOnly\")]`) Zasady mog\u0105 mie\u0107 wiele wymaga\u0144, a nie tylko jeden (jak pokazano w tych przyk\u0142adach).\n\nW poprzednim przyk\u0142adzie pierwsze wywo\u0142anie AddPolicy jest tylko alternatywnym sposobem autoryzowania przez rol\u0119. Je\u015bli `[Authorize(Policy=\"AdministratorsOnly\")]` jest stosowany do interfejsu API, tylko u\u017cytkownicy w roli administratora b\u0119d\u0105 mogli uzyska\u0107 do niego dost\u0119p.\n\nDrugie wywo\u0142anie pokazuje \u0142atwy spos\u00f3b wymaga\u0107, \u017ce okre\u015blone o\u015bwiadczenie powinno by\u0107 obecne dla u\u017cytkownika. Metoda r\u00f3wnie\u017c opcjonalnie przyjmuje oczekiwane warto\u015bci dla o\u015bwiadczenia. Je\u015bli warto\u015bci s\u0105 okre\u015blone, wymaganie jest spe\u0142nione tylko wtedy, gdy u\u017cytkownik ma zar\u00f3wno o\u015bwiadczenie prawid\u0142owego typu i jedn\u0105 z okre\u015blonych warto\u015bci. Je\u015bli u\u017cywasz oprogramowania po\u015brednicz\u0105cego uwierzytelniania na okaziciela JWT, wszystkie w\u0142a\u015bciwo\u015bci JWT b\u0119d\u0105 dost\u0119pne jako o\u015bwiadczenia u\u017cytkownika.\n\nNajciekawsze zasady pokazane tutaj jest `AddPolicy` w trzeciej metody, poniewa\u017c u\u017cywa wymagania autoryzacji niestandardowej. Za pomoc\u0105 wymaga\u0144 autoryzacji niestandardowej, mo\u017cna mie\u0107 du\u017c\u0105 kontrol\u0119 nad jak autoryzacja jest wykonywana. Aby to zadzia\u0142a\u0142o, nale\u017cy zaimplementowa\u0107 nast\u0119puj\u0105ce typy:\n\n- Typ wymagania, kt\u00f3ry pochodzi z i kt\u00f3ry zawiera pola okre\u015blaj\u0105ce szczeg\u00f3\u0142y wymagania. 
W tym przyk\u0142adzie jest to pole `MinimumAgeRequirement` wiekowe dla typu pr\u00f3bki.\n\n- Program obs\u0142ugi, kt\u00f3ry implementuje , gdzie T jest typem, kt\u00f3ry program obs\u0142ugi mo\u017ce spe\u0142ni\u0107. Program obs\u0142ugi musi implementowa\u0107 metod\u0119, kt\u00f3ra sprawdza, czy okre\u015blony kontekst, kt\u00f3ry zawiera informacje o u\u017cytkowniku spe\u0142nia wymagania.\n\nJe\u015bli u\u017cytkownik spe\u0142nia wymagania, wywo\u0142anie `context.Succeed` wskazuje, \u017ce u\u017cytkownik jest autoryzowany. Je\u015bli istnieje wiele sposob\u00f3w, \u017ce u\u017cytkownik mo\u017ce spe\u0142ni\u0107 wymagania autoryzacji, mo\u017cna utworzy\u0107 wiele program\u00f3w obs\u0142ugi.\n\nOpr\u00f3cz rejestrowania niestandardowych wymaga\u0144 `AddPolicy` dotycz\u0105cych zasad za pomoc\u0105 wywo\u0142a\u0144, nale\u017cy r\u00f3wnie\u017c`services.AddTransient()`zarejestrowa\u0107 niestandardowe programy obs\u0142ugi wymaga\u0144 za po\u015brednictwem iniekcji zale\u017cno\u015bci ( ).\n\nPrzyk\u0142ad wymagania autoryzacji niestandardowej i programu obs\u0142ugi do sprawdzania wieku `DateOfBirth` u\u017cytkownika (na podstawie o\u015bwiadczenia) jest dost\u0119pny w [dokumentacji autoryzacji](https:\/\/docs.asp.net\/en\/latest\/security\/authorization\/policies.html)ASP.NET Core .\n\n## <\/a>Zasoby dodatkowe\n\n- **Uwierzytelnianie ASP.NET rdzeniowe** \\\n [https:\/\/docs.microsoft.com\/aspnet\/core\/security\/authentication\/identity](\/aspnet\/core\/security\/authentication\/identity)\n\n- **Autoryzacja ASP.NET Core** \\\n [https:\/\/docs.microsoft.com\/aspnet\/core\/security\/authorization\/introduction](\/aspnet\/core\/security\/authorization\/introduction)\n\n- **Autoryzacja oparta na rolach** \\\n [https:\/\/docs.microsoft.com\/aspnet\/core\/security\/authorization\/roles](\/aspnet\/core\/security\/authorization\/roles)\n\n- **Autoryzacja oparta na zasadach niestandardowych** \\\n 
[https:\/\/docs.microsoft.com\/aspnet\/core\/security\/authorization\/policies](\/aspnet\/core\/security\/authorization\/policies)\n\n>[!div class=\"step-by-step\"]\n>[Poprzedni](index.md)\n>[nast\u0119pny](developer-app-secrets-storage.md)\n","avg_line_length":65.9848484848,"max_line_length":653,"alphanum_fraction":0.8150401837} +{"size":746,"ext":"md","lang":"Markdown","max_stars_count":3.0,"content":"# \u4ecb\u7ecd\n\n`vframework` \u662f\u4e00\u4e2a\u57fa\u7840\u7684`PHP`\u6846\u67b6\u3002\u4ed6\u5c06\u6846\u67b6\u7684\u5404\u4e2a\u8981\u7d20\u4ee5\u201c\u7ec4\u4ef6\u201d\u7684\u5f62\u5f0f\u6765\u63d0\u4f9b\uff0c\u4ece\u800c\u8ba9\u4f60\u53ef\u4ee5\u901a\u8fc7\u914d\u7f6e\u548c\u6dfb\u52a0\u81ea\u5df1\u7684\u7ec4\u4ef6\u6765\u5b9a\u5236\u51fa\u66f4\u9002\u5408\u81ea\u5df1\u9879\u76ee\u7684\u57fa\u7840\u67b6\u6784\u3002\n\n## \u5b89\u88c5\n\n\u5b89\u88c5`vframework` ,\u4f60\u53ef\u4ee5\u5728github\u4e0a\u4e0b\u8f7d\u6e90\u7801\u5305\u6216\u8005\u76f4\u63a5fork\u9879\u76ee\u7136\u540eclone\u5230\u672c\u5730. 
[\u4f20\u9001\u95e8](https:\/\/github.com\/dingusxp\/vframework)\n \n\u5f53\u7136\u4e3a\u4e86\u5207\u5408`\u73b0\u4ee3\u5316`\u7684php\uff0c\u8bf7\u4f7f\u7528php7.+\uff0c \u5982\u679c\u4f60\u4e0d\u60f3\u4f7f\u7528php7\uff0c`vframework`\u4e5f\u662f\u5411\u4e0b\u517c\u5bb9\u7684\u3002\n\n\u5efa\u8bae\u7684\u670d\u52a1\u5668\u73af\u5883\u8981\u6c42\n\n* PHP >= 7.2\n* OpenSSL PHP \u6269\u5c55\n* JSON PHP \u6269\u5c55\n* PDO PHP \u6269\u5c55 \uff08\u5982\u9700\u8981\u4f7f\u7528\u5230 MySQL \u5ba2\u6237\u7aef\uff09\n* Redis PHP \u6269\u5c55 \uff08\u5982\u9700\u8981\u4f7f\u7528\u5230 Redis \u5ba2\u6237\u7aef\uff09\n\n## \u76ee\u5f55\u7ed3\u6784\n\n`vframework`\u7684\u6587\u4ef6\u5939\u7ed3\u6784\n\n* framework\u76ee\u5f55(\u6846\u67b6\u76ee\u5f55)\n * config\u76ee\u5f55\n * language\u76ee\u5f55\n * library\u76ee\u5f55\n * shell\u76ee\u5f55\n * test\u76ee\u5f55\n * tpl\u76ee\u5f55\n * \u6846\u67b6\u5f15\u5bfc\u6587\u4ef6V.php\n* app\u76ee\u5f55(\u5e94\u7528\u7a0b\u5e8f\u76ee\u5f55)\n * config\u76ee\u5f55\n * language\u76ee\u5f55\n * library\u76ee\u5f55\n * shell\u76ee\u5f55\n * test\u76ee\u5f55\n * tpl\u76ee\u5f55\n * \u6846\u67b6\u5f15\u5bfc\u6587\u4ef6V.php\n\n### framework config\u76ee\u5f55\n\n \u8fd9\u4e2a\u76ee\u5f55\u5b58\u653e\u4e86\u6846\u67b6\u542f\u52a8\u7684\u914d\u7f6e\n \n### framework language\u76ee\u5f55\n\n \u6846\u67b6\u8bed\u8a00\u5305\u914d\u7f6e\n","avg_line_length":15.8723404255,"max_line_length":99,"alphanum_fraction":0.6742627346} +{"size":20,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"# Copy-Portal-Items\n","avg_line_length":10.0,"max_line_length":19,"alphanum_fraction":0.75} +{"size":5787,"ext":"md","lang":"Markdown","max_stars_count":null,"content":"---\ntitle: \"crunchy-postgres-appdev\"\ndate:\ndraft: false\nweight: 11\n---\n\nPostgreSQL (pronounced \"post-gress-Q-L\") is an open source, ACID compliant, relational database management system (RDBMS) developed by a worldwide team of volunteers. 
The crunchy-postgres-appdev container image is unmodified, open source PostgreSQL packaged and maintained by professionals.\n\nThis image is identical to the crunchy-postgres-gis image except that it is built specifically for ease of use for application developers. To achieve that, we have set reasonable defaults for some environment variables and removed some functionality needed for production usage (such as replication and backup). The **goal** for this image is to get application developers up and running as soon as possible with PostgreSQL, with most of the useful extensions and features pre-installed. \n\nTHIS IMAGE COMES WITH NO SUPPORT FROM CRUNCHY DATA. Support for this image is provided through community work on a good-faith basis. If you need support for your containers, please contact Crunchy Data to become a customer. \n\nThis image should NOT be used for production deployments. It shares most of the same configuration as the crunchy-postgres and crunchy-postgres-gis images. Therefore, you can use it as a test bed for developing applications that will eventually run on the supported containers. 
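As a quick illustration of the "up and running" goal, an application can assemble its connection settings from the environment variables this image documents (`PG_USER`, `PG_PASSWORD`, `PG_DATABASE`, `PG_PRIMARY_PORT`). This is a hedged sketch: `build_dsn`, `PG_HOST`, and the fallback defaults are illustrative assumptions, not part of the image itself.

```python
def build_dsn(env):
    # Assemble a libpq-style DSN from the image's documented variables.
    # PG_HOST and the fallback defaults here are illustrative assumptions,
    # not variables defined by the image itself.
    host = env.get("PG_HOST", "localhost")
    port = env.get("PG_PRIMARY_PORT", "5432")   # image default is 5432
    user = env.get("PG_USER", "postgres")
    password = env["PG_PASSWORD"]               # required by the image
    database = env.get("PG_DATABASE", user)
    return (f"host={host} port={port} user={user} "
            f"password={password} dbname={database}")

dsn = build_dsn({"PG_USER": "appdev", "PG_PASSWORD": "s3cret", "PG_DATABASE": "demo"})
```

The resulting string can be passed to any libpq-based client; the point is simply that every knob the container exposes maps directly onto a standard connection parameter.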
\n\n\n## Features\n\nThe following features are supported by the `crunchy-postgres-appdev` container:\n\n* Kubernetes and OpenShift secrets\n* Custom mounted configuration files (see below)\n* PostGIS\n* PL\/R\n\n## Packages\n\nThe crunchy-postgres-appdev Docker image contains the following packages (versions vary depending on the PostgreSQL version):\n\n* Latest PostgreSQL\n* Latest PostGIS\n* CentOS 7, CentOS 8 - publicly available\n\n## Environment Variables\n\n### Required\n**Name**|**Default**|**Description**\n:-----|:-----|:-----\n**PG_PASSWORD**|None|Set this value to specify the password of the user role; if **PG_ROOT_PASSWORD** is unset, then it will share this password\n\n### Optional - Common\n**Name**|**Default**|**Description**\n:-----|:-----|:-----\n**PG_DATABASE**|None|Set this value to create an initial database\n**PG_PRIMARY_PORT**|5432|Set this value to configure the primary PostgreSQL port. It is recommended to use 5432.\n**PG_USER**|None|Set this value to specify the username of the general user account\n**PG_ROOT_PASSWORD**|None|Set this value to specify the password of the superuser role. If unset, it is the same as **PG_PASSWORD**\n\n### Optional - Other\n**Name**|**Default**|**Description**\n:-----|:-----|:-----\n**ARCHIVE_MODE**|Off|Set this value to `on` to enable continuous WAL archiving\n**ARCHIVE_TIMEOUT**|60|Set to a number (in seconds) to configure `archive_timeout` in `postgresql.conf`\n**CHECKSUMS**|Off|Enables `data-checksums` during initialization of the database. Can only be set during initial database creation.\n**CRUNCHY_DEBUG**|FALSE|Set this to true to enable debugging in logs. 
Note: this mode can reveal secrets in logs.\n**LOG_STATEMENT**|none|Sets the `log_statement` value in `postgresql.conf`\n**LOG_MIN_DURATION_STATEMENT**|60000|Sets the `log_min_duration_statement` value in `postgresql.conf`\n**MAX_CONNECTIONS**|100|Sets the `max_connections` value in `postgresql.conf`\n**PG_LOCALE**|UTF-8|Sets the locale of the database\n**PGAUDIT_ANALYZE**|None|Set this to enable `pgaudit_analyze`\n**PGBOUNCER_PASSWORD**|None|Set this to enable `pgBouncer` support by creating a special `pgbouncer` user for authentication through the connection pooler.\n**PGDATA_PATH_OVERRIDE**|None|Set this value to override the `\/pgdata` directory name. By default `\/pgdata` uses the `hostname` of the container. In some cases it may be required to override this with a custom name\n**SHARED_BUFFERS**|128MB|Set this value to configure `shared_buffers` in `postgresql.conf`\n**TEMP_BUFFERS**|8MB|Set this value to configure `temp_buffers` in `postgresql.conf`\n**WORK_MEM**|4MB|Set this value to configure `work_mem` in `postgresql.conf`\n**XLOGDIR**|None|Set this value to configure PostgreSQL to send WAL to the `\/pgwal` volume (by default WAL is stored in `\/pgdata`)\n**PG_CTL_OPTS**|None|Set this value to supply custom `pg_ctl` options (ex: `-c shared_preload_libraries=pgaudit`) during the initialization phase of container startup.\n\n## Volumes\n\n**Name**|**Description**\n:-----|:-----\n**\/pgconf**|Volume used to store custom configuration files mounted to the container.\n**\/pgdata**|Volume used to store the data directory contents for the PostgreSQL database.\n\n## Custom Configuration\n\nThe following configuration files can be mounted to the `\/pgconf` volume in the `crunchy-postgres-appdev` container to customize the runtime:\n\n**Name**|**Description**\n:-----|:-----\n`ca.crt`| Certificate of the CA used by the server when using SSL authentication\n`ca.crl`| Revocation list of the CA used by the server when using SSL authentication\n`pg_hba.conf`| Client authentication 
rules for the database\n`pg_ident.conf`| Mapping of external users (such as SSL certs, GSSAPI, LDAP) to database users\n`postgresql.conf`| PostgreSQL settings\n`server.key`| Key used by the server when using SSL authentication\n`server.crt`| Certificate used by the server when using SSL authentication\n`setup.sql`|Custom SQL to execute against the database. Note: this is only run during the first startup (initialization)\n\n## Verifying PL\/R\n\nIn order to verify the successful initialization of the PL\/R extension, the following commands can be run:\n\n```sql\ncreate extension plr;\nSELECT * FROM plr_environ();\nSELECT load_r_typenames();\nSELECT * FROM r_typenames();\nSELECT plr_array_accum('{23,35}', 42);\nCREATE OR REPLACE FUNCTION plr_array (text, text)\nRETURNS text[]\nAS '$libdir\/plr','plr_array'\nLANGUAGE 'c' WITH (isstrict);\nselect plr_array('hello','world');\n```\n","avg_line_length":54.0841121495,"max_line_length":476,"alphanum_fraction":0.7660273026} +{"size":327,"ext":"md","lang":"Markdown","max_stars_count":1.0,"content":"---\r\nlayout: post\r\ntitle: One more reason to batch data transmissions in TCP communication\r\npublished: true\r\ncategories: [Network]\r\ntags: network send\r\n---\r\n
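The final post's title argues for batching data transmissions over TCP. A minimal, hedged sketch of the idea (loopback only; real-world behavior also involves Nagle's algorithm and delayed ACKs, which this toy example does not exercise): joining several small payloads and issuing one `sendall` means a single syscall, and typically a single TCP segment, instead of several tiny writes.

```python
import socket
import threading

def echo_once(server, nbytes):
    # Accept one connection, read nbytes, then echo them back.
    conn, _ = server.accept()
    data = b""
    while len(data) < nbytes:
        data += conn.recv(4096)
    conn.sendall(data)
    conn.close()

messages = [b"HELLO", b"WORLD", b"!"]
total = sum(len(m) for m in messages)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # ephemeral port on loopback
server.listen(1)
t = threading.Thread(target=echo_once, args=(server, total))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
# One sendall for the whole batch, rather than one send() per message:
# three tiny writes could each become a separate segment and interact
# badly with Nagle's algorithm and delayed ACKs.
client.sendall(b"".join(messages))

echoed = b""
while len(echoed) < total:
    echoed += client.recv(4096)
client.close()
t.join()
server.close()
```

TCP is a byte stream, so the receiver sees the same eleven bytes either way; batching only changes how many syscalls and segments carry them.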