diff --git "a/github_com/github_com_samples_markdown.json" "b/github_com/github_com_samples_markdown.json" new file mode 100644--- /dev/null +++ "b/github_com/github_com_samples_markdown.json" @@ -0,0 +1,702 @@ +[ + { + "url": "https://github.com/AlmostMatt", + "domain": "github.com", + "file_source": "part-00436-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Waterloo, ON, Canada\n* http://www.almostmatt.com\n\n## Popular repositories Loading\n\n* LD25-Villain\nLD25-VillainPublic\nPublic\nOrthographic rendering, a game where you wander and destroy stuff\n\nLua 1\n\n* LD29-The-Mine\nLD29-The-MinePublic\nPublic\nA game where you mark a location and a guy will dig his way to that point and collect resources\n\nLua 1\n\n* ScoreBoard\nScoreBoardPublic\nPublic\nA Django server to provide highscores/replays/custom level functionality for something I'm working on\n\nPython 1\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.", + "content_format": "markdown" + }, + { + "url": "https://github.com/arbfranklin/scalatron-tinybot", + "domain": "github.com", + "file_source": "part-00614-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nThis is the home of \"tinybot\", an implementation of a Scalatron bot. On 19 July 2012, tinybot was validated to have a high score of 12,330,400 on the freestyle benchmark. 
The latest version scores ~18 million.\n\nThe general approach of tinybot is as follows:\n\n* A set of strategies is used to vote on the master's (& slaves') next move through a collaborative process.\n* Strategies can abstain from voting or mandate a particular move in certain circumstances.\n* All strategies and associated moves are weighted using a \"Genome\" for the run.\n* tinybot can self-tune its \"Genome\" using a genetic-algorithm-style approach.", + "content_format": "markdown" + }, + { + "url": "https://github.com/pr3d4t0r", + "domain": "github.com", + "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\npr3d4t0r Follow\n\n* San Francisco, Pattaya, Tokyo, NYC\n* http://eugeneciurana.com\n\n## Popular repositories Loading\n\n* weechat-btc-ticker\nweechat-btc-tickerPublic\nPublic\nWeeChat plugin for Bitcoin, Litecoin, other cryptocurrency ticker reporting\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.", + "content_format": "markdown" + }, + { + "url": "https://github.com/spring-projects/spring-framework/issues/14777", + "domain": "github.com", + "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nClosed\n\n## Description\n\nMinas Manthos opened SPR-10144 and commented\n\nPersistenceExceptionTranslationPostProcessor extends AbstractAdvisingBeanPostProcessor\n\nMethod isEligible(...) throws an NPE when instantiating a Configurable Bean (because beanName is null).\n\nSee attached zip. 
It contains a minimal maven project.\n\n> \n\nmvn test\n\nWith 3.1.2 test successes, with 3.2.0 it fails.\n\n```\njava.lang.NullPointerException\n\tat java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)\n\tat org.springframework.aop.framework.AbstractAdvisingBeanPostProcessor.isEligible(AbstractAdvisingBeanPostProcessor.java:102)\n\tat org.springframework.aop.framework.AbstractAdvisingBeanPostProcessor.postProcessAfterInitialization(AbstractAdvisingBeanPostProcessor.java:74)\n\tat org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:412)\n\tat org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1492)\n\tat org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:391)\n\tat org.springframework.beans.factory.wiring.BeanConfigurerSupport.configureBean(BeanConfigurerSupport.java:141)\n\tat org.springframework.beans.factory.aspectj.AnnotationBeanConfigurerAspect.configureBean(AnnotationBeanConfigurerAspect.aj:59)\n\tat org.springframework.beans.factory.aspectj.AbstractDependencyInjectionAspect.ajc$afterReturning$org_springframework_beans_factory_aspectj_AbstractDependencyInjectionAspect$2$1ea6722c(AbstractDependencyInjectionAspect.aj:89)\n\tat com.Foo.(Foo.java:8)\n\tat com.FooTest.testFoo(FooTest.java:17)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)\n\tat org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)\n\tat org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)\n\tat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)\n\tat org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)\n\tat org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:74)\n\tat org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:83)\n\tat org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:72)\n\tat org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:231)\n\tat org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:88)\n\tat org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)\n\tat org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)\n\tat org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)\n\tat org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)\n\tat org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)\n\tat org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61)\n\tat org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71)\n\tat org.junit.runners.ParentRunner.run(ParentRunner.java:292)\n\tat org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:174)\n\tat org.junit.runner.JUnitCore.run(JUnitCore.java:157)\n```\n\nAffects: 3.2 GA\n\nAttachments:\n\n* test.zip (4.16 kB)\n\nIssue Links:\n\n* [regresion] NullPointerException is thrown when beanName is null in AutowireCapableBeanFactory.initializeBean [SPR-10108] #14741 [regresion] NullPointerException is thrown when beanName is null in AutowireCapableBeanFactory.initializeBean\n\nReferenced from: commits 97ae403", + "content_format": "markdown" + }, + 
{ + "url": "https://github.com/lou-k", + "domain": "github.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Popular repositories Loading\n\n* pi-photo-sync\npi-photo-syncPublic\nPublic\nTriggers a Raspberry Pi to sync photos when you plug in a memory card.\n\n* lightroom-cc-api\nlightroom-cc-apiPublic\nPublic\nA Python implementation of Adobe's Creative Cloud Lightroom API\n\n* joint-sagemaker-lambda-container\njoint-sagemaker-lambda-containerPublic\nPublic\nA proof of concept for building a Docker container that can be deployed to _both_ SageMaker and Lambda\n\nPython 2\n\n* 3d-ken-burns\n3d-ken-burnsPublic\nPublic\nForked from sniklaus/3d-ken-burns\n\nan implementation of 3D Ken Burns Effect from a Single Image using PyTorch\n\nPython\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.", + "content_format": "markdown" + }, + { + "url": "https://github.com/oeg-upm/LDP4RO", + "domain": "github.com", + "file_source": "part-00614-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nCreators: Nandana Mihindukulasooriya, Daniel Garijo Contributors: Oscar Corcho\n\nProject designed to create and browse Research Objects following the W3C LDP protocol and using LDP4J (http://www.ldp4j.org/#/). The project consists of a client for easily creating ROs and a connector to handle the requests to LDP. 
This is a work in progress.\n\nThe specification for the alignment between the RO model and LDP can be accessed at the following link: https://docs.google.com/document/d/1mPqn0nW7u0Lo0KubexFwBpQbyhnX4Tf2SXs3n5KMrg4/edit#\n\nCurrently supported: Creation of simple ROs, RO description and RO browsing.\n\nCurrently working on: Adding folders, handling of Zip files, improvements to the client html (see issues).\n\nA live demo is available at http://purl.org/net/ldp4ro\n\nThis work has been supported by the DrInventor project (http://drinventor.eu/)", + "content_format": "markdown" + }, + { + "url": "https://github.com/dirtyvagabond", + "domain": "github.com", + "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Pinned Loading\n\n* Factual/sosueme\nFactual/sosuemePublic\nPublic\nA collection of Clojure functions for things we like to do.\n\n* factual-haskell-driver\nfactual-haskell-driverPublic\nPublic\nForked from rudyl313/factual-haskell-driver\n\nA Haskell driver for the Factual API\n\nHaskell\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.", + "content_format": "markdown" + }, + { + "url": "https://github.com/pyjs/pyjs/wiki/pyjamaswithwebopjsonrpc", + "domain": "github.com", + "file_source": "part-00328-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 213\n\n# pyjamaswithwebopjsonrpc\n\n* How to do JSONRPC with WebOp\n\nhttp://pythonpaste.org/webob/jsonrpc-example.html\n\nan illustration of how you can do JSONRPC with WebOp, such that a pyjamas application will be able to talk to it. 
what's particularly good about this code is that the exact same code looks like it can be used for command-line test purposes as well as being run server-side (as a wsgi script).\n\ncomparing the code to http://pyjamas-dev.googlegroups.com/web/jsonrpc.py for example, which is incredibly similar (but is only server-side) the disadvantage of the webop illustrated example appears to be that you are forced to declare a class containing the jsonrpc service operations. with the jsonrpc.py example shown on pyjamas-dev, you can add the decorator to a global function _or_ you can add the decorator to a function in a class.\n\n(thanks to jim washington for finding this)", + "content_format": "markdown" + }, + { + "url": "https://github.com/mozilla-b2g/b2g-manifest/", + "domain": "github.com", + "file_source": "part-00328-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n#b2g-manifest\n\nPlease make sure to test your changes in a personal fork, before merging. It is imperative that travis tests pass before merging changes back, since broken travis tests implies that b2g bumper will break in production.\n\nIf you create a new branch in b2g-manifest, it is critical that you update `run_travis_tests.sh` so that it invokes b2g bumper using the correct\nmozharness config file(s) for your new branch. 
See comments in `run_travis_tests.sh` for more details.", + "content_format": "markdown" + }, + { + "url": "https://github.com/nvaccess/nvda/issues/3454", + "domain": "github.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nClosed\n\n## Description\n\nReported by jteh on 2013-08-20 12:13\n\n(Spun off ticket:1913#comment:10.)\nIf you're in object review and your navigator object is on an object with no location (obj.location is None), switching to screen review throws this error:\n\n```\nERROR - scriptHandler.executeScript (22:09:53):\nerror executing script: > with gesture u'NVDA+numpad 7'\nTraceback (most recent call last):\n File \"scriptHandler.py\", line 165, in executeScript\n script(gesture)\n File \"globalCommands.py\", line 283, in script_reviewMode_next\n label=review.nextMode()\n File \"review.py\", line 133, in nextMode\n return label or nextMode(prev=prev,startMode=newMode)\n File \"review.py\", line 132, in nextMode\n label=setCurrentMode(newMode)\n File \"review.py\", line 113, in setCurrentMode\n pos=func(obj)\n File \"review.py\", line 62, in getScreenPosition\n pos=DisplayModelTextInfo(s,obj)\n File \"displayModel.py\", line 199, in __init__\n super(DisplayModelTextInfo, self).__init__(obj, position)\n File \"textInfos\\offsets.py\", line 267, in __init__\n start,end=self._getOffsetsFromNVDAObject(position)\n File \"displayModel.py\", line 375, in _getOffsetsFromNVDAObject\n raise RuntimeError\nRuntimeError\n```\n\nThis is not a regression; it occurs when switching to flat review with 2013.1 as well.\n\nBlocking #3517", + "content_format": "markdown" + }, + { + "url": "https://github.com/shawnlaffan", + "domain": "github.com", + "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nWe read every piece of feedback, and take your input very seriously.\n\nTo see all available qualifiers, see our 
documentation.\n\nPrevent this user from interacting with your repositories and sending you notifications. Learn more about blocking users.\n\nYou must be logged in to block users.\n\nContact GitHub support about this user’s behavior. Learn more about reporting abuse.\n\nA tool for the spatial analysis of diversity\n\nPerl 76 19", + "content_format": "markdown" + }, + { + "url": "https://github.com/jerodsanto", + "domain": "github.com", + "file_source": "part-00830-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nAlways be shipping\n\nDeveloper podcasts are my jam 🎙\n\n* Omaha, Nebraska\n* 06:40\n(UTC -06:00)\n* https://changelog.com\n* @jerodsanto\n* @jerod@changelog.social\n\n## Pinned Loading\n\n* thechangelog/changelog.com\nthechangelog/changelog.comPublic\nPublic\nChangelog makes world-class developer pods. This is our open source platform.\n\n* thechangelog/nightly\nthechangelog/nightlyPublic\nPublic\nChangelog Nightly unearths the hottest repos on GitHub before they blow up. Subscribe for free. Keep up.\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.\nIf the problem persists, check the GitHub status page or contact support.", + "content_format": "markdown" + }, + { + "url": "https://github.com/zakird/wkhtmltopdf_binary_gem", + "domain": "github.com", + "file_source": "part-00614-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nInstall in your Gemfile as usual\n\n `gem 'wkhtmltopdf-binary'` \nIn many environments, this is all you need to do. This gem installs a binary stub that tries to determine which wkhtmltopdf binary will work on your system, and point to the packaged binary that most closely matches.\n\nIn some environments, invoking this binary will result in an error, saying the needed permissions are not available. 
This is because `wkhtmltopdf-binary` ships with gzipped binaries for many platforms, and then picks the appropriate one\nupon first use and unzips it into the same directory. So if your ruby gem binaries are installed here:\n\n```\n/usr/lib/ruby/versions/2.6/bin/\n```\n\nThe various wkhtmltopdf binaries will be installed here:\n\n```\n/usr/lib/ruby/versions/2.6/lib/ruby/gems/2.6.0/gems/wkhtmltopdf-binary-0.12.5.1/bin/\n```\n\nGiving write access to whatever user is running your program (e.g. web server, background job processor), e.g. your own personal user in a dev environment, will fix the problem. After the binary is uncompressed, write access can be revoked again if desired.\n\n```\nchmod -R 777 /usr/lib/ruby/versions/2.6/lib/ruby/gems/2.6.0/gems/wkhtmltopdf-binary-0.12.5.1/bin/\n```\n\nHints for extracting binaries from https://wkhtmltopdf.org/downloads.html (dpkg and rpm2cpio are available on Homebrew).\n\nDebian/Ubuntu\n\n```\ndpkg -x wkhtmltox_0.12.5-1.trusty_amd64.deb .\n```\n\nCentOS\n\n```\nrpm2cpio wkhtmltox-0.12.5-1.centos7.x86_64.rpm | cpio -idmv\n```\n\nArchlinux/manjaro\n\n```\ntar -xf wkhtmltox-0.12.6-1.archlinux.x86_64.tar.xz\n```\n\nmacOS\n\n```\nxar -xf wkhtmltox-0.12.5-1.macos-cocoa.pkg\ncat Payload | gunzip -dc | cpio -i\n```\n\nBinaries should be compressed with `gzip --best` after extracting. 
The matching binary will be extracted on first\nexecution of `bin/wkhtmltopdf`.\nHints for compressing binaries\n\nDebian/Ubuntu user/local/bin refers to the extracted binaries directory gzip --best -c usr/local/bin/wkhtmltopdf > wkhtmltopdf_ubuntu_22.04.amd64.gz\n\nTo execute gem tests locally, install in your OS:\n\n* Docker\n* Docker compose\n* Ruby\n* Bundler\n\nThen, execute the commands below:\n\n```\ngit clone https://github.com/zakird/wkhtmltopdf_binary_gem\ncd wkhtmltopdf_binary_gem/\nbundle install\nbundle exec rake\n```", + "content_format": "markdown" + }, + { + "url": "https://github.com/jmettraux/rufus-scheduler/issues/42", + "domain": "github.com", + "file_source": "part-00614-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nClosed\n\n## Description\n\nWhen defining the tasks to run I really want to spec them to ensure the tasks were actually scheduled correctly.\n\nI have the following in mind, but not sure what's the best way to do it.\n\n```\nclass TickTack\n attr_reader :scheduler\n\n def initialize\n @scheduler = Rufus::PlainScheduler.new # Note that I don't want to actually start it!\n\n scheduler.cron '0 22 * * 1-5' do\n # every day of the week at 22:00 (10pm)\n Security.activate\n end\n end\n\n def start!\n scheduler.start\n end\nend\n\n# Now I want to spec it similarly to:\n\ndescribe TickTack do\n\n def tick\n subject.scheduler.step #????\n end\n\n it \"should activate alarm on the weekdays after 10pm\" do\n Timecop.travel Chronic.parse(\"next monday 22:01\")\n Security.should_receive(:activate)\n tick\n end\n\n it \"should not activate alarm on the weekdays before 10pm\" do\n Timecop.travel Chronic.parse(\"next monday 8pm\")\n Security.should_not_receive(:activate)\n tick\n end\n\n it \"should not activate alarm on the weekend after 10pm\" do\n Timecop.travel Chronic.parse(\"next sunday 22:01\")\n Security.should_not_receive(:activate)\n tick\n end\nend\n```\n\nBut I don't know if that's a a 
good way of doing it.\n\n## Metadata\n\n### Assignees\n\n### Labels\n\nNo labels", + "content_format": "markdown" + }, + { + "url": "https://github.com/Microsoft/AzureStorageExplorer/issues/965", + "domain": "github.com", + "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nWe read every piece of feedback, and take your input very seriously.\n\nTo see all available qualifiers, see our documentation.\n\nIf our call to get ACL fails, we don't catch it, and the activity for \"Getting permissions...\" is just always spinning.", + "content_format": "markdown" + }, + { + "url": "https://github.com/jbpease/", + "domain": "github.com", + "file_source": "part-00096-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n🧬\n\n* The Ohio State University\n* Columbus, Ohio, USA\n* http://www.peaselab.org\n* https://orcid.org/0000-0003-0125-9156\n\n## Pinned Loading\n\n* FePhyFoFum/quartetsampling\nFePhyFoFum/quartetsamplingPublic\nPublic\nQuartet Sampling method for phylogenetic branch support evaluation\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.\nIf the problem persists, check the GitHub status page or contact support.", + "content_format": "markdown" + }, + { + "url": "https://github.com/diviproject/docs", + "domain": "github.com", + "file_source": "part-00436-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nContribute to Divi's documentation using the instructions in this README.\n\nThis documentation was created with Slate. 
Check it out at lord.github.io/slate.\n\nYou're going to need:\n\n* Linux or macOS — Windows may work, but is unsupported.\n* Ruby, version 2.3.1 or newer\n* Bundler — If Ruby is already installed, but the\n `bundle` command doesn't work, just run `gem install bundler` in a terminal.\n\n* Fork this repository on GitHub.\n* Clone your forked repository (not our original one) to your hard drive with\n\n```\ngit clone https://github.com/YOURUSERNAME/docs.git\n```\n\n* `cd docs`\n* Initialize and start Slate. You can either do this locally, or with Vagrant:\n\n```\n# either run this to run locally\nbundle install\nbundle exec middleman server\n\n# OR run this to run with vagrant\nvagrant up\n```\n\nYou can now see the docs at http://localhost:4567. Whoa! That was fast!\n\nNow that Slate is all set up on your machine, you'll probably want to learn more about editing Slate markdown.\n\nFor those who don't have a JavaScript runtime or are experiencing JavaScript runtime issues with ExecJS, it is recommended to add the rubyracer gem to your Gemfile and run `bundle` again.\nAfter you are satisfied with your changes, create a pull request to this repository and one of the primary developers will take a look and pull it in.\n\nThank you to the folks who have contributed to the Divi Documentation!", + "content_format": "markdown" + }, + { + "url": "https://github.com/evoye", + "domain": "github.com", + "file_source": "part-00293-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Popular repositories Loading\n\n* IdentityServer4\nIdentityServer4Public\nPublic\nForked from IdentityServer/IdentityServer4\n\nOpenID Connect and OAuth 2.0 Framework for ASP.NET Core\n\nC#\n\n### Repositories\n\nShowing 1 of 1 repositories\n\n* IdentityServer4 Public Forked from IdentityServer/IdentityServer4\n\nOpenID Connect and OAuth 2.0 Framework for ASP.NET Core\n\nEvoye/IdentityServer4’s past year of commit activity\n\n# People\n\nThis organization 
has no public members. You must be a member to see who’s a part of this organization.\n\n# Top languages\n\nLoading…\n\n# Most used topics\n\nLoading…", + "content_format": "markdown" + }, + { + "url": "https://github.com/seanhess/zero/commit/e574d3cdef258ff8f090af853e0dfaf978a51b8c", + "domain": "github.com", + "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 0\n\n## Commit\n\nThis commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.\n\n* Loading branch information\n\nShowing 1 changed file with 1 addition and 1 deletion.\n\n## There are no files selected for viewing\n\nThis file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters. Learn more about bidirectional Unicode characters", + "content_format": "markdown" + }, + { + "url": "https://github.com/nodejs/node/pull/10958", + "domain": "github.com", + "file_source": "part-00293-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 30.6k\n\n# New issue\n\nHave a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.\n\nBy clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.\n\nAlready on GitHub? Sign in to your account\n\n# doc: add links to alternative versions of doc #10958\n\n## Conversation\n\nEach page of the api documentation should have links to other versions\n\nof the same page. 
This will make it easier to switch between the current\n\n\"live\" release at nodejs.org and LTS versions.\nFixes: #10726\n\n# Checklist\n\n* \n `make -j4 test` (UNIX), or `vcbuild test` (Windows) passes* documentation is changed or added\n* commit message follows commit guidelines\n\n# Affected core subsystem(s)\n\ndoc, tools\n\nThis is great @chris--young. Thanks for doing this.\n\n# tools/doc/html.js Outdated\n\n| const a = (v) => `v${v}`; |\n| --- |\n| const as = (vs) => vs.filter(lte).map(a).join(' / '); |\n| |\n| const lts = as(['0.12.x', '4.x', '6.x']); |\n\nThere was a problem hiding this comment.\n\n### Choose a reason for hiding this comment\n\nThe reason will be displayed to describe this comment to others. Learn more.\n\n0.12 is no longer in maintenance mode, so I would say remove it from here.\n\nThere was a problem hiding this comment.\n\n### Choose a reason for hiding this comment\n\nThe reason will be displayed to describe this comment to others. Learn more.\n\nI've moved v0.12 into the \"Unsupported\" section\n\n# tools/doc/html.js Outdated\n\n| if (lts.length) |\n| --- |\n| html += 'LTS: ' + lts; |\n| |\n| const unsupported = as(['0.10.x', '5.x', '7.x']); |\n\nThere was a problem hiding this comment.\n\n### Choose a reason for hiding this comment\n\nThe reason will be displayed to describe this comment to others. Learn more.\n\nv7.x is still a supported version, so calling it `Unsupported` would be confusing IMO\nThere was a problem hiding this comment.\n\n### Choose a reason for hiding this comment\n\nThe reason will be displayed to describe this comment to others. Learn more.\n\nv7.x is now labeled as `Current` `12b9524` to `710734b` Compare `710734b` to `32bfaa1` Compare `32bfaa1` to `f1c0607` Compare\n@evanlucas It looks like @chris--young has tried to address your comments. Can you PTAL when you get a chance and (if appropriate) update your review status?\n\nThis could use some more reviewers. 
Looping in the people who Also: @nodejs/documentation\n\nI didn't give it a +1 in the original issue, at least I see no signs I did :-). I just CCed it to the website group. The docs we have right now for 7.x are better to use for 4.x than the 4.x docs were. This is likely to misdirect people into thinking that node is like express, http://expressjs.com/en/4x/api.html, where you should always look at the docs for the version you are using. I won't object if others find this useful, but I also won't approve.\n\n@sam-github any suggestions for how I can salvage this? I was thinking if I restyled the links it might be clearer that you're probably looking for the latest version (7.x) and that 0.10.x - 6.x are older versions.\n\n@chris--young I'm +/- 0. You could try to convince me, then you'd have two approvals once you address @evanlucas 's comments :-). The PR says what it does, but doesn't say why (\"I was thinking it'd be nice\" doesn't count). What's the use-case? @nodejs/documentation @nodejs/website PTAL\n\nI would think one use case is if some API was the subject of a breaking change in 7.x but you're using 4.x or 6.x, you'll want to switch to the earlier docs easily.\n\nExcept you won't really know that was a breaking change, unless it was the introduction or removal of an entire API (because we don't document major changes other than that). And many of the doc improvements apply to older releases. We'd have to start being really rigorous about backporting docs. Currently, they don't backport cleanly to 4.x because of the .markdown to .md rename, so 4.x docs are no longer the best reference for the features in 4.x. 
I guess I'm saying I don't think we maintain the LTS docs rigorously enough, and we don't break the node API very much, so this isn't as useful as http://expressjs.com/en/4x/api.html vs http://expressjs.com/en/3x/api.html, and it may be dangerous to suggest we do.On the other hand, maybe this will force us to backport all doc improvements more consistently, which would be a good thing.\n\nI filed the original issue for this. @sam-github the idea isn't to be able to figure out what is different but to easily be able to find relevant documentation. Each major version is a major version specifically because something in an outward API changed. If i am writing code for 4.x (or even 6.x) I need to be able to quickly look up docs for that version. Right now this is hard - if you google something you always get latest. It should be trivial to get linked to the version of the docs for the API you are working withThe node 7 docs are not helpful when writing code for node 4 (or even node .12 if you are still supporting such a thing).\n\n# doc/api/addons.md Outdated\n\n| @@ -1,5 +1,7 @@ |\n| --- |\n| # Addons |\n| |\n| |\n\nThere was a problem hiding this comment.\n\n### Choose a reason for hiding this comment\n\nThe reason will be displayed to describe this comment to others. Learn more.\n\n `doc` sounds superfluous to me. How about \"introduced_in\"?\nThere was a problem hiding this comment.\n\n### Choose a reason for hiding this comment\n\nThe reason will be displayed to describe this comment to others. 
Learn more.\n\n@thefourtheye: seems reasonable, just made that change\n\n@toddself spelling corrections in the thrown Error messages are semver-majorOK, I don't object to this, it sounds easy and like some people will find it useful, but I should point out that when the documentation is improved on master, as it often is, for features going all the way back to before LTS (including docs of APIs that existed in 4.x but were not doced), that does NOT guarantee that the docs get backported.\n `991e273` to `ea244fa` Compare\n@evanlucas please re-review\n\n# doc/api/crypto.md Outdated\n\n| @@ -1,5 +1,7 @@ |\n| --- |\n| # Crypto |\n| |\n| |\n\nThere was a problem hiding this comment.\n\n### Choose a reason for hiding this comment\n\nThe reason will be displayed to describe this comment to others. Learn more.\n\nI don't believe this is correct. (https://nodejs.org/docs/v0.3.6/api/crypto.html) `crypto` has been around for a while. https://nodejs.org/en/download/releases/ is a useful page to check :]\nThere was a problem hiding this comment.\n\n### Choose a reason for hiding this comment\n\nThe reason will be displayed to describe this comment to others. Learn more.\n\nOh cool, I had only been looking at https://nodejs.org/dist/ and must have messed up a few versions. I'll go back over all the docs and make sure these tags are the right version number.\n\n# doc/api/debugger.md Outdated\n\n| @@ -1,5 +1,7 @@ |\n| --- |\n| # Debugger |\n| |\n| |\n\nThere was a problem hiding this comment.\n\n### Choose a reason for hiding this comment\n\nThe reason will be displayed to describe this comment to others. Learn more.\n\nI don't think this is correct either? 
https://nodejs.org/dist/v0.9.12/docs/api/debugger.html\n\n# tools/doc/html.js Outdated\n\n| const a = (v) => `v${v}`; |\n| --- |\n| const as = (vs) => vs.filter(lte).map(a).join(' / '); |\n| |\n| html += 'View another version of this page Current: ' + a('7.x'); |\n\nThere was a problem hiding this comment.\n\n### Choose a reason for hiding this comment\n\nThe reason will be displayed to describe this comment to others. Learn more.\n\nso one concern I have here is what will this look like when Node 8 is released? It will be released as a \"Current\" release line for a while before going LTS. With that, there will be a few months of layover where Node 7 is still also a \"Current\" release.\n\nhad same problem finding different versions of docs, and feel it'd be really convenient to add links to different versions\n `48b3af1` to `7838914` Compare\n@BridgeAR I don't really like the idea of merging it when we know it's broken on mobile. If you can give me till the end of the weekend i'll get it to where it needs to be.\n\n# doc/api/addons.md Outdated\n\n| Node.js Addons are dynamically-linked shared objects, written in C++, that |\n| --- |\n| |\n| |\n| Node.js Addons are dynamically-linked shared objects, written in C or C++, that |\n\nThere was a problem hiding this comment.\n\n### Choose a reason for hiding this comment\n\nThe reason will be displayed to describe this comment to others. Learn more.\n\nThis commit explicitly changed this to C++ abfd4bf\n\n# tools/doc/html.js Outdated\n\n| @@ -31,6 +31,7 @@ const typeParser = require('./type-parser.js'); |\n| --- |\n| module.exports = toHTML; |\n| |\n| const STABILITY_TEXT_REG_EXP = /(.*:)\\s*(\\d)([\\s\\S]*)/; |\n| const DOC_CREATED_REG_EXP = //; |\n\nThere was a problem hiding this comment.\n\n### Choose a reason for hiding this comment\n\nThe reason will be displayed to describe this comment to others. Learn more.\n\nThe dots need to be escaped. 
Also, you can optionally relax the RegEx to accept optional spaces.\n\n# tools/doc/html.js Outdated\n\n| return html; |\n| --- |\n| } |\n| |\n| function lte(v) { |\n\nString based comparison with `v.num` is not good enough. Why not split it at dots and then do integer comparison?\n\n# tools/doc/html.js Outdated\n\n| let html = ''; |\n| --- |\n| |\n| if (!docCreated) { |\n| console.error('Failed to add alternative version links'); |\n\nNit: If the filename is also included, it would help in debugging.\n\n `d41c2ad` to `d01c036` Compare\n\n# tools/doc/html.js Outdated\n\n| @@ -31,6 +31,7 @@ const typeParser = require('./type-parser.js'); |\n| --- |\n| module.exports = toHTML; |\n| |\n| const STABILITY_TEXT_REG_EXP = /(.*:)\\s*(\\d)([\\s\\S]*)/; |\n| const DOC_CREATED_REG_EXP = //; |\n\nI was thinking more like\n\n```\nconst DOC_CREATED_REG_EXP = //;\n```\n\n# tools/doc/html.js Outdated\n\n| { num: '5.x' }, |\n| --- |\n| { num: '4.x', lts: true }, |\n| { num: '0.12.x' }, |\n| { num: '0.10.x' }, |\n\nNit: Extra comma at the end.\n\n `7047f9c` to `967cdba` Compare\n@BridgeAR should be good to go now, unless anyone has objections\n> Each page of the API documentation should have links to other versions of the same page. 
This will make it easier to switch between the current \"live\" release at nodejs.org and LTS versions. PR-URL: nodejs#10958 Fixes: nodejs#10726 Reviewed-By: Refael Ackermann Reviewed-By: Evan Lucas Reviewed-By: Sakthipriyan Vairamani Reviewed-By: Ruben Bridgewater \n\nLanded in cacce30, thank you for the contribution! 🎉\n\nA little linting error was introduced with this commit. Fix: #15063\n\nShould this be backported to\n\n@chris--young IMHO it would be nice to have this in 6 so as to enable cross referencing.\n\n@MylesBorins yea seems like a good idea. will open a pr soon\n> Each page of the API documentation should have links to other versions of the same page. This will make it easier to switch between the current \"live\" release at nodejs.org and LTS versions. Backport-PR-URL: #15670 PR-URL: #10958 Fixes: #10726 Reviewed-By: Refael Ackermann Reviewed-By: Evan Lucas Reviewed-By: Sakthipriyan Vairamani Reviewed-By: Ruben Bridgewater ", "content_format": "markdown" }, { "url": "https://github.com/paulzee", "domain": "github.com", "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n* Melbourne, Australia\n* http://gplus.to/paulzee\n\n## Popular repositories Loading\n\n* aiko_openlab\naiko_openlabPublic\nPublic\nForked from geekscape/aiko_openlab\n\nEducational application with graphical LCD for general purpose input / output, waveform generation and display\n\nJava 2\n\n* simple_festive_lights\nsimple_festive_lightsPublic\nPublic\nA simple Arduino festive lights project that repurposes a string of basic DC operated LED lights and enhances them to provide PWM fade and flash routines.\n\nJava 1\n\n* aiko_SEG\naiko_SEGPublic\nPublic\nForked from 
samotage/Aiko\n\nSimple framework for allowing Arduino applications / examples / libraries to be built in a modular, event-driven fashion. Aiko enables more events and less delay()s !\n\nJava 1\n\n* LaserTagMT\nLaserTagMTPublic\nPublic\nForked from georgepatterson/LaserTagMT\n\nArduino based Laser Tag system based on the MilesTag (MT) protocol\n\nArduino 1\n\nIf the problem persists, check the GitHub status page or contact support.", + "content_format": "markdown" + }, + { + "url": "https://github.com/YunoHost-Apps/etherpad_mypads_ynh", + "domain": "github.com", + "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nRead this README in other languages.\n\n> \n\nThis package allows you to install Etherpad MyPads quickly and simply on a YunoHost server.\n\nIf you don't have YunoHost, please consult the guide to learn how to install it.\nEtherpad is a real-time collaborative editor scalable to thousands of simultaneous real time users. 
It provides full data export capabilities, and runs on your server, under your control.\n\nThis version of Etherpad is preconfigured with a collection of plugins:\n\n* ep_mypads - Groups and private pads for etherpad\n* ep_align - Add Left/Center/Right/Justify alignment\n* ep_author_hover - Display author names when hovering text\n* ep_comments_page - Add comments on sidebar and link it to the text.\n* ep_countable - Add paragraphs, words and characters count\n* ep_delete_empty_pads - Delete pads which were never edited\n* ep_font_color - Be able to change font color\n* ep_font_size - Be able to change font size\n* ep_headings2 - Be able to set text as headers\n* ep_markdown - Edit and export as Markdown\n* ep_spellcheck - Add spell checking\n* ep_subscript_and_superscript - Support for subscript and superscript\n\nShipped version: 1.9.1~ynh3\n\nDemo: https://video.etherpad.com\n\n* Official app website: http://etherpad.org\n* Official admin documentation: http://etherpad.org/doc/v1.9.0\n* Upstream app code repository: https://github.com/ether/etherpad-lite\n* YunoHost Store: https://apps.yunohost.org/app/etherpad_mypads\n* Report a bug: https://github.com/YunoHost-Apps/etherpad_mypads_ynh/issues\nPlease send your pull request to the `testing` branch. To try the `testing` branch, please proceed as follows:\n\n```\nsudo yunohost app install https://github.com/YunoHost-Apps/etherpad_mypads_ynh/tree/testing --debug\nor\nsudo yunohost app upgrade etherpad_mypads -u https://github.com/YunoHost-Apps/etherpad_mypads_ynh/tree/testing --debug\n```\n\nMore info regarding app packaging: https://yunohost.org/packaging_apps", "content_format": "markdown" }, { "url": "https://github.com/Mercury-Language/mercury/commits/master/README.lcc", "domain": "github.com", "file_source": "part-00830-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nWe read every piece of feedback, and take your input very seriously.\n\nTo see all 
available qualifiers, see our documentation.", + "content_format": "markdown" + }, + { + "url": "https://github.com/nelstrom/vim-pml/commits/nelstrom", + "domain": "github.com", + "file_source": "part-00293-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n# Commits\n\n## Branch selector\n\n## User selector\n\n## Datepicker\n\n## Commit History\n\n### Commits on Sep 7, 2011\n\n* committed\n\n### Commits on Aug 29, 2011\n\n### Commits on Jul 16, 2011\n\n### Commits on Jun 26, 2011\n\n* committed\n* committed\n* committed\n* committed\n* committed\n* committed\n\n### Commits on Jun 25, 2011\n\n* committed\n* committed\n* committed\n* committed\n* committed\n* committed\n* committed\n* committed\n* committed\n* committed\n* committed\n* committed\n* committed\n\n### Commits on Jun 24, 2011\n\n### Commits on May 27, 2011\n\n* authored andNathan ErorcommittedNathan Eror", + "content_format": "markdown" + }, + { + "url": "https://github.com/scylladb/scylla/commit/e3429142651f55f47162a4b5c613979bd2dc079d", + "domain": "github.com", + "file_source": "part-00613-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 1.3k\n\n## Commit\n\nThis commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.\n\nMerge \"Fixes for incremental backup\" from Glauber\n\n> \"The control over backups is now moved to the CF itself, from the storage service. That allows us to simplify the code (while making it correct) for cases in which the storage service is not available. With this change, we no longer need the database config passed down to the storage_service object. 
So that patch is reverted.\"\n\n* Loading branch information\n\nShowing 11 changed files with 76 additions and 28 deletions.\n\n## There are no files selected for viewing\n\nThis file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters. Learn more about bidirectional Unicode characters", "content_format": "markdown" }, { "url": "https://github.com/openjs-foundation/cross-project-council/pull/460", "domain": "github.com", "file_source": "part-00614-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 170\n\n# New issue\n\nHave a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.\n\nBy clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.\n\nAlready on GitHub? 
Sign in to your account\n\n# add Dhruv (@maddhruv) as regular member #460\n\n## Conversation\n\nI've been a regular member of the Node.js Foundation - https://github.com/orgs/nodejs/people?query=maddhruv\n\nfor a long time (> 3 months).\n\nI request to be added to the OpenJS Foundation as a regular member 😄\n\nLGTM\n\nlgtm\n\nLGTM\n\nLGTM", "content_format": "markdown" }, { "url": "https://github.com/sferik/rails_admin/issues/1305", "domain": "github.com", "file_source": "part-00096-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nClosed\n\n## Description\n\nI am making my own custom field to represent an associated collection. Inline editing of associated models will make using my admin interface much more streamlined. 
Is there a way that I can use the fields and forms that RA would normally put in the pop-over, so I don't have to entirely roll my own UI?\n\n## Metadata\n\n### Assignees\n\n### Labels\n\nNo labels", "content_format": "markdown" }, { "url": "https://github.com/ZPTXDev/Quaver", "domain": "github.com", "file_source": "part-00830-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nQuaver is available for public use here, and its dashboard is available here. Keep in mind that this instance of Quaver will only run the latest stable version.\n\nQuaver utilizes Discord's built-in slash commands, buttons, select menus, modals, and more. After deploying the commands, type `/` into your chat to list Quaver's commands.\nAs Quaver is designed to be as user-friendly as possible, users should be able to immediately understand how a function works within Quaver without having to read any documentation.\n\n* Node.js v20 (or higher)\n* Lavalink v4 (or higher)\n\n* youtube-source plugin installed\n* LavaSrc plugin installed\n* java-timed-lyrics plugin installed\n> \n\nPlease note the connection details of your Lavalink instance. You will need to specify them in `settings.json` later.\n\n* Bot token from Discord\n\n* Clone the repository\n* Make a copy of `settings.example.json` and rename it to `settings.json`\n* Edit the fields in `settings.json` as necessary\n> \n\nRefer to CONFIGURATION.md for a detailed explanation on configuration.\n\n* Run `pnpm i` to install packages required to run Quaver\n* Run `pnpm build` to compile the source code\n* Run `pnpm run slash:deploy` to deploy slash commands\n* Run `pnpm start` to start Quaver\nI cannot guarantee anything. However, the chances of getting into legal trouble are slim if your bot is used privately. I would still exercise caution when hosting any music bot.\n\nI'll consider it! 
Submit an issue here and I'll be happy to take a look.\n\nSlash commands are defined when running `pnpm run slash:deploy`. This means that slash command descriptions will follow the language set in `settings.json` ( `defaultLocaleCode` key), and not the language set through the `/settings` command. You need to re-deploy the commands using `pnpm run slash:deploy` for the new locale to take effect.\nDue to Discord's limitations and the localizations we have, we don't currently use Discord's localized command name & description functionality. This may be worked on in the future.\n\nYes! As of 5.0.0, Quaver has a web dashboard add-on available here. Please note that this is an optional add-on and is not required to run Quaver normally.\n\nAs of 7.0.0, Spotify support is provided through Lavalink. Please use the LavaSrc plugin with Lavalink to enable Spotify support.\n\n> \n\nNOTE: To enable support via Lavalink, version 7.0.2 or higher is required. Older versions may block Spotify queries locally.\n\nTake a look at our Crowdin project.\n\nRefer to CONTRIBUTING.md.", "content_format": "markdown" }, { "url": "https://github.com/alloy/microgem/tree/069afe97d56f1bb95a21a9e127d10e2a88862ccb", "domain": "github.com", "file_source": "part-00403-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nThe aim of this project is, at first, to create a re-implementation of the RubyGems `install` command, which should be easier to get up and running on Ruby implementations such as MacRuby.\nIf a character is not in the ASCII table, it doesn’t make it illegal or however people are calling them nowadays. That’s the problem it was trying to draw attention to. I see people normalizing these types of characters in places where unicode is very very valid. 
Just because it looks like a ‘u’ does not make it one…\n\nBut after multiple requests it was decided to add an executable without a multibyte character; ugem.\n\nIf you are hardcore, you can get the ‘µ’ character on OS X with: ⌥ + M (that’s the ALT key…)\n\nMu (uppercase Μ, lowercase μ; Greek: Μι or Μυ [mi]) is the 12th letter of the Greek alphabet. In Unicode, the upper and lower case mu are encoded at U+039C and U+03BC respectively. In ISO 8859-7 they are encoded at 0xCC and 0xEC. The micro sign or micron is considered a distinct character from the Greek alphabet letter by Unicode for historical reasons (although it is a homoglyph).\n\nBecause µ is the abbreviation for the Metric System prefix micro-, the symbol is used in many word plays about the field of micro-computing.\n\nSo to recap, it’s ‘micro’ in MicroGem, got it?\n\n> $ sudo gem install alloy-microgem -s http://gems.github.com\n\nGet source:\n\n> $ git clone git://github.com/alloy/microgem.git $ cd microgem\n\nInstall the remote gem:\n\n> $ sudo env PRODUCTION=true macruby ./bin/µgem install alloy-microgem --simple --debug\n> Microgem is an unsophisticated package manager for Ruby. 
And the first commandline utility to start with a multibyte character; µ Usage: µgem [command] [arguments…] [options…] Example: µgem install rake µgem install rails --force µgem cache update --debug Options: --debug Raises the log level to `debug' --force Forces a command --simple-downloader Use curl to download files instead of Net::HTTP --simple-unpacker Use external tools to unpack archives instead of Zlib --simple Enables --simple-downloader and --simple-unpacker --help Show help information\n\nGet the source:\n\n> $ git clone git://github.com/alloy/microgem.git\n\nInstall a gem:\n\n> $ ./bin/µgem install rake\n\nNote that unless you set the PRODUCTION environment variable everything is installed in ./tmp.\n\nThe current default sources are rubyforge and github.\n\nThere are a lot of limitations currently compared to RubyGems. The most important ones are:\n\n* \n\nDoes not install C extensions yet.\n\nThe way it’s being developed is in a test and RubyGems data driven manner. We are using the ‘quick’ marshalled specs, so we use those fixtures to run the tests.\n\nIf you encounter an invalid gemspec, RubyGems will warn you about it, then please add it to test/regression/gemspecs and send me a pull request.\n\n* \n\nInstall gems.\n\n* \n\nSmall.\n\n* \n\nTest driven.\n\n* \n\nClean room design.\n\n* \n\nNaive, no more code than necessary should be written. 
(YAGNI)", + "content_format": "markdown" + }, + { + "url": "https://github.com/Arks-Layer/PSO2esTranslations", + "domain": "github.com", + "file_source": "part-00436-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Get ESBreaker from its repository https://github.com/PolCPP/ESBreaker\n* Copy the JSON files to the JSON directory (if it's not there create it on the same level as the EXE).\n* Follow the usage instructions on the ESBreaker repo\n* Run ESBreaker (it may take a while).\n* Grab the Output files from the output directory and put them on your android device.\n\n* SEGAC - For the libs that power the trans tool (lol)\n* PolCPP (Rupikachu) - for the tool\n* Aida - Initial translations\n* Logokas - A lot of UI translation work\n* SynthSy - Translation work\n* Alam Arias - Adding travis support and other stuff\n* Bumped.org - Item Names and Wonderful other translations\n* ARKS-Visiphone - Weapon names and other neat stuff\n* Nyaato - Misc. translations\n* YukaLily - Misc. translations\n* Nora - Misc. translations\n* Dabir - Translation Work\n* Snack - Story translation work\n\n```\nDO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE\n Version 2, December 2004\n```\n\nCopyright (C) 2004 Sam Hocevar 14 rue de Plaisance, 75014 Paris, France Everyone is permitted to copy and distribute verbatim or modified copies of this license document, and changing it is allowed as long as the name is changed.\n\n```\nDO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE\n```\n\nTERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION\n\n* You just DO WHAT THE FUCK YOU WANT TO.", + "content_format": "markdown" + }, + { + "url": "https://github.com/Cog-Creators/Red-DiscordBot/pull/3509", + "domain": "github.com", + "file_source": "part-00403-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nAdd this suggestion to a batch that can be applied as a single commit. 
This suggestion is invalid because no changes were made to the code. Suggestions cannot be applied while the pull request is closed. Suggestions cannot be applied while viewing a subset of changes. Only one suggestion per line can be applied in a batch. Add this suggestion to a batch that can be applied as a single commit. Applying suggestions on deleted lines is not supported. You must change the existing code in this line in order to create a valid suggestion. Outdated suggestions cannot be applied. This suggestion has been applied or marked resolved. Suggestions cannot be applied from pending reviews. Suggestions cannot be applied on multi-line comments. Suggestions cannot be applied while the pull request is queued to merge. Suggestion cannot be applied right now. Please check back later.", + "content_format": "markdown" + }, + { + "url": "https://github.com/lukas-krecan", + "domain": "github.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Popular repositories Loading\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.\nIf the problem persists, check the GitHub status page or contact support.", + "content_format": "markdown" + }, + { + "url": "https://github.com/poelzi/ulatencyd/wiki/How-does-it-work", + "domain": "github.com", + "file_source": "part-00403-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 30\n\n# How does it work\n\npoelzi edited this pageFeb 7, 2011 · 3 revisions\n· 3 revisions\nulatencyd has 3 different parts:\n\n* core, which does process parsing, building a process tree, etc\n* rules, which categorize the processes, analyze the system etc\n* the scheduler, which uses the information collected by the core and rules to make decisions on the 
processes\nSome settings are adjustable in `/etc/ulatencyd/ulatencyd.conf` and the cgroups that will be used can be changed\nin `/etc/ulatencyd/cgroups.conf` \nThe core listens on the kernel for when new processes are spawned or exit and runs the rules and scheduler on them. Additionally, a full iteration is run every 10 seconds on all processes. This is required, for example, when flags set on a process expire and the scheduler needs to make another decision.\n\nThe rules and the scheduler can be adjusted by the user as desired.", "content_format": "markdown" }, { "url": "https://github.com/robinandeer/cookiecutter-pyvanguard", "domain": "github.com", "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nCookiecutter template for bleeding edge Python development. See @audreyr/cookiecutter.\n\nThe template encourages adoption of new and exciting developer tools. Focus is on automation and keeping your repo DRY. Whenever justifiable, new and Python-native is preferred over \"tried and true\".\n\nAutomate everything. Banish tedious tasks. Ensure reproducibility. Minimize errors.\n\n* pytest for test discovery and automation\n* Travis for continuous integration\n* bumpversion for updating version numbers with one command\n* Invoke for task execution as a Python-native Make replacement\n* Coveralls.io for integrating test coverage with GitHub\n\nEmbrace conventions. Don't fret details when you don't have to. Make it easy for others to help you out.\n\n* EditorConfig for maintaining consistent coding styles\n* wheel for the future standard in Python packaging\n* Sensible conventions with first class GitHub support like\n `CONTRIBUTING.md` * Let setuptools generate virtual scripts for you by deep linking into your package (see\n `setup.py` for more details)\nLevel out inconsistencies between platforms. Virtualize. Simplify development. 
Inspire experimentation.\n\n* conda as an optional, improved \"virtualenv\" replacement\n* Vagrant to define and share development environments, provisioned by Ansible.\nPython 2.7.x isn't bleeding edge but it would be crazy to not officially support it. The compromise is developing for Python 3 first and ensuring backwards compatibility through a lightweight `_compat.py` module.\nIn your projects folder, scaffold a brand new Python project:\n\n```\n$ cookiecutter https://github.com/robinandeer/cookiecutter-pyvanguard.git\n```\n\nThen:\n\n* Create a repo and put it there.\n* Add the repo to your Travis CI account.\n* Sign up and activate your repo at coveralls.io.\n* Release your package the standard Python way. Here's a release checklist: https://gist.github.com/audreyr/5990987\n\nDon't worry, you have options; fork, remix, and pull requests!\n\n* \n\nNekroze/cookiecutter-pypackage: with PyTest test runner, strict flake8 checking with Travis/Tox, and some docs and setup.py differences.\n\n* \n\ntony/cookiecutter-pypackage: with py2.7+3.3 optimizations. Flask/Werkzeug-style test runner, `_compat` module and module/doc conventions. See `README.rst` or the github comparison view for an exhaustive list of additions and modifications.\n\n* \n\nAlso see the network and family tree for this repo. (If you find anything that should be listed here, please add it and send a pull request!)\n\nIf you have differences in your preferred setup, I encourage you to fork this to create your own version. Or create your own; it doesn't strictly have to be a fork.\n\n* \n\nOnce you have your own version working, add it to the Similar Cookiecutter Templates list above with a brief description.\n\n* \n\nIt's up to you whether or not to rename your fork/own version. 
Do whatever you think sounds good.\n\nI also accept pull requests on this repository provided they are small, atomic, and if they make the overall packaging experience better.", + "content_format": "markdown" + }, + { + "url": "https://github.com/OndraZizka/weld-se-jpa", + "domain": "github.com", + "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 1\n\n# OndraZizka/weld-se-jpa\n\n## Folders and files\n\n| Name | Name | | |\n| --- | --- | --- | --- |\n\n## Repository files navigation\n\n> Got inspired by http://www.laliluna.de/articles/2011/01/12/jboss-weld-jpa-hibernate.html This implementation will cause a separate entity manager for every transactional method. If you are aware of EJB3 or Spring transactions, then you will probably know the transaction type requires_new. It is the same approach. If you want to achieve context propagation as in a EJB3, we need to improve our implementation. I leave the task to you but will outline the required steps. * Add an attribute to the @Transactional annotation which defines if the method should reuse an existing transaction. * Improve the interceptor o Check if the @Transactional annotation defines the reuse of an existing transaction o If an existing TX should be reused, it could check if the store has already a current entity manager. We could add a service method to the store like boolean hasEntitymanager() o If an existing TX should be reused and an entity manager exists already, do nothing else handle the transaction and create an entity manager. 
Another way would be to implement doInJpa() like in Spring 2.5.\n\n## About\n\nJPA support for Weld SE, implementation of CDI for Java SE\n\n### Resources\n\n### Stars\n\n### Watchers\n\n### Forks\n\n## Releases\n\nNo releases published\n\n## Packages 0\n\nNo packages published", + "content_format": "markdown" + }, + { + "url": "https://github.com/librenms/librenms/pull/6058", + "domain": "github.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: ", + "content_format": "markdown" + }, + { + "url": "https://github.com/lessthanoptimal/SURFPerformance", + "domain": "github.com", + "file_source": "part-00830-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 6\n\nBenchmark for SURF interest point detector and descriptors\n\n# lessthanoptimal/SURFPerformance\n\n## Folders and files\n\n| Name | Name | | |\n| --- | --- | --- | --- |\n\n## Repository files navigation\n\n> Benchmark for evaluating SURF libraries by Peter Abeles. Evaluates detection stability, description stability, detection runtime, description runtime, and overall runtime. Results can be viewed at: http://boofcv.org/index.php?title=Performance:SURF Source code can be found here: https://github.com/lessthanoptimal/SURFPerformance Directory descriptions: interfaces/ is where applications are stored for computing results from 3rd party libraries; src/ holds the benchmark source code; data/ is where benchmark data is stored. All third party data and libraries must be downloaded from their respective sites, but the benchmark source code is provided. All code is released under an Apache 2.0 license. http://www.apache.org/licenses/LICENSE-2.0\n\n## About\n\nBenchmark for SURF interest point detector and descriptors\n\n### Resources\n\n### Stars\n\n### Watchers\n\n### Forks\n\n## Releases\n\nNo releases published\n\n## Packages 0\n\nNo packages published", + "content_format": "markdown" + }, + { + "url": "https://github.com/peterc/videocr", + "domain": "github.com", + "file_source": "part-00830-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n`videocr.py` uses macOS's native OCR to extract text from videos. This can then be perused or `grep`-ed for credentials, usernames, AWS access keys, or other personal data you may not want to be in a video you plan to share.
`python videocr.py in.mp4`\n\nOther than the Python packages in `requirements.txt`, you'll need:\n\n* FFmpeg\n* Python 3.10 or higher\n* macOS Monterey or higher\n\nThis is only a rough proof of concept for now. I intend to have it automatically alert on specific, common token types, to tell you where in the video they appear, and to let you specify your own usernames and password fragments to search for.\n\nIf you wish to take this code and turn it into something more generally useful, be my guest, but if you reuse specific code from the project, then include the necessary copyright attribution as per `LICENSE`. Thank you.\n\nThere is another project called `videocr` which extracts hard-coded subtitles from videos. It's also written in Python but works cross-platform by using Tesseract (which I also tried, but it's just too slow). I may rename this project due to this, but as it's currently a prototype and the other videocr hasn't been updated in nearly three years, I'll let it sit for now.\n\nA huge debt goes to those who work on the amazing FFmpeg project for starters.\n\nAlso thanks to Rhet Turnbull whose code I found to learn about ways to call macOS's OCR routines from Python.", + "content_format": "markdown" + }, + { + "url": "https://github.com/Timboy67678?tab=stars", + "domain": "github.com", + "file_source": "part-00830-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Stars\n\nNintendo Switch emulator written in C#, originally created by gdkchan.\n\nValheim dedicated gameserver with automatic update, World backup, BepInEx and ValheimPlus mod support\n\nVisual Assist X Color Scheme For Visual C++ With ReSharper C++\n\nA few scripts and archives for reverse engineering Nintendo 64 games on Ghidra\n\nA binary splitting tool to assist with decompilation and modding projects\n\nTool to statically recompile N64 games into native executables\n\nOpen-source Windows and Office activator featuring HWID, Ohook, TSforge,
KMS38, and Online KMS activation methods, along with advanced troubleshooting.\n\nModern dark theme based on the original ghidra-dark\n\nPython script to extract savefiles out of Xbox Game Pass for PC games\n\nA tool to facilitate converting Bethesda plugin files to a text based format that can be stored in Git\n\nOpen Hardware Monitor - a tool for monitoring hardware performance. Includes support for various temperature sensors, disk I/O ratings and power consumption.\n\nUse your Raspberry Pi as a browser-based KVM.\n\nA DIY IPMI / IP KVM system utilizing the Raspberry Pi\n\nDouble weave on high latency, and mishmash of modding tools - especially for fonts and internationalization for Final Fantasy XIV.\n\nA bunch of scripts that I've collected, written, and forked for the ultimate administration & automation of your Media Server - Think of this as your \"Media server in a box\"\n\n🎉🌩️ Dynamic DNS (DDNS) service based on Cloudflare! Access your home network remotely via a custom domain name without a static IP!\n\nA free, powerful, multi-purpose tool that helps you monitor system resources, debug software and detect malware. Brought to you by Winsider Seminars & Solutions, Inc. 
@ http://www.windows-internals…\n\nUniversal Extractor 2 is a tool to extract files from any type of archive or installer.\n\n🎶 A Discord music bot that's easy to set up and run yourself!\n\n### Sakatard / WhatsTraining\n\nForked from fusionpit/WhatsTraining\nA WoW Classic Addon that shows you upcoming trainer abilities", + "content_format": "markdown" + }, + { + "url": "https://github.com/PiotrSperka/avrTinyBootloader", + "domain": "github.com", + "file_source": "part-00096-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nHomepage: https://sperka.online\n\nDescription [PL]: https://blog.sperka.online/2019/09/uboot-avrtiny-bootloader-dla-attiny13/\n\nDescription [EN]: https://blog.sperka.online/en/2019/09/uboot-avrtiny-bootloader-for-attiny13/\nSimple bootloader for tiniest AVR microcontrollers (like ATtiny13, etc). Without EEPROM programming capabilities, it takes only 160 bytes (80 words!). For communication it uses software UART fed through single wire (half-duplex).\n\nBootloader is provided in two versions:\n\n* flash read/write,\n* flash and eeprom read/write.\n\nI guess that asm files are commented well enough to be understood. I've also provided simple Python script to make use of bootloader.\n\nSoftware UART functions are placed at the end of flash memory, and they can be reused by user application if needed.\n\nBootloader and examples are compiled to use with 1.2MHz oscillator! 
If you want to use a different speed, adjust the baud constant at the beginning of the bootloader code according to the AVR305 application note.\n\nOne important thing to remember: You have to enable self-programming in the fuse bits!\n\nFiles in the bin directory:\n\n* hex2bin.exe - application to convert hex files to binary (not mine)\n* prog-flash-and-eeprom.py - script to use with the EEPROM + FLASH version of the bootloader\n* prog-flash-only.py - script to use with the FLASH-only version of the bootloader\n* TestBlink1.bin - user application that blinks an LED\n* TestBlink2.bin - user application that blinks an LED, but slower\n* UartExample.bin - user application that uses the UART routines from the bootloader\n* uBoot-flash-and-eeprom.bin - EEPROM + FLASH version of the bootloader, 9600 baud @ 1.2MHz\n* uBoot-flash-only.bin - FLASH-only version of the bootloader, 9600 baud @ 1.2MHz", + "content_format": "markdown" + }, + { + "url": "https://github.com/ergochat/ergo/releases/tag/v1.0.0", + "domain": "github.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n# v1.0.0 - History, Unicode, and Nickname Ownership\n\nWe've finally made it to v1.0.0! With this release, our list of need-to-haves is rounded out, and we reckon the software's ready for production use in smaller networks. slingamn and I have been working with our contributors and translators to prepare a cracker of a release. Thanks to @csmith our Docker builds have been updated, with automatic rebuilds as we develop the software.
Thanks to @bogdomania our translation workflow has been improved a lot.\n\nHighlights include:\n\n* Optional support for storing and replaying message history with the `draft/resume-0.3` capability, the draft IRCv3 `CHATHISTORY` command, and a custom `HISTORY` command.\n* Better detection of confusing nick/account/channel names.\n* User-customizable nickname protection methods.\n* An account-only mode in which all clients must have an account and log in to it (using SASL) before they can join the server.\n\nThanks to Mauropek, @modinfo, @bogdomania, @Shillos, Tony Chen, and Remini for adding new translations. Thanks to @Ascrod, @bogdomania, @csmith, @jesopo, @jwheare, @remini1998, @enckse, and @iNecas for finding bugs and/or writing new features.\n\n### Config Changes\n\n* `allow-custom-enforcement` key added under `accounts`.\n* `allow-plaintext-resume` key added under `server`.\n* `history` section added.\n* `identlen` key added under `limits`.\n* `login-throttling` section added under `accounts`.\n* `max-channels-per-account` key added under `channels.registration` (limiting the number of channels that can be registered).\n* `max-channels-per-client` key added under `channels` (limiting the number of channels that can be joined).\n* `method` key under `accounts` now allows the value `\"optional\"`.\n* Exemption lists now accept `localhost` as a value, meaning any loopback IPv4, loopback IPv6, or unix domain address.\n* Logging type `server` has been added, replacing the `startup`, `rehash`, and `shutdown` types.\n* The default logging configuration now logs to stderr only, rather than to both stderr and a file.\n* We no longer listen on port `6668` by default (this fixes Docker installs).\n\n### Security\n\n* Added a SASL-only mode in which all clients must authenticate with SASL.\n* Added login throttling as a hardening measure against password guessing.\n* Configurable limits are imposed on how many channels clients can join or register.\n\n### 
Added\n\n* Added automagic datastore creation on `oragono run`.\n* Added detection and prevention of confusing nicknames, account names, and channel names.\n* Added limited message history for connection resuming (to be extended in future).\n* Added new Español (es) translation (thanks to Mauropek!).\n* Added new Polski (pl) translation (thanks to @modinfo!).\n* Added new Română (ro) translation (thanks to @bogdomania!).\n* Added new Ελληνικά (el) translation (thanks to @Shillos!).\n* Added new 简体中文 (zh-CN) translation (thanks to Tony Chen and Remini!).\n* Added proposed IRCv3 capability `draft/setname`.\n* Added subcommands to `NICKSERV`, including:\n  * `PASSWD` to change account passwords.\n  * `ENFORCE` to set a specific enforcement mechanism on your nick.\n  * `SAREGISTER` to allow operators to manually create new user accounts.\n\n### Changed\n\n* `SASL PLAIN` logins now log more correctly.\n* Database upgrade failures now provide information about the error that occurred.\n* Halfops can now kick unprivileged users.\n* Idents (sometimes called \"usernames\") are now restricted to ASCII, similar to other servers.\n* Improved compatibility with ZNC's nickserv module.\n* In addition to the founder, auto-ops (halfop and higher) now automatically bypass channel join restrictions.\n* Log lines now display time down to milliseconds, instead of just seconds.\n* Updated all translation files (thanks to our amazing translators!).\n* Updated proposed IRCv3 capability `draft/resume` to `draft/resume-0.3`.\n* When nick ownership is enabled, users can now select which enforcement mechanism to use with their nickname.\n\n### Fixed\n\n* `INVITE`: Fixed bug where invited users could not join the channel they were invited to (thanks to @unendingpattern!).\n* `oragono.io/maxline` capability was accidentally disabled, and is now re-enabled.\n* `oragono genpasswd` now works when piping input in (fixes Docker installs).\n* `PRIVMSG`: Messages sent to multiple clients
(such as channel messages) now share the same timestamp (previously each client got a very slightly different time).\n* `WHOIS`: Now responds properly for NickServ, ChanServ, etc.\n* Channel names with right-to-left characters are now casefolded correctly (thanks to @remini1998!).\n* Fixed handling of CIDR width in connection limiting/throttling.\n* Fixed incorrect behavior of the `CHANSERV OP` command.\n* Fixed incorrect rejection of nickmasks with Unicode RTL nicknames.\n* Fixed many responses that violated the specifications (thanks to @Ascrod, @bogdomania, @csmith, @jesopo, and @jwheare!).\n* Fixed nickname sync issue which could cause clients to fail to see each other.\n* Invalid `ISUPPORT` tokens are now explicitly rejected.\n* Made `server-time` timestamp format more consistent and safer.\n* Oragono now exits with status 1 if it fails to start.\n* Prevent logging in multiple times when using `/NS IDENTIFY`.\n* Prevented the db handler from automagically creating the database without initializing it (thanks @enckse!). We also now automatically create the datastore on `run`.\n\n### Internal Notes\n\n* `DLINE` and `KLINE` refactored, and expired bans are now removed from the database.\n* Command-line parsing was upgraded to match modern best practices (thanks to @iNecas!).\n* Direct responses to client commands are now sent \"synchronously\", bypassing the sendq.\n* Logging system optimised.\n* Services handlers refactored.\n* Translations are now sent to/PR'd from CrowdIn automagically as we develop the software.", + "content_format": "markdown" + }, + { + "url": "https://github.com/tmplat-extension/tmplat-chrome/issues/116", + "domain": "github.com", + "file_source": "part-00293-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Description\n\nI've had an idea on how to completely redesign the layout for the Templates tab in the options page.
This is pretty much a brain dump so please bear with me here.\n\nThe page layout will be changed to list templates in a table.\n\nRows can be dragged and dropped to rearrange the sort order.\n\nTemplates can also be dynamically filtered using a search box (possibly including auto-complete functionality), the number of templates displayed can be limited, and dynamic pagination will be used for navigation.\n\nEach row will have a check box that will allow group actions (e.g. Delete, Import, Export) along with a check all box, which will obviously check/uncheck all boxes.\n\nClicking (not dragging) an individual row will open a dialog box which will allow the user to modify it. The Add button will remain and will also open this dialog when clicked, without any information pre-filled. This will also provide the following actions: Save, Delete, Export, Reset, Cancel.\n\nSave\n\n* Performs validation when clicked\n* Disabled unless modifications are detected\n  * Tool tip text should be used to explain this\n\nDelete\n\n* Disabled if template is predefined\n  * Tool tip text should be used to explain this\n* Modal dialog will be used to ask the user to confirm this action\n\nExport\n\n* Opens Export dialog with the template pre-checked\n* Asks user if changes should be saved first if modifications are detected\n\nReset\n\n* Resets all fields to their original values\n\nCancel\n\n* Closes the dialog without persisting any changes\n* Warns user if modifications are detected\n  * User can ask that this warning not be displayed again\n\nThe downside of this process is that - once again - the user will be required to manually save their changes, although I think it makes more sense in a dialog context.\n\nFinally, a sidebar navigation menu will be added (much like the one in the Guide) which will only contain one link (for now), My Templates. In the future, after #112, this will contain another link, Library.
Once added, this will dynamically load the templates hosted on template-extension.org and allow the user to download them.", + "content_format": "markdown" + }, + { + "url": "https://github.com/pdewacht/brlaser/releases", + "domain": "github.com", + "file_source": "part-00096-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n# Releases: pdewacht/brlaser\n\nReleases · pdewacht/brlaser\n\n## brlaser version 6\n\nAdded support for some more Brother HL-series printers. These printers had glitched output in earlier releases.\n\n* Brother HL-2030 series\n* Brother HL-2140 series\n* Brother HL-2220 series\n* Brother HL-2270DW series\n* Brother HL-5030 series\n* Brother DCP-L2520D series\n\n## brlaser version 5\n\nFixed problems with Brother HL-series printers in 600 dpi mode. Thanks to Onno Kortmann for the fix.\n\nAdded brlaser.drv stanzas for several new printers.\n\n## brlaser version 4\n\n* Added several printers.\n* Merged duplex printing support from @xc-racer99. Enabled for DCP-7065DN.\n* Switched to a CMake build system.\n\n## brlaser version 3\n\nAdded DCP-7065DN description.", + "content_format": "markdown" + }, + { + "url": "https://github.com/wolfeidau/node-netif", + "domain": "github.com", + "file_source": "part-00328-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nNode library which uses native calls to locate the mac address for a given interface name.\n\nCurrently works on OSX, Solaris, Linux and Windows. 
Maybe FreeBSD is next?\n\nInstall the module with: `npm install netif` \n\n```\nvar netif = require('netif');\nnetif.getMacAddress('eth0'); // '00:0C:00:00:00:00'\n```\n\nCopyright (c) 2012 Mark Wolfe\n\nLicensed under the MIT license.", + "content_format": "markdown" + }, + { + "url": "https://github.com/bmuessig/SMF-Notifier", + "domain": "github.com", + "file_source": "part-00613-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nAPI and Client application originally designed for the CodeWalr.us forum to have a visual notification on new posts. Works on any SMF forum with its feed enabled. Just change the constants in the server PHP file to allow it to run for other servers.\n\nThe server needs PHP to run. Any normal PHP5 installation will work. Read and write access rights are required for the PHP file in the folder it is located in. The client requires .NET Framework >= 4 and Gtk# from http://download.xamarin.com/GTKforWindows/Windows/gtk-sharp-2.12.44.msi installed.", + "content_format": "markdown" + }, + { + "url": "https://github.com/ThisIsMissEm/node-websocket-server/commit/dae6bed226ccfccf3939973155570b39dc8b3df0", + "domain": "github.com", + "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nThis repository has been archived by the owner on Mar 27, 2018. It is now read-only.\n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 185\n\n## Commit\n\nThis commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.\n\nEdited README.md to feature more / better details as for finding help…\n\n> … on this project.\n\n* Loading branch information\n\nShowing 1 changed file with 16 additions and 2 deletions.\n\n## There are no files selected for viewing\n\nThis file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. 
To review, open the file in an editor that reveals hidden Unicode characters. Learn more about bidirectional Unicode characters\n\n| Original file line number | Diff line number | Diff line change |\n| --- | --- | --- |\n| @@ -1,7 +1,21 @@ |\n| # node-websocket-server # |\n| |\n| This is a server for the WebSocket Protocol. It currently to works |\n| with both [draft75](http://tools.ietf.org/html/draft-hixie-thewebsocketprotocol-75) and [draft76](http://www.whatwg.org/specs/web-socket-protocol/) of the protocol specification. |\n| This is a server for drafts [75](http://tools.ietf.org/html/draft-hixie-thewebsocketprotocol-75) and [76](http://tools.ietf.org/html/draft-hixie-thewebsocketprotocol-76) of the WebSocket Protocol. |\n| |\n| ## Getting help: |\n| |\n| If you have an issues with this server, please check the [issue tracker](http://github.com/miksago/node-websocket-server/issues). |\n| |\n| - If you have an issue with a stacktrace / bug report, please submit an issue in the issue tracker, make sure to include details as to how to reproduce the issue. |\n| - If you have a feature request, create an issue on the bug tracker and specifically state that it is a feature request, also send an email to the mailing list referencing this feature request, discussion on feature requests should be done in the issue tracker. |\n| - If you need general help or want to share what you're using this project in, join & email the mailing list. |\n| |\n| |\n| ## Mailing List: |\n| |\n| We have a mailing list, it is hosted on google groups: [http://groups.google.com/group/node-websocket-server](http://groups.google.com/group/node-websocket-server) |\n| |\n| ## Documentation (outdated) |\n| |\n| See http://static.brandedcode.com/nws-docs/ for some slightly outdated |\n| documentation. 
|", + "content_format": "markdown" + }, + { + "url": "https://github.com/AaronJaramillo?tab=repositories", + "domain": "github.com", + "file_source": "part-00328-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* \n\n### zOAuth2 Public\n\nAn Oauth2 authorization server utilizing the Zcash blockchain for client registration (Prototype/PoC)\n\n* \n\n### metaplex-program-library Public\n\nForked from metaplex-foundation/metaplex-program-library\nSmart contracts maintained by the Metaplex team\n\nRust GNU Affero General Public License v3.0 UpdatedSep 23, 2022\n* \n\n### openzeppelin-contracts Public\n\nForked from OpenZeppelin/openzeppelin-contracts\nOpenZeppelin Contracts is a library for secure smart contract development.\n\nJavaScript MIT License UpdatedApr 11, 2022\n* \n\n### drizzle Public\n\nForked from trufflesuite/drizzle\nReactive Ethereum dapp UI suite\n\nJavaScript UpdatedNov 7, 2021\n* \n\n### pancake-swap-interface Public\n\nForked from PancakeBunny-finance/pancake-swap-interfaceTypeScript GNU General Public License v3.0 UpdatedNov 27, 2020\n* \n\n### liquibook Public\n\nForked from enewhuis/liquibook\nModern C++ order matching engine\n\nC++ Other UpdatedJul 9, 2019\n* \n\n### LibbitcoinTutorial Public\n\nExample Programs from Libbitcoin Tutorials at\n\n* \n\n### HoldThePenProd Public\n\nProduction Code For Song Lyric Analysis http://HoldThePen.com\n\nPython UpdatedMar 13, 2018\n* \n\n### libbitcoin-explorer Public\n\nForked from libbitcoin/libbitcoin-explorer\nBitcoin Command Line Tool\n\nC++ Other UpdatedMar 30, 2017\n* \n\n### DarkWallet Public\n\nForked from DissentDifference/DarkWallet\nBitcoin privacy wallet for anonymous anarchist hackers\n\nPython UpdatedFeb 23, 2017\n* \n\n### garden.qrcode Public\n\nForked from kivy-garden/garden.qrcode\nQrcode generator", + "content_format": "markdown" + }, + { + "url": "https://github.com/AustinTi/HolidayBot", + "domain": "github.com", + "file_source": 
"part-00614-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nA discord bot created with DiscordGo that spits out real holidays that you may have never heard of before. All holidays are grabbed from Checkiday\n\nVote for HolidayBot on Top.gg!\n\n* \n `/about` - Shows infomation about the bot (invite, voting, source, purpose, author, etc.)* \n `/settings` - Displays current server-specific settings.* \n `/ping` - Pong!* \n `/stats` - Shows bot statistics like uptime, lib versions, etc.* \n `/h [timezone]` - Displays holidays in the specified timezone or server timezone on command (if enabled).* \n `/set` - Sets server-specific settings (Manage Server permission required).\n\n* \n `/set timezone ` - Changes the timezone to any valid tz/zoneinfo database timezone (eg. `America/Chicago` ). See list here. This is used for the daily posting timezone, as well as the default timezone used when `/h` is run. (default: `UTC` ).* \n `/set adult ` - Enables/disables content that may not be safe for viewing by children. True = Adult content enabled, False = Adult content disabled (default: `False` ).* \n `/set daily ` - Enables/disables the bot posting new holidays every midnight in the set timezone. True = Daily posting enabled, False = Daily posting disabled (default: `True` ).* \n `/set dailyChannel ` - Sets the channel the daily holidays (if enabled) will be posted in. Permission to view, send messages, and embed links must be granted in the channel before setting it. By default, this will be the first channel the bot is able to send messages in.* \n `/set command ` - Enables/disables the ability for users to run `/h` to display holidays on command. 
True = /h command enabled, False = /h command disabled (default: `True` ).* \n `/set reset` - Resets this guild's settings to the default settings.HolidayBot's Terms of Service and Privacy Policy are contained in `TERMS.md` and `PRIVACY.md` respectively.\nHolidays listed are largely US and international only, no matter which timezone the holidays are displayed in. This is nothing I can really control, as there lacks a good holiday database suitable for the fun-oriented purposes HolidayBot is designed to serve. Furthermore, neither HolidayBot nor its creator endorse any of the holidays the bot will display.", + "content_format": "markdown" + }, + { + "url": "https://github.com/ZeframLou/pooled-cdai", + "domain": "github.com", + "file_source": "part-00293-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nPools DAI, converts it into Compound DAI (cDAI), and sends interests to a beneficiary. Users putting DAI into the pool receives Pooled cDAI (pcDAI), an ERC20 token which is 1-for-1 redeemable for DAI at any time.\n\nCompound accrues interest to cDAI by increasing `exchangeRate`, the amount of DAI you can redeem per cDAI. Therefore, you can calculate the current DAI value of the pool's cDAI using `exchangeRate * poolCDAIBalance`. To calculate the interest, simply subtract the total DAI deposited from that value: `exchangeRate * poolCDAIBalance - totalDAIDeposited`. 
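The interest formula can be checked with a small arithmetic sketch. All numbers below are made up for illustration, and the 1e18 fixed-point scale for `exchangeRate` is a simplifying assumption, not Compound's exact representation:

```python
# Simplified sketch of the pool's interest accounting (illustrative numbers only).
SCALE = 10**18                  # assumed fixed-point scale for exchangeRate

exchange_rate = 2 * 10**16      # 0.02 DAI redeemable per cDAI (scaled by 1e18)
pool_cdai_balance = 1_050_000   # cDAI held by the pool
total_dai_deposited = 20_000    # DAI deposited by all users (= pcDAI totalSupply)

# Current DAI value of the pool's cDAI: exchangeRate * poolCDAIBalance
pool_value = exchange_rate * pool_cdai_balance // SCALE      # 21_000 DAI

# Accrued interest: exchangeRate * poolCDAIBalance - totalDAIDeposited
interest = pool_value - total_dai_deposited                  # 1_000 DAI

# A new deposit adds the same amount to both terms, so interest is unchanged
deposit = 5_000                                              # DAI
pool_cdai_balance += deposit * SCALE // exchange_rate        # +250_000 cDAI
total_dai_deposited += deposit
new_value = exchange_rate * pool_cdai_balance // SCALE       # 26_000 DAI
assert new_value - total_dai_deposited == interest           # still 1_000 DAI
```

Deposits and withdrawals shift both sides of the subtraction equally, which is why no lock period is needed.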
pcDAI records the total deposit using `totalSupply`, since pcDAI is 1-for-1 redeemable for DAI.\nSince DAI deposits & withdrawals add/subtract the same amount from both sides of the minus sign, they don't affect the interest calculation, so there's no need for lock periods.\n\n```\nnpm install\ntruffle compile\n```\n\n* Beneficiaries: the accounts that receive the interest\n* Owner: the account that can change the beneficiaries, default is the creator of the pcDAI smart contract\n* User: accounts that can deposit into/withdraw from the pool (all accounts)\n\nCall this function in `PooledCDAIFactory`: `function createPCDAI(string calldata name, string calldata symbol, PooledCDAI.Beneficiary[] calldata beneficiaries, bool renounceOwnership) external returns (PooledCDAI)` \n\n* \n\n```\nfunction mint(address to, uint256 amount) public returns (bool)\n```\n\nDeposit `amount` DAI into pool, send minted pcDAI to `to` \n\n* \n\n```\nfunction burn(address to, uint256 amount) public returns (bool)\n```\n\nBurn `amount` pcDAI, send redeemed DAI to `to` \n\n* \n\n```\nfunction withdrawInterestInDAI() public returns (bool)\n```\n\nWithdraw accrued interest to the beneficiaries in DAI\n\n* \n\n```\nfunction setBeneficiaries(Beneficiary[] calldata newBeneficiaries) external onlyOwner returns (bool)\n```\n\nChange the beneficiaries to `newBeneficiaries` \n\n* \n\n```\nfunction accruedInterestCurrent() public returns (uint256)\n```\n\nCalculates the current accrued interest. It's not a `view` function, since it updates the `exchangeRate` of cDAI.\n\n* \n\n```\nfunction accruedInterestStored() public view returns (uint256)\n```\n\nCalculates the current accrued interest.
It's a `view` function, but it uses the cDAI exchange rate at the last call to the cDAI smart contract, so it might not be up to date.\nExtensions are smart contracts that extend the features of Pooled cDAI.\n\n* Location:\n\n```\ncontracts/extensions/PooledCDAIKyberExtension.sol\n```\n\n* Description: Enables minting & burning pcDAI using ETH & ERC20 tokens supported by Kyber Network, rather than just DAI. There's no need to deploy one for each pool, since it uses pcDAI as a black box.\n\n* PooledCDAI template: 0x3D6d83649939bAA953ddC589d2d5Db775Df91520\n* MetadataPooledCDAIFactory: 0xB72B4B94d1eD3Cc382D5beEEfE3d03dd55Ad8229\n* Kyber extension: 0x44FBF73a97cf50640A3208b883F810F730D80c2B\n* Sai2Dai migration contract: 0x02c9e4174E9D23BB7619c83Ef5f771fCB1E6FDB8", + "content_format": "markdown" + }, + { + "url": "https://github.com/SUSE/Portus/wiki/Generating-man-pages-for-portusctl", + "domain": "github.com", + "file_source": "part-00614-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nYou signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert\n\n{{ message }}\n\nThis repository has been archived by the owner on Apr 17, 2023. It is now read-only.\n\nMiquel Sabaté Solà edited this page Aug 9, 2016 · 1 revision\n\nDocumentation for portusctl comes in the form of UNIX manual pages. They can be found inside of the packaging/suse/portusctl/man directory. In this directory, there are two subdirectories:\n\nmarkdown: the files that developers use to write the documentation.\n\nman1: the manual pages that should be installed in the system.\n\nAs you can see, we use regular Markdown files to edit manual pages. This is possible thanks to the md2man gem. 
In order to generate the resulting man pages inside of the packaging/suse/portusctl/man/man1 directory, you have to execute:\n\n$ rake portus:generate_man_pages\n\nNote that this should be done by developers. Packagers should only have to care about the files inside of packaging/suse/portusctl/man/man1 and disregard completely the markdown files.", + "content_format": "markdown" + }, + { + "url": "https://github.com/davidsivocha", + "domain": "github.com", + "file_source": "part-00096-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nGamer, Amatuer Blacksmith and full time Back End Developer\n\n* Stafford, UK\n* 06:49\n(UTC)\n* http://sivocha.com\n\n## Pinned Loading\n\n* invoiceninja/invoiceninja\ninvoiceninja/invoiceninjaPublic\nPublic\nA source-available invoice, quote, project and time-tracking app built with Laravel\n\n* PrestaShop/PrestaShop\nPrestaShop/PrestaShopPublic\nPublic\nPrestaShop is the universal open-source software platform to build your e-commerce solution.\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.\nIf the problem persists, check the GitHub status page or contact support.", + "content_format": "markdown" + }, + { + "url": "https://github.com/thelia/thelia/pull/1903", + "domain": "github.com", + "file_source": "part-00328-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n# Added missing generateErrorRedirect() #1903\n\nMerged\n\nAdd this suggestion to a batch that can be applied as a single commit. This suggestion is invalid because no changes were made to the code. Suggestions cannot be applied while the pull request is closed. Suggestions cannot be applied while viewing a subset of changes. Only one suggestion per line can be applied in a batch. Add this suggestion to a batch that can be applied as a single commit. 
Applying suggestions on deleted lines is not supported. You must change the existing code in this line in order to create a valid suggestion. Outdated suggestions cannot be applied. This suggestion has been applied or marked resolved. Suggestions cannot be applied from pending reviews. Suggestions cannot be applied on multi-line comments. Suggestions cannot be applied while the pull request is queued to merge. Suggestion cannot be applied right now. Please check back later.", + "content_format": "markdown" + }, + { + "url": "https://github.com/MatrixAI/Polykey", + "domain": "github.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nPolykey is an open-source, peer-to-peer system that addresses the critical challenge in cybersecurity: the secure sharing and delegation of authority, in the form of secrets like keys, tokens, certificates, and passwords.\n\nIt allows users, including developers, organizations, and machines, to store these secrets in encrypted vaults on their own devices, and share them directly with trusted parties.\n\n* All data is end-to-end encrypted, both in transit and at rest, eliminating the risk associated with third-party storage.\n* Polykey provides a command line interface, desktop and mobile GUI, and a web-based control plane for organizational management.\n* By treating secrets as tokenized authority, it offers a fresh approach to managing and delegating authority in zero-trust architectures without adding burdensome policy complexity - a pervasive issue in existing zero-trust systems.\n* Unlike complex self-hosted secrets management systems that require specialized skills and infrastructure, Polykey is installed and running directly from the end-user device.\n* It is built to automatically navigate network complexities like NAT traversal, connecting securely to other nodes without manual configuration.\n\nKey features:\n\n* Decentralized Encrypted Storage - No
storage of secrets on third parties, secrets are stored on your device and synchronised point-to-point between Polykey nodes.\n* Secure Peer-to-Peer Communication - Polykey bootstraps TLS keys by federating trusted social identities (e.g. GitHub).\n* Secure Computational Workflows - Share static secrets (passwords, keys, tokens and certificates) with people, between teams, and across machine infrastructure. Create dynamic (short-lived) smart-tokens with embedded policy for more sophisticated zero-trust authority verification.\n* With Polykey Enterprise, you can create private networks of Polykey nodes and apply mandatory policy governing node behaviour.\n `npm install --save polykey` Run `nix develop`, and once you're inside, you can use:\n\n```\n# install (or reinstall packages from package.json)\nnpm install\n# build the dist\nnpm run build\n# run the repl (this allows you to import from ./src)\nnpm run ts-node\n# run the tests\nnpm run test\n# lint the source code\nnpm run lint\n# automatically fix the source\nnpm run lintfix\n```\n\n `npm run docs` \nSee the docs at: https://matrixai.github.io/Polykey/\n\n```\n# npm login\nnpm version patch # major/minor/patch\nnpm run build\nnpm publish --access public\ngit push\ngit push --tags\n```\n\nPolykey is licensed under the GPLv3, you may read the terms of the license here.", + "content_format": "markdown" + }, + { + "url": "https://github.com/parrot/parrot/blame/pge_no_namespace_methods/DEVELOPING", + "domain": "github.com", + "file_source": "part-00830-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nWe read every piece of feedback, and take your input very seriously.\n\nTo see all available qualifiers, see our documentation.", + "content_format": "markdown" + }, + { + "url": "https://github.com/joeyhoer/", + "domain": "github.com", + "file_source": "part-00403-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nDesigner, 
developer, manager, and strategist\n\n* Richmond, VA\n* joeyhoer.com\n\n## Popular repositories Loading\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.", + "content_format": "markdown" + }, + { + "url": "https://github.com/uncosoft", + "domain": "github.com", + "file_source": "part-00830-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n# Uncosoft\n\n## Popular repositories Loading\n\n* Photoshop-Export-Layers-to-Files-Fast\nPhotoshop-Export-Layers-to-Files-FastPublic\nPublic\nForked from antipalindrome/Photoshop-Export-Layers-to-Files-Fast\n\nThis script allows you to export your layers as individual JPGs / PNGs at a speed much faster than the built-in script from adobe.\n\nJavaScript\n\n* DTCompileTimeTracker\nDTCompileTimeTrackerPublic\nPublic\nForked from DarrenTsung/DTCompileTimeTracker\n\nUnity editor extension which tracks compile time\n\nC#\n\n* unity-builder\nunity-builderPublic\nPublic\nForked from game-ci/unity-builder\n\nBuild Unity projects for different platforms\n\nTypeScript\n\n### Repositories\n\n* rider-shared-settings Public\nuncosoft/rider-shared-settings’s past year of commit activity\n* Photoshop-Export-Layers-to-Files-Fast Public Forked from antipalindrome/Photoshop-Export-Layers-to-Files-Fast\n\nThis script allows you to export your layers as individual JPGs / PNGs at a speed much faster than the built-in script from adobe.\n\nuncosoft/Photoshop-Export-Layers-to-Files-Fast’s past year of commit activity\n* DTCompileTimeTracker Public Forked from DarrenTsung/DTCompileTimeTracker\n\nUnity editor extension which tracks compile time\n\nuncosoft/DTCompileTimeTracker’s past year of commit activity", + "content_format": "markdown" + }, + { + "url": "https://github.com/pwarelis/Ajax-Typeahead/pull/8", + "domain": "github.com", +
"file_source": "part-00613-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nAdd this suggestion to a batch that can be applied as a single commit. This suggestion is invalid because no changes were made to the code. Suggestions cannot be applied while the pull request is closed. Suggestions cannot be applied while viewing a subset of changes. Only one suggestion per line can be applied in a batch. Add this suggestion to a batch that can be applied as a single commit. Applying suggestions on deleted lines is not supported. You must change the existing code in this line in order to create a valid suggestion. Outdated suggestions cannot be applied. This suggestion has been applied or marked resolved. Suggestions cannot be applied from pending reviews. Suggestions cannot be applied on multi-line comments. Suggestions cannot be applied while the pull request is queued to merge. Suggestion cannot be applied right now. Please check back later.", + "content_format": "markdown" + }, + { + "url": "https://github.com/simpeg", + "domain": "github.com", + "file_source": "part-00328-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Pinned Loading\n\n### Repositories\n\nShowing 10 of 43 repositories\n\n* simpeg-docs Public\nsimpeg/simpeg-docs’s past year of commit activity", + "content_format": "markdown" + }, + { + "url": "https://github.com/sunng87/clojuredocs-android", + "domain": "github.com", + "file_source": "part-00096-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nClojureDocs Android This is an Android app for clojuredocs.org. You can download the latest version from GitHub.
Any pull request is welcome.", "content_format": "markdown" + }, + { + "url": "https://github.com/slic3r/Slic3r/wiki", + "domain": "github.com", + "file_source": "part-00436-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 1.3k\n\n# Home\n\nAndy Alt edited this page Jan 10, 2019 · 13 revisions\nNeed help running Slic3r? A selection of official and unofficial documents to get you started are listed in the Documentation, but first check out the Frequently Asked Questions.\n\nLooking to report an issue with Slic3r? The Quick guide to writing good bug reports lists all you need to include in your issue report.\n\nTrying to get Slic3r running from Git? Instructions for getting the dependencies installed are available for OS X, GNU/Linux and Windows.\n\n* Prebuilt Win32 builds of Slic3r pull requests (other branches): https://dl.slic3r.org/dev/win/branches\n\n* These are considered highly experimental, please post any issues in the related pull request thread.\n\nIf you want to contribute to the Slic3r development, check out the contributing page.\n\nOther useful pages:\n\n* Code:-Adding-new-option-to-GUI-CLI - What files need to be modified to add a new configuration option to the CLI/GUI.\n* Code: Writing Test Cases - Templates/examples for adding test cases for your pull requests.", "content_format": "markdown" + }, + { + "url": "https://github.com/nathany/vagrant-gopher", + "domain": "github.com", + "file_source": "part-00613-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nA Vagrantfile for running Go VMs (Linux, BSD and Solaris).
While cross-compilation works great in Go, having actual VMs can be useful to run tests or debug programs under other operating systems.\n\nYou will need Vagrant, VirtualBox and a Go workspace.\n\nOn a Mac, you can use Homebrew Cask to install:\n\n```\nbrew cask install virtualbox vagrant\n```\n\nCopy this Vagrantfile to $GOPATH/src/Vagrantfile:\n\n```\nsrc:~$ curl -O https://raw.githubusercontent.com/nathany/vagrant-gopher/master/Vagrantfile\n```\n\nThen run `vagrant up` from any subfolder. This will mount your `src` folder as a shared folder inside the VMs, while creating new bin/pkg folders to avoid collisions (particularly bin). Use `vagrant ssh linux`, `vagrant ssh bsd`, or `vagrant ssh solaris` to log in and look around, or run a single command like `vagrant ssh linux -c 'go version'`.\nYou will likely need to change directories. CDPATH is configured for GitHub to save a few keystrokes, e.g.\n\n```\nvagrant ssh linux -c 'cd fsnotify/fsnotify; go test ./... -race'\n```\n\nUse `vagrant halt` to shut down or `vagrant destroy` to free up disk space.\nSee the Vagrant Command Line documentation for details.\n\n* Currently this shares the src/ folder into `$HOME/src` on the virtual machine ( `$HOME/go/src` did not work).\n* It would be nice to have a wrapper/plugin around the ssh command that runs commands based on the current folder.\n* Shared folders aren't compatible with tools that watch files inside the VM using fsnotify.\n* 64-bit boxes are used to support the Go race detector.\n* The BSD box does not support Windows hosts.", + "content_format": "markdown" + }, + { + "url": "https://github.com/telekineticyeti", + "domain": "github.com", + "file_source": "part-00614-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Pinned Loading\n\n* gw2-prometheus-exporter\ngw2-prometheus-exporterPublic\nPublic\nExports data scraped from the Guild Wars 2 API, for ingestion by Prometheus.\n\nTypeScript\n\n*
subsonic-api-wrapper\nsubsonic-api-wrapperPublic\nPublic\nSubsonic API wrapper for Node, written in Typescript\n\nTypeScript\n\n* subsonic-nowplaying-rgb-display\nsubsonic-nowplaying-rgb-displayPublic\nPublic\nSubsonic Now Playing album art for Taschen Flaschen RGB Matrix display\n\nTypeScript\n\n* subsonic-playlist-export\nsubsonic-playlist-exportPublic\nPublic\nCLI to export playlists and songs from Subsonic-compatible servers.\n\nTypeScript\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.", + "content_format": "markdown" + }, + { + "url": "https://github.com/jclawson", + "domain": "github.com", + "file_source": "part-00328-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Google\n* Denver, CO\n* http://www.jasonclawson.com\n\n## Popular repositories Loading\n\n* jackson-dataformat-hocon\njackson-dataformat-hoconPublic\nPublic\nJackson parser format for parsing HOCON\n\n* hazelcast-lambdaj\nhazelcast-lambdajPublic\nPublic\nUse lambdaj syntax to execute distributed methods across the cluster\n\nJava 3\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.", + "content_format": "markdown" + }, + { + "url": "https://github.com/mgechev/codelyzer/pull/925", + "domain": "github.com", + "file_source": "part-00830-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n# chore(deps): update dependency typescript to v3.7.3 #925\n\nAdd this suggestion to a batch that can be applied as a single commit. This suggestion is invalid because no changes were made to the code. Suggestions cannot be applied while the pull request is closed.
Suggestions cannot be applied while viewing a subset of changes. Only one suggestion per line can be applied in a batch. Add this suggestion to a batch that can be applied as a single commit. Applying suggestions on deleted lines is not supported. You must change the existing code in this line in order to create a valid suggestion. Outdated suggestions cannot be applied. This suggestion has been applied or marked resolved. Suggestions cannot be applied from pending reviews. Suggestions cannot be applied on multi-line comments. Suggestions cannot be applied while the pull request is queued to merge. Suggestion cannot be applied right now. Please check back later.", + "content_format": "markdown" + }, + { + "url": "https://github.com/x64dbg/x64dbg/issues/1939", + "domain": "github.com", + "file_source": "part-00096-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nOpen\n\n## Description\n\n* Apr 5 2018\n* Windows 8.1 64bit\nI posted a question at Google Groups a few days ago, but I have no answer.\n\nhttps://groups.google.com/forum/#!topic/x64dbg/lFrbKW3DxaY\nIs there a command to output the disassembled result to the log window?\n\nThe `disasm` command doesn't output the disassembled result to the log window.\n\nhttp://help.x64dbg.com/en/latest/commands/gui/disasm.html\nIn WinDbg, the `u` command displays the disassembled result in the log window.\n\n```\n0:000> u USER32!GetMessageW\nUSER32!GetMessageW:\n00007ff8`026f2660 fff3 push rbx\n00007ff8`026f2662 4883ec20 sub rsp,20h\n00007ff8`026f2666 418bc0 mov eax,r8d\n00007ff8`026f2669 458bd1 mov r10d,r9d\n00007ff8`026f266c 488bd9 mov rbx,rcx\n00007ff8`026f266f 410bc1 or eax,r9d\n00007ff8`026f2672 a90000feff test eax,0FFFE0000h\n00007ff8`026f2677 0f8503700500 jne USER32!GetMessageW+0x57020 (00007ff8`02749680)\n```", + "content_format": "markdown" + }, + { + "url": "https://github.com/zubair-io?tab=followers", + "domain": "github.com", + "file_source":
"part-00403-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n## Highlights\n\nPavlo Bondarenko OfficialCodeVoyage\n\nProduct Manager / Software Engineer / Cloud Engineer / IT Enthusiast\n\nDenver, CO\n\nMd. Abdullah Al Marjan mdmarjanalijss2021\n\nB2B Lead Generation | Data Mining | List Building | Finding Leads| Service mdmarjanalijss2021@gmail.com\n\nSadia It Rangpur City in Bangladesh\n\nKevinHock\n\n0.01x Engineer. The (Myspace) Tom of GitHub. 11% of pre-tax income to effective altruism charities.\n\n@grammarly (Formerly @pinterest, @Yelp) I love San Francisco\n\nJosé Naves Moura Neto josenaves\n\nSoftware Engineer ::: Java, Kotlin, Kotlin, Rust, blockchain, and other cool stuff :::\n\nVolvoCars Sweden", + "content_format": "markdown" + }, + { + "url": "https://github.com/CityZenApp", + "domain": "github.com", + "file_source": "part-00403-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n# CityZen app project\n\nThe open source Android app that helps you find POIs such as ATMs, gas stations etc based on your location using OpenStreetMap. 
It doesn't track you!\n\n* 4 followers\n* Tirana, Albania\n* https://cityzenapp.co\n* redon@skikuli.com\n\n## Pinned Loading\n\n### Repositories\n\nShowing 4 of 4 repositories\n\n* Project-Management Public\n\nInternal Repo for CityZen tasks, workflow, documentation, project management and wiki\n\nCityZenApp/Project-Management’s past year of commit activity", + "content_format": "markdown" + }, + { + "url": "https://github.com/aboodman", + "domain": "github.com", + "file_source": "part-00293-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\naboodman Follow\n\n😀\n\nProgrammer, troublemaker, father, lover (of the color pink)\n\n* Hawaii\n* http://aaronboodman.com\n\n## Highlights\n\n* Pro\n\n## Popular repositories Loading\n\n* open-frame\nopen-framePublic\nPublic\nChrome extension that adds a context menu for \"Open frame in new tab\"\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.", + "content_format": "markdown" + }, + { + "url": "https://github.com/highcharts/highcharts/issues/4699", + "domain": "github.com", + "file_source": "part-00293-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nClosed\n\n## Description\n\nLines pass through bars when inverted is set to true: http://jsfiddle.net/t2Lha3at/1/\n\nThey don't in the non-inverted chart: http://jsfiddle.net/t2Lha3at/2/\nThis is at least an inconsistency.", + "content_format": "markdown" + }, + { + "url": "https://github.com/nvaccess/nvda/issues/7652", + "domain": "github.com", + "file_source": "part-00293-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "#
\n\nDate: \nCategories: \nTags: \n\n## Description\n\nSteps to reproduce:\n\n* Go to https://codepen.io/Venkatesh-a11y/pen/OxZWdv\n* As soon as page loads focus lands on edit field so hit ESC key to switch to browse mode.\n* Hit M to move to the result frame.\n* Now arrow down to both unordered and ordered list over there.\n* Or hit L to navigate lists on the page.\n\nExpected behavior:\n\nNVDA should announce the aria-label before announcing the list item when focus enters list with quick key L or with down arrow otherwise context will miss for the users though specific context is described for each and every list with aria-label.\n\nIn the mentioned example, when user hits L NVDA should announce \"list with two items here are the list of fruits I like bullet apples\" and in the next hit of L it should announce \"list with two items here are the list of vegetables I like one potato\".\nActual Result:\n\nNVDA doesn't announce the aria-label with quick key L or while arrowing down the list instead it announces the number of list items followed by the first list item directly.\n\nIn the mentioned example, NVDA announces \"list with two items bullet Apples\" and \"list with two items one Potato\" when focus enters lists.\nSystem configuration:\n\nNVDA Version: 2017.3, Installed\n\nOther information:\n\nOS version: Windows 10\n\nBrowser Versions: Firefox 56, Chrome 61, IE 11, Microsoft Edge 41\n\nTested with NVDA 2017.1 and 2017.2 also and the issue repro's.", + "content_format": "markdown" + }, + { + "url": "https://github.com/realXtend/taiga-wizard", + "domain": "github.com", + "file_source": "part-00328-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 7\n\n# realXtend/taiga-wizard\n\n## Folders and files\n\n| Name | Name | | |\n| --- | --- | --- | --- |\n\n## Repository files navigation\n\n> TaigaWizard is a Taiga configuration helper, 
designed to help alleviate the pain of configuring the set of ini/xml files needed to have a functional setup. If you discover any bugs or have any suggestions, please let us know by email, IRC, or by placing a bug in the issue tracker.\n\n== Building ==\n\nTaigaWizard is a Qt application, and requires the Qt SDK.\n\n=== Linux ===\n\n$ qmake\n$ make\n$ cd bin/\n$ ./TaigaWizard\n\n## About\n\nConfiguration wizard for Taiga\n\n### Resources\n\n### Stars\n\n### Watchers\n\n### Forks\n\n## Releases\n\nNo releases published\n\n## Packages 0\n\nNo packages published", + "content_format": "markdown" + }, + { + "url": "https://github.com/vazgriz/VkColors", + "domain": "github.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nThis program generates colorful images using algorithms created by Jozsef Fejes.\n\nVkColors can be invoked from the command line:\n\n `VkColors [options]` \nThere are multiple options to change the behavior of the image generator.\n\n* `--generator=[generator]`\nThis selects the algorithm to use in the image generator. Values that can be used are `wave`, `coral`, `cpu-wave`, and `cpu-coral`. Default is `coral`.\n* `--size=[width]x[height]`\nThis sets the size of the image. Valid values are any size between `1x1` and `4096x4096`. Default is `512x512`.\n* `--color=[source]`\nThis sets the method used to color the image. Values that can be used are `shuffle` and `hue`. Default is `shuffle`.\n* `--seed=[seed]`\nThis sets the seed used by the random number generator. Must be a 32-bit unsigned value. Default is based on system time.\n\nOther options can be set, but this may result in strange behavior or crashing.\n\n* `--workgroupsize=[size]`\nThis sets the size of the compute work group used. Must be positive.
Default is 64 on AMD hardware and 32 on anything else.\n\n* `--maxbatchabsolute=[size]`\nThis sets the maximum number of pixels that can be generated in one batch. Must be positive. Default is 64.\n\n* `--maxbatchrelative=[size]`\nThis sets the maximum number of pixels that can be generated, based on the current state of the image. Must be positive. Default is 1024.\n\nThis project uses CMake as its build system.", + "content_format": "markdown" + }, + { + "url": "https://github.com/swagger-api/swagger-core/wiki/Swagger-Core-JAX-RS-Project-Setup", + "domain": "github.com", + "file_source": "part-00614-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 2.2k\n\n# Swagger Core JAX RS Project Setup\n\n# This document is here for legacy information and refers to an old version of swagger-core. To use the latest, please refer to the new guide.\n\nIn order to integrate the Swagger documentation in your application, you'd need to follow these three setup steps:\n\n* Add Swagger's dependencies to your project.\n* Hook Swagger into your JAX-RS application configuration.\n* Configure and initialize Swagger.\n\nEach implementation has its own wiki page summarizing the steps:\n\nOnce you finish the setup process, you can continue on to adding the annotations to your code.\n\nSwagger uses Maven for build and deployment and its artifacts are available at Maven Central. You can use the Maven dependencies with any dependency management system that supports Maven dependencies such as Maven, Ivy and Gradle.\n\nSwagger-Core uses the `groupId` name of `com.wordnik`.
This is true for all dependencies.\nThere are three artifacts that can be applied to JAX-RS:\n\n| # | Artifact Basic Name | JAX-RS Version | Additional Properties |\n| --- | --- | --- | --- |\n| 1 | swagger-jaxrs | 1.0 | |\n| 2 | swagger-jersey-jaxrs | 1.0 | Supports documentation of Jersey-based file uploads. |\n| 3 | swagger-jersey2-jaxrs | 2.0 | Supports documentation of Jersey2-based file uploads. |\n\nFrom the table above, you can see that `swagger-jersey2-jaxrs` supports JAX-RS 2.0. This is basically by adding support for the `@BeanParam` annotation, which was introduced in JAX-RS 2.0. This dependency can be used by any JAX-RS 2.0 implementation including RestEasy. The only limitation is the support for file upload documentation, which can be done otherwise.\nSwagger-Core's 1.3.X versions and prior use Scala, and as such depend on the Scala version. All 1.3.X versions are available with Scala 2.10, and starting from 1.3.10, the dependencies are available with Scala 2.11 as well.\n\nTo get the complete name of the artifact, the Scala version is attached to the artifact name. So for Scala 2.10, the artifact names would be `swagger-jaxrs_2.10`, `swagger-jersey-jaxrs_2.10` and `swagger-jersey2-jaxrs_2.10`. Scala developers who use version 2.11 can replace `_2.10` with `_2.11`. For Java developers, you can use either; it has no effect on functionality.\nAt the time of writing this document, Swagger 1.3.12 is the latest release, which will be used in the examples in this document. It can be assumed that future versions will be attached in a similar manner. Should the behavior change, the document will be updated accordingly. The latest Swagger-Core version can be found here - https://github.com/swagger-api/swagger-core#compatibility.\n\nProjects that cannot utilize Maven's dependencies would need to add the dependencies manually. Since those may change from version to version, the list of dependencies will not be documented here.
Instead, it is advised that you clone the swagger-core repository, go to the directory of the relevant module (artifact) and run `mvn dependency:list`. That would give you a list of dependencies required by swagger-core which you would have to include manually in your application. Keep in mind this requires you to have maven installed but it does not require you to use maven for your project.\nSwagger-core exposes JAX-RS providers and one JAX-RS resource to generate and serve the Swagger API documentation. In this section we'll explore the various methods of hooking these components with your application to start serving your documentation.\n\nA successful hook-up will give you access to Swagger's `/api-docs`. It does not mean the `/api-docs` will be populated, and you may need to follow the Swagger Configuration step as well.\nThe final step is to configure Swagger and initialize it. This is required for Swagger to scan the resources and expose them.", + "content_format": "markdown" + }, + { + "url": "https://github.com/pbek/QOwnNotes/issues/176", + "domain": "github.com", + "file_source": "part-00403-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nClosed\n\n## Description\n\nWhen I click system tray icon, QOwnNotes window will just show/hide its main window.\n\nIt doesn't change priority of window.\n\nSo, if I activate an another application window after hide QOwnNotes window, QOwnNotes window will be restored under that application window when I click system tray icon.\n\nI must click on taskbar icon to activate QOwnNotes window.\n\nIt is better to restore QOwnNotes window at the top of windows when I click system tray icon, isn't it ?\nAn another problem is the case of minimized window.\n\n* Enable Show in system tray option.\n* Minimized QOwnNotes window to hide QOwnNotes window.\n* Click system tray icon.\n* Click system tray icon to show QOwnNotes window.\nIn this case, QOwnNotes window is still minimized and 
hidden.\n\nIt is better to restore from minimized when I click system tray icon, isn't it ?\n\n## Metadata\n\n### Assignees\n\n### Labels\n\nNo labels", + "content_format": "markdown" + }, + { + "url": "https://github.com/mohsen1/quicksprite", + "domain": "github.com", + "file_source": "part-00830-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nI made my own sprite builder using HTML5 technologies. This sprite builder doesn’t rely on server-side image processing and works entirely in the front end. It also generates the CSS code a developer may need for the sprite. All background-position properties are calculated and there is no need for eyeballing pixels in Photoshop to find out the exact position of an icon! It generates width and height and even the background-image property for each image. CSS classes are guessed based on each image file name. Users can select between camel case and dash-separated class names. Using Quick Sprite is very easy: you just drag your images into the app and it generates the sprite image and CSS code in a second! You can drag out the result image or download it using the provided download link.\n\nI’ve used Backbone.js to organize my views and collection. While it wasn’t necessary to use Backbone, it helped me rapidly develop this app. To generate the sprite image I used a `canvas` element to draw images on and take the result out of the canvas. I’ve used the drag and drop API, File Blob, the anchor tag download attribute and all the good stuff from HTML5.\nIt works perfectly in Chrome and Safari, but CSS code didn’t show up when I tried it out in Firefox. I didn’t try it in IE or Opera.\n\nCSS code is machine generated and will not be your perfect code to put in production, so you may want to edit it. My plan is to use CodeMirror for syntax highlighting and code editing in the CSS code block. Making a sprite of big images can be slow, and the app freezes while processing images.
It’s because image processing happens in the same thread as the DOM (no surprise here!). The solution to this is using Web Workers, but the problem is I lose the DOM in workers. I’m using the DOM (canvas) to process images. Right now I am investigating re-implementing `CanvasRenderingContext2D.prototype.drawImage` and `CanvasRenderingContext2D.prototype.getImageData` in a web worker. If I could do that, then I could transfer each image’s binary data individually to the worker and let the worker combine those into a big ImageData object and transfer it back to the original window. This might not be as easy as it looks, but it is not impossible.\nRight now the app renders images below each other in a very tall and thin sprite (the width of the sprite is based on the widest image). Maybe people want to stack their images in the sprite horizontally or like a grid. Actually, the PNG file size will not change if we stack images differently (say like a grid), but having smaller numbers for background-position is a plus.\nHere is the app", + "content_format": "markdown" + }, + { + "url": "https://github.com/jquery/jquery/commit/9e0c056171d1a5cac407f8fedbf926be91eaba1a", + "domain": "github.com", + "file_source": "part-00564-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 20.6k\n\n## Commit\n\nThis commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.\n\nFix #10589. Remove deprecated $.fn.data(\"events\") special case.\n\n> No unit tests were harmed in the removal of this hack.\n\n* Loading branch information\n\nShowing 1 changed file with 4 additions and 11 deletions.\n\n## There are no files selected for viewing\n\nThis file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters", "content_format": "markdown" }, { "url": "https://github.com/apache/incubator-ariatosca/commit/3d22d36fc5c9fb780facfb8880143dda46d16f6f", "domain": "github.com", "file_source": "part-00293-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nThis repository has been archived by the owner on Jul 17, 2018. It is now read-only.\n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 45\n\n## Commit\n\nThis commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.\n\nARIA-162 Upgrade Colorama library version\n\n> Upgraded the Colorama library version - This should take care of the closed-stream error that appeared sporadically after test runs.\n\n* Loading branch information\nRan Ziv committed May 22, 2017\n\n1 parent 5798bbf commit 3d22d36\n\nShowing 2 changed files with 2 additions and 5 deletions.\n\n## There are no files selected for viewing\n\nThis file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters. 
Learn more about bidirectional Unicode characters", "content_format": "markdown" }, { "url": "https://github.com/paeselhz/RStudio-Shiny-Server-on-GCP", "domain": "github.com", "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nThe ultimate guide to deploying Rstudio Open Source and Shiny Server Open Source at Google Cloud Platform\n\n* Updated on August 28th, 2018\n\nWith the advance of cloud computing, and the higher accessibility of these platforms, there is a rising trend to use existing cloud services and integrate them with one of the greatest statistical tools in the market: R!\n\nThis guide shows, step by step, how to configure the Google Cloud Platform as easily as possible, and how to set up two of the most used R features in the cloud: RStudio Server, to develop your code with an easy-to-use interface; and Shiny Server, to deploy amazing data visualization websites easily.\n\nIn order to use the Google services, you need to sign up for the cloud service ( `cloud.google.com` ) with a valid billing account. Many cloud services, such as GCP, Amazon Web Services and others, offer free-tier machines, often for a year.\nAfter logging in to your Google account and setting up your Google Cloud console, there are a few steps to take that make the experience of using cloud computing easier. First of all, you'll need a few applications to connect to your server, download the authentication credentials, and transfer files from your local machine to your Google Cloud compute engine.\n\nThe applications needed are:\n\n* WinSCP: A file transfer tool that allows moving files from/to your server easily and safely. (https://winscp.net/eng/download.php)\n* Putty: The terminal client that allows the execution of code inside your virtual machine. 
(http://www.putty.org/)\n* Google Cloud SDK: The Google service management tool that allows importing private keys to your server and helps manage firewall rules. (https://cloud.google.com/sdk/docs/quickstart-windows)\n\nNow that the installation of the desired software is complete, you need to create your first VM instance. In the side panel of Google Cloud Platform's Console, you'll find the Compute Engine menu, and inside that, the VM Instances option. Clicking on that, you'll be prompted to create or import a Virtual Machine. Select Create, and you'll be taken to the following page:\n\nIn this section, you can choose a name for your virtual machine and select where that VM instance will be hosted; the cheapest locations are in the US, but given the distance, you might find the connection \"laggy\". After picking the name and the zone, select what kind of instance you'll be running. This depends on your workload: if you'll need lots of memory or lots of processing cores, here's where you'll choose them. The price per month will depend on these configurations.\n\nAfter choosing the kind of horsepower that will equip your virtual machine, you can choose which operating system will come installed on it. For this project, I will choose Ubuntu 18.04, the newest LTS version of this OS available. Also, while choosing the OS, you can choose how much storage you'll need. For this project, we will choose 20 GB hosted on an SSD; this choice depends on the amount of space you need, and for simple projects 20 GB should be more than enough. This gives us speed and reliability.\n\nFinally, the last thing to do is allow HTTP traffic and create the VM instance. This process might take a while. If everything went correctly, you will probably see something like this:\n\nNow you have a Virtual Machine hosted on Google Cloud Platform! At the external IP that is given to you, you can access different ports on that server, hosted in the cloud. 
But before we start installing R, RStudio Server and Shiny Server, we need to create a secure connection between your local machine and the hosted Virtual Machine.\n\nTo do that, click on the arrow to the left of the SSH button, as shown below, and select \"View gcloud Command\":\n\nThis step will generate a line of code that will be used to create a secure connection between your local machine and your server. Do not share this snippet with anyone, since it can be used to connect to your server. Copy this line of code and paste it into the Google SDK app that we downloaded before.\n\nIt'll prompt you to authenticate your connection with Google before downloading the private key. After that is done, you'll be able to find the private key in your Users folder, inside the `/.ssh` folder. If the authentication went correctly, the gcloud SDK will open a Putty session running the Linux distro hosted by the virtual machine; you can close this, since we'll authenticate our connection through WinSCP.\nAfter that is done, open WinSCP and create a new connection. In the Host field you'll insert the EXTERNAL IP ADDRESS that Google assigned to your server, the port will remain 22, and the user will be the name of your Google account user. After filling in these fields, leave the password blank and click on the Advanced... box, right below the password field. Inside the Advanced Settings, go to SSH > Authentication in the menu to your left. In the field that asks for your Private Key File, click Browse and search for the `\"google_compute_engine.ppk\"` file that is stored in your `User/.ssh/` folder.\nIf this setup was successful, you'll be able to see on the left portion of your screen the files of your local machine and, on the right side of your monitor, the files of your virtual machine. 
Success, we have managed to access the files on the Google Cloud server!\n\nEven though we haven't yet installed RStudio or Shiny Server, we can open the server ports that these applications will use later, so that we can connect to them via browser.\n\nIn order to do that, open the GCloud SDK tool and paste the following commands:\n\n* For the RStudio connection:\n\n```\nsudo gcloud compute firewall-rules create rstudio-conn --allow=tcp:8787\n```\n\n* For the Shiny Server connection:\n\n```\nsudo gcloud compute firewall-rules create shiny-conn --allow=tcp:3838\n```\n\nDone! Now you'll be able to access the processes hosted by RStudio and Shiny Server.\n\nNow that we have the configuration and connection set up, we need to open WinSCP, and then open Putty (Ctrl+P), to use the terminal command line on our virtual machine.\n\nBefore installing any applications, make sure that your machine is up to date by running these commands:\n\n```\nsudo apt-get update\nsudo apt-get upgrade\n```\n\nOnce that completes successfully, you'll need to add the R repository to the sources.list file, so that Ubuntu will know where to fetch the application. 
The code chunk below adds a line to the repository list, then passes a key for the Ubuntu server to download R, updates the existing packages, and installs r-base and r-base-dev.\n\n```\nsudo sh -c 'echo \"deb https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/\" >> /etc/apt/sources.list'\nsudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E084DAB9\nsudo apt-get update\nsudo apt-get install r-base r-base-dev\n```\n\nA few features won't work with only r-base, since some packages depend on other system libraries as well; to cover a few of those, below are the commands used to install the software needed.\n\n* Spatial Libraries:\n\n```\nsudo apt-get install libgeos-dev libproj-dev libgdal-dev\n```\n\n* Tidyverse Universe:\n\n```\nsudo apt-get install libcurl4-openssl-dev libssl-dev libxml2-dev\n```\n\nNow, you can run R on your virtual machine. Since the user logged in does not have root permissions, we advise running R with the following code, so that the installation of packages will be smoother.\n\n `sudo -i R` \nNow that we have R open in the terminal of our virtual machine, we might as well install a few packages that will be useful with Shiny, such as the shiny package, the RMarkdown package, and dplyr.\n\nTo do that, at the command line in R type:\n\n```\ninstall.packages(c('shiny', 'rmarkdown', 'dplyr'))\n```\n\nSelect the desired mirror, and download the aforementioned packages. This process might take a while.\n\nOnce installation is complete, leave the R command line:\n\n `quit()` \nNow we have R installed on our virtual machine, and we need to install RStudio Server in order to access it through the external IP address on port 8787 (which is the default). 
To do that, we need to install gdebi first, which is used to install both Shiny Server and RStudio Server.\n\nThe following code will install gdebi, download the .deb file that contains the RStudio Server package, and execute it.\n\n```\nsudo apt-get install gdebi-core\nwget https://download2.rstudio.org/rstudio-server-1.1.456-amd64.deb\nsudo gdebi rstudio-server-1.1.456-amd64.deb\n```\n\nThis will prompt you to agree to the installation of RStudio Server, and if all went well, you'll see that the rstudio-server process is running.\n\nTo verify that the installation went correctly, access `http://your_external_ip:8787`. If everything is fine, you'll find this screen:\nAfter this, you'll need to create a user to use the RStudio instance hosted on your virtual machine, so go back to the Linux terminal and type:\n\n `sudo adduser YOUR_USER_NAME` \nThis process will prompt you to create a password and set a few parameters for the new user. After that, you'll be able to use YOUR_USER_NAME and the chosen password to log in to RStudio Server. If this process went well, you'll find the RStudio interface in your browser.\n\nNOTE: Remember, RStudio is not running with root permissions, so to install new packages and save files in different folders, you'll need to do that using the Linux terminal, with the help of the `sudo` command.\nThe installation process for Shiny Server is quite similar to that of RStudio Server; the differences are the port on which the process is hosted, which by default is 3838, and that the creation of files inside shiny-server requires root permissions, so every modification must be done with a `sudo` prefix.\nTo install Shiny Server, we'll use gdebi to execute the installation; if you have already installed gdebi before, you can skip the first line. 
The second line downloads the installation file, and the third one executes the installation.\n\n```\nsudo apt-get install gdebi-core\nwget https://download3.rstudio.org/ubuntu-14.04/x86_64/shiny-server-1.5.7.907-amd64.deb\nsudo gdebi shiny-server-1.5.7.907-amd64.deb\n```\n\nAs with the RStudio Server installation, you'll be prompted to agree to the installation. If it all went successfully, you'll notice a shiny-server process up and running.\n\nTo verify that the installation is complete, you can access `http://your_external_ip:3838`. If everything is fine, you'll see this screen:\nNOTE: Even though this might not be considered good practice in development, to make your life easier while dealing with Shiny apps under construction, you might set full edit permissions on the folder containing the shiny-server folders using the following command:\n\n```\nsudo chmod 777 -R /srv/shiny-server/\n```\n\nThis project is constantly being updated and modified, so you might notice a few changes since your last access or the last commit. If there is any trouble with any of the installations above, make sure to check the documentation of each tool, to ensure that everything is working as it is supposed to.\n\nThe links below point to the documentation for each of the tools covered by this post.\n\n* Google Cloud Compute Documentation (https://cloud.google.com/docs/)\n* RStudio Server Documentation (http://docs.rstudio.com/ide/server-pro/)\n* Shiny Server Documentation (http://docs.rstudio.com/shiny-server/)\n\nThanks! 
Luis Paese", + "content_format": "markdown" + }, + { + "url": "https://github.com/strobl/plato-research-dialogue-system", + "domain": "github.com", + "file_source": "part-00436-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\nThis is a v 0.1 release.\n\nThe Plato Research Dialogue System is a flexible framework that can be used to create, train, and evaluate conversational AI agents in various environments. It supports interactions through speech, text, or dialogue acts and each conversational agent can interact with data, human users, or other conversational agents (in a multi-agent setting). Every component of every agent can be trained independently online or offline and Plato provides an easy way of wrapping around virtually any existing model, as long as Plato's interface is adhered to.\n\nPublication citations:\n\nAlexandros Papangelis, Yi-Chia Wang, Piero Molino, and Gokhan Tur, “Collaborative Multi-Agent Dialogue Model Training Via Reinforcement Learning”, SIGDIAL 2019 [paper]\n\nAlexandros Papangelis, Yi-Chia Wang, Mahdi Namazifar, Chandra Khatri, Piero Molino, and Gokhan Tur, \"Plato Research Dialogue System: A Flexible Conversational AI Research Platform\", ArXiv (to appear)\n\nNew tutorial on how to train a simple sequence to sequence model on MetalWOZ data.\n\nContents:\n\n* How does the Plato Research Dialogue System Work?\n* Quick Start Guide\n* Running Plato Agents\n* Training from Data\n\nConceptually, a conversational agent needs to go through various steps in order to process information it receives as input (e.g., “What’s the weather like today?”) and produce an appropriate output (“Windy but not too cold.”). 
The primary steps, which correspond to the main components of a standard architecture (see Figure 1), are:\n\n* Speech recognition (transcribe speech to text)\n* Language understanding (extract meaning from that text)\n* State tracking (aggregate information about what has been said and done so far)\n* API call (search a database, query an API, etc.)\n* Dialogue policy (generate abstract meaning of agent’s response)\n* Language generation (convert abstract meaning into text)\n* Speech synthesis (convert text into speech)\n\nPlato has been designed to be as modular and flexible as possible; it supports traditional as well as custom conversational AI architectures, and importantly, enables multi-party interactions where multiple agents, potentially with different roles, can interact with each other, train concurrently, and solve distributed problems.\n\nFigures 1 and 2, below, depict example Plato conversational agent architectures when interacting with human users and with simulated users. Interacting with simulated users is a common practice used in the research community to jump-start learning (i.e., learn some basic behaviours before interacting with humans). Each individual component can be trained online or offline using any machine learning library (for instance, Ludwig, TensorFlow, PyTorch, your own implementations) as Plato is a universal framework. Ludwig, Uber's open source deep learning toolbox, makes for a good choice, as it does not require writing code and is fully compatible with Plato.\n\nFigure 1: Plato's modular architecture means that any component can be trained online or offline and can be replaced by custom or pre-trained models. (Grayed components in this diagram are not core Plato components.)\n\nFigure 2: Using a simulated user rather than a human user, as in Figure 1, we can pre-train statistical models for Plato's various components. 
These can then be used to create a prototype conversational agent that can interact with human users to collect more natural data that can be subsequently used to train better statistical models. (Grayed components in this diagram are not Plato core components.)\n\nIn addition to single-agent interactions, Plato supports multi-agent conversations where multiple Plato agents can interact with and learn from each other. Specifically, Plato will spawn the conversational agents, make sure that inputs and outputs (what each agent hears and says) are passed to each agent appropriately, and keep track of the conversation.\n\nThis setup can facilitate research in multi-agent learning, where agents need to learn how to generate language in order to perform a task, as well as research in sub-fields of multi-party interactions (dialogue state tracking, turn taking, etc.). The dialogue principles define what each agent can understand (an ontology of entities or meanings; for example: price, location, preferences, cuisine types, etc.) and what it can do (ask for more information, provide some information, call an API, etc.). The agents can communicate over speech, text, or structured information (dialogue acts) and each agent has its own configuration. Figure 3, below, depicts this architecture, outlining the communication between two agents and the various components:\n\nFigure 3: Plato's architecture allows concurrent training of multiple agents, each with potentially different roles and objectives, and can facilitate research in fields such as multi-party interactions and multi-agent learning. (Grayed components in this diagram are not core Plato components.)\n\nFinally, Plato supports custom architectures (e.g. splitting NLU into multiple independent components) and jointly-trained components (e.g. text-to-dialogue state, text-to-text, or any other combination) via the generic agent architecture shown in Figure 4, below. 
This mode moves away from the standard conversational agent architecture and supports any kind of architecture (e.g., with joint components, text-to-text or speech-to-speech components, or any other set-up) and allows loading existing or pre-trained models into Plato.\n\nFigure 4: Plato's generic agent architecture supports a wide range of customization, including joint components, speech-to-speech components, and text-to-text components, all of which can be executed serially or in parallel.\n\nUsers can define their own architecture and/or plug their own components into Plato by simply providing a Python class name and package path to that module, as well as the model’s initialization arguments. All the user needs to do is list the modules in the order they should be executed and Plato takes care of the rest, including wrapping the input/output, chaining the modules, and handling the dialogues. Plato supports serial and parallel execution of modules.\n\nPlato also provides support for Bayesian optimization of conversational AI architectures or individual module parameters through Bayesian Optimisation of Combinatorial Structures (BOCS).\n\nContents:\n\n* Quick Start Guide\n\n* Understanding the configuration files\n* Running Plato Agents\n* Running multiple Plato Agents\n* Running generic Plato Agents\n* Training from data\n\n* Plato internal experience\n* Training with Plato\n* Training with Ludwig\n* Create a new domain\n* Create a new component\n\n* Inheriting from the abstract classes\n* Creating a custom component\n* Bayesian Optimisation in Plato\n* Conclusion\n\n* \n\nClone this repository:\n\n```\ngit clone git@github.com:uber-research/plato-research-dialogue-system.git\n```\n\n* \n\nInstall the requirements:\n\nFor MacOS:\n\n```\nbrew install portaudio\npip install -r requirements.txt\n```\n\nFor Ubuntu/Debian:\n\n```\nsudo apt-get install python3-pyaudio\npip install -r requirements.txt\n```\n\nFor Windows:\n\n```\npip install -r requirements.txt\n```\n\n* 
\n\nRun Plato!\n\nSee below for a quick introduction to the configuration files and how to run your first Plato agent.\n\nTo support speech it is necessary to install PyAudio, which has a number of dependencies that might not exist on a developer's machine. If the steps above are unsuccessful, this post on a PyAudio installation error includes instructions on how to get these dependencies and install PyAudio.\n\nThe \"CommonIssues.md\" file lists common issues that a user might encounter during installation, along with their resolutions.\n\nPlato uses a Controller class to orchestrate the conversation between the agents. The Controller will instantiate the agents, initialize them for each dialogue, pass input and output appropriately, and keep track of statistics.\n\nTo run a Plato conversational agent, the user must run the following command with the appropriate configuration file (see Examples/simulate_agenda.yaml for an example configuration file which contains a number of settings on the environment and the agent(s) to be created as well as their components):\n\n```\npython runPlatoRDS.py -config PATH_TO_CONFIG_FILE.yaml\n```\n\nSome examples are listed below.\n\nTo run a simulation using the agenda based user simulator in the Cambridge Restaurants domain:\n\n```\npython runPlatoRDS.py -config Examples/config/simulate_agenda.yaml\n```\n\nTo run a text based interaction using the agenda based simulator in the Cambridge Restaurants domain:\n\n```\npython runPlatoRDS.py -config Examples/config/simulate_text.yaml\n```\n\nTo run a speech based interaction using the agenda based simulator in the Cambridge Restaurants domain:\n\n```\npython runPlatoRDS.py -config Examples/config/simulate_speech.yaml\n```\n\nOne of Plato's main features allows two agents to interact with each other. Each agent can have a different role (for instance, system and user), different objectives, and receive different reward signals. 
If the agents are cooperating, some of these can be shared (e.g., what constitutes a successful dialogue). (In the future, we plan to build support for Plato to enable interaction between more than two agents at a time.)\n\nFor example, to run multiple Plato agents on the benchmark Cambridge Restaurants domain, we run the following commands to train the agents’ dialogue policies and test them:\n\n* \n\nTraining phase\n\n```\npython runPlatoRDS.py -config Examples/config/CamRest_MA_train.yaml\n```\n\n* \n\nTesting phase\n\n```\npython runPlatoRDS.py -config Examples/config/CamRest_MA_test.yaml\n```\n\nMost of the discussion and examples in this guide revolve around the traditional conversational agent architecture. Plato, however, does not need to adhere to that pipeline; its generic agents support any range of custom modules, from splitting natural language understanding into many components to having multiple components running in parallel to having just a single text-to-text model.\n\nGeneric agents allow users to load their custom modules as Python class objects. For each module listed in the configuration file, Plato will instantiate the class using the given path and arguments. Then, during each dialogue turn, the generic agent will sequentially call each module (in the order provided in its configuration file) and will pass the output of the current module to the next module in the list. 
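As a rough sketch (with hypothetical class and method names, not Plato's actual API), this sequential chaining behaves like:

```python
# Hypothetical sketch of a generic agent's turn: each module consumes the
# previous module's output, in the order given by the configuration file.
# (The module classes and method name here are illustrative, not Plato's real ones.)

class UppercaseModule:
    def generate_output(self, data):
        return data.upper()

class ExclaimModule:
    def generate_output(self, data):
        return data + "!"

def run_turn(modules, user_input):
    output = user_input
    for module in modules:         # configuration order
        output = module.generate_output(output)
    return output                  # the last module's output is the agent's response

print(run_turn([UppercaseModule(), ExclaimModule()], "hello"))  # HELLO!
```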
The generic agent will return the last module’s output.\n\nThe following are two examples of running a single Plato agent or multiple Plato agents in the generic module mode:\n\n* \n\nSingle generic agent, used to implement custom architectures or to use existing, pre-trained statistical models:\n\n```\npython runPlatoRDS.py -config Examples/config/simulate_agenda_generic.yaml\n```\n\n* \n\nMultiple generic agents, same as above but for multiple agents (assuming you have trained dialogue policies using Examples/config/CamRest_MA_train.yaml):\n\n```\npython runPlatoRDS.py -config Examples/config/MultiAgent_test_generic.yaml\n```\n\nPlato supports the training of agents’ internal components in an online (during the interaction) or offline (from data) manner, using any deep learning framework. Virtually any model can be loaded into Plato as long as Plato’s interface Input/Output is respected; for example, if a model is a custom NLU, it simply needs to inherit from Plato's NLU abstract class, implement the necessary functions and pack/unpack the data into and out of the custom model.\n\nTo facilitate online learning, debugging, and evaluation, Plato keeps track of its internal experience in a structure called the Dialogue Episode Recorder, which contains information about previous dialogue states, actions taken, current dialogue states, utterances received and utterances produced, rewards received, and a few other structs including a custom field that can be used to track anything else that cannot be contained by the aforementioned categories.\n\nAt the end of a dialogue or at specified intervals, each conversational agent will call the train() function of each of its internal components, passing the dialogue experience as training data. 
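A minimal sketch of that training pass (hypothetical class names and experience fields, not Plato's actual data structures) could look like:

```python
# Hypothetical sketch of the end-of-dialogue training pass: the agent hands
# the full recorded experience to every trainable component, and each
# component extracts only the fields it cares about.
# (Stub classes and the experience schema are illustrative assumptions.)

class PolicyStub:
    def __init__(self):
        self.seen = []
    def train(self, experience):
        # a dialogue policy might only need (state, action, reward) triples
        self.seen = [(t["state"], t["action"], t["reward"]) for t in experience]

class NLUStub:
    def __init__(self):
        self.seen = []
    def train(self, experience):
        # an NLU component might only need the raw utterances
        self.seen = [t["utterance"] for t in experience]

def train_components(components, experience):
    for component in components:
        component.train(experience)  # every component receives the same experience

experience = [
    {"state": "s0", "action": "greet", "reward": 0.0, "utterance": "hi"},
    {"state": "s1", "action": "inform", "reward": 1.0, "utterance": "cheap bar"},
]
policy, nlu = PolicyStub(), NLUStub()
train_components([policy, nlu], experience)
```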
Each component then picks the parts it needs for training.\n\nTo use learning algorithms that are implemented inside Plato, any external data, such as DSTC2 data, should be parsed into this Plato experience so that they may be loaded and used by the corresponding component under training.\n\nAlternatively, users may parse the data and train their models outside of Plato and simply load the trained model when they want to use it for a Plato agent.\n\nTraining online is as easy as flipping the 'Train' flags to 'True' in the configuration for each component users wish to train.\n\nTo train from data, users simply need to load the experience they parsed from their dataset. As an example of offline training in Plato, we will use the DSTC2 dataset, which can be obtained from the 2nd Dialogue State Tracking Challenge website:\n\n```\nhttp://camdial.org/~mh521/dstc/downloads/dstc2_traindev.tar.gz\n```\n\nOnce the download is complete, you need to unzip the file.\n\nThe `runDSTC2DataParser.py` script will parse the DSTC2 data for you, and save it\nas Plato experience. It will then load that experience and train a Supervised\nPolicy:\n\n```\npython runDSTC2DataParser.py -data_path /dstc2_traindev/data/\n```\n\nYou can test using the following configuration:\n\n```\npython runPlatoRDS.py -config Examples/config/simulate_agenda_supervised.yaml\n```\n\nNote that you may load your experience into Plato and then keep training your model with Reinforcement Learning or other learning methods.\n\nWhile each component has its own training parameters (e.g. learning rate), the Conversational Agent defines the meta-parameters of training such as:\n\n* length of the experience\n* length of the minibatch\n* training interval (train after that many dialogues)\n* how many epochs to train for at each training interval\n\nLudwig is an open source deep learning framework that allows you to train models without writing any code. 
You only need to parse your data into .csv files, create a Ludwig config (in YAML) that describes the architecture you want, which features to use from the .csv, and other parameters, and then simply run a command in a terminal.\n\nLudwig also provides an API that Plato is compatible with. This allows Plato to integrate with Ludwig models, i.e. load/save the models, and train and query them.\n\nIn the previous section, the `runDSTC2DataParser.py` script actually generated\nsome .csv files as well that can be used to train NLU and NLG. If all went well,\nyou can find them here: `Data/data/`. Now, you need to write a\nconfiguration file that looks like this:\n\n```\ninput_features:\n  - name: nlg_input\n    type: sequence\n    encoder: rnn\n    cell_type: lstm\n\noutput_features:\n  - name: nlg_output\n    type: sequence\n    decoder: generator\n    cell_type: lstm\n\ntraining:\n  epochs: 20\n  learning_rate: 0.001\n  dropout: 0.2\n```\n\nand train your model:\n\n```\nludwig experiment \\\n  --model_definition_file Examples/config/ludwig_nlg_train.yaml \\\n  --data_csv Data/data/DSTC2_NLG_sys.csv \\\n  --output_directory Models/CamRestNLG/Sys/\n```\n\nThe next step is to load the model in Plato. Go to the `simulate_agenda_nlg.yaml` configuration file and update the path if\nnecessary:\n\n```\n...\n\nNLG:\n nlg: CamRest\n model_path: /model\n\n...\n```\n\nand test that the model works:\n\n```\npython runPlatoRDS.py -config Examples/config/simulate_agenda_nlg.yaml\n```\n\nRemember that Ludwig will create a new experiment_run_i directory each time it is called, so please make sure you keep the correct path in Plato's config up to date.\n\nNote that Ludwig also offers a method to train your model online, so in practice you need to write very little code to build, train, and evaluate a new deep learning component in Plato.\n\nYou can download MetalWOZ from here.\n\nPlato supports jointly trained models through Generic Agents. 
Here we will see the steps needed to create a simple seq2seq conversational agent from scratch. Using MetalWOZ as an example, we need to do the following:\n\nAs we are only training a simple seq2seq model (text to text), we need our parser to extract user and system utterances. These will be saved in .csv files that will be used by Ludwig in step 4.\n\nFor a simple implementation of a MetalWOZ parser, see `Data/data/Parse_MetalWOZ.py` \nPlease note that this parser will only parse one single file (one domain). You can easily modify it, however, to fit your needs. Here is a sample of the output produced by the parser for the pizza ordering domain:\n\n| user | system |\n| --- | --- |\n| hi | Hello how may I help you? |\n| I need placing an order for a pizza need help* | Certainly, what would you like to add to your order? |\n| I want a pizza with mushroom, pepperoni, and bacon toppings | Unfortunately, this location is out of the bacon topping you requested. Would there be a different topping you'd like to replace it with? |\n| How about pineapple | That topping is available. I've added a pizza with mushroom, pepperoni, and pineapple to your order. What size would you like? |\n| Medium | Alright, updated. Would you like to add more to your order? |\n| That's all, thanks | Alright, your total is 14.99. Your pizza will be ready for pickup in approx. 20 mins. |\n\nNote the first user utterance does not actually exist in the data. 
However, we need something to prompt the model to produce the system's greeting - we could have used an empty sentence, or any other greeting (or a combination of these).\n\nYou can run the example script as follows:\n\n `python runMetalWOZDataParser.py -data_path /dialogues/FILE.txt` \nTo get started we can train a very simple model using Ludwig (feel free to use your favourite deep learning framework here):\n\n```\ninput_features:\n  - name: user\n    type: text\n    level: word\n    encoder: rnn\n    cell_type: lstm\n    reduce_output: null\n\noutput_features:\n  - name: system\n    type: text\n    level: word\n    decoder: generator\n    cell_type: lstm\n    attention: bahdanau\n    loss:\n      type: sampled_softmax_cross_entropy\n\ntraining:\n  epochs: 100\n```\n\nYou can modify this config to reflect the architecture of your choice and train using Ludwig:\n\n```\nludwig train \\\n  --data_csv Data/data/metalwoz.csv \\\n  --model_definition_file Examples/config/metalWOZ_seq2seq_ludwig.yaml \\\n  --output_directory \"Models/JointModels/\"\n```\n\nThis class simply needs to handle loading the model, querying it, and formatting its output appropriately. In our case, we need to wrap the input text into a pandas dataframe, grab the predicted tokens from the output, and join them into a string that will be returned. See the class here: `JointModels/MetalWOZSeq2Seq.py`. See `Examples/config/metalwoz_generic.yaml` for an example generic\nconfiguration file that interacts with the seq2seq agent over text. You can try\nit out as follows: `python runPlatoRDS.py -config Examples/config/metalwoz_generic.yaml` \nRemember to update the path to your trained model if necessary! The default path assumes you run the ludwig train command from Plato's root directory.\n\nIn order to build a conversational agent for task-oriented applications (such as slot-filling), you need a database of items and an ontology describing your domain. 
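Conceptually, the ontology just declares which slots the user can constrain (informable), which they can ask about (requestable), and which the system may ask the user for; a minimal hand-written sketch in the spirit of the Cambridge Restaurants domain (illustrative structure only, not necessarily the exact schema Plato generates) might be:

```
{
  "informable": {
    "pricerange": ["cheap", "moderate", "expensive"],
    "area": ["north", "south", "centre"]
  },
  "requestable": ["phone", "address", "pricerange"],
  "system_requestable": ["food", "area", "pricerange"]
}
```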
Plato provides a script for automating this process.\n\nLet's say for example that you want to build a conversational agent for a flower shop, and you have the following items in a .csv:\n\n```\nid,type,color,price,occasion\n1,rose,red,1,any\n2,rose,white,2,anniversary\n3,rose,yellow,2,celebration\n4,lilly,white,5,any\n5,orchid,pink,30,any\n6,dahlia,blue,15,any\n```\n\nYou can simply call createSQLiteDB.py to automatically generate a .db SQL file and a .json Ontology file. If you want to specify informable, requestable, and system-requestable slots, you may do so in the configuration file:\n\n```\nGENERAL:\n csv_file_name: Data/data/flowershop.csv\n db_table_name: flowershop\n db_file_path: Ontology/Ontologies/flowershop-dbase.db\n ontology_file_path: Ontology/Ontologies/flowershop-rules.json\n\nONTOLOGY:\n informable_slots: [type, price, occasion]\n\n requestable_slots: [price, color]\n\n System_requestable_slots: [type, price, occasion]\n```\n\nand run the script:\n\n```\npython createSQLiteDB.py -config Examples/config/create_flowershop_DB.yaml\n```\n\nIf all went well, you should have a `flowershop.json` and a `flowershop.db` into the `Data/data/` folder.\nYou can now simply run Plato's dummy components as a sanity check and talk to your flower shop agent:\n\n```\npython runPlatoRDS.py -config Examples/config/flowershop_text.yaml\n```\n\nThere are two ways to create a new module depending on its function. 
If a module implements a new way of performing NLU or dialogue policy, then the user should write a class that inherits from the corresponding abstract class.\n\nIf, however, a module does not fit one of the single-agent basic components, for example, it performs Named Entity Recognition or predicts dialogue acts from text, then the user must write a class that inherits from the ConversationalModule directly, which can then only be used by the generic agents.\n\nUsers need to create a new class inheriting from the corresponding Plato abstract class and implement the interface defined by the abstract class and any other functionality they wish. This class should have a unique name (e.g. 'myNLG') that will be used to distinguish it from other options when parsing the configuration file. In this version of Plato, users will need to manually add some conditions where the configuration files are being parsed (e.g. in the Conversational Agent, Dialogue Manager, etc.) unless the generic agent is used.\n\nTo construct a new module, the user must add their code to a new class inheriting from the conversational module. They can then load the module via a generic agent by providing the appropriate package path, class name, and arguments in the configuration.\n\n```\n...\nMODULE_i:\n package: myPackage.myModule\n Class: myModule\n arguments:\n model_path: Models/myModule/parameters/\n ...\n...\n```\n\nBe careful! You are responsible for guaranteeing that the I/O of this module can be processed and consumed appropriately by modules before and after, as provided in your generic configuration file.\n\nPlato also supports parallel execution of modules. 
To enable this, you need to have the following structure in your config:\n\n```\n...\nMODULE_i:\n parallel_modules: 5\n\n PARALLEL_MODULE_0:\n package: myPackage.myModule\n Class: myModule\n arguments:\n model_path: Models/myModule/parameters/\n ...\n\n PARALLEL_MODULE_1:\n package: myPackage.myModule\n Class: myModule\n arguments:\n model_path: Models/myModule/parameters/\n ...\n\n ...\n...\n```\n\nBe careful! Outputs from the modules executed in parallel will be packed into a list. The next module (e.g. `MODULE_i+1` ) will need to be able\nto handle this kind of input. The provided Plato modules are not designed to\nhandle this; you will need to write a custom module to process input from\nmultiple sources.\nTutorial coming soon!\n\nSpecial thanks to Yi-Chia Wang, Mahdi Namazifar, Chandra Khatri, Piero Molino, Michael Pearce, Zack Kaden, and Gokhan Tur for their contributions and support, and to studio FF3300 for allowing us to use the Messapia font.\n\nThis is the very first release of Plato. Please understand that many features are still being implemented and some use cases may not be supported yet.\n\nEnjoy!", "content_format": "markdown" }, { "url": "https://github.com/kuroneko/", "domain": "github.com", "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nSoftware and Firmware Programmer, Pilot and Recovering SysAdmin. 
VK1NKO.\n\n* Australia\n* http://chris.collins.id.au/\n\n## Pinned Loading\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.", "content_format": "markdown" }, { "url": "https://github.com/RussellSprouts", "domain": "github.com", "file_source": "part-00613-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n## Popular repositories Loading\n\n* minecraft-amethyst-tool\nminecraft-amethyst-toolPublic\nPublic\nHelps to generate Minecraft amethyst shard farms\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.", "content_format": "markdown" }, { "url": "https://github.com/jiangtj/hexo-extend-theme", "domain": "github.com", "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nThis is a very simple plugin: it replaces any theme file in the `layout` folder with a custom file. This plugin was renamed from `hexo-theme-plus` to `hexo-extend-theme` \n\n```\nyarn add @jiangtj/hexo-extend-theme\n```\n\nAny file in the custom path (default `custom/theme` ) will replace the theme file at the same path in the `layout` folder. You can set another path:\n\n```\ntheme_plus:\n custom_path: custom/theme # disabled: set 'false'\n```\n\nIn hexo `_config.yml`, you can also pick out specific files:\n\n```\ntheme_plus:\n views:\n path: 'layout.ejs'\n file: 'custom/layout.ejs'\n# or\ntheme_plus:\n views:\n - path: 'index.ejs'\n file: 'custom/index.ejs'\n - path: 'layout.ejs'\n file: 'custom/layout.ejs'\n```\n\n* If there is a file path in the replacement file, it may cause rendering errors\n* Unable to listen for file modifications. 
If you modify a custom file, you need to rerun 'hexo s' to view the changes", "content_format": "markdown" }, { "url": "https://github.com/tjweir/liftbook/commits/Lift_2.0", "domain": "github.com", "file_source": "part-00293-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n# Commits\n\n## Commit History\n\n### Commits on Jun 18, 2010\n\n### Commits on Feb 26, 2010\n\n* committed\n\n### Commits on Oct 15, 2009\n\n### Commits on Sep 17, 2009\n\n### Commits on Aug 4, 2009\n\n### Commits on Jul 27, 2009\n\n### Commits on Jul 24, 2009\n\n### Commits on Jul 22, 2009\n\n* committed\n\n### Commits on Jul 10, 2009\n\n### Commits on Jun 9, 2009\n\n### Commits on Jun 8, 2009\n\n### Commits on Jun 5, 2009\n\n### Commits on May 11, 2009\n\n### Commits on May 7, 2009\n\n### Commits on May 5, 2009\n\n* committed\n* committed\n* committed\n* committed", "content_format": "markdown" }, { "url": "https://github.com/vdbelt", "domain": "github.com", "file_source": "part-00096-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n## Popular repositories Loading\n\n* omnipay-oppwa\nomnipay-oppwaPublic\nPublic\nOppwa / PAY.ON driver for the Omnipay PHP payment processing library\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.", "content_format": "markdown" }, { "url": "https://github.com/CaymanUnterborn/ArCCoS", "domain": "github.com", "file_source": "part-00328-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nThe Arbitrary Composition Condensation Sequence Calculator (ArCCoS) is an equilibrium condensation sequence calculator. 
It calculates the distribution of 23 elements between ~400 gas phases and ~100 solid phases of interest to planetary and exoplanetary science. See https://arxiv.org/abs/1604.08309 for more information.\n\nArCCoS is released under the GNU GPL v2 or newer.\n\nSource code: https://github.com/CaymanUnterborn/ArCCoS\n\nAuthors (as of 2016, listed alphabetically by first name): Dr. Cayman Unterborn (unterborn.1@osu.edu)\n\nBasic Structure: The code is split into 5 files: Input (input.py) Main code (arccos.py) Mass balance function (condensation/fun.py) Database calculations & entropy/enthalpy calculator (condensation/get_data.py) Output (condensation/write.py)\n\nas well as a database of thermodynamic values for the gases/solids (Data/).\n\nFollowing the prompts in input.py will show the current input variables including: Solar composition model and total pressure. Future versions will include database manipulation as well as more complex thermodynamics (e.g. solid solutions).\n\nAfter changing the input file, to run ArCCoS, simply type: python input.py and the code will run until 100% of all refractory elements are condensed. At completion, matplotlib will produce a plot showing the refractory element condensation (% element condensed vs. temperature). Further output files include a text version of the appearance and disappearance temperature of each solid (output/sequence/) and individual element distributions as a function of temperature (output/abundance/).\n\nRequirements\n\nPython 2.7.x or Python 3.4+ Python modules: NumPy, SciPy, Matplotlib\n\nInstall under Ubuntu\n\nInstall using apt by opening a terminal window and entering sudo apt-get install python python-scipy python-numpy python-matplotlib Go to the Burnman examples directory and type: python example_beginner.py Figures should show up, indicating that it is working. Install on a Mac\n\nget Xcode If you don't have Python yet, download it (for free) from python.org/download. 
Make sure to use either Python 2.7 or Python 3.4+. To check your version of python, type the following in a terminal: python --version Install the latest Numpy version: http://sourceforge.net/projects/numpy/files/NumPy/ Install the latest Scipy at http://sourceforge.net/projects/scipy/files/ Install the latest Matplotlib from http://sourceforge.net/projects/matplotlib/files/matplotlib/matplotlib-1.1.1/ Go to the Burnman examples directory and type: python example_beginner.py Figures should show up, indicating that it is working. Python can also be downloaded in a ready-to-use fashion by using the Canopy distribution of python located at: https://www.enthought.com/products/canopy/ with numpy, scipy and matplotlib installs being accessed through the Package Manager.\n\nProblems you might run into:\n\nInstalling numpy/scipy/matplotlib for a different python version than the one on your computer\n\nHaving matplotlib for 32-bit instead of 64-bit (for me this got fixed by installing the very latest version). This will give you the error no matching architecture in universal wrapper. 
You can check if your python distribution is 32 or 64 bit with the following lines:\n\npython\n\n> import platform\n> print platform.architecture()\n\nInstall under Windows\n\nTo get Python 2.7.x (for example) running under Windows:\n\nDownload Python from http://www.python.org/ and install the version at C:\Python27; the 32-bit version is recommended Go to http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy, download \"numpy-MKL-1.6.2.win32-py2.7.exe\" and install Go to http://www.lfd.uci.edu/~gohlke/pythonlibs/#scipy, download \"scipy-0.10.1.win32-py2.7.exe\" and install Go to http://www.lfd.uci.edu/~gohlke/pythonlibs/#matplotlib, download \"matplotlib-1.1.1.win32-py2.7.exe\" and install Open Python Shell (IDLE Python GUI) File -- Open -- find one of the example files Run the module (or press F5)", "content_format": "markdown" }, { "url": "https://github.com/spring-projects/spring-ws/issues/818", "domain": "github.com", "file_source": "part-00614-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nClosed\n\n## Description\n\nDejia Meng opened SWS-727 and commented\n\nBuilding a webapp targeting a Java 1.5 environment with JDK 1.6 as the build environment doesn't pull in the Maven dependency javax.xml.stream:stax-api, which causes java.lang.ClassNotFoundException: javax.xml.stream.XMLStreamException.\n\nThe expected result will be that the final artifacts from the build should be the same regardless of the Java version of the build environment. 
The dependencies should be based on the target Java runtime environment, not the build environment.\n\nThe following is found in the spring-ws-core-2.0.2.RELEASE.pom file:\n\n```\n<profile>\n <id>jdk15</id>\n <activation>\n <jdk>!1.6</jdk>\n </activation>\n <dependencies>\n <dependency>\n <groupId>javax.xml.stream</groupId>\n <artifactId>stax-api</artifactId>\n </dependency>\n </dependencies>\n</profile>\n```\n\nAffects: 2.0.2, 2.0.3", "content_format": "markdown" }, { "url": "https://github.com/atykhonov", "domain": "github.com", "file_source": "part-00614-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nI am a seasoned software engineer with over 15 years of hands-on experience in the dynamic realm of software development. My diverse background spans outsourcing, product development, and IT consulting, encompassing both greenfield projects featuring cutting-edge technologies and legacy systems requiring meticulous maintenance.\n\nOver the past three years, I have excelled in remote work environments, showcasing exceptional communication and self-management skills. My programming journey has been anchored by Python, my preferred language, although I am equally adept at working with JavaScript, PHP, Ruby, and more. 
With expertise extending to the full-stack, I have crafted web solutions from frontend to backend, and my proficiency extends to DevOps practices, including Linux and Docker.", + "content_format": "markdown" + }, + { + "url": "https://github.com/gradle/gradle/releases/tag/v3.4.1", + "domain": "github.com", + "file_source": "part-00436-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n# 3.4.1\n\n## Gradle 3.4.1 is now available\n\nThis bug-fix release addresses uncaught regressions in v3.4.0 in the Java incremental compilation.\n\nFixed issues:\n\n* gradle/gradle#1474: Incremental compilation with literals in 3.4\n* gradle/gradle#1476: Compile avoidance should respect public constant changes\n\n## Upgrade Instructions\n\nSwitch your build to use Gradle 3.4.1 by updating your wrapper properties:\n\n `./gradlew wrapper --gradle-version=3.4.1` \nStandalone downloads are available at https://gradle.org/gradle-download.\n\n## Reporting Problems\n\nIf you find a problem with Gradle 3.4.1, please file a bug on GitHub Issues adhering to our issue guidelines. If you're not sure you're encountering a bug, please use the forum.", + "content_format": "markdown" + }, + { + "url": "https://github.com/rails/rails/pull/24227", + "domain": "github.com", + "file_source": "part-00830-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n* Notifications\nYou must be signed in to change notification settings\n* Fork 21.8k\n\n# New issue\n\nHave a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.\n\nBy clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.\n\nAlready on GitHub? 
Sign in to your account\n\n# Adjust Puma default config to use a minimum of 1 thread, instead of 5 #24227\n\n## Conversation\n\nRe @schneems default Puma config introduced in 5563c32, this change would lower the default minimum thread count from `5` to `1` and also introduce a new `ENV` variable to make this easy to adjust without editing the config file.\nI found the config file a bit confusing and wondered why it was using the same value for the minimum and maximum thread count. It appears that Puma automatically scales the number of threads (https://github.com/puma/puma#thread-pool) and in the interest of keeping memory usage low on VPS servers etc, I thought it may be smart to default to a lower thread count so that people are more likely to be happy with the default config.\n\nI also tweaked the inline documentation to clarify what was going on.\n\nI'm new to Puma, so forgive me if I'm misunderstanding something. Thank you!", "content_format": "markdown" }, { "url": "https://github.com/imageworks/OpenShadingLanguage/blob/master/INSTALL.md", "domain": "github.com", "file_source": "part-00328-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nOSL currently compiles and runs cleanly on Linux (x86_64), Mac OS X (x86_64 and aarch64), and Windows (x86_64). It may build and run on other platforms as well, but we don't officially support or test platforms other than these.\n\nShader execution is supported on the native architectures of those x86_64 and aarch64 platforms, a special batched 4-, 8- or 16-wide SIMD execution mode requiring x86_64 with SSE2, AVX/AVX2 or AVX-512 instructions, as well as on NVIDIA GPUs using Cuda+OptiX.\n\nOSL requires the following dependencies or tools. 
NEW or CHANGED dependencies since the last major release are bold.\n\n* \n\nBuild system: CMake 3.19 or newer (tested through 3.31)\n\n* \n\nA suitable C++17 compiler to build OSL itself, which may be any of:\n\n* GCC 9.3 or newer (tested through gcc 13.1)\n* Clang 5 or newer (tested through clang 19)\n* Microsoft Visual Studio 2017 or newer\n* Intel C++ compiler icc version 19 or newer or LLVM-based icx compiler version 2022 or newer.\n* \n\nOpenImageIO 2.5 or newer (tested through 3.0 and main)\n\nOSL uses OIIO both for its texture mapping functionality as well as numerous utility classes. If you are integrating OSL into an existing renderer, you may use your own favorite texturing system rather than OpenImageIO with minor surgery. There are only a few places where OIIO texturing calls are made, and they could easily be bypassed. But it is probably not possible to remove OIIO completely as a dependency, since we so heavily rely on a number of other utility classes that it provides (which there was no point in reinventing for OSL).\n\nAfter building OpenImageIO, if you don't have it installed in a \"standard\" place (like /usr/include), you should set the environment variable\n\n `$OpenImageIO_ROOT` to point to the compiled distribution, and then OSL's build scripts will be able to find it. You should also have $OpenImageIO_ROOT/lib in your LD_LIBRARY_PATH (or DYLD_LIBRARY_PATH on OS X).* \n\nLLVM 11, 12, 13, 14, 15, 16, 17, 18, or 19, including clang libraries.\n\n* \n\n(optional) For GPU rendering on NVIDIA GPUs:\n\n* \n\nImath 3.1 or newer.\n\n* \n\nFlex 2.5.35 or newer and GNU Bison 2.7 or newer. 
Note that on some MacOS/xcode releases, the system-installed Bison is too old, and it's better to install a newer Bison (via Homebrew is one way to do this easily).\n\n* \n\nPugiXML >= 1.8 (we have tested through 1.13).\n\n* \n\n(optional) Partio If it is not found at build time, the OSL\n\n `pointcloud` functions will not be operative.* \n\n(optional) Python: If you are building the Python bindings or running the testsuite:\n\n* Python >= 3.7 (tested through 3.12)\n* pybind11 >= 2.7 (tested through 2.13)\n* NumPy\n* \n\n(optional) Qt5 >= 5.6 or Qt6 (tested Qt5 through 5.15 and Qt6 through 6.7). If not found at build time, the\n\n `osltoy` application will be disabled.\nHere are the steps to check out, build, and test the OSL distribution:\n\n* \n\nInstall and build dependencies.\n\n* \n\nCheck out a copy of the source code from the Git repository:\n\n```\ngit clone https://github.com/AcademySoftwareFoundation/OpenShadingLanguage.git osl\n```\n\n* \n\nChange to the distribution directory and 'make'\n\n `cd osl make` \nNote: OSL uses 'CMake' for its cross-platform build system. But for simplicity, we have made a \"make wrapper\" around it, so that by just typing 'make' everything will build. Type 'make help' for other options, and note that 'make nuke' will blow everything away for the freshest possible compile.\n\nYou can also ignore the top level Makefile wrapper, and instead use CMake directly:\n\n```\ncmake -B build -S . cmake --build build --target install\n```\n\nNOTE: If the build breaks due to compiler warnings which have been elevated to errors, you can try \"make clean\" followed by \"make STOP_ON_WARNING=0\", or if using cmake directly, add\n\n `-DSTOP_ON_WARNING=0` to the cmake configuration command. 
That will create a build that will only stop for full errors, not warnings.* \n\nAfter compilation, you'll end up with a full OSL distribution in dist/\n\n* \n\nAdd the \"dist/bin\" to your\n\n `$PATH`, and \"dist/lib\" to your `$LD_LIBRARY_PATH` (or `$DYLD_LIBRARY_PATH` on MacOS), or copy the contents of those files to appropriate directories. Public include files (those needed when building applications that incorporate OSL) can be found in \"dist/include\", and documentation can be found in \"dist/share/doc\".* \n\nAfter building (and setting your library path), you can run the test suite with:\n\n `make test`", "content_format": "markdown" }, { "url": "https://github.com/rtomayko/adoc-themes", "domain": "github.com", "file_source": "part-00614-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nThis repository has been archived by the owner on Nov 9, 2017. It is now read-only.\n\nThemes for AsciiDoc and a framework for assembling them ...\n\n### License\n\n# rtomayko/adoc-themes\n\n> AsciiDoc Themes < http://tomayko.com/src/adoc-themes/ > AsciiDoc is a (mostly humane) text document format for writing short documents, articles, books and man pages. Its toolchain is capable of producing HTML4, XHTML, DocBook, man, and LaTeX. Many projects use AsciiDoc, including the Git content tracker. AsciiDoc generates basically good semantic markup, is styled with external CSS, and includes support for theming. The default theme included in AsciiDoc is functional but is somewhat lacking in typographic consistencies; it is also quite blue. This package includes a variety of additional themes and a framework for assembling them. 
See the HACKING file included with this package for more information.\n\n## Releases\n\nNo releases published\n\n## Packages 0\n\nNo packages published", "content_format": "markdown" }, { "url": "https://github.com/nezumisannn", "domain": "github.com", "file_source": "part-00096-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n## Popular repositories Loading\n\nSomething went wrong, please refresh the page to try again.\n\nIf the problem persists, check the GitHub status page or contact support.", "content_format": "markdown" }, { "url": "https://github.com/palfrey/moolah", "domain": "github.com", "file_source": "part-00830-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nMoolah is a tool for converting money in your non-default currency in your Splitwise account into your default currency, using the rates for that day from fixer.io.\n\n* Copy\n `config.yaml.example` to `config.yaml` and fill in the values there as we go through the later steps* Register app at https://secure.splitwise.com/oauth_clients and get the OAuth Client/Secret for\n `config.yaml` \n\n* Callback URL should be \"/oauth/response\" (http://localhost:5000/oauth/response for local setup)\n* If you've already got Bower installed, just run\n `bower install`. Otherwise, install Node.js and run `npm install`, which will install and run Bower.* \n\n```\npip install -r requirements.txt\n```\n\n(preferably within a Virtualenv because that's just sensible)* \n `./debug-run.sh` You've now got a running version of the app at http://localhost:5000. 
Running `python fixer.py` will synchronise all registered users.\nThere's a running instance of this at https://moolah-heroku.herokuapp.com/ but here's how I did that.\n\n* Get a Heroku account. Free ones work fine.\n* Install the Heroku toolbelt\n* Go to your dashboard and make a new app. Mine was called \"moolah-heroku\" but you'll need to use another name for yours, and replace anywhere I use that.\n* \n\n```\nheroku git:remote --app moolah-heroku\n```\n\nto make it possible to push to deploy to your new app.* We're using multiple buildpacks, both the Python (backend) and Node.js (assets). You'll need to do the following:\n\n* \n\n```\nheroku buildpacks:add --index 1 heroku/nodejs\n```\n\n* \n\n```\nheroku buildpacks:add --index 2 heroku/python\n```\n\n* \n `heroku buildpacks` should now say (and if it doesn't, read the docs)\n\n1. heroku/nodejs 2. heroku/python\n* Add the PostgreSQL addon\n* Go into the settings for your app and set the following config variables:\n\n* CLIENT_ID/CLIENT_SECRET - Splitwise app configured as per above, but with your Heroku URL, not localhost\n* FLASK_ENCRYPTION_KEY - Something secret for Flask to use for cookie encryption\n* \n `git push heroku master` * At this point, go to your Heroku URL and check everything works. You might have an error page the first time you load it due to clashes between multiple workers all trying to configure the DB. 
Just refresh and it should fix itself.\n* Add the Scheduler addon and configure the update command (\n `python fixer.py` ) to run every so often.", + "content_format": "markdown" + }, + { + "url": "https://github.com/dh-notes/dhnotes", + "domain": "github.com", + "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", + "content": "# \n\nDate: \nCategories: \nTags: \n\n† Bold titles indicate substantially complete resources.\n\nThese documents contain the working notes, tutorials, manuals, and other \"local knowledge\" materials from Columbia's Group for Experimental Methods in the Humanities.\n\n> \n\nFunding, Journals, Programs, Review, and Conferences\n\n* Conferences\n* Peer Review and T&P Guidelines\n* DH- and New Media-friendly Journals\n* Funding\n* Essay and Book Prizes in Literary/Media/DH\n* Selected Journals in Literary Studies\n* Advanced Degrees in Digital Humanities\n> \n\nTutorials\n\n* Command Line Fundamentals\n* Markdown + Pandoc\n* Python for Human(s|ists)\n* Version Control with Git\n* Minimal Computing\n> \n\nTechnologies of Dissent\n\nA series of workshops and teach-ins to address the needs of scholars and activists at risk.> \n\nTools\n\n> \n\nSystem Administration\n\n> \n\nCollections & Datasets\n\n> \n\nMisc\n\n* Friendly Design + Development Studios\n* Linux on ChromeOS with Crouton\n* VPN for Linux at Columbia\n* Advanced Degrees in Digital Humanities\n\nmade with plain text + vim + git | [cc-by](http://creativecommons.org/licenses/by/2.0/) | Comments, questions, suggestions? 
[@denten](https://github.com/denten)", "content_format": "markdown" }, { "url": "https://github.com/gamaral/rpi-buildroot/commit/124c4e040af817951ceb67bab04e2575269c827a", "domain": "github.com", "file_source": "part-00403-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\n## Commit\n\nThis commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.\n\n> Also capitalize help text while we're at it. Signed-off-by: Peter Korsgaard \n\n* Loading branch information", "content_format": "markdown" }, { "url": "https://github.com/tc39/proposal-import-meta/blob/master/HTML%20Integration.md", "domain": "github.com", "file_source": "part-00894-d9b45f53-efdc-4ae8-a3e4-b1d5d9e13869-c000", "content": "# \n\nDate: \nCategories: \nTags: \n\nWe hope to eventually introduce two properties onto `import.meta` in web browsers, via the HTML Standard.\nThis property will contain the module script's base URL, serialized.\n\nA specification pull request exists at whatwg/html#3141 for this. (It's pretty straightforward.)\n\nThis property will contain the `