Dataset columns: Response (strings, 15 to 2k characters), Instruction (strings, 37 to 2k characters), Prompt (strings, 14 to 160 characters).
It is easier to start with the recommendation from gitignore.io. See gitignore.io/api/laravel:

    # Laravel
    /vendor
    node_modules/
    npm-debug.log

    # Laravel 4 specific
    bootstrap/compiled.php
    app/storage/

    # Laravel 5 & Lumen specific
    public/storage
    public/hot
    storage/*.key
    .env.*.php
    .env.php
    .env
    Homestead.yaml
    Homestead.json

    # Rocketeer PHP task runner and deployment package. https://github.com/rocketeers/rocketeer
    .rocketeer/

Note that ignoring folders should be specified as aFolder/, with a trailing slash. Any folder with generated content should be ignored. If vendor/ is not generated (but includes sources that you need to compile), then you should not ignore it.
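If you want to pull that template straight into your project, you can fetch it from the API endpoint mentioned above (this assumes curl is available; gitignore.io may redirect to its current host these days):

    curl -sL https://www.gitignore.io/api/laravel >> .gitignore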
I started using GitHub 2 days ago, and my project is in a private repository. I'm working on a project together with a friend (I added him as a collaborator), but what is the best setting for the .gitignore file when you're working with a team? What I'm thinking is: if Git ignores files such as .env (the app key) and /vendor, the program wouldn't work as it should, right? For now I'm using the default .gitignore:

    /node_modules
    /public/hot
    /public/storage
    /storage/*.key
    /vendor
    /.idea
    /.vscode
    /.vagrant
    Homestead.json
    Homestead.yaml
    npm-debug.log
    yarn-error.log
    .env

Please help; I have already searched about it but I can't find the answer.
Does this Laravel .gitignore make sense when you're working with a team?
Containers isolate applications from each other on the same machine, but you're right: they all use the underlying OS. If you need different operating systems to run different applications on the same machine, you need to use virtual machines instead. Containers are good because you get everything you need to run an application in a single package, and there's less waste of resources because you're not throwing a whole big OS in there as well. Note that for development purposes it's not unusual to run containers inside a virtual machine, so for instance you can run a Linux VM on your PC/Mac and easily move the containers you develop there into real Linux-based production. Check out the snappy FAQ explanation here: https://docs.docker.com/engine/faq/#how-much-does-engine-cost
As a newbie, I have read the official Docker documentation and have followed many explanations here, tutorials, and videos on this, but have not yet got a clear answer to my question. If a Docker container must use the underlying host OS kernel, then how can they claim "build, ship and run anywhere"? I mean, Linux-based containers can run only on Linux-based host OS machines, and similarly for Windows containers. Is this correct, or have I completely missed it? I am not sure there is such a thing as "Linux-based containers" and "Windows-based containers". I can see when someone claims that Java apps can run on any OS, but I don't see how the same claim can be made for Docker containers.
Are Docker containers tied to the underlying host OS?
Can you share the pod's logs?

    kubectl logs <pod_name>

Postgres is using an init script with defined variable names:

    POSTGRES_USER
    POSTGRES_PASSWORD
    POSTGRES_DB

Try this one out:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: postgres
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: postgres
        spec:
          containers:
            - name: postgres
              image: postgres:9.6
              env:
                - name: POSTGRES_USER
                  value: admin
                - name: POSTGRES_PASSWORD
                  value: password
                - name: POSTGRES_DB
                  value: testdb
                - name: PGDATA
                  value: /var/lib/postgresql/data/pgdata
              ports:
                - containerPort: 5432
              volumeMounts:
                - mountPath: /var/lib/postgresql/data
                  name: pg-data
          volumes:
            - name: pg-data
              emptyDir: {}
I'm trying to change the settings of my postgres database inside my local minikube cluster. I mistakenly deployed a database without specifying the postgres user, password and database. The problem: when I add the new env variables and use kubectl apply -f postgres-deployment.yml, postgres does not create the user, password or database specified by the environment variables. This is the deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: postgres-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          component: postgres
      template:
        metadata:
          labels:
            component: postgres
        spec:
          volumes:
            - name: postgres-storage
              persistentVolumeClaim:
                claimName: database-persistent-volume-claim
          containers:
            - name: postgres
              image: postgres
              ports:
                - containerPort: 5432
              volumeMounts:
                - name: postgres-storage
                  mountPath: /var/lib/postgresql/data
                  subPath: postgres
              env:
                - name: PGUSER
                  value: admin
                - name: PGPASSWORD
                  value: password
                - name: PGDATABSE
                  value: testdb

How can I change the settings of postgres when I apply the deployment file?
Reconfigure postgres with kubectl apply command
Sounds like you're making a backup using the pg_dump utility. That saves the information needed to recreate the database from scratch. You don't need to dump the information in the indexes for that to work. You have the schema, and the schema includes the index definitions. If you load this backup, the indexes will be rebuilt from the data, the same way they were created in the first place: built as new rows are added. If you want to do a physical backup of the database blocks on disk, which will include the indexes, you need to do a PITR backup instead. That's a much more complicated procedure, but the resulting backup will be instantly usable. The pg_dump style backups can take quite some time to restore.
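As a rough illustration of the two approaches (the database name and paths are placeholders, and note that pg_basebackup did not exist in the 8.x era, so on Postgres 8 the physical backup has to follow the manual PITR file-copy procedure instead):

    # logical dump: schema + data; indexes are rebuilt during restore
    pg_dump mydb > mydb.sql

    # physical base backup on newer releases: copies the data files, indexes included
    pg_basebackup -D /backups/mydb-base -Ft -z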
When I make a backup in Postgres 8 it only backs up the schemas and data, but not the indexes. How can I do this?
How can I backup everything in Postgres 8, including indexes?
I ended up with this solution: you simply start several php-cgi processes and bind them to different ports, and you need to update the nginx config:

    http {
        upstream php_farm {
            server 127.0.0.1:9000 weight=1;
            server 127.0.0.1:9001 weight=1;
            server 127.0.0.1:9002 weight=1;
            server 127.0.0.1:9003 weight=1;
        }
        ...
        server {
            ...
            fastcgi_pass php_farm;
        }
    }

For the sake of convenience, I created simple batch files. start_sandbox.bat:

    @ECHO OFF
    ECHO Starting sandbox...
    RunHiddenConsole.exe php\php-cgi.exe -b 127.0.0.1:9000 -c php\php.ini
    RunHiddenConsole.exe php\php-cgi.exe -b 127.0.0.1:9001 -c php\php.ini
    RunHiddenConsole.exe php\php-cgi.exe -b 127.0.0.1:9002 -c php\php.ini
    RunHiddenConsole.exe php\php-cgi.exe -b 127.0.0.1:9003 -c php\php.ini
    RunHiddenConsole.exe mysql\bin\mysqld --defaults-file=mysql\bin\my.ini --standalone --console
    cd nginx && START /B nginx.exe && cd ..

and stop_sandbox.bat:

    pstools\pskill php-cgi
    pstools\pskill mysqld
    pstools\pskill nginx

As you can see, there are two dependencies: pstools and RunHiddenConsole.exe.
I'm currently using nginx and PHP FastCGI, but that arrangement suffers from the limitation that it can only serve one HTTP request at a time. (See here.) I start PHP from the Windows command prompt by doing:

    c:\Program Files\PHP>php-cgi -b 127.0.0.1:9000

However, there is another way to run PHP known as "FastCGI Process Manager" (PHP-FPM). When running on Windows 7 behind nginx, can PHP-FPM handle multiple simultaneous HTTP requests?
Can Windows PHP-FPM serve multiple simultaneous requests?
Where does this declaration occur? I think it should fit in the memory of a Linux machine, but probably not on the stack, unless you take special actions (e.g. ulimit -s). In general, it's not a good idea to use large local C-style arrays; in fact, except in special cases, it's not a good idea to use local arrays at all. Just define it as you would any normal array in C++:

    std::vector<int> arr( 10000000 );

This will move the actual data onto the heap, which is probably where such large data sets belong.
How can I increase the memory limit for a C program? I am using Code::Blocks and trying the following code:

    int arr[10000000];

It is giving me a run-time error. I am using Linux (Fedora). Any help...?
Code::Blocks memory limit
If you want your post-install/post-upgrade chart hooks to work, you should add readiness probes to your first pod and use the --wait flag:

    helm upgrade --install -n test --wait mychart .

pod.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-exec
      labels:
        test: readiness
    spec:
      containers:
        - name: readiness
          image: k8s.gcr.io/busybox
          args:
            - /bin/sh
            - -c
            - sleep 30; touch /tmp/healthy; sleep 600
          readinessProbe:
            exec:
              command:
                - cat
                - /tmp/healthy
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 10

hook.yaml:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: "post-deploy"
      annotations:
        "helm.sh/hook": post-upgrade,post-install
        "helm.sh/hook-delete-policy": before-hook-creation
    spec:
      backoffLimit: 1
      template:
        metadata:
          name: "post-deploy"
        spec:
          restartPolicy: Never
          containers:
            - name: post-deploy
              image: k8s.gcr.io/busybox
              args:
                - /bin/sh
                - -c
                - echo "executed only after previous pod is ready"
So I have a helm chart that deploys a pod, and the next task is to create another pod once the first pod is running. I created a simple pod.yaml in chart/templates which creates a simple pod-b, so the next step is to only create pod-b after pod-a is running. I looked at helm hooks, but I don't think they care about pod status. Another idea is to use an init container like below, but I'm not sure how to write a command that checks whether a pod is running:

    spec:
      containers:
        - name: myapp-container
          image: busybox
          command: ['sh', '-c', 'echo The app is running! && sleep 3600']
      initContainers:
        - name: init-myservice
          image: busybox
          command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']

Another idea is a simple script to check pod status, something like:

    y=`kubectl get po -l app=am -o 'jsonpath={.items[0].status.phase}'`
    while [ $i -le 5 ]
    do
        if [[ "$y" == "Running" ]]; then
            break
        fi
        sleep 5
    done

Any advice would be great.
Serialize creation of Pods in a deployment manifest using Helm charts
I don't know what I am doing differently now, but the XML is being written to the proper place. I suppose it was a path configuration mistake.
The coverage output of karma-sonarqube-unit-reporter comes out as HTML instead of XML. I am trying to integrate code coverage during my sonar analysis. I have coverageify in my stack, and I don't know if it is interfering with my output from sonarqube-unit-reporter. In my karma options, I have it output an ut_report.xml as suggested in the example on its GitHub page. Here is the relevant part of my karma config:

    reporters: ['progress', 'sonarqubeUnit', 'coverage'],
    coverageReporter: {
        dir: 'test-coverage/',
        reporters: [
            { type: 'html', subdir: 'html'},
            { type: 'cobertura', subdir: 'reports/app', file: 'coverage.xml' },
            { type : 'lcov', subdir : 'coverage', file: 'sonar.xml' }
        ]
    },
    sonarQubeUnitReporter: {
        sonarQubeVersion: '7.6.0',
        outputFile: 'reports/ut_report.xml',
        useBrowserName: false
    },
    plugins: [
        'karma-browserify',
        'karma-mocha',
        'karma-spec-reporter',
        'karma-phantomjs-launcher',
        'karma-coverage',
        'karma-sonarqube-unit-reporter'
    ],

But the ut_report.xml is nowhere to be found.
karma sonarqube unit reporter output comes out as HTML
Based on the description, the --force flag should do the trick:

    --force    force resource updates through a replacement strategy

However, there are some issues with it, as mentioned in this GitHub issue.
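For example (the release and chart names here are placeholders):

    helm upgrade my-release ./mychart --force

Keep in mind that a replacement strategy recreates the affected resources, which may cause a short interruption for anything backed by them.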
I have a problem where we essentially discovered a piece of stale configuration in a live environment on one of our deployments (a config map was added as a volume mount). Reading through the docs here (search for 'Upgrades where live state has changed') we can see that helm v2 would purge changes that were introduced to a template by external actors, whereas v3 is very clever and will merge externally introduced changes alongside template changes as long as they don't conflict. So how do we, in helm v3, run an upgrade that purges any manual template changes that may have been introduced?
How to helm upgrade with v3 and remove / overwrite any manual changes that have been applied to templates
HEAD~0 is your latest commit (aka simply HEAD), i.e. Commit3. HEAD~2 represents the commit two steps back from HEAD, counting from zero, i.e. Commit1 (and HEAD~1 is Commit2). So, by typing git revert HEAD~2 you would be trying to revert Commit1, not Commit2. That's the difference.
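A quick way to check which commit a HEAD~n expression resolves to before reverting:

    git log --oneline -3      # shows Commit3, Commit2, Commit1 with their hashes
    git rev-parse HEAD~1      # prints the hash of Commit2 in this example
    git revert HEAD~1         # therefore the same as: git revert <hash-of-Commit2>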
I have 3 commits pushed to my repository:

    Commit3
    Commit2
    Commit1

If I try to revert Commit2 with the command git revert commit2Hash, it gives an alert telling me to solve conflicts before the merge. But if I try to revert Commit2 with the command git revert HEAD~1, it reverts Commit2 directly without giving me any conflict. Why does that happen?
What's the difference between revert <hash> and revert <head>?
ngx.thread.spawn was not working for me; only this code worked:

    access_by_lua '
        local socket = require "socket"
        local conn = socket.tcp()
        conn:connect("10.10.1.1", 2015)
        conn:send("GET /lua_async HTTP/1.1\\n\\n")
        conn:close()
    ';
How can I duplicate (or create and send) a request with the nginx web server? I can't use post_action, because it is a synchronous method. Also, I compiled nginx with Lua support, but if I try to use http.request with ngx.thread.spawn or coroutine, I find the request is executed synchronously. How do I solve this?

    location ~ /(.*)\.jpg {
        proxy_pass http://127.0.0.1:6081;
        access_by_lua_file '/var/m-system/stats.lua';
    }

Lua script (with coroutine):

    local http = require "socket.http"
    local co = coroutine.create(function()
        http.request("http://10.10.1.1:81/log?action=view")
    end )
    coroutine.resume(co)
Asynchronous duplication request with nginx
Rather than doing an HTTP proxy, I would use Nginx's built-in capacity to communicate with uWSGI. (This will still work if you are using separate Docker containers for Nginx and uWSGI, since the communication is done over TCP.) A typical configuration (mine) looks like this:

    location / {
        uwsgi_pass 127.0.0.1:8001;
        include uwsgi_params;
    }

You will have to remove the --http argument (or config-file equivalent) from your uWSGI invocation. Additionally, in uwsgi_params (found in /etc/nginx or a custom location you specify) there are several directives to pass metadata through. Here's an excerpt from mine that looks like it could be related to your problem:

    ...
    uwsgi_param REQUEST_URI $request_uri;
    uwsgi_param DOCUMENT_ROOT $document_root;
    uwsgi_param SERVER_PROTOCOL $server_protocol;
    uwsgi_param HTTPS $https if_not_empty;

Relevant docs: http://uwsgi-docs.readthedocs.org/en/latest/WSGIquickstart.html#putting-behind-a-full-webserver
I've been working on a Django app recently and it is finally ready to get deployed to QA and production environments. Everything worked perfectly locally, but since adding the complexity of a real-world deployment I've had a few issues. First, my tech stack is a bit complicated. For deployments I am using AWS for everything, with my site deployed on multiple EC2s behind a load balancer. The load balancer is secured with SSL, but the connections to the load balancer are forwarded to the EC2s over standard HTTP on port 80. After hitting an EC2 on port 80 they are forwarded to a Docker container on port 8000 (if you are unfamiliar with Docker, just consider it to be a standard VM). Inside the container nginx listens on port 8000; it handles a redirection for the static files in Django, and for web requests it forwards the request to Django running on 127.0.0.1:8001. Django is being hosted by uWSGI listening on port 8001.

    server {
        listen 8000;
        server_name localhost;

        location /static/ {
            alias /home/library/deploy/thelibrary/static/;
        }

        location / {
            proxy_set_header X-Forwarded-Host $host:443;
            proxy_pass http://127.0.0.1:8001/;
        }
    }

I use X-Forwarded-Host because I was having issues with redirects from Google OAuth, and redirects to prompt the user to log in were making the browser request the URL 127.0.0.1:8001, which will obviously not work. Within my settings.py file I also included USE_X_FORWARDED_HOST = True to force Django to use the correct host for redirects. Right now general browsing of the site works perfectly: static files load, redirects work and the site is secured with SSL. The problem however is that CSRF verification fails. On a form submission I get the following error:

    Referer checking failed - https://qa-load-balancer.com/projects/new does not match https://qa-load-balancer.com:443/.

I'm really not sure what to do about this; it's really through Stack Overflow questions that I got everything working so far.
Django CSRF Error Caused by Nginx X-Forwarded-Host
NOTE: This answer uses boto. See the other answer that uses boto3, which is newer.

Try this...

    import boto
    import boto.s3
    import sys
    from boto.s3.key import Key

    AWS_ACCESS_KEY_ID = ''
    AWS_SECRET_ACCESS_KEY = ''

    bucket_name = AWS_ACCESS_KEY_ID.lower() + '-dump'
    conn = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

    bucket = conn.create_bucket(bucket_name,
        location=boto.s3.connection.Location.DEFAULT)

    testfile = "replace this with an actual filename"
    print 'Uploading %s to Amazon S3 bucket %s' % \
        (testfile, bucket_name)

    def percent_cb(complete, total):
        sys.stdout.write('.')
        sys.stdout.flush()

    k = Key(bucket)
    k.key = 'my test file'
    k.set_contents_from_filename(testfile, cb=percent_cb, num_cb=10)

[UPDATE] I am not a pythonist, so thanks for the heads up about the import statements. Also, I'd not recommend placing credentials inside your own source code. If you are running this inside AWS, use IAM credentials with instance profiles (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html), and to keep the same behaviour in your Dev/Test environment, use something like Hologram from AdRoll (https://github.com/AdRoll/hologram).
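If a one-off copy is all you need, the AWS CLI does the same thing from the shell (bucket and folder names taken from the question; the local filename is a placeholder):

    aws s3 cp localfile.txt s3://test/dump/localfile.txt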
I want to copy a file to an S3 bucket using Python. For example: I have a bucket named "test", and in the bucket I have two folders named "dump" and "input". Now I want to copy a file from a local directory to the S3 "dump" folder using Python... Can anyone help me?
How to upload a file to directory in S3 bucket using boto
I solved the problem. It was a plain beginner mistake:

    - namespaceSelector:
        matchLabels:
          namespace: kube-system

I didn't add the label namespace: kube-system to the namespace kube-system. After adding the label it worked instantly.
We are using Rancher to set up clusters with Canal as the CNI. We decided to use Traefik as an ingress controller and wanted to create a NetworkPolicy. We disabled project isolation, and Traefik is running in the System project in the kube-system namespace. I created this policy:

    # deny all ingress traffic
    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: default-deny-all
    spec:
      podSelector: {}
      ingress:
        - from:
            - podSelector: {}
    ---
    # allow traefik
    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: ingress-allow-traefik
    spec:
      podSelector: {}
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  namespace: kube-system
              podSelector:
                matchLabels:
                  app: traefik
    ---
    # allow backnet
    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: ingress-allow-backnet
    spec:
      podSelector: {}
      ingress:
        - from:
            - ipBlock:
                cidr: 10.0.0.0/24
            - ipBlock:
                cidr: 10.1.0.0/24
            - ipBlock:
                cidr: 10.2.0.0/24
            - ipBlock:
                cidr: 192.168.0.0/24

But somehow we can't get this to work. The connection gets timed out and that's it. Is there a major problem with this policy? Is there something I didn't understand about NetworkPolicies? Thanks in advance.
Kubernetes/Rancher: NetworkPolicy with Traefik
OK, I changed the hosting provider. Now everything works well. I do not know exactly what the problem was, but my hoster did not care about it, so I decided to switch to another one.
I am trying to set up a .htaccess file with the following content with Apache 2.2.31:

    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"

This is working fine, but not for PHP files: the header is sent twice. I created an empty HTML file and an empty PHP file. For the HTML file the header is sent correctly, but when I request the PHP file the header appears twice in the response. If I drop the always keyword, the header is sent once, but not at all if an error occurs. There are no other rewrites/redirects defined. Unfortunately I do not have access to the Apache core configuration, but maybe someone has had the same problem, so I can contact my provider.
Apache: Created Headers in .htaccess file sent twice when requesting PHP file
You can simply put a sonar.properties file under /opt/docker-sonar/conf/. This file will be available inside the container under /opt/sonarqube/conf/, because that folder gets mounted as a volume. A full example of a sonar.properties file can be found on GitHub. However, all you need to enter is:

    sonar.ce.javaOpts=-Xmx<XMX_VALUE> -Xms<XMS_VALUE> -XX:+HeapDumpOnOutOfMemoryError
I have a SonarQube instance running as a Docker container. Since I updated it to version 7.1, the analysis of my largest project fails with "GC limit exceeded". If I restart the server, it might succeed once. After a while of researching this issue, I am tempted to believe I need to increase the Xmx value for the background task. Where and how can I configure this parameter?

docker-compose.yml:

    version: "2"

    services:
      postgres:
        image: postgres:9-alpine
        container_name: postgres
        restart: always
        volumes:
          - /opt/docker-postgres/etc:/etc/postgresql
          - /opt/docker-postgres/log:/var/log/postgresql
          - /opt/docker-postgres/data:/var/lib/postgresql/data
        environment:
          POSTGRES_DB: sonar
          POSTGRES_USER: <SONAR_USER>
          POSTGRES_PASSWORD: <SONAR_PASSWORD>

      sonar:
        image: sonarqube:alpine
        container_name: sonar
        restart: always
        ports:
          - "9000:9000"
          - "9092:9092"
        environment:
          SONARQUBE_JDBC_USERNAME: <SONAR_USER>
          SONARQUBE_JDBC_PASSWORD: <SONAR_PASSWORD>
          SONARQUBE_JDBC_URL: jdbc:postgresql://postgres/sonar
        volumes:
          - /opt/docker-sonar/conf:/opt/sonarqube/conf
          - /opt/docker-sonar/data:/opt/sonarqube/data
          - /opt/docker-sonar/extensions:/opt/sonarqube/extensions
          - /opt/docker-sonar/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
How to change Xmx settings for sonar runner?
No, a soft reset is not enough. Doing this will leave the file in your index (where you stage files to be committed), which means git is still tracking the file. You will want to do a mixed reset, which unstages these files as well. As René pointed out, it is also a good idea to remove the file or add it to your .gitignore so you don't accidentally commit it again. This is enough so the sensitive information will not be transmitted to a remote server on git push. However, the information is still in your local repository. If you ever "lose" a commit by accidentally resetting too far, git reflog is a very useful tool. Now to clean away all commits that are not reachable through a branch or tag:

    git reflog expire --expire=1.minute --all
    git prune
    git gc

The first command removes all entries older than 1 minute from the reflog. A commit will not be removed if there is any remaining reference to it; such a reference can come from another commit, a branch, a tag, and also the reflog. The second removes all commits that aren't reachable anymore, and the third does a number of housekeeping tasks. For more info, look at the documentation. reflog expire and prune are destructive operations; I recommend running these commands with the --dry-run argument first, to see what exactly gets removed.
I accidentally committed something that might be sensitive information in git (only locally) and I want to remove it from the git history in a simple way. Will git reset --soft HEAD~1, then unstaging the sensitive information and adding it to .gitignore, be enough to completely remove it from the git history?
does git reset delete history?
It is possible, but it is not possible to customise the error message. Depending on your function, use either:

    callback("Unauthorized", null);

or:

    throw new Error('Unauthorized');

Both of these will produce a 401 response. See https://github.com/awslabs/aws-apigateway-lambda-authorizer-blueprints/blob/master/blueprints/nodejs/index.js
We have our API behind the AWS HTTP API Gateway with a custom Lambda authorizer. Our JWT token contains an expiration time, and based on that we have to return 401 when it has expired, to tell the client to use his refresh token to update the JWT. The Lambda authorizer returns only 403, even if the token is present but expired. So in this case we don't have a way to force users to update the token, and it is confusing: it looks like your permissions just don't allow you to reach the API URL, instead of telling you that your token is expired. With the REST API Gateway it seems possible, but we can't use it because it doesn't work with APL, and this is a requirement. Is it possible to return 401 from an HTTP API Gateway custom Lambda authorizer?
AWS HTTP API Gateway Lambda authorizer: how to return 401 if a token is expired
Sounds like it is the Console panel blowing up. Consider limiting its buffer size. EDIT: It's in Preferences; search for Console.
We have a process that outputs the contents of a large XML file to System.out. When this output is pretty-printed (i.e. multiple lines) everything works. But when it's on one line, Eclipse crashes with an OutOfMemory error. Any ideas how to prevent this?
How to best output large single line XML file (with Java/Eclipse)?
This is no longer supported as of SonarQube 4.0:

    End of Support of WAR deployment Mode
    The standalone mode is now the only mode that is supported. Standalone mode embeds a Tomcat server.

http://docs.sonarqube.org/display/SONAR/Release+4.0+Upgrade+Notes
How can I run Sonar on my Unix system with Tomcat? In previous versions there was a way to make a .war and deploy it on Tomcat. I tried to put it into the webapps folder (Tomcat) and run the script sonarqube-4.1\bin\solaris-x86-32\sonar.sh. Everything was OK, but I didn't know what to type in the web browser to get to Sonar. Version of my OS: SunOS mdjava0.mydevil.net 5.11 joyent_20131213T023304Z i86pc i386 i86pc Solaris
How to run Sonar 4.1 on Tomcat
As of now, if you want to schedule a regular bare-metal backup of Azure VMs, you can use the temporary drive to store the files created by WSB if the OS disk size is less than the temporary drive (D:). Otherwise, you have to attach an additional data disk for backup purposes. I agree it will incur an extra charge for using the storage disk, but there is no workaround for it now. You may provide your requirement as feedback to the Azure Backup / Windows Server Backup team through the link below: http://feedback.azure.com/forums/258995-azure-backup-and-scdpm
Since Azure agent doesn't support full VM image backups directly through the portal (without shutting down the VM first), I wanted to schedule a regular bare metal backup of my Azure VMs using Windows Server Backup together with the Azure Backup agent. The challenge is to find a temporary place to store the files created by WSB while Azure Backup agent transfers them to the Azure backup store. I first thought the VM temporary disk ( D:\ ) would be suitable, but it turns out that on some VMs the temp disk is smaller than the OS disk and would thus not have enough space. An option is of course to attach one extra 127 GB disk to each VM and use that as a destination volume for WSB backups and use Azure backup agent to backup that volume, but this would incur significant extra storage charges, since you would pay for both the extra disks storage and the backup storage. The best thing would of course be if the functionality of Azure backup agent was built into Windows Server backup, but this is not the case, unfortunately.
Windows Server Backup and Microsoft Azure Backup
AFAIK you can not create an A record at the zone apex, only an AWS-specific Alias type. The Alias can refer to an ELB, S3 website, CloudFront distribution or another Route 53 record set. You have a couple of options:

a) put your instance behind an ELB, and create example.com as an Alias record pointing to your ELB; or
b) create example.com as an Alias which points to the www.example.com CNAME.
I set up an EC2 instance with an Elastic IP. I registered a domain with Namecheap and transferred my name servers from them to Route 53. I created an A - IPv4 record and plugged in my Elastic IP address. That didn't work. Then I decided to try creating the A - IPv4 record using www, and it worked. I've tried setting up a pointer from www.mysite.com to mysite.com, with no luck. I've searched around for hours in Amazon's docs but still can't figure out how to get it set up. Does anyone know how to set this up so I can access my root domain? I'd hate to be stuck with www.
Connecting to root domain - AWS Route 53 EC2
Try this:

    systemctl stop apache2.service
I'm trying to update an SSL certificate on DigitalOcean with the command certbot renew, but I get this error:

    Problem binding to port 80: Could not bind to IPv4 or IPv6.

Running netstat -plunt shows that port 80 is being used by 'docker-proxy'. What can I do to fix this? Should I stop docker-proxy, and how do I do that?
Problem binding to port 80: Could not bind to IPv4 or IPv6 with certbot
Yes, in most cases you need to open the port on your router first (check your NAT settings section), and then set up an IIS site binding to port 7895.
Folks, I want to know how I can open a port for IIS. I have also tried to open the port in the firewall, but I can't; it seems I am missing something. I have a site on port 7895 locally, and I can access it by typing localhost:7895 in the browser, or 192.168.1.1:7895 (local IP). But I want to make it reachable over the internet. For example, my external IP is 119.155.116.102, so 119.155.116.102:7895. How can I do that? Is there a problem in the router; I mean, do I need to open a port or enable some function in the router? Thanks for the answer!
open port for iis
It is a default network created automatically by docker-compose. Read more here.
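You can confirm this from the command line; Compose names the network <project>_default, which matches the nginxproxy_default you are seeing:

    docker network ls
    docker network inspect nginxproxy_default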
I am new to Docker and found one thing I don't understand: I downloaded the image jwilder/nginx-proxy from the GitHub repo nginx-proxy/nginx-proxy and ran it with docker-compose up. This brings up a new network nginxproxy_default, which the new container is connected to, although the docker-compose.yml does not have a network specified. I searched all files in the repository but didn't find any place where this network is configured, so where does it come from?
jwilder/nginx-proxy: Where is the network configured? (Not in docker-compose.yml)
So try to rename the WAMP root folder from C:\wamp46\www to C:\Mirror. Edit the httpd.conf file and/or the vhosts.conf file for the site you wish to change; the Directory directive will let you specify where the files for this site are to be located. For more info on httpd.conf see: http://httpd.apache.org/docs/2.2/configuring.html and specifically: http://httpd.apache.org/docs/2.2/mod/core.html#directory
I got a Seagate Backup Plus Slim 1TB today. I am planning to do a mirror backup of my web projects from my PC (C:\wamp46\www) to the external drive (E:). The toolkit app created folders on C:\ and E:\, both named "Mirror", as the syncing folders. I tested it and it works well. But Seagate says: The Mirror folders must each be named "Mirror" in order to sync. Do not rename the folders. Now, how can I mirror-backup my files under the "www" folder if I can't rename the "www" folder? Is there any way? Thanks!
Mirror backup WAMP folder into external hard drive
Try disabling the View Results Tree listener in the script, as it records all results for you to inspect. The JMeter documentation specifically mentions this:

    18.3.6 View Results Tree
    View Results Tree MUST NOT BE USED during load test as it consumes a lot of resources (memory and CPU). Use it only for either functional testing or during Test Plan debugging and Validation.
First of all, I have already had a look at several questions which are quite similar, but I wasn't able to find a solution. My script performs a load test; it calls several different URLs (GET over HTTP) to download the content behind them. After 120 requests the memory usage increases up to 2 GB, and after 500 requests to 5-6 GB. I already changed the Xmx size in the hope that this would solve the problem, but it doesn't. Is there any way to configure JMeter to not save the files coming from a response? Or, let's say, to discard the downloaded files immediately? Is it maybe a JRE setting? Or is there no way to solve this increasing-memory problem? Br, Kabba
how to configure Jmeter to discard downloaded files?
The answer is 100 for nginx 1.19.9 or earlier, and 1000 for nginx 1.19.10 and later. The keepalive_requests directive (default 100, later 1000) allows you to configure the maximum number of requests served through a single keepalive connection. From the documentation link above:

    Sets the maximum number of requests that can be served through one keep-alive connection. After the maximum number of requests are made, the connection is closed.

A rough equivalent of this directive for HTTP/2 is http2_max_requests.
An HTTP client can send multiple requests over one HTTP 1.1 connection thanks to the keep-alive feature. But is there any limit on that number in the protocol? If not, how does Nginx implement it? Does it have any configuration for this?
What is the max number of requests an Nginx server allows a client to send over one HTTP 1.1 connection?
It is because your rewrite rules are looping infinitely, which is due to the fact that the section/products/(.*) pattern matches both the original and the rewritten URI. You can use this to fix it:

    Options +FollowSymlinks -Indexes -MultiViews
    Options -MultiViews
    RewriteEngine on
    RewriteBase /

    RewriteRule ^(section/products)/([\w-]+)$ $1/product.php?url=$2 [QSA,L,NC]
I'm modifying the Apache .htaccess file to rewrite products' URLs, so I can go from this:

    domain.com/section/products/product.php?url=some-product-name

to this:

    domain.com/section/products/some-product-name

Here's the mod_rewrite code that I'm using:

    Options -Indexes
    Options +FollowSymlinks
    Options -MultiViews
    RewriteEngine on
    RewriteBase /
    RewriteRule ^section/products/(.*)$ /section/products/product.php?url=$1 [QSA,L]

It just returns a 500 server error. What could be the issue?
mod_rewrite issue with GET parameter
Maybe you have an internet connection problem.
While trying to push, pull or merge from my local repository to the GitHub repository, I'm getting some issues. I even tried to clone into a new local repository, but that also gives a problem. Can anyone please help me with this? Executed command result:

    $ git push
    error: Failed connect to github.com:443; Connection timed out while accessing https://github.com/xxxxxxxxx/xxxxxxxx.git/info/refs?service=git-receive-pack
    fatal: HTTP request failed
Error: Can't push, pull, merge or clone in github
You can't match against the query string in the Redirect directive (nor in RedirectMatch/RewriteRule either). You need to use mod_rewrite's %{QUERY_STRING} variable:

    RewriteEngine On
    RewriteCond %{QUERY_STRING} (^|&)id=([0-9]+)($|&)
    RewriteRule ^/?page\.php$ http://test.com/profile/info/id/%2? [L,R=301]

(The trailing ? on the rewritten URL discards the original query string.)
I'm trying to figure out a regular expression that will find some numbers in a URL and use them in the URL I redirect to:

    Redirect 301 /page.php?id=95485666 http://test.com/profile/info/id/95485666

I was thinking maybe:

    Redirect 301 /page.php?id=([0-9]+) http://test.com/profile/info/id/$1

but it doesn't seem to work. Also, if I do a 301 redirect, how long do I have to keep the code in the .htaccess file? When is Google going to figure out that the new link is the good one?
How can I write a .htaccess redirect to find all numbers?
You can disable the response buffer before you return the file result:

    Response.BufferOutput = false;
    return File(fileStream, contentType);
I have a file browser application in MVC4 that allows you to download a selected file from a controller. Currently, the FileResult returns the stream of the file, along with the other response headers. While this works fine for smaller files, larger files generate an OutOfMemoryException. What I'd like to do is transmit the file from the controller without buffering it in memory, in a fashion similar to HttpResponse.TransmitFile in WebForms. How can this be accomplished?
ASP.NET MVC: Returning large amounts of data from FileResult
In your Dockerfile you have specified the CMD as:

    CMD [ "/home/benchmarking-programming-languages/benchmark.sh -v" ]

This uses the JSON (exec) syntax of the CMD instruction, i.e. an array of strings where the first string is the executable and each following string is a parameter to that executable. Since you only have a single string specified, Docker tries to invoke the executable /home/benchmarking-programming-languages/benchmark.sh -v - i.e. a file named "benchmark.sh -v", containing a space in its name and ending with -v. But what you actually intended was to invoke the benchmark.sh script with the -v parameter. You can do this by correctly specifying the parameter(s) as separate strings:

    CMD [ "/home/benchmarking-programming-languages/benchmark.sh", "-v" ]

or by using the shell syntax:

    CMD /home/benchmarking-programming-languages/benchmark.sh -v
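After changing the CMD, rebuilding and rerunning the image (tag taken from the question) is enough to verify the fix:

    docker build -t foo/bar .
    docker run --rm foo/bar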
I was able to successfully build a Docker image via docker build -t foo/bar . Here is its Dockerfile:

    FROM ubuntu:20.04

    COPY benchmark.sh /home/benchmarking-programming-languages/benchmark.sh

    CMD [ "/home/benchmarking-programming-languages/benchmark.sh -v" ]

And here is the file benchmark.sh:

    #!/usr/bin/env bash
    ## Nothing here, this is not a typo

However, running it via docker run -it foo/bar gives me the error:

    Error invoking remote method 'docker-run-container': Error: (HTTP code 400) unexpected - failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/home/benchmarking-programming-languages/benchmark.sh -v": stat /home/benchmarking-programming-languages/benchmark.sh -v: no such file or directory: unknown

Despite this, when running the image as a container with a shell, via docker run -it foo/bar sh, I can not only see the file, but execute it with no errors! Can someone suggest a reason why the error happens, and how to fix it?
Docker gives 'no such file or directory: unknown' on a docker run command
This issue seems to be due to the fact that opendkim does not set the pseudo resource record OPT UDPsize, which indicates that it can handle responses longer than 512 bytes, as defined by EDNS (RFC 2671).

Opendkim (no EDNS)

As can be seen in this tcpdump of an opendkim request:

    28112+ TXT? selector1._domainkey.outlook.com. (50)

and the response from unbound:

    28112| q: TXT? selector1._domainkey.outlook.com. 1/0/0 selector1._domainkey.outlook.com. CNAME selector1._domainkey.outbound.protection.outlook.com. (105)

Dig (EDNS)

The same request from dig correctly indicates that larger responses are fine (OPT UDPsize=4096):

    33350+ [1au] TXT? selector1._domainkey.outlook.com. ar: . OPT UDPsize=4096 (73)

And unbound properly responds with the complete TXT record:

    33350 q: TXT? selector1._domainkey.outlook.com. 2/0/1 selector1._domainkey.outlook.com. CNAME selector1._domainkey.outbound.protection.outlook.com., selector1._domainkey.outbound.protection.outlook.com. TXT "v=DKIM1;k=rsa;p=MIIBI[...]1913" ar: . OPT UDPsize=4096 (567)

The DKIM key in the TXT record was truncated for brevity. Unfortunately the opendkim project seems to be dead, so it is unlikely that this will be fixed.
I'm using postfix with opendkim and see a lot of the following errors:

    opendkim[63]: 84D4C390048: key retrieval failed (s=selector1, d=hotmail.com): 'selector1._domainkey.hotmail.com' reply truncated

The error occurs for a lot of different domains, but always when a long DKIM key (> 1024 bit) is used. I would assume this to be a fairly common issue, but I couldn't find anything useful so far. Is this an issue with my opendkim config, or is opendkim just broken in this regard?
Opendkim error "key retrieval failed" when long dkim keys are used
Use this one:

    location = / {
        index index.html;
    }

    location = /index.html {
        root /your/root/here;
    }
I would like nginx to serve a static file from the website root (http://localhost:8080/), but it serves my proxy_pass instead; it matches the "/" rule instead of "= /". Here is what my nginx config looks like:

    listen 0.0.0.0:8080;
    server_name localhost;

    set $static_dir /path/to/static/

    location = / {
        # got index.html in /path/to/static/html/index.html
        root $static_dir/html/;
    }

    location / {
        # ...
        proxy_pass http://app_cluster_1/;
    }

Did I miss something?
nginx rule to serve root
You can't use the ~ character in Java to represent the current home directory, so change it to a fully qualified path, e.g.:

    file:///home/user1/hbase

But I think you're going to run into problems in a fully distributed environment, as the distcp command runs a map-reduce job, so the destination path will be interpreted as local to each cluster node. If you want to pull data down from HDFS to a local directory, you'll need to use the -get or -copyToLocal switches of the hadoop fs command.
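A rough sketch of both options, reusing the cluster address from the question (the local destination directory is an assumption):

    # distcp with a fully qualified local path (runs as a MapReduce job)
    hadoop distcp hdfs://10.35.53.16:8020/hbase file:///home/ec2-user/hbase

    # or copy from HDFS to the local filesystem of the node you are on
    hadoop fs -copyToLocal /hbase /home/ec2-user/hbase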
I'm trying to back up a directory from HDFS to a local directory. I have a Hadoop/HBase cluster running on EC2. I managed to do what I want running in pseudo-distributed mode on my local machine, but now that I'm fully distributed the same steps are failing. Here is what worked for pseudo-distributed:

    hadoop distcp hdfs://localhost:8020/hbase file:///Users/robocode/Desktop/

Here is what I'm trying on the Hadoop namenode (HBase master) on EC2:

    ec2-user@ip-10-35-53-16:~$ hadoop distcp hdfs://10.35.53.16:8020/hbase file:///~/hbase

The errors I'm getting are below:

    13/04/19 09:07:40 INFO tools.DistCp: srcPaths=[hdfs://10.35.53.16:8020/hbase]
    13/04/19 09:07:40 INFO tools.DistCp: destPath=file:/~/hbase
    13/04/19 09:07:41 INFO tools.DistCp: file:/~/hbase does not exist.
    With failures, global counters are inaccurate; consider running with -i
    Copy failed: java.io.IOException: Failed to create file:/~/hbase
        at org.apache.hadoop.tools.DistCp.setup(DistCp.java:1171)
        at org.apache.hadoop.tools.DistCp.copy(DistCp.java:666)
        at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
Backup hdfs directory from full-distributed to a local directory?
The if directive is evaluated before your request is sent to the backend, so at that time there is no $sent_http_... variable yet. You could use the map directive instead:

    log_format main_log '$extended_info';

    map $sent_http_x_extended_info $extended_info {
        default $sent_http_x_extended_info;
        ""      "-";
    }
I'm trying to log data from a custom header. In the response:

    Cache-Control:no-cache
    Connection:keep-alive
    Content-Type:application/json
    Date:Mon, 09 Nov 2015 16:09:09 GMT
    Server:nginx/1.9.4
    Transfer-Encoding:chunked
    X-Extended-Info:{"c":70}
    X-Powered-By:PHP/5.6.12

In the PHP script (Symfony2):

    $response->headers->set('X-Extended-Info', json_encode($info))

I want to write the data from "X-Extended-Info" to the log. Nginx config:

    log_format main_log '$extended_info';

    server {
        set $extended_info '-';
        if ($sent_http_x_extended_info != '') {
            set $extended_info $sent_http_x_extended_info;
        }
        ...
    }

And in the log I see only '-'. I read "nginx - read custom header from upstream server", but that solution doesn't work in my case (I tried to use $upstream_http_ and $http_). Is it possible to read the response from php-fpm? Thank you.
nginx read custom response header
I'm using GitKraken version 4.0.5 (macOS and Windows) and spaces are shown. There is, however, a dedicated button for hiding whitespace (in the top-right corner of the diff view); with it enabled, the same source is shown without the spaces. Maybe it is turned on in your client?
I use GitKraken and it is a really cool tool! Is it possible to see spaces in GitKraken? For example, there are spaces in my file, but GitKraken shows no spaces, while another visual git tool shows them. Is it possible to see spaces in the free version of GitKraken?
How to see spaces in GitKraken free version
You can set the pointers to NULL; then the destructor will not delete them.

    struct WithPointers {
        int* ptr1;
        int* ptr2;
        WithPointers(): ptr1(NULL), ptr2(NULL) {}
        ~WithPointers() {
            delete ptr1;
            delete ptr2;
        }
    };

    ...
    WithPointers* object1 = new WithPointers;
    WithPointers* object2 = new WithPointers;
    object1->ptr1 = new int(11);
    object1->ptr2 = new int(12);
    object2->ptr1 = new int(999);
    object2->ptr2 = new int(22);
    ...
    int* pointer_to_999 = object2->ptr1;
    object2->ptr1 = NULL;
    delete object1;
    delete object2; // the number 999 is not deleted now!
    // Work with the number 999
    delete pointer_to_999; // please remember to delete it at the end!
I have an object with some pointers inside of it. The destructor calls delete on these pointers. But sometimes I want to delete them, and sometimes I don't. So I'd like to be able to delete the object without calling the destructor. Is this possible? Edit: I realize this is an AWFUL idea that no one should ever do. Nonetheless, I want to do it because it will make some internal functions much easier to write.
Is it possible to delete an object in c++ without calling the destructor?
On current versions of macOS you would want to use packet filtering (pf): https://blog.neilsabol.site/post/quickly-easily-adding-pf-packet-filter-firewall-rules-macos-osx/
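A minimal pf sketch, assuming the offending address is 203.0.113.7 (a placeholder) and that you are comfortable editing /etc/pf.conf as root, as the linked post describes:

    # append a filter rule that drops all traffic from that host, then reload and enable pf
    echo "block drop in quick from 203.0.113.7 to any" | sudo tee -a /etc/pf.conf
    sudo pfctl -f /etc/pf.conf
    sudo pfctl -e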
I am connected to a network and there is a particular node that keeps scanning me and attempting to connect to me. It is always from the same IP. I have looked but can't seem to find a way to block that IP on my Mac. Is there a way to drop traffic from this particular IP on my Mac?
Block an IP address on a Mac
The images are probably being cached. Take a look at [img setCacheMode:]. Did you actually try doing it 500 times, or are you guessing at the behaviour? My guess would be that the cache would be cleared at some upper limit; maybe 50 MB is not that much? It is important to note that -release is not equivalent to free() or destroy(); even if you call it immediately after alloc/init you shouldn't assume that the object has been cleared away. This is why there is so much hate for the -retainCount abusers who think it is a good way to debug memory management.
    NSImage *randomImage = [[NSImage alloc] initWithContentsOfURL:imageURL];
    [randomImage release];

Why does the memory usage still go up? What is using that memory? I release the NSImage object. (No, it's not the URL.)
NSImage + memory management
As far as I know, when the private repo is yours and someone else opens a pull request to it, you as the owner have to merge the pull request into the respective branch.
I have a protected GitHub repository where I want a user who was already allowed 'read' access to also be able to merge PRs, so I gave him the 'write' role. According to the GitHub docs that should be enough. Still, he is not able to merge, and he sees a warning about not having write access. Am I missing something?
Github organization member can't merge PR's even though he has write access
No, this is not supported; you might be able to hack your way through, but certainly not out of the box. You can, however, create an internal load balancer for your service in the network, and its IP wouldn't change. You do this by creating a service with an annotation:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: name
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    spec:
      ports:
        - port: xxx
      selector:
        app: name
      type: LoadBalancer
I have an AKS instance running, to which I assigned a virtual network, so all the node IPs in the network are good and I can reach them from within the network. Now I wonder if it is possible to create a second virtual network and tell Kubernetes to use it to assign public IPs. Or, alternatively, is it possible to say that a specific service should always have the same node IP?
assign kubernetes loadbalancer an ip from an internal network
I found the answer to my question here: http://www.mos-eisley.dk/display/it/Elasticsearch+Dashbord+in+Grafana. You can ignore the parts about setting fielddata=true and instead just set it to query fieldname.keyword when creating the template. Just a quick note: something that took me too long to realise is that when grouping by term, "fieldname.keyword" will not be available for selection in the drop-down, so you simply have to type it in.
I have an Elasticsearch (5.1.2) data source and am visualizing the data in Kibana and Grafana (4.1.1). For string values in my dataset I am using the keyword feature as described at https://www.elastic.co/guide/en/elasticsearch/reference/5.2/fielddata.html. An example of the mapping for the field name "CATEGORY":

    "CATEGORY": {
      "type": "text",
      "norms": false,
      "fields": {
        "keyword": {
          "type": "keyword"
        }
      }
    }

In Kibana this works fine, as I can select "fieldname.keyword" when creating visualizations. However, in Grafana it seems the keyword field is not recognized, as I can only select "fieldname" when creating graphs, which displays the message "fielddata is disabled on text fields by default". Can anyone give any insight as to why the keyword field is not being recognized in Grafana? Setting fielddata=true is an option too, however I would really prefer to get it working using keyword, due to the memory overhead associated with setting fielddata=true. Thanks!
Grafana cannot aggregate on String fields as it does not recognize keyword field in Elasticsearch
The case you're describing is not what Git typically considers a rename. Generally, a rename in Git is when one file is removed in the same commit as another file is added and the files are identical or similar. In your case, the old file hasn't been removed, so you now have two files. If they are identical or similar, Git will consider this a copy, but not a rename. Git uses a similar technique to detect them, but they aren't the same. The way to handle this depends on the operating system you're on. If you're on a system that's case sensitive, like Linux, FreeBSD, or a case-sensitive macOS, then you can just delete the old file with git rm as part of your commit. If you're on a case-insensitive system, then you should use git mv -f to rename the old file to the new one. All of this assumes that your commit (and pull request) introduce the new file. If both files already exist in the repository history, then there's no way to make Git detect them as renames now.
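For example, on a case-insensitive filesystem the rename would look like this (the filenames are hypothetical):

    git mv -f readme.MD README.md
    git commit -m "Fix filename casing"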
Some time ago, someone pushed a file to the github repository whose name is the wrong case. Now there is a new version of the file, which has the correct case in its filename. When I push the new version to github and create a pull request, the "Files Changed" view shows a new file and no changes to the old file. In other words, if I merge this pull request, the file's history will break. When I was working with git in a bash shell, this didn't usually happen. As long as the file was similar enough, the change was interpreted as a "rename" and its history was preserved. Is there a way to do this in github?
How to change the case of a filename
Use the awk command instead:

    docker images | grep "none" | awk '{print $3}' | xargs docker rmi
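Alternatively, Docker has a built-in command that removes dangling (<none>) images without any output parsing; it asks for confirmation unless you pass -f:

    docker image prune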
I'm trying to delete every Docker image whose name is <none> on my system. I have tried this:

    for image in $(docker images | grep none); do echo $image; done

But this gives me each column of the output as a separate item, like this:

    <none>
    <none>
    a20d00ca4041
    19
    minutes
    ago
    227MB

I want it like this:

    <none> <none> a20d00ca4041 20 minutes ago 227MB

so I can delete the image by its ID. Any help?
docker images output manipulation
Amazon EKS uses IAM to provide authentication to your Kubernetes cluster through the AWS IAM Authenticator for Kubernetes. You may update your config file using the following format:

    apiVersion: v1
    clusters:
    - cluster:
        server: ${server}
        certificate-authority-data: ${cert}
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        user: aws
      name: aws
    current-context: aws
    kind: Config
    preferences: {}
    users:
    - name: aws
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1alpha1
          command: aws-iam-authenticator
          env:
          - name: "AWS_PROFILE"
            value: "dev"
          args:
          - "token"
          - "-i"
          - "mycluster"

Useful links:

https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
https://github.com/kubernetes-sigs/aws-iam-authenticator#specifying-credentials--using-aws-profiles
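If the AWS CLI is available on the build agent, it can also generate an equivalent kubeconfig entry for you (cluster name, region and profile here are placeholders):

    aws eks update-kubeconfig --name mycluster --region us-east-1 --profile dev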
I have to set up CI in Microsoft Azure DevOps to deploy and manage AWS EKS cluster resources. As a first step, I found a few Kubernetes tasks to make a connection to the Kubernetes cluster (in my case, AWS EKS), but in the "kubectl apply" task in Azure DevOps I can only pass the kubeconfig file or an Azure subscription to reach the cluster. In my case, I have the kubeconfig file, but I also need to pass the AWS user credentials that are authorized to access the AWS EKS cluster. There is no such option in the task when adding the new "k8s endpoint" to provide the AWS credentials that can be used to access the EKS cluster. Because of that, I am seeing the below error while verifying the connection to the EKS cluster:

    No user credentials found for cluster in KubeConfig content. Make sure that the credentials exist and try again.

During runtime I can pass the AWS credentials via environment variables in the pipeline, but I can not add the kubeconfig file in the task and SAVE it. Azure and AWS are big players in the cloud, and there should be ways to connect to AWS resources from any CI platform. Has anyone faced this kind of issue, and what is the best approach to connect to AWS and the EKS cluster for deployments in Azure DevOps CI?
How to connect AWS EKS cluster from Azure Devops pipeline - No user credentials found for cluster in KubeConfig content
Navigate to Settings -> Emails (or directly to https://github.com/settings/emails), and just add the emails you used on your "unattributed" commits.
I just finished a semester at college and decided to import all of my projects from bitbucket (required for my classes) to github (where all of my other projects are). I successfully imported them. Unfortunately, at the time when I was working on these projects, I was switching between three different computers.As a result, the commit history has lots of different names for commits that I did myself. I'd like to set an "alias" like you can in bitbucket, saying "these three people are also me." Is this possible? How can I do that?
How can I set a username alias in github commits?
If your goal is to run an Apache Web Server (httpd), you should use the httpd image. Docker containers are generally meant to run a single process. So, you wouldn't normally design a container to run something like systemd as the root process, and then run httpd as a child process. You would just run httpd directly in the foreground. The httpd image does this.
I am using the official archlinux/base image from Docker Hub. I am trying to use systemctl and it says: $ docker run --rm -it ac16c4b756ed systemctl start httpd System has not been booted with systemd as init system (PID 1). Can't operate. How can I solve this?
docker archlinux image: System has not been booted with systemd as init system (PID 1). Can't operate
You can create your own AMI but you need to use the Amazon-supplied kernels. The newest they provide is 2.6.21. I have a list of the fc (Fedora Core) kernels that I use for CentOS instances. I'm pretty sure they work fine with Ubuntu as well. You'll want to bake these into your AMI when you register it using ec2-register. They can be changed at the time you start an instance but I like having the proper AKI (kernel) and ARI (ramdisk) to start with. Adding support for the ephemeral disks is helpful as well. You're paying for the extra storage with larger instances, you might as well use it. My magic incantation for ec2-register: ec2-register --snapshot snap-12345678 -K pk-XXXXXXXXXXX.pem -C cert-XXXXXXXXXXX.pem \ --description "EBS CentOS 5.5 i386" --name "base-image-i386-4" --architecture i386 \ --root-device-name /dev/sda1 -b /dev/sdb=ephemeral0 -b /dev/sdc=ephemeral1 \ -b /dev/sdd=ephemeral2 -b /dev/sde=ephemeral3 --region us-east-1 \ --kernel aki-6eaa4907 --ramdisk ari-e7dc3c8e You can change region, snapshot ID, description, name, arch, etc. Also remember the kernels & ramdisks are region-specific. I can't remember where I got this list but I had trouble finding it. Hope it helps someone out. 2.6.21 kernels are available as: US Region: 32-bit: * aki-6eaa4907 * ari-e7dc3c8e * ami-48aa4921 64-bit: * aki-a3d737ca * ari-4fdf3f26 * ami-f61dfd9f EU Region: 32-bit: * aki-02486376 * ari-aa6348de * ami-0a48637e 64-bit: * aki-f2634886 * ari-a06348d4 * ami-927a51e6 AP Region: 64-bit: * aki-07f58a55 * ari-27f58a75 * ami-ddf58a8f 32-bit: * aki-01f58a53 * ari-25f58a77 * ami-c3f58a91
I have an Amazon EC2 instance using the Amazon-supplied Fedora 8 64-bit AMI, which I would like to upgrade to Fedora 10. I tried doing this by running "yum update" to upgrade the kernel and all packages. This seemed to work fine and I see that I now have the fc10 kernel installed, and all of my installed packages have also been updated to the Fedora 10 versions. However, I also noticed that the fc8 kernel is still installed, and when I reboot my image it comes back running the fc8 kernel, not the fc10 kernel (I'm inferring this from the output of "uname -a"). Are there some additional steps I need to take to get my image to boot under the fc10 kernel, or is this even possible ? The Amazon documentation didn't turn up anything useful for me.
How does an Amazon EC2 instance select its kernel?
AWS Lambda comes with an ephemeral storage unit in /tmp. Note, however, that this ephemeral storage is limited to 512 MB. You can load your dependencies into this storage and write code accordingly. (From the comments: /tmp is not persistent across cold starts, so the model may have to be downloaded again; mounting EFS is an option to get past the size limit for larger models.)
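A hedged Python sketch of using that /tmp storage from a Lambda handler (the bucket, key and file names are placeholders): the artifact is downloaded once per execution environment and reused on warm invocations.

import os
import boto3

MODEL_BUCKET = "my-model-bucket"   # placeholder
MODEL_KEY = "models/model.h5"      # placeholder
LOCAL_PATH = "/tmp/model.h5"

s3 = boto3.client("s3")

def load_model_file():
    # /tmp persists across warm invocations of the same execution
    # environment, so only download when the file is missing.
    if not os.path.exists(LOCAL_PATH):
        s3.download_file(MODEL_BUCKET, MODEL_KEY, LOCAL_PATH)
    return LOCAL_PATH

def handler(event, context):
    path = load_model_file()
    # ... load the model from `path` and run inference ...
    return {"model_path": path}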
I am new to AWS Lambda and am running a TensorFlow model in AWS Lambda. TensorFlow 1.0.0 fits into the 50 MB limit, but TensorFlow 2.0 is much bigger and does not fit. Does anyone know of a way to use TensorFlow 2.0 with AWS Lambda?
How to use tensorflow 2.0 with AWS Lambda?
You can switch it off per session (or set it as a default for a user). The parameter name is enable_result_cache_for_session, as mentioned in the Redshift documentation.
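A minimal sketch of turning the cache off for a session from Python (assuming the psycopg2 driver; the connection details and table name are placeholders):

import psycopg2

conn = psycopg2.connect(
    host="my-cluster.xxxx.redshift.amazonaws.com",  # placeholder endpoint
    port=5439, dbname="dev", user="awsuser", password="secret",
)
cur = conn.cursor()
# Disable the result cache for this session only.
cur.execute("SET enable_result_cache_for_session TO off;")
cur.execute("SELECT count(*) FROM my_table;")  # now bypasses the cache
print(cur.fetchone())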
Since 21 November, Amazon Redshift has introduced default caching of result sets. Is there a way to disable this caching by default on a Redshift database? There don't seem to be many docs on it at the moment.
disable caching on redshift by default
Within your foreach loop, you are using array_push. You are adding to the array you are iterating through, which creates an infinite loop.
I'm trying to build an array that needs to be passed to a report. Some of the data returned has similar field names, so I'm using the function below to add a prefix to the array key names before merging the arrays. However, I get an out of memory exception "Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 44 bytes) in ..". Is there another way of adding a prefix to the keys of an array that will not use a lot of memory? function prefixArrayKeys(&$_array,$prefix){ foreach($_array as $k=>$v){ $nk = $prefix.$k; $nv = $v; array_push($_array, array($nk=>$nv)); unset($_array[$k]); } var_dump($_array); } The call to the function: $aSQL = "select sex, a_number, to_char(b_dtm, 'DD/MM/YYYY') b_dtm from atable where a_id = ".$ped_array[1]['D'].""; execute_sql($aSQL,$rsGTYPE); prefixArrayKeys(&$rsGTYPE[0],"D"); if(count($rsGTYPE) > 0) $rowdata[0] = array_merge($rowdata[0],$rsGTYPE[0]);
Fatal error: Allowed memory size of 536870912 bytes exhausted
This is because /team maps to an existing directory. When you request /team, the server changes the URI to /team/ by adding a directory slash, so it goes to the directory. You have to turn off DirectorySlash. Add the following line to your .htaccess: DirectorySlash off This will allow you to access /team.php as /team without a trailing slash. You can use this htaccess: DirectorySlash off RewriteEngine On RewriteCond %{REQUEST_FILENAME}.php -f RewriteRule ^([^\.]+)$ /$1.php [NC,L]
How can I set up an htaccess that can distinguish a file from a folder with the same name? Under my website I have index.php, team.php, team/Justin.php, team/martin.php... and an htaccess with a URL rewrite to make nice URLs and remove the .php: RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^([^\.]+)$ $1.php [NC,L] Unfortunately when I go to the URL mywebsite.com/team it goes straight to the team folder... I would like mywebsite.com/team to go to the team.php page, and mywebsite.com/team/xxx to go to the pages in the team folder. Thanks,
htaccess identical folder and file name
dataList = [dataArr]; is not valid Objective-C. If you wanted to write dataList = dataArr; that's still a no-go, as you're accessing the instance variable directly, not through the property setter; that is, your array won't be retained and it will badly crash. [dataList release]; [dataArr retain]; dataList = dataArr; is wrong again. If dataList was the same as dataArr, and the reference of the object (self) was the last reference to it, then it would get deallocated, breaking the following retain message, and most likely crashing again. If you have a property setter (which you have), simply write self.dataList = dataArr; This will retain the array correctly. By the way, the implementation of the setter is something like your last method, but it either checks for inequality: - (void)setDataList:(NSArray *)dl { if (dataList != dl) { [dataList release]; dataList = [dl retain]; } } or pre-retains the object to be set: - (void)setDataList:(NSArray *)dl { [dl retain]; [dataList release]; dataList = dl; }
My purpose: making an API call to a server, getting back an array of data named dataArr, and storing this data in another array for later need. What I am doing so far is: myClass.h: @property ( nonatomic, retain ) NSArray *dataList; myClass.m: @implementation myClass -(void)receivedData:(NSArray*) dataArr { // ??? } To fill in at line 3, I have two options. Option A: dataList = dataArr; or option B: [dataList release]; [dataArr retain]; dataList = dataArr; I think option A is the right way to do it because dataList is declared as retain in the header file. Therefore, the setter will make sure to release the current array (dataList) and retain the received array (dataArr) as well. I just want to double check that I am on the right path. Please correct me if I have made a mistake in the middle. Thanks, any comments are welcome.
save data to another array, memory management, Objective C
The response header gives: Content-Type: application/javascript This is the MIME type that needs to be included in your gzip_types statement in order to compress these types of response. Your existing value contains many similar MIME types, but not one of them is an exact match for what the server actually sends. See this document for details.
This is our Nginx configuration with regards to Gzip: gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 5; gzip_min_length 256; gzip_buffers 16 8k; gzip_http_version 1.0; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript image/png image/gif image/jpeg; Our JS files are served by Amazon Cloudfront but they are not compressed, even after invalidating the Amazon cache, clearing Wordpress cache and restarting our server. Response header gives: curl -I https://d3opmxxxxnoy.cloudfront.net/wp-content/cache/min/1/def188074382933064c622c86c835c7f.js HTTP/1.1 200 OK Content-Type: application/javascript Content-Length: 473913 Connection: keep-alive Server: nginx Date: Mon, 16 Apr 2018 16:45:55 GMT Last-Modified: Mon, 16 Apr 2018 15:45:54 GMT ETag: "5ad4c532-73b39" Expires: Thu, 31 Dec 2037 23:55:55 GMT Cache-Control: max-age=315360000 Access-Control-Allow-Origin: * Accept-Ranges: bytes X-Cache: Miss from cloudfront Via: 1.1 63db28xxxx087abd41a1692.cloudfront.net (CloudFront) How could we know that this is an Nginx or Amazon configuration issue? Not sure where the problem lies. Update: We have performed another test on our own domain using this tool: https://checkgzipcompression.com and it still says the JS file is not compressed. So this is an Nginx issue - but not sure why it happens as my conf seems correct.
Nginx - Amazon Cloudfront - Gzip Doesn't work for JS files
Other answers suggest the data might be stored in: C:\Users\Public\Documents\Hyper-V\Virtual hard disks\MobyLinuxVM.vhdx or since the Windows 10 Anniversary Update: C:\ProgramData\docker\containers You can find out by entering: docker info Credit to / More info: https://stackoverflow.com/a/38419398/331637 https://stackoverflow.com/a/39971954/331637
I'm currently experimenting with Docker containers on Windows Server. I've created a number of containers, and I want to see where they are actually saved on the host's file system (like a .vhd file for Hyper-V). Is there a default location I can look, or a way to find that out using Docker CLI?
Where are containers located in the host's file system?
Your expectation is incorrect. --oom-kill-disable does not disable virtual memory overcommitment, which would cause malloc to fail if mmap fails to allocate requested pages. Instead, the option maps to the cgroup v1 feature exposed via sysfs (/sys/fs/cgroup/memory/docker/<id>/memory.oom_control) and causes tasks requesting memory beyond the limit to block until either memory is freed or limits change. --oom-kill-disable requires cgroup v1. So, the first requirement is that your docker installation uses cgroup v1: $ docker system info | grep -i Cg Cgroup Driver: cgroupfs Cgroup Version: 1 If that's the case, --oom-kill-disable should be available. You can confirm that the cgroup option was set by docker using sysfs: $ cat /sys/fs/cgroup/memory/docker/<id>/memory.oom_control oom_kill_disable 1 under_oom 1 oom_kill 0 If oom_kill_disable is not set, your version of docker failed to set the option. under_oom indicates that the limit was reached and tasks wait for available memory. The proc filesystem can be used to confirm that tasks are indeed waiting for available memory. In my test installation (Docker version 20.10.12, build e91ed57, Linux 5.10.76), I forced docker to use cgroup v1 by changing the kernel boot parameters. With cgroup v2, starting the container instead triggers a warning and the option is ignored; cgroup v2 does not provide an --oom-kill-disable equivalent. What you desire is a way to disable memory overcommitment at the cgroup level. Setting that at the host level (disabling overcommitment system-wide) results in the behavior you expected, but unfortunately, AFAIK, such a per-cgroup facility does not exist as of today.
I wrote a short java program to allocate memory: package com.company; import java.util.ArrayList; import java.util.List; public class Main { public static final int SIZE_NATIVE_LONG_IN_BYTE = 8; public static void main(String[] args) { Integer memoryConsumptionInMiB = Integer.parseInt(args[0]); List<long[][]> foo = new ArrayList<long[][]>(); int i = 0; while (true) { System.out.println(i++); foo.add(new long[(1024 / SIZE_NATIVE_LONG_IN_BYTE * 1024)][memoryConsumptionInMiB]); } } } i then try to run it within a docker container with several different parameters: Xmx 1G and docker container without memory limits Xmx 1G and docker run -m 512m Xmx 1G and docker run -m 512m --disable-oom-killer this is how i run the program (in case of 3): (edit: uploaded a prepared image with above class) docker run -it --oom-kill-disable -m 512m kazesberger/alpine-java-memory-tester java -classpath . com.company.Main 10 (~Megabytes allocated per iteration) my expectation was: OutOfMemory (Heap) at 1G host OS kills docker container (proofable with docker inspect or kernel log) before the actual OutOfMemory Error happens. malloc within docker container fails and results in jvm terminating with some kind of OutOfMemoryError. actual results: expectation met expectation met oomkiller still killing the container. docker inspect still shows OOMKilled true so you see my actual goal isn't really to suppress oomKiller in fulfilling its purpose. It's rather my goal to have the processes within the container fail allocating memory they're not allowed to. Preconditions/Versions: swapoff -a docker --version Docker version 17.11.0-ce-rc4, build 587f1f0 docker run -it kazesberger/alpine-java-memory-tester java -version openjdk version "1.8.0_111-internal"
docker oomkiller vs malloc failure from within the container
I played around with the USER command in the Dockerfile, but could never get it to work with an admin user. However, a GitHub posting mentioned specifying the user in the docker run command, and this did work: docker run --user "NT Authority\System" ... Which also works in the Dockerfile like so: USER "NT Authority\System"
I have a .NET Core app that's required to be "run as administrator" and I'm trying to get it to be built into a Docker image. I am able to build a Docker image just fine, but it fails at runtime with the "Need to run as Administrator" error. Is there a way in the Dockerfile or in the docker run command to specify this? Does something else need to be added to the ENTRYPOINT where I'm calling "dotnet"? Is this even possible?
Can't get a Windows Docker container to "run as administrator"
You can filter out metrics based on other metrics with the unless operator. It removes series from the left-hand side of the operator that have the same label values as series on the right-hand side. For example, if you have the metrics metric1{label1="value1"} metric1{label1="value2"} metric2{label1="value1"} the expression metric1 unless metric2 will return metric1{label1="value2"} For your exact case you'll additionally need to use on for label matching: avg_over_time(system_cpu_usage[5m]) > 90 unless on(hostname) process_cpu_usage{cpu_usage="high"}
Let's say I have the following metrics: system_cpu_usage{hostname="host1"} 10 system_cpu_usage{hostname="host2"} 92 system_cpu_usage{hostname="host3"} 95 process_cpu_usage{hostname="host2", cpu_usage="high"} 90 I have an alert condition as follows: avg_over_time(system_cpu_usage[5m]) > 90 Which returns all instances where CPU usage is above 90: system_cpu_usage{hostname="host2"} 92 system_cpu_usage{hostname="host3"} 95 But I would like to exclude instances which have the process_cpu_usage{cpu_usage="high"} metric present. So, in that case it would just return: system_cpu_usage{hostname="host3"} 95 Is this even possible using Prometheus/Grafana?
Filter Prometheus metrics by label of another metric
As git is decentralized (which actually means distributed), you can safely copy your original clone's entire folder (the one with the .git sub-directory) from your old machine to the new one without losing anything. Even your local commits or branches will be kept, as git embeds the complete distant AND local history in each clone. Direct answer to your question: just make a hard copy of your original clone and test a git command on your new one: # assuming you're actually on your new machine: scp -r old-user@old-machine:/old/path/ /new/path/ cd /new/path/ # check that git still considers your directory as a clone git status # test if your remotes are still valid git remote update The only point to take care of is to include the .git sub-directory with your copy (it may be hidden on OSX by default). EDIT - You could even make a tarball of the original clone and extract it on your new machine (just be sure to include the .git sub-directory contents). To go a little further: you could even make a true git clone of your original old-machine directory and have the exact same result (except any uncommitted change). This is the real meaning of a distributed VCS: git clone old-user@old-machine:/old/path/ /new/path/
I have seen some information on this topic, but didn't see a definitive agreement on the proper strategy.I just got a new macbook and have been setting everything up over the past few days. I have a git repo on the old machine that I want to move over to the new machine, what is the best way for me to move the entire folder over to the new machine (note - I will no longer be using the old machine).
Moving local repo to a new Macbook
No. Support for Zenodo is an open issue. Contact was made with Zenodo asking for Bitbucket support, but Zenodo said Bitbucket currently lacks the right features in the API.
I see that GitHub has DOI integration through Zenodo but is there an equivalent tool for Bitbucket? Or do I have to contact a DOI Registration Agency directly?
How to create digital object identifier (DOI) for bitbucket repository?
In your last block, try making the print line come before result[dnsIpAddress] = "FAILURE". My guess is that either there is more code than what is shown here, or the line before the print statement raises a different exception.
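A contrived sketch of why that re-ordering helps diagnose the problem (the exception classes here are stand-ins, not the real dnspython ones): an exception raised inside an except block escapes it, so a statement placed before the print can silently prevent the print from ever running.

class Timeout(Exception):
    pass

class DNSException(Exception):
    pass

def lookup():
    raise Timeout("simulated timeout")

result = None  # simulated setup bug: should have been a dict

try:
    lookup()
except Timeout:
    result["1.1.1.1"] = "FAILURE"  # raises TypeError, so the print below never runs
    print("caught Timeout exception")
except DNSException:
    print("caught DNSException exception")

# Running this prints nothing and dies with
# TypeError: 'NoneType' object does not support item assignment.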
While trying to implement a DNS request, I also needed to do some exception handling and noticed something weird. The following code is able to catch DNS request timeouts: def lambda_handler(event, context): hostname = "google.de" dnsIpAddresses = event['dnsIpAddresses'] dnsResolver = dns.resolver.Resolver() dnsResolver.lifetime = 1.0 result = {} for dnsIpAddress in dnsIpAddresses: dnsResolver.nameservers = [dnsIpAddress] try: myAnswers = dnsResolver.query(hostname, "A") print(myAnswers) result[dnsIpAddress] = "SUCCESS" except dns.resolver.Timeout: print("caught Timeout exception") result[dnsIpAddress] = "FAILURE" except dns.exception.DNSException: print("caught DNSException exception") result[dnsIpAddress] = "FAILURE" except: result[dnsIpAddress] = "FAILURE" print("caught general exception") return result Now, if I removed the Timeout block, and assuming that a Timeout would occur, the message caught DNSException exception will never be shown. Now, if I removed the DNSException block, and assuming that a Timeout would occur, the message caught general exception will never be shown. But Timeout extends DNSException, and DNSException extends Exception. I had the expectation that at least the general except block should work. What am I missing?
Exception handling in aws-lambda functions
This is probably a case of your result data set exceeding the 1 MB limit: If the total number of scanned items exceeds the maximum data set size limit of 1 MB, the scan stops and results are returned to the user as a LastEvaluatedKey value to continue the scan in a subsequent operation. The results also include the number of items exceeding the limit. A scan can result in no table data meeting the filter criteria. Check the result for the LastEvaluatedKey field and use it for the next scan operation, passing it as ExclusiveStartKey.
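The code in the accompanying question uses the Node.js SDK, but the pagination pattern is the same in any SDK. A minimal Python/boto3 sketch (the table name is a placeholder) that keeps scanning until LastEvaluatedKey is no longer returned:

import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("my-table")  # placeholder table name

items = []
kwargs = {"FilterExpression": Attr("type").eq("page")}
while True:
    resp = table.scan(**kwargs)
    items.extend(resp.get("Items", []))
    last_key = resp.get("LastEvaluatedKey")
    if not last_key:
        break
    # Resume the scan where the previous 1 MB page stopped.
    kwargs["ExclusiveStartKey"] = last_key

print(len(items))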
In a DynamoDB table, I have an item with the following scheme: { id: 427, type: 'page', ...other_data } When querying on primary index (id), I get the item returned as expected. With a scan operation inside the AWS DynamoDB web app to get all items with type page, 188 items including this missing item are returned. However, performing this scan operation inside Lambda with the AWS SDK, only 162 items are returned. Part of the code looks like: const params = { TableName: <my-table-name>, FilterExpression: '#type = :type', ExpressionAttributeNames: { '#type': 'type' }, ExpressionAttributeValues: { ':type': 'page' } }; dynamodb.scan(params, (error, result) => { if (error) { console.log('error', error); } else { console.log(result.Items); // 162 items } }); What is missing here?
DynamoDB scan leaves valid item out
Actually, I figured out this CAN be done via the API; it just requires headers and data indicating what permission to grant: curl -H "Accept: application/vnd.github.v3+json" -u YourUserName:YourPersonalAccessToken -X PUT -d '{"permission":"write"}' https://api.github.com/teams/$team_id/repos/$org_name/$repo Alternatively, in Python: import requests, json data = json.dumps({"permission": 'read'}) # could be 'write', etc. headers = { 'content-type': 'application/json', 'accept': 'application/vnd.github.v3+json, text/plain, */*' } auth_tuple = (username, access_token) url = f"https://api.github.com/teams/{team_id}/repos/{org_name}/{repo}" requests.put(url, auth=auth_tuple, data=data, headers=headers)
It is possible to add collaborators via the api as described here: https://developer.github.com/v3/repos/collaborators/#add-user-as-a-collaborator Endpoint: /repos/:owner/:repo/collaborators/:username But what about adding team access, which is definitely possible via web interface in "Settings > Collaborators & Teams"
Is there a github api endpoint to give Team access to a repo?
c->data = &data; stores the address of the pointer data (the argument to your function), not the actual pointer. I.e., you're storing a pointer to a temporary. You could have built the container structure with just a void *data member.
I just wrote some C code: #include <stdlib.h> #include <time.h> #include <string.h> typedef struct { void **data; time_t lastModified; } container; container *container_init() { container *c = malloc(sizeof(container)); void *data = NULL; c->data = &data; c->lastModified = time(NULL); return c; } void *container_getData(container *c) { void **containerData = c->data; return *containerData; } // only pass manually allocated data that can be free()'d! void container_setData(container *c, void *data) { free(container_getData(c)); c->data = &data; } void container_free(container *c) { free(container_getData(c)); // <--- THIS LINE free(c); } int main(int argc, const char *argv[]) { for (int i = 0; i < 100000000; i++) { char *data = strdup("Hi, I don't understand pointers!"); container *c = container_init(); container_setData(c, data); container_free(c); } } My logic was the following: When I call container_setData(), the old data is free()'d and a pointer to the new data is stored. That new data will have to be released at some point. That happens for the last time during the call to container_free(). I have marked a line in the container_free() function. I would have sworn I'd need that line in order to prevent a memory leak. However, I can't use the line ("object beeing freed was not allocated") and there's no memory leak if I delete it. How does the string from my loop ever get released?! Could someone explain where the error is?
C - memory management
Jenkins itself will happily run on a micro, but there are two problems: 1) you won't have much memory left for building and testing, around 150MB, but the bigger problem is 2) if your CPU usage spikes for more than a few seconds Amazon will simply crush your instance with throttling cutting off 97% or more of available CPU. http://gregsramblings.com/2011/02/07/amazon-ec2-micro-instance-cpu-steal/ The throttling was completely impossible for us, making a build with testing take 12 minutes on EC2 instead of 25 seconds on a quad i7 laptop. But! There's a fix for the frugal: Run a Jenkins master on a micro, but start up a small instance when needed to run the actual tests. That gives us plenty of memory and decent CPU, yet it's still incredibly cheap (ten cents per push [or commit]). However, it substantially increases build time because it has to boot the instance and all that. The setup is rather involved, and requires working around some limitations of the ec2 plugin (which, overall, works extremely well), so we wrote up a blog post if you want to do this: http://wkmacura.tumblr.com/post/5416465911/jenkins-ec2
I am planning to install Hudson on Amazon EC2 using Ubuntu image. The code I am going to test does not have a big memory overhead - I will be executing mainly python unit tests. Which EC2 instance should I use? Would micro instance be sufficient (have enough memory) or should I use a bigger instance?
Running Hudson on EC2
I opened a ticket with AWS support and they were able to find the IP that was consuming the read capacity. They used an internal tool to query logs that are not available to customers. They also confirmed that these events do not get emitted to CloudTrail logs, which only contain events related to the table, such as re-provisioning, queries about metrics, etc. They also shared this nugget that's relevant to the question: Q: Does read capacity get consumed when the lambda updates or overwrites the value for an existing key? A: Yes, when you issue an update item operation, DynamoDB does a Read/Get operation first and then does a PutItem to insert/overwrite the existing item. This is expensive as it consumes both RCU and WCU. I did also verify that there are no UpdateItem operations being made on this table. They also pointed me at more CloudWatch metrics that shed some more light on what's going on with the table behind the scenes. To find these through console navigation: go to the CloudWatch service, then Metrics in the left bar, then the All Metrics tab; scroll down to the AWS Namespaces section (the Custom Namespaces section is on top, if you have defined any custom metrics); select DynamoDB, then Table Operation Metrics. Metrics will be organized by table name. The one that was most helpful was Operation=Query, Metric Name=Returned Item Count. So the only answer to my question is: open an AWS Support ticket.
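For reference, the table-level operation metrics mentioned above can also be pulled programmatically. A hedged boto3 sketch (the table name is a placeholder) querying the ReturnedItemCount metric for Query operations over the last day:

import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch")

resp = cw.get_metric_statistics(
    Namespace="AWS/DynamoDB",
    MetricName="ReturnedItemCount",
    Dimensions=[
        {"Name": "TableName", "Value": "my-table"},  # placeholder
        {"Name": "Operation", "Value": "Query"},
    ],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Sum"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])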
We have a DynamoDB table that we thought we'd be able to turn off and delete. We shut down the callers to the web services that queried it (and can see on the web server metrics that the callers have dropped to zero), but the AWS console is still showing Read Capacity consumption greater than zero. However, every other graph that concerns reads is showing no data: Get latency, Put latency, Query latency, Scan latency, Get records, Scan returned item count, and Query returned item count are all blank. On other tables that I know to be in use, these charts show some data > 0. On other tables that I know not to be in use, the Read Capacity graph only shows the provisioned line, no consumed line. This table is still being written to via a Lambda filtering and aggregating events from a Kinesis stream. I've reviewed the Lambda code and it doesn't specifically read anything from the table – does read capacity get consumed when the lambda updates or overwrites the value for an existing key?
How can I find out what is consuming my DynamoDb tables Read Capacity?
You cannot access the env context in matrix. You can use a job with outputs to set the matrix: env: SERVICES_JSON: | [ "a", "b", "c" ] jobs: gen-matrix: runs-on: ubuntu-latest steps: - name: Generate Matrix id: gen-matrix run: | # use heredoc with random delimiter for multiline JSON delimiter="$(tr -dc A-Za-z0-9 </dev/urandom | head -c 20)" echo "services<<$delimiter $SERVICES_JSON $delimiter" >> "$GITHUB_OUTPUT" outputs: services: ${{ steps.gen-matrix.outputs.services }} test: runs-on: ubuntu-latest needs: gen-matrix strategy: matrix: service: ${{ fromJson(needs.gen-matrix.outputs.services) }} steps: - run: echo ${{ matrix.service }}
I have a list of services which I want to pull, and for that I wanted to use a matrix. Since this list of services, defined in the SERVICES_JSON env variable, will also be used in other jobs, I would like to reuse it and convert it to a list instead of defining the same list again. name: Deployment on: workflow_dispatch: env: SERVICES_JSON: '[ "a", "b", "c" ]' jobs: runs-on: ubuntu-latest name: Pull docker images strategy: matrix: service: ${{ fromJson($SERVICES_JSON) }} steps: - name: Prepare new image name env: SERVICE: ${{ matrix.service }} run: echo "NEW_IMAGE=${DOCKER_REGISTRY}/${PROJECT}/${SERVICE}:${BUILD}" >> $GITHUB_ENV The snippet I gave - service: ${{ fromJson($SERVICES_JSON) }} - is not correct, but it's something similar to what I want to have there.
matrix in GitHub Actions: how to use a json defined in env variable as matrix list
Browsers will usually get this information through HTTP headers sent with the page. For example, the Last-Modified header tells the browser how old the page is. The browser can then revalidate its cached copy, either with a HEAD request or, more commonly in practice, with a conditional GET carrying an If-Modified-Since header; if the server reports a newer version than what the browser has in cache, the browser reloads the page. There are a bunch of other headers related to caching as well (like Cache-Control). Check out: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html (From the comments: modern browsers almost never send a HEAD request on their own; they rely on conditional requests and cache lifetimes instead.)
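A small sketch of that revalidation mechanism using Python's requests library (the URL is a placeholder): fetch once, then replay the validators the server gave us and check whether we get 304 Not Modified or a fresh 200.

import requests

url = "https://example.com/page.html"  # placeholder

first = requests.get(url)
validators = {}
if "Last-Modified" in first.headers:
    validators["If-Modified-Since"] = first.headers["Last-Modified"]
if "ETag" in first.headers:
    validators["If-None-Match"] = first.headers["ETag"]

# 304 means the cached copy is still valid; 200 means the page changed.
second = requests.get(url, headers=validators)
print(second.status_code)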
This is a dangerously easy thing I feel I should know more about - but I don't, and can't find much around.The question is:How exactly does a browser know a web page has changed?Intuitively I would say that F5 refreshes the cache for a given page, and that cache is used for history navigation only and has an expiration date - which leads me to think the browser never knows if a web page has changed, and it just reloads the page if the cache is gone --- but I am sure this is not always the case.Any pointers appreciated!
How does the browser know a web page has changed?
myteamid = teamidjson(['id']) That seems to be causing the error. The correct way to access the id key is: myteamid = teamidjson['id']
Having difficulty parsing json from GitHub api. I'm trying to populate a team with all the repos from an organisation. I'm using myteamname to obtain the teamid required for the loop which populates the team with the repo names. import json import requests mytokenid = "xxx" myorg = "xxx" myteamname = "xxx" headers = { 'Authorization': 'token %s' % mytokenid, } response = requests.get('https://api.github.com/orgs/{0}/teams/{1}'.format(myorg, myteamname), headers=headers) teamidjson = json.loads(response.text) myteamid = teamidjson(['id']) g = Github(tokenid) for repo in g.get_organization(myorg).get_repos(): myreponame = repo.name response = requests.put('https://api.github.com/teams/{0}/repos/{1}/{2}'.format(myteamid, myorg, myreponame), headers=headers) I get this error message File "githubteam.py", line 74, in <module> myteamid = teamidjson(['id']) TypeError: 'dict' object is not callable
Parsing json GitHub api with Python
sbin is not in the path when run via cron. Specify the full path to service. This is probably either /sbin/service or /usr/sbin/service. You can find the path on your system by running which service.
service service_name start When I try running this from the command line, it works. But when I try to schedule it via cron, I get an error saying /bin/sh: service: command not found
Unable to run a service command via cron
Your compare URL could be something like https://github.com/gaganmalvi/kernel_xiaomi_lime/compare/Q..02ca1a9 Q is the name of the only branch in that repository. It is used here for the HEAD of the repository. 02ca1a9 is the "Git Object ID" for the state of the repository after the last commit in Dec 2020. GitHub documentation for comparing commits
I'm trying to see all modifications made from 06e27fd143240e8e4d13b29db831bedece2bf2d3 to the latest e1c34175b5556ac5ce1e60ba56db2493dd9f6b52. I tried https://github.com/gaganmalvi/kernel_xiaomi_lime/compare/Q:e1c34175b5556ac5ce1e60ba56db2493dd9f6b52%5E%5E%5E%5E%5E...Q:06e27fd143240e8e4d13b29db831bedece2bf2d3 and vice-versa but it does not work. Also I tried https://github.com/gaganmalvi/kernel_xiaomi_lime/compare/06e27fd143240e8e4d13b29db831bedece2bf2d3%5E%5E%5E%5E%5E...Q which seems to work but brings changes from 2017, but the changes I want to see are from Dec 2020 and beyond.
How to see changes from commit x to y on github?
Try removing the table part from your --format argument, such as: docker ps --format '{{.Names}}' It should give you a simple list of container names with no table heading
docker ps --format "table {{.Names}}" outputs NAMES in the first row: root@docker-2gb-blr1-01:~# docker ps --format "table {{.Names}}" NAMES enr osticket osticket_db ... docker inspect --format '{{.Name}}' $(docker ps -q) prints / at the beginning of each container name: root@docker-2gb-blr1-01:~# docker inspect --format '{{.Name}}' $(docker ps -q) /enr /osticket /osticket_db I want to list only the names of running containers, without the header or the leading slash. Please share options for how to do this.
'docker ps' output formatting: list only names of running containers
Doing a git pull should do the right thing, as long as you haven't done git add on the files you don't want to have under git. I suggest putting the names of those files in a .gitignore. If you are running into a specific problem with using git pull, you should ask about that. I don't know much about DaftMonk, but if it generates a lot of boilerplate that shouldn't be committed, then it seems likely that is part of the build/development process you need to manage. Meaning, you would call DaftMonk after you clone your repo. Possibly with a Makefile, or whatever build tool is common in your language of choice. The idea is that generated files that can change because of a change in configuration/other source should not be modified by hand, but instead be regenerated as needed. Therefore, if DaftMonk is doing such generation, you need to incorporate that into your process.
How can I pull down a git and have it overwrite my local project ONLY where conflicts are found?E.g. I have many folders / files in my local project that are not on the git project and never will be.Ok... here is the full scenario.I used DaftMonk generator to create a fullstack boilerplate:https://github.com/DaftMonk/generator-angular-fullstackI then edited the boilerplate and created my app.Now, I want to share my code on git, for colleagues to start developing on - BUT, daftmonk generator has added several of its folders to gitignore file (node modules / dist etc). As its not good practice to check these in (Plus, it throws a wobbly since the paths are too long in node modules folder).So, I am trying to get the code working elsewhere. However, the code in the git needs all the node modules etc to work... So, I have made another install of daftmonk and am now wanting to place my git code on top of this.Where am I going wrong?
How to pull files and only override conflicts
I think that is not possible, because PVC is a namespaced resource and PV is not a namespaced resource. kubectl api-resources | grep 'pv\|pvc\|NAME' NAME SHORTNAMES APIVERSION NAMESPACED KIND persistentvolumeclaims pvc v1 true PersistentVolumeClaim persistentvolumes pv v1 false PersistentVolume So there can be multiple PVCs with the same 'name' across multiple namespaces. So when we mention the name of the PVC under claimRef, we need to mention the namespace as well.
Below is my scenario. I have an NFS setup and it will be used to create PV. and then use PVC to bind the volume. Now, Consider I want to bind particular PV/PVC together irrespective of where PVC will be created. As far as I tried I could not bind PV/PVC without bringing namespace into the picture. Since I use helm charts for deployment and the namespace can be anything (use can create/use any namespace) hence I do not want to restrict PV to look for PVC only in one namespace, rather bind to matching PVC from any namespace.nfs-pv.yamlapiVersion: v1 kind: PersistentVolume metadata: name: nfs-pv spec: capacity: storage: 1Gi volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Retain storageClassName: nfs claimRef: name: nfs-pvc namespace: default # This is something I wanna get rid off nfs: path: /apps/exports server: <nfs-server-ip>nfs-pvc.yaml#This one I should be able to create in any namespace and attach to the above PVC.apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-pvc namespace: fhir spec: volumeName: nfs-pv storageClassName: nfs accessModes: - ReadWriteMany resources: requests: storage: 1GiI have tried without giving the namespace option in PV, but it didn't work.Any help on this would be much appreciated.
is namespace mandatory while defining claimRef under k8s PersistentVolume manifest file?
"mocha: command not found" means you have to install mocha in your gitlab runner environment.test: stage: test script: - npm install --global mocha - mocha test
I want to try CI/CD. So I am working on a simple project. I wanted to run the test file. But I get the error "mocha: command not found". There is no problem when I try it in my own terminal. How can I solve this?Thanks.
mocha: command not found in GitLab
POSIX is a standard, not a specific set of code, but we can look at libc for an example. Here's what posix_memalign() initially allocates in that implementation: mem = malloc (size + 2 * alignment); With this beautiful ASCII illustration:
/*
    ______________________ TOTAL _________________________
   /                                                       \
   +---------------+-------------------------+--------------+
   |               |                         |              |
   +---------------+-------------------------+--------------+
   \____ INIT ____/ \______ RETURNED _______/ \____ END ___/
*/
It then returns to the heap the unused storage on either end of the allocation. This means that fragmentation may get worse, though the heap memory used is the same amount.
I am trying to decide if I should use memalign() over malloc() because aligned memory would make my job easier. I read the GNU documentation here (http://www.gnu.org/software/libc/manual/html_node/Aligned-Memory-Blocks.html) which mentions that The function memalign works by allocating a somewhat larger block. I want to know the exact value for that "somewhat larger block". Logically thinking the extra memory required should be equal to the the value of alignment required. But I am not sure if there is an optimization over that.
how much extra memory does posix_memalign() take?
You can do both deployment and cache invalidation with the help of aws-cli. #!/bin/bash # enable cloudfront cli aws configure set preview.cloudfront true # deploy angular bundles aws s3 sync $LOCAL s3://$S3_BUCKET \ --region=eu-central-1 \ --cache-control max-age=$CACHE_TIME # invalidate cache in cloudfront aws cloudfront create-invalidation \ --distribution-id $CLOUDFRONT_DISTRO_ID \ --paths "/*"
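If you prefer to trigger the invalidation from Python instead of the aws-cli, a hedged boto3 sketch (the distribution id is a placeholder; CallerReference just needs to be unique per request):

import time
import boto3

cf = boto3.client("cloudfront")

cf.create_invalidation(
    DistributionId="E1234567890ABC",  # placeholder
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),
    },
)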
We have our Angular2 code in S3 and we access it via CloudFront. It works fine. But after an Angular2 deployment, we want all of the old code to be invalidated from CloudFront. What are the best approaches for clearing the cache after deployment? How should we handle CloudFront caching?
How to handle cloudfront cache after deployment
Developers cannot add to this feature. Microsoft scanned over 100,000 GitHub repositories and looked at popular repositories to get these examples, which I read about a while back in "IntelliCode with API usage examples".
In Visual Studio, when you hover over System.Reflection.MethodInfo.GetCustomAttributes (see definition), it has a link at the bottom, "GitHub Examples and Documentation". When you click on that link, it opens these examples directly in Visual Studio. Does anyone know how this is implemented in the XML code docs? Because that feature would be a pretty neat improvement of our code summaries.
How can I put GitHub examples in XML code docs?
As mentioned in the other answers, the list() call is running you out of memory. Instead, first iterate over maxcoorlist in order to find out its length. Then create random numbers in the range [0, length) and add them to an index set until the length of the index set is 1000. Then iterate through maxcoorlist again and add the current value to a sample set if the current index is in your index set. EDIT An optimization is to directly calculate the length of maxcoorlist instead of iterating over it: import math n = len(array) r = 4 length = math.factorial(n) / math.factorial(r) / math.factorial(n-r)
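A sketch of the approach described above (stand-in data; math.comb needs Python 3.8+ and is equivalent to the factorial formula in the EDIT): compute how many combinations exist without building a list, pick 1000 random indices, then keep only those combinations while streaming through the generator once.

import math
import random
from itertools import combinations

array = list(range(50))   # stand-in for the coordinate data
r = 4

total = math.comb(len(array), r)   # number of combinations, no list() needed
random.seed(10)
wanted = set(random.sample(range(total), 1000))

volumesample = [combo for i, combo in enumerate(combinations(array, r)) if i in wanted]
print(len(volumesample))   # 1000, with only the sampled tuples held in memory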
OK, so I have a problem that I really need help with. My program reads values from a pdb file and stores those values in (array = []) I then take every combination of 4 from this arrangement of stored values and store this in a list called maxcoorlist. Because the list of combinations is such a large number, to speed things up I would like to simply take a sample of 1000-10000 from this list of combinations. However, in doing so I get a memory error on the very line that takes the random sample. MemoryError Traceback (most recent call last) <ipython-input-14-18438997b8c9> in <module>() 77 maxcoorlist= itertools.combinations(array,4) 78 random.seed(10) ---> 79 volumesample= random_sample(list(maxcoorlist), 1000) 80 vol_list= [side(i) for i in volumesample] 81 maxcoor=max(vol_list) MemoryError: It is important that I use random.seed() in this code as well, as I will be taking other samples with the seed.
Python Memory Error when using random.sample()
Backups are per-device. So a backup of your iPod will not be restored to your iPhone. In other words, there is no sync.
When does data get restored for an app? What if I save data in the app's document directory. Then they sync with iTunes. Now iTunes has a backup. Will that data be populated to another device when they sync that new device to their iTunes or will they just get a clean install of my app? I'm trying to figure out how to keep track of a subscription in app purchase and was wondering if I could keep record in NSUserDefaults or some other local store.
iPhone when does data get restored from backup
There are pros and cons to using lambda functions as your AppSync resolvers (although note you'll still need to invoke your lambdas from VTLs). Pros: easier to write and maintain; more powerful for marshalling and validating requests and responses; common functionality can be more DRY than is possible with VTLs (macros are not supported); more flexible debugging and logging; easier to test; better tooling and linting available; and you can support long integers in your DynamoDB table (DynamoDB number types do support long, but AppSync resolvers only support 32-bit integers; you can get around this with a lambda, for example by serializing longs to a string before transport through the AppSync resolver layer; see the currently open feature request: https://github.com/aws/aws-appsync-community/issues/21). Cons: extra latency for every invocation; cold starts mean even more latency (although this can usually be minimised by keeping your lambdas warm if this is a problem for your use case); extra cost; and extra resources for each lambda, eating into the fixed 200 limit. If you're doing a simple vanilla DynamoDB operation it's worth giving VTLs a go; the docs from AWS are pretty good for this: https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-dynamodb.html If you're doing anything mildly complex, such as marshalling fields, looping, or generally hacky non-DRY code, then lambdas are definitely worth considering for the speed of writing and maintaining your code, provided you're comfortable with the extra latency and cost.
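For the long-integer workaround mentioned above, a hedged sketch of a Python Lambda resolver that ships a value larger than 32 bits as a string (the field name and value are made up for illustration); the client parses it back into a number:

def handler(event, context):
    item = {"id": "123", "viewCount": 9_007_199_254_740_993}  # too big for a GraphQL Int
    # Serialize the long as a string so it survives the AppSync resolver layer.
    item["viewCount"] = str(item["viewCount"])
    return item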
I have been looking into AWS AppSync to create a managed GraphQL API with DynamoDB as the datastore. I know AppSync can use Apache Velocity Template Language as a resolver to fetch data from dynamoDB. However, that means I have to introduce an extra language to the programming stack, so I would prefer to write the resolvers in Javascript/Node.js Is there any downside of using a lambda function to fetch data from DynamoDB? What reasons are there to use VTL instead of a lambda for resolvers?
AWS AppSync Resolvers Lambda Function vs Velocity Template Language (VTL)
You don't need to refork again. Just add a remote (say, upstream) and fetch upstream to update your cloned repository. $ git remote add upstream <original-repo-url> $ git fetch upstream # update local with upstream $ git diff HEAD..upstream/master # see diffs between local and upstream/master (if there is no diff then both are in sync) $ git pull upstream master # pull upstream's master into local branch $ git push origin HEAD # push to your forked repo's remote branch Fetch/get the original repo's new tags: $ git fetch upstream --tags # get original repo's tags $ git push origin --tags # push to forked repo
I created the fork of some GitHub project. Then I created new branch and did a patch inside of that branch. I sent the pull request to author and he applied my patch and added some commits later. How can I synchronize my fork on GitHub with original project now? Am I to delete my fork on GitHub and create new fork for each my patch each time?
How to synchronize fork with original GitHub project?
There are numerous ways to do this; in the end, we went for changing the permissions (READ/WRITE/ADMIN) on (team, repository) combinations via the REST API. That's not to say that webhooks, enabling/disabling branch restrictions, or a pre-merge check would not work, however.
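A hedged sketch of that approach with Python and the GitHub REST API (the org, team, repo and token are placeholders, and the endpoint shown is the public github.com form; a GitHub Enterprise instance would use its own API base URL): drop each team to read-only during the freeze, then restore write access afterwards.

import requests

token = "ghp_xxx"  # placeholder token with admin:org scope
org, team_slug, repo = "my-org", "my-team", "my-repo"  # placeholders

url = f"https://api.github.com/orgs/{org}/teams/{team_slug}/repos/{org}/{repo}"
headers = {
    "Authorization": f"token {token}",
    "Accept": "application/vnd.github.v3+json",
}

# Freeze: read-only ("pull"). Thaw: back to "push" (write).
requests.put(url, headers=headers, json={"permission": "pull"})
# ... release tool runs ...
requests.put(url, headers=headers, json={"permission": "push"})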
Here, we use GitHub Enterprise. We have an issue with people accidentally merging PRs during code freeze windows, which interferes with our in-house release tool. It would be nice if we could find a way to prevent this.What I'm trying to do, is find a way to disable the big green Merge button on each repo belonging to our Organisation within GitHub while our release tool is running, and then reenable it afterwards. Ideally, this would be scripted, since we have control over our release tool.How might this be accomplished?
GitHub Enterprise: enforce code freeze during release?
Based on your use case, you can utilize the service discovery feature of ECS. Service discovery gives you an endpoint (URL) for services to communicate with each other privately. With service discovery, ECS takes care of updating the dynamic IP and port of containers in a DNS record every time a new task is started or stopped. Reference doc: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html
I didn't find a solution for how two containers in separate task definitions can communicate with each other, so I followed the answer to link the two containers in the same task definition, which works well. Thanks for that answer first. However, when I read the ECS documentation, I found the following paragraph, which confuses me: Containers that are collocated on a single container instance may be able to communicate with each other without requiring links or host port mappings. Network isolation is achieved on the container instance using security groups and VPC settings. But I wasn't able to find further documentation on how to achieve this. I know that the docker community is moving to --network and has deprecated --link. I'm not sure if AWS has made some change to also enable this somehow. Could someone help me understand how to achieve it? Because the container name and IP are always dynamically assigned by ECS, how can I communicate from one container to another container in a different task definition without links and port mappings?
How to make containers communicate with each other in ECS without link and port mapping?
You can use directives like this to allow an IP range for a certain URL: # set env variable if URL is /rest or /rest/ SetEnvIf Request_URI "/rest(/.*)?$" rest_uri Order deny,allow # first deny all Deny from all # then allow if env var is not set Allow from env=!rest_uri # also allow your IP range Allow from 10.1.0.0/16 (From the comments: if the rule appears to allow more, or less, than expected, check which client IP Apache actually logs for those requests; in the asker's case a load balancer was presenting a different IP, and the rule worked once that was sorted out with the hosting provider.)
I would like to block a path on my site using the .htaccess configuration. The idea is that only a specific set of IPs can access that specific path from the URL. Note: it's a path, not a page or directory. We are trying to shield off a web service, so there will be only POST calls to the URLs. I would like the URL example.com/rest to be blocked, and everything behind that URL, based on IP. So example.com/rest/test_service and example.com/rest/test_service/read should be blocked. All other paths from the application should remain functional. I've tried the following but it doesn't seem to work; not a single page is accessible like this: SetEnvIf Request_URI "/rest$" rest_uri <RequireAll> Require env rest_uri <RequireAny> Require ip XXX.XXX.XXX.XXX </RequireAny> </RequireAll> I've tried different things but none of them seem to work. Any help is appreciated.
.htaccess path only accessible by ip
I had the same problem due to restclient misconfiguration. Have a look at how the restclient is created and configured in the example here.
I have made a kubernetes operator using this framework https://github.com/operator-framework/operator-sdk in which I have a small custom resource definition defined and a clientset generated. I create a client for this custom resource doing: imports are ( "k8s.io/client-go/kubernetes" "k8s.io/client-go/rest" ) config, err := rest.InClusterConfig() kubernetesClientset := kubernetes.NewForConfig(config) // my generated CR clientset v1alpha1.New(kubernetesClientset.RESTClient()) So I simply use the config kubernetes gives to pods and created a default k8s REST clientset and use that in my custom resource's clientset (Is that even a good practice?). However when I try to use my custom resource clientset and try to create an instance of the custom resource, the client errors with encoding is not allowed for this codec: *versioning.codec (I guess it comes from here: https://github.com/kubernetes/apimachinery/blob/master/pkg/runtime/codec.go#L104). What does that exactly mean? I thought the generated client is aware of the custom resource? Thanks for help...
encoding is not allowed for this codec: *versioning.codec
Hi, you can activate the "required string" functionality of web monitoring. It uses a regular expression pattern. Ciao! Required string: Required regular expression pattern. Unless retrieved content (HTML) matches the required pattern the step will fail. If empty, no check on required string is performed. For example: Homepage of Zabbix Welcome.admin Note: Referencing regular expressions created in the Zabbix frontend is not supported in this field. User macros and {HOST.} macros are supported. https://www.zabbix.com/documentation/5.4/en/manual/web_monitoring
I am using Zabbix 5.4.3 to monitor all of my company hosts.I want to monitor a local website address (eg.https://172.30.200.1:44443/login) which is our firewall webpage.It has got two linked WANs, one with our primary public IP and another which is a 4G backup connection without public IP (random access IP).When the connection of the primary one goes down, the firewall automatically switches from one WAN to the another and the IP changes.On the firewall webpage, the current used IP is always showed and updated. (see image for reference)Is there a way to set a trigger which shows that the IP has changed from our primary to the secondary random one based on the checks on this string? I need simply a trigger which shows "IP CHANGED FROM THE PRIMARY TO THE OTHER" and nothing more.I am able to perform a webscenario configuration inside the firewall host setup in Zabbix (with also a login), but I can't understand how to setup a trigger of this kind.Let me know guys.
Zabbix 5.4.3 - How to monitor a string in a webpage and define a trigger when it changes
It's not possible to filter objects by regular expression. It is possible to filter objects by label. This is the code that will filter by label: labelSelector := labels.Set(map[string]string{"mylabel": "ourdaomain1"}).AsSelector() informer := cache.NewSharedIndexInformer( &cache.ListWatch{ ListFunc: func(options meta_v1.ListOptions) (k8sruntime.Object, error) { options.LabelSelector = labelSelector.String() return client.CoreV1().ConfigMaps(nameSpace).List(options) }, WatchFunc: func(options meta_v1.ListOptions) (watch.Interface, error) { options.LabelSelector = labelSelector.String() return client.CoreV1().ConfigMaps(nameSpace).Watch(options) }, }, &api_v1.ConfigMap{}, 0, //Skip resync cache.Indexers{}, ) Another thing that is important to remember is how you add new objects to k8s. I was doing something like kubectl --namespace=ourdaomain1 create configmap config4 -f ./config1.yaml This is not good. It overwrites all the fields in the config map and puts the whole file content into the data of the new object. The proper way is kubectl create -f ./config1.yaml
I'm writing a custom controller for Kubernetes. I'm creating a shared informer: cache.NewSharedIndexInformer( &cache.ListWatch{ ListFunc: func(options meta_v1.ListOptions) (k8sruntime.Object, error) { return client.CoreV1().ConfigMaps(nameSpace).List(options) }, WatchFunc: func(options meta_v1.ListOptions) (watch.Interface, error) { return client.CoreV1().ConfigMaps(nameSpace).Watch(options) }, }, &api_v1.ConfigMap{}, 0, //Skip resync cache.Indexers{}, ) I have the option to add a filtering function to the callback functions to further decrease the number of objects I'm working with. Something like this: options.FieldSelector = fields.OneTermEqualSelector("metadata.name", nodeName).String() I would like to filter out objects by regular expression, or by some label at least. Unfortunately the documentation is not helping, and I could not find anything except the tests for the code itself. How do I apply a regular expression to the filtering mechanism? Where do I get some examples on this issue?
kubernetes filter objects in Informer
Is your desired output something like $host/mnt/synology/Torrents/Games/, where $host is the hostname of each one of these IPs (192.168.1.40 192.168.1.41 192.168.1.42 192.168.1.43)? When building the path for mkdir you are doing $(hostname), but that command's output will be your local machine's name; it won't run on each host. If you want each host's name, you should launch that command through ssh on each IP and retrieve the output.
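A minimal sketch of that suggestion applied to the question's loop: get each machine's hostname over ssh and use it to build a per-machine directory. The user, key path and rsync source are copied from the question; putting the hostname under the NAS path (rather than in front of it) and dropping the dry-run -n flag are assumptions.

#!/bin/bash
ip=(192.168.1.40 192.168.1.41 192.168.1.42 192.168.1.43)
NAS=/mnt/synology/Torrents/Games/
for i in "${ip[@]}"; do
    # run hostname on the remote machine, not locally
    remote_host=$(ssh -i "$HOME/.ssh/id_rsa" "victor@$i" hostname)
    dest="${NAS}${remote_host}"
    if [ ! -d "$dest" ]; then
        echo "$dest does not exist, creating..."
        mkdir -p "$dest"
    fi
    sudo rsync -avzP -e "ssh -i $HOME/.ssh/id_rsa" "victor@$i:/etc" "$dest/"
done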
I'm trying to learn to write some simple bash scripts and I want to create a backup script that will use rsync to fetch predetermined directories and sync them to a backup machine. Here is the code: #!/bin/bash #Specify the hosts ip=(192.168.1.40 192.168.1.41 192.168.1.42 192.168.1.43) #currently unused webdirs=(/etc/nginx/sites-available/ /var/www/ghost) #Directory to store everything NAS=/mnt/synology/Torrents/Games/ #Remote-hosts to rsync from for i in "${ip[@]}" do HOSTNAME=$(hostname) NAS2=$HOSTNAME$NAS if [ ! -d "$NAS2" ]; then echo $NAS2 "does not exist, creating..." mkdir -p $NAS2 else echo "inside the else" sudo rsync -anvzP -e "ssh -i $HOME/.ssh/id_rsa" victor@$i:/etc $NAS2/ fi done; It's not done but I've run into a problem: I can't figure out how to create new directories for each machine. Right now it's only creating the directory for my web server. EDIT: I solved it by using ssh and command substitution; all I did was this: HOSTNAME=$(ssh user@$i "hostname") The variable $HOSTNAME will change after each iteration. Exactly what I want.
How do I loop through an array of IP addresses to get the hostname of each machine in bash?
Your image doesn't have a command called echo. A FROM scratch image contains absolutely nothing at all: no shells, no libraries, no system programs, nothing. The two most common uses for it are to build a base image from a tar file or to build an extremely minimal image from a statically linked binary; both are somewhat advanced uses. Usually you'll want to start from an image that contains a more typical set of operating system tools. On a Linux base (where I'm more familiar), ubuntu and debian are common, alpine as well (though it has some occasional compatibility issues). @gp. suggests FROM microsoft/windowsservercore in a comment, and that's probably a good place to start for a Windows container.
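A sketch of the fix for the question's example: replace FROM scratch with the Windows Server Core base image mentioned above and rebuild. The run command and the base image reference are assumptions; on current Docker versions this image is pulled from mcr.microsoft.com/windows/servercore, and you should pick a tag matching your host's Windows version.

# Dockerfile -- a real Windows base image instead of scratch
FROM microsoft/windowsservercore
RUN echo "Hello World - Dockerfile"

# build and run it (the same commands work from a PowerShell prompt)
docker build -t imagename .
docker run --rm imagename cmd /c echo "hello from the container"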
I have a docker image with the following Dockerfile code: FROM scratch RUN echo "Hello World - Dockerfile" And I build my image in a PowerShell prompt like this: docker build -t imagename . Here is what happens when I build my image: Sending build context to Docker daemon 194.5MB Step 1/2 : FROM scratch ---> Step 2/2 : RUN echo "Hello World - Dockerfile" ---> Running in 42d5e5add10e invalid reference format I want to run my image in a Windows container. What is missing to make it work? Thanks
Docker: RUN echo command doesn't work in my Windows container
It looks like what might be going on is that the default.conf file in the nginx image is taking over the / location. Your nginx run command has: -v $ROOT/web/flask/conf/nginx-default.conf:/etc/nginx/conf.d/default \ This should be overwriting default.conf instead of just default. As it currently stands, it just adds another blank default file and leaves the stock default.conf, which has a location for /. Your static route does work because there is an explicit route in nginx-flask.conf to /static and you call the file explicitly. You get a 403 on the / location because directory indexes are disabled by default (controlled by the autoindex option).
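A sketch of the corrected run command: the only change from the question is the target path of the second -v flag, so the image's bundled default.conf really gets replaced; the remaining flags are copied from the question as-is.

docker run \
    -v $ROOT/web/flask/conf/nginx-flask.conf:/etc/nginx/conf.d/nginx-flask.conf \
    -v $ROOT/web/flask/conf/nginx-default.conf:/etc/nginx/conf.d/default.conf \
    -d -p 127.0.0.1:80:80 \
    --volumes-from nginx_data \
    --link flask_service:flask_service_alias \
    --name nginx_service nginx:1.9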
I am trying to connect docker nginx with docker flask. Here is the structure of my project: . ├── storage │   ├── nginx │   │   └── static │   │   └── image.gif └── web └── flask ├── app │   ├── run.py │   └── templates │   └── index.html ├── conf │   ├── nginx-default.conf │   ├── nginx-flask.conf │   └── requirements.txt └── Dockerfile Although curl 127.0.0.1:50 and curl 127.0.0.1:80/static/image.gif work fine, I get a '403 Forbidden' error when I do curl 127.0.0.1. More specifically, nginx gives the following error: 2016/03/05 17:54:37 [error] 8#8: *1 directory index of "/usr/share/nginx/html/" is forbidden, client: 172.17.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost" 172.17.0.1 - - [05/Mar/2016:17:54:37 +0000] "GET / HTTP/1.1" 403 169 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:44.0) Gecko/20100101 Firefox/44.0" "-" I create the containers like this: docker create \ -v $ROOT/web/flask/app/:/web/flask/app/ \ --name flask_data flask:0.1 docker run \ -d -p 127.0.0.1:50:50 \ --volumes-from flask_data \ --name flask_service flask:0.1 docker create \ -v $ROOT/storage/nginx/:/usr/share/nginx/html/ \ --name nginx_data nginx:1.9 docker run \ -v $ROOT/web/flask/conf/nginx-flask.conf:/etc/nginx/conf.d/nginx-flask.conf \ -v $ROOT/web/flask/conf/nginx-default.conf:/etc/nginx/conf.d/default \ -d -p 127.0.0.1:80:80 \ --volumes-from nginx_data \ --link flask_service:flask_service_alias \ --name nginx_service nginx:1.9 where, Dockerfile is: docker flask0 and docker flask1 is: docker flask2 docker flask3 is empty, and docker flask4 is: docker flask5 any ideas?
nginx error 403 - directory index is forbidden
You can have a script which does that by: getting the last commit of each branch, and checking that this commit is part of the history of master. That would delete rebased branches which have been merged into master. The core of such a script looks like: last_commit_msg="$(git log --oneline --format=%f -1 $branch)" if [[ "$(git log --oneline --format=%f | grep $last_commit_msg | wc -l)" -eq 1 ]]; then
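A sketch of a complete cleanup script built around those two lines. It assumes the subject-line comparison is reliable enough for your history (it can misfire if two unrelated commits share a subject), so it only prints candidates; uncomment the delete line once you trust it.

#!/bin/bash
# run from the repository root, with master checked out and up to date
git checkout master >/dev/null 2>&1
for branch in $(git for-each-ref --format='%(refname:short)' refs/heads/ | grep -v '^master$'); do
    # subject of the last commit on the topic branch
    last_commit_msg="$(git log --oneline --format=%f -1 "$branch")"
    # does a commit with the same subject exist in master's history?
    if [[ "$(git log --oneline --format=%f | grep -cF "$last_commit_msg")" -ge 1 ]]; then
        echo "branch '$branch' appears to be merged/rebased into master"
        # git branch -D "$branch"
    fi
done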
In our team we keep a fast-forward-only merge policy for the master and development branches in order to prevent merge commit hell. I do not delete my topic branches once they are merged (or rebased and then merged), so I end up with tons of these. I can delete some with: git branch --merged This will only show me those which haven't been rebased prior to merging. There are some of these, and I am able to clean them up. I am looking for some strategy, script or hint for how to deal with the rebased ones. There must be a script that finds all the commits from the topic branch in master in a loop or something. Please share ;-) Thanks
Mass deleting local branches that have been rebased and merged
If it is a public cluster, where each node in the cluster has a public IP address, the outgoing IP will be the address of the node the pod is on. If it is a private cluster, you can deploy a NAT gateway for all the nodes and specify static IP addresses. You can use this Terraform module for a private cluster: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/modules/private-cluster plus a Cloud NAT from here: https://cloud.google.com/nat/docs/gke-example#terraform
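For the private-cluster route, a rough gcloud sketch of what those links set up with Terraform: reserve a static external address and route the cluster network's egress through Cloud NAT, so traffic to Google APIs (and everything else) leaves from one known IP you can allow-list. Region, network and resource names here are placeholders.

# reserve a static external IP to present to the outside world
gcloud compute addresses create k8s-egress-ip --region=europe-west1
# Cloud NAT requires a Cloud Router on the cluster's VPC network
gcloud compute routers create k8s-nat-router --network=default --region=europe-west1
# NAT all subnet ranges through the reserved address
gcloud compute routers nats create k8s-nat \
    --router=k8s-nat-router --region=europe-west1 \
    --nat-all-subnet-ip-ranges \
    --nat-external-ip-pool=k8s-egress-ip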
Pod A is behind a ClusterIP service type, so incoming requests from external resources are not allowed. Pod A executes outgoing requests to 3rd-party services (such as Google APIs), and I want to specify the IP address these requests come from on the Google side, for security reasons. Is there a way to find the IP address this pod uses for outgoing HTTP requests?
Kubernetes Pod IPv4 address for outgoing HTTP requests
I figured it out; my request was completely incorrect. This one works: https://api.github.com/search/repositories?q=goit-js+user:realtril&per_page=1000
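A sketch of the same call from a shell, using the search endpoint as above; jq is only used here to show the count and the matching names, and note that GitHub caps per_page at 100, so larger values are clamped.

# search repositories belonging to the user whose names match "goit-js"
curl -s "https://api.github.com/search/repositories?q=goit-js+user:realtril&per_page=100" \
  | jq '{total: .total_count, names: [.items[].name]}'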
What I want to do is get the same filtering result as I get on github.com. As you can see it's 13. But when I make the request like this: const ghReq = await fetch( 'https://api.github.com/users/realtril/repos?q=goit-js&per_page=100' ); const ghData = await ghReq.json(); console.log(ghData); I am getting 53 items instead of 13. So my question is: what is the correct way to filter repos by name?
Filtering by name via GitHub API is not giving the correct result
Use the reset subcommand: git checkout A git reset --hard B git push --force github As a side note, you should be careful when using git reset while a branch has already been pushed elsewhere. This may cause trouble for those who have already checked out your changes: they may need to force-pull, and anything built on the discarded head is orphaned. Also note that if there are any commits on branch A that are not on B, they will be lost by git reset --hard B; in that case use git rebase to relocate the branch instead.
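A sketch of the whole operation, assuming the remote really is called github as in the commands above: first list any commits that exist only on A (these would be lost by the reset), then move A back to B and force-push.

# commits reachable from A but not from B -- these would be thrown away
git log --oneline B..A
# move local branch A back to where B points
git checkout A
git reset --hard B
# rewrite the remote branch to match (coordinate with collaborators first)
git push --force github A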
The title is not very clear. What I actually need to do often is the following: let's say I have a development going on with several commits c1, c2, ... and 3 branches A, B, C: c1--c2--c3--(B)--c4--(A,C) Branches A and C are at the same commit. Now I want branch A to go back to where B is, so that it looks like this: c1--c2--c3--(A,B)--c4--(C) What's important is that this has to happen both locally and on GitHub.
How to move a branch backwards in git?
The problem had to do with kubeadm not installing a CNI-compatible networking solution out of the box; therefore, without this step the Kubernetes nodes/master are unable to establish any form of communication. The following Ansible task addressed the issue: - name: kubernetes.yml --> Install Flannel shell: kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml become: yes environment: KUBECONFIG: "/etc/kubernetes/admin.conf" when: inventory_hostname in (groups['masters'] | last)
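Outside of Ansible, the same fix is a single kubectl apply against the master's admin kubeconfig; a sketch using the Flannel manifest pinned in the task above (the last two commands only confirm that the nodes become Ready once the CNI is running):

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
# nodes should move from NotReady to Ready once the flannel pods are up
kubectl get nodes -o wide
kubectl -n kube-system get pods | grep -i flannel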
I have set up my master node and I am trying to join a worker node as follows: kubeadm join 192.168.30.1:6443 --token 3czfua.os565d6l3ggpagw7 --discovery-token-ca-cert-hash sha256:3a94ce61080c71d319dbfe3ce69b555027bfe20f4dbe21a9779fd902421b1a63 However, the command hangs forever in the following state: [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/ Since this is just a warning, why does it actually fail? Edit: I noticed the following in my /var/log/syslog: Mar 29 15:03:15 ubuntu-xenial kubelet[9626]: F0329 15:03:15.353432 9626 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory Mar 29 15:03:15 ubuntu-xenial systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a Mar 29 15:03:15 ubuntu-xenial systemd[1]: kubelet.service: Unit entered failed state.
Joining cluster takes forever
If each of those String arrays is big enough - and it appears you do want to store them - have you considered SQLite? SharedPreferences is most effective for storing primitive data in key-value pairs. Check this link - it has a neat comparison of the options you have: http://developer.android.com/guide/topics/data/data-storage.html
In my app I have 5 String arrays that represent different fields of objects, e.g. String_A[1], String_B[1], String_C[1], String_D[1], String_E[1] - all are attributes of the same object (which is not really an object). Now I want to store those in order to be able to use them in a new activity that I am creating. Since you are not able to pass objects around, I thought that I should save them in SharedPreferences. My question is: should I save them as separate strings, or create a new class with all those fields and then serialize the objects? Which is the best way in terms of memory usage? In fact, is there any other way I might achieve similar functionality? Thanks in advance, Mike
Most effective way of storing Strings in Android
Here is the information you're looking for: https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html#S3-backup-limitations Backup size limitations: AWS Backup for Amazon S3 allows you to automatically back up and restore S3 buckets up to 1 PB in size and containing fewer than 24 million objects.
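Since the limit is on object count as well as total size, it may be worth measuring both for the failing bucket before contacting AWS support; a sketch with the AWS CLI (the bucket name is a placeholder, and listing a large bucket can take a while):

# prints "Total Objects" and "Total Size" for the whole bucket
aws s3 ls s3://my-big-bucket --recursive --summarize | tail -n 2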
I am using AWS Backup to back up S3 buckets. One of the buckets is about 190GB (the biggest of the buckets I am trying to back up) and it is the only bucket that the backup job fails on, with the error message:Bucket [Bucket name] is too large, please contact AWS for support The backup job failed to create a recovery point for your resource [Bucket ARN] due to missing permissions on role [role ARN]As you can see, these are two error messages concatenated together (probably an AWS bug), but I think that the second message is incorrect, because all the rest of the buckets were backed up successfully with the same permissions, and they are configured the same way. Thus, I think the first message is the issue. I was wondering what the size limit is for AWS Backup for S3. I took a look at the AWS Backup quotas page and there was no mention of a size limit. How do I fix this error?
AWS Backup for S3 buckets - what is the size limit?