doc_4300
Bitmap bitmap; View v1 = MyView.getRootView(); v1.setDrawingCacheEnabled(true); bitmap = Bitmap.createBitmap(v1.getDrawingCache()); v1.setDrawingCacheEnabled(false); and it works great, but only when there is an activity. How can I take a screenshot from a service? My goal is to take a screenshot once an hour, i.e., at 12, then at 1, then at 2, and so on. A: To capture a screenshot of your activity you need a View of that activity, which is not present in your service. So you have to set up a TimerTask that calls your activity every hour; the activity responds with its currently visible view, and you can capture the screenshot from that. (I think this is the only solution to your problem.) Or, if you want to take a screenshot of the current device screen (any application), then you need root permission: read the framebuffer, which gives you the raw data of the current screen, then convert it to a bitmap or any picture file; you can do that in your service. A: Android Screenshot Library (ASL) provides means for taking snapshots of the phone's screen without the need for signing your application or having privileged (root) access to the Android system. Click here for ASL
doc_4301
UIStoryboard *storyboard = [UIStoryboard storyboardWithName:@"myStoryboard" bundle:nil]; UIViewController *viewController = [storyboard instantiateViewControllerWithIdentifier:@"myVC"]; UINavigationController *myNavigationController = [[UINavigationController alloc] initWithRootViewController:viewController]; [myNavigationController pushViewController:viewController animated:NO]; With this I get the error message: ...is pushing the same view controller instance ... more than once which is not supported and is most likely an error in the application Now, I understand that somewhere in the app the same viewController has been (or can be?) pushed, and that code line needs to be removed or otherwise dealt with. Thing is, I'm working in a huge project and have no idea where that code line might be. So, the question is: how do I deal with this? Can I pop the pushed viewController programmatically before pushing the "correct" viewController? Or is there some way to locate the push that is causing the problem? Or can I solve it in some other way? A: The navigation controller maintains a stack of view controllers. If you walk the stack you can see what type each object is. You can then do a simple string compare of the class type names.
doc_4302
WARNING: The conda.compat module is deprecated and will be removed in a future release. And after answering yes it throws an MD5 sum error. MD5MismatchError: Conda detected a mismatch between the expected content and downloaded content for url 'https://repo.anaconda.com/pkgs/main/osx-64/conda-4.6.14-py37_0.tar.bz2' Is there any workaround for this?
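The usual workaround is to clear conda's package cache (e.g. conda clean --tarballs, or deleting the cached .tar.bz2 under the pkgs directory) and retry, since the mismatch typically means a corrupted or truncated download. As an illustration of what conda is checking, here is a minimal Python sketch of the verification step; the file path and expected digest below are placeholders, not taken from the question:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 digest of a file, reading in chunks to bound memory use."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path, expected_md5):
    """Return True if the downloaded file matches the digest the server advertised."""
    return md5_of_file(path) == expected_md5.lower()
```

If verify_download returns False for a cached tarball, deleting that file and re-downloading is the equivalent of what the conda clean workaround achieves.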
doc_4303
2020-01-27T11:39:57.597570+00:00 app[web.1]: backlog: 2048 2020-01-27T11:39:57.597572+00:00 app[web.1]: workers: 1 2020-01-27T11:39:57.597590+00:00 app[web.1]: worker_class: sync 2020-01-27T11:39:57.597592+00:00 app[web.1]: threads: 1 2020-01-27T11:39:57.597594+00:00 app[web.1]: worker_connections: 1000 2020-01-27T11:39:57.597596+00:00 app[web.1]: max_requests: 0 2020-01-27T11:39:57.597598+00:00 app[web.1]: max_requests_jitter: 0 2020-01-27T11:39:57.597600+00:00 app[web.1]: timeout: 30 2020-01-27T11:39:57.597602+00:00 app[web.1]: graceful_timeout: 30 2020-01-27T11:39:57.597604+00:00 app[web.1]: keepalive: 2 2020-01-27T11:39:57.597606+00:00 app[web.1]: limit_request_line: 4094 2020-01-27T11:39:57.597608+00:00 app[web.1]: limit_request_fields: 100 2020-01-27T11:39:57.597610+00:00 app[web.1]: limit_request_field_size: 8190 2020-01-27T11:39:57.597612+00:00 app[web.1]: reload: False 2020-01-27T11:39:57.597614+00:00 app[web.1]: reload_engine: auto 2020-01-27T11:39:57.597615+00:00 app[web.1]: spew: False 2020-01-27T11:39:57.597617+00:00 app[web.1]: check_config: False 2020-01-27T11:39:57.597619+00:00 app[web.1]: preload_app: True 2020-01-27T11:39:57.597621+00:00 app[web.1]: sendfile: None 2020-01-27T11:39:57.597623+00:00 app[web.1]: chdir: /app 2020-01-27T11:39:57.597625+00:00 app[web.1]: daemon: False 2020-01-27T11:39:57.597627+00:00 app[web.1]: raw_env: [] 2020-01-27T11:39:57.597629+00:00 app[web.1]: pidfile: None 2020-01-27T11:39:57.597630+00:00 app[web.1]: worker_tmp_dir: None 2020-01-27T11:39:57.597632+00:00 app[web.1]: user: 52893 2020-01-27T11:39:57.597634+00:00 app[web.1]: group: 52893 2020-01-27T11:39:57.597636+00:00 app[web.1]: umask: 0 2020-01-27T11:39:57.597638+00:00 app[web.1]: initgroups: False 2020-01-27T11:39:57.597640+00:00 app[web.1]: tmp_upload_dir: None 2020-01-27T11:39:57.597642+00:00 app[web.1]: secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'} 2020-01-27T11:39:57.597645+00:00 app[web.1]: 
forwarded_allow_ips: ['*'] 2020-01-27T11:39:57.597646+00:00 app[web.1]: accesslog: - 2020-01-27T11:39:57.597648+00:00 app[web.1]: access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s" 2020-01-27T11:39:57.597650+00:00 app[web.1]: errorlog: - 2020-01-27T11:39:57.597652+00:00 app[web.1]: loglevel: debug 2020-01-27T11:39:57.597654+00:00 app[web.1]: capture_output: False 2020-01-27T11:39:57.597656+00:00 app[web.1]: logger_class: gunicorn.glogging.Logger 2020-01-27T11:39:57.597658+00:00 app[web.1]: logconfig: None 2020-01-27T11:39:57.597660+00:00 app[web.1]: syslog_addr: udp://localhost:514 2020-01-27T11:39:57.597662+00:00 app[web.1]: syslog: False 2020-01-27T11:39:57.597665+00:00 app[web.1]: syslog_prefix: None 2020-01-27T11:39:57.597667+00:00 app[web.1]: syslog_facility: user 2020-01-27T11:39:57.597669+00:00 app[web.1]: enable_stdio_inheritance: False 2020-01-27T11:39:57.597671+00:00 app[web.1]: statsd_host: None 2020-01-27T11:39:57.597673+00:00 app[web.1]: statsd_prefix: 2020-01-27T11:39:57.597674+00:00 app[web.1]: proc_name: None 2020-01-27T11:39:57.597677+00:00 app[web.1]: default_proc_name: grad.wsgi:application 2020-01-27T11:39:57.597679+00:00 app[web.1]: pythonpath: None 2020-01-27T11:39:57.597681+00:00 app[web.1]: paste: None 2020-01-27T11:39:57.597683+00:00 app[web.1]: on_starting: <function OnStarting.on_starting at 0x7fbdff281ae8> 2020-01-27T11:39:57.597685+00:00 app[web.1]: on_reload: <function OnReload.on_reload at 0x7fbdff281c80> 2020-01-27T11:39:57.597687+00:00 app[web.1]: when_ready: <function WhenReady.when_ready at 0x7fbdff281e18> 2020-01-27T11:39:57.597689+00:00 app[web.1]: pre_fork: <function Prefork.pre_fork at 0x7fbdff28b048> 2020-01-27T11:39:57.597691+00:00 app[web.1]: post_fork: <function Postfork.post_fork at 0x7fbdff28b1e0> 2020-01-27T11:39:57.597693+00:00 app[web.1]: post_worker_init: <function PostWorkerInit.post_worker_init at 0x7fbdff28b378> 2020-01-27T11:39:57.597695+00:00 app[web.1]: worker_int: <function 
WorkerInt.worker_int at 0x7fbdff28b510> 2020-01-27T11:39:57.597697+00:00 app[web.1]: worker_abort: <function WorkerAbort.worker_abort at 0x7fbdff28b6a8> 2020-01-27T11:39:57.597699+00:00 app[web.1]: pre_exec: <function PreExec.pre_exec at 0x7fbdff28b840> 2020-01-27T11:39:57.597700+00:00 app[web.1]: pre_request: <function PreRequest.pre_request at 0x7fbdff28b9d8> 2020-01-27T11:39:57.597702+00:00 app[web.1]: post_request: <function PostRequest.post_request at 0x7fbdff28bae8> 2020-01-27T11:39:57.597704+00:00 app[web.1]: child_exit: <function ChildExit.child_exit at 0x7fbdff28bc80> 2020-01-27T11:39:57.597706+00:00 app[web.1]: worker_exit: <function WorkerExit.worker_exit at 0x7fbdff28be18> 2020-01-27T11:39:57.597708+00:00 app[web.1]: nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x7fbdff291048> 2020-01-27T11:39:57.597710+00:00 app[web.1]: on_exit: <function OnExit.on_exit at 0x7fbdff2911e0> 2020-01-27T11:39:57.597712+00:00 app[web.1]: proxy_protocol: False 2020-01-27T11:39:57.597714+00:00 app[web.1]: proxy_allow_ips: ['127.0.0.1'] 2020-01-27T11:39:57.597716+00:00 app[web.1]: keyfile: None 2020-01-27T11:39:57.597718+00:00 app[web.1]: certfile: None 2020-01-27T11:39:57.597720+00:00 app[web.1]: ssl_version: 2 2020-01-27T11:39:57.597722+00:00 app[web.1]: cert_reqs: 0 2020-01-27T11:39:57.597724+00:00 app[web.1]: ca_certs: None 2020-01-27T11:39:57.597726+00:00 app[web.1]: suppress_ragged_eofs: True 2020-01-27T11:39:57.597728+00:00 app[web.1]: do_handshake_on_connect: False 2020-01-27T11:39:57.597730+00:00 app[web.1]: ciphers: TLSv1 2020-01-27T11:39:57.597736+00:00 app[web.1]: raw_paste_global_conf: [] 2020-01-27T11:39:57.597990+00:00 app[web.1]: Traceback (most recent call last): 2020-01-27T11:39:57.597998+00:00 app[web.1]: File "/app/.heroku/python/bin/gunicorn", line 11, in <module> 2020-01-27T11:39:57.598184+00:00 app[web.1]: sys.exit(run()) 2020-01-27T11:39:57.598190+00:00 app[web.1]: File 
"/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 74, in run 2020-01-27T11:39:57.598330+00:00 app[web.1]: WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run() 2020-01-27T11:39:57.598336+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/base.py", line 203, in run 2020-01-27T11:39:57.598498+00:00 app[web.1]: super(Application, self).run() 2020-01-27T11:39:57.598504+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/base.py", line 72, in run 2020-01-27T11:39:57.598635+00:00 app[web.1]: Arbiter(self).run() 2020-01-27T11:39:57.598641+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 60, in __init__ 2020-01-27T11:39:57.598774+00:00 app[web.1]: self.setup(app) 2020-01-27T11:39:57.598780+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 120, in setup 2020-01-27T11:39:57.598919+00:00 app[web.1]: self.app.wsgi() 2020-01-27T11:39:57.598924+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/base.py", line 67, in wsgi 2020-01-27T11:39:57.599052+00:00 app[web.1]: self.callable = self.load() 2020-01-27T11:39:57.599058+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 65, in load 2020-01-27T11:39:57.599182+00:00 app[web.1]: return self.load_wsgiapp() 2020-01-27T11:39:57.599187+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp 2020-01-27T11:39:57.599313+00:00 app[web.1]: return util.import_app(self.app_uri) 2020-01-27T11:39:57.599319+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/util.py", line 352, in import_app 2020-01-27T11:39:57.599526+00:00 app[web.1]: __import__(module) 2020-01-27T11:39:57.599554+00:00 app[web.1]: ModuleNotFoundError: No module named 'grad' 
2020-01-27T11:40:01.000000+00:00 app[api]: Build succeeded 2020-01-27T11:40:13.471679+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=graduateproofreading.herokuapp.com request_id=013c7a9d-29af-445d-8417-3ca07ceeb6bb fwd="86.132.11.140" dyno= connect= service= status=503 bytes= protocol=https 2020-01-27T11:40:13.643175+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=graduateproofreading.herokuapp.com request_id=9ba58cb3-7954-4c37-8d4a-23112dabe8f5 fwd="86.132.11.140" dyno= connect= service= status=503 bytes= protocol=https I guess that this is the operative line in that mass of confusing code 2020-01-27T11:39:57.599554+00:00 app[web.1]: ModuleNotFoundError: No module named 'grad', but I'm quite obviously no expert... My Procfile looks like this: web: gunicorn grad.wsgi:application --log-file - --log-level debug --preload --workers 1 grad.wsgi looks like this: import os from django.core.wsgi import get_wsgi_application os.environ.setdefault("DJANGO_SETTINGS_MODULE", "grad.settings") application = get_wsgi_application() My folder structure (if needed): ├── CNAME ├── get-pip.py ├── gpproject │   ├── db.sqlite3 │   ├── gp.sublime-project │   ├── gp.sublime-workspace │   ├── grad │   │   ├── __init__.py │   │   ├── __pycache__ │   │   │   ├── __init__.cpython-35.pyc │   │   │   ├── settings.cpython-35.pyc │   │   │   ├── urls.cpython-35.pyc │   │   │   └── wsgi.cpython-35.pyc │   │   ├── settings.py │   │   ├── static │   │   │   ├── office.jpeg │   │   │   └── pics │   │   │   └── office.jpeg │   │   ├── urls.py │   │   ├── views.py │   │   └── wsgi.py │   ├── jobboard │   │   ├── admin.py │   │   ├── apps.py │   │   ├── __init__.py │   │   ├── migrations │   │   │   ├── 0001_initial.py │   │   │   ├── 0002_auto_20180716_1122.py │   │   │   └── __init__.py │   │   ├── models.py │   │   ├── tests.py │   │   └── views.py │   ├── login │   │   ├── admin.py │   │   ├── apps.py │   │   ├── 
__init__.py │   │   ├── migrations │   │   │   ├── 0001_initial.py │   │   │   ├── __init__.py │   │   │   └── __pycache__ │   │   │   ├── 0001_initial.cpython-35.pyc │   │   │   └── __init__.cpython-35.pyc │   │   ├── models.py │   │   ├── __pycache__ │   │   │   ├── admin.cpython-35.pyc │   │   │   ├── apps.cpython-35.pyc │   │   │   ├── __init__.cpython-35.pyc │   │   │   ├── models.cpython-35.pyc │   │   │   ├── urls.cpython-35.pyc │   │   │   └── views.cpython-35.pyc │   │   ├── static │   │   │   └── pics │   │   │   └── office.jpeg │   │   ├── templates │   │   │   ├── login │   │   │   │   └── office.jpeg │   │   │   ├── loginbutton.html │   │   │   ├── logoutbutton.html │   │   │   ├── navbar.html │   │   │   ├── registration │   │   │   │   ├── login.html │   │   │   │   ├── logintut.html │   │   │   │   ├── password_reset_complete.html │   │   │   │   ├── password_reset_confirm.html │   │   │   │   ├── password_reset_done.html │   │   │   │   ├── password_reset_email.html │   │   │   │   └── password_reset_form.html │   │   │   └── signup.html │   │   ├── tests.py │   │   ├── urls.py │   │   └── views.py │   ├── manage.py │   ├── oldbackupdb.sqlite3 │   ├── staticfiles │   ├── templates │   │   ├── base.html │   │   ├── footer.html │   │   ├── Header.html │   │   ├── index2.html │   │   ├── index.html │   │   ├── jobpost.html │   │   ├── jobs.html │   │   ├── navbar.html │   │   ├── oldHeader.html │   │   ├── sidenav.html │   │   ├── signup.html │   │   └── userdropdown.html │   ├── universities.xcf │   └── users │   ├── admin.py │   ├── apps.py │   ├── forms.py │   ├── __init__.py │   ├── migrations │   │   ├── 0001_initial.py │   │   ├── __init__.py │   │   └── __pycache__ │   │   ├── 0001_initial.cpython-35.pyc │   │   └── __init__.cpython-35.pyc │   ├── models.py │   ├── __pycache__ │   │   ├── admin.cpython-35.pyc │   │   ├── forms.cpython-35.pyc │   │   ├── __init__.cpython-35.pyc │   │   ├── models.cpython-35.pyc │   │   ├── urls.cpython-35.pyc │  
 │   └── views.cpython-35.pyc │   ├── tests.py │   ├── urls.py │   └── views.py ├── gradproofworkspace.code-workspace ├── index-1.html ├── index1.html ├── index-2.html ├── office.jpeg ├── Pipfile ├── Pipfile.lock ├── Procfile ├── README.md ├── requirements.txt └── runtime.txt I can't think what else might be needed but please request any further info. Any clear and idiot-proof advice on how Procfile works would be great, thanks in advance. A: You correctly identified that the key line in the error message is the ModuleNotFoundError. 2020-01-27T11:39:57.599554+00:00 app[web.1]: ModuleNotFoundError: No module named 'grad' The problem is that your Django project directory (the one containing manage.py) isn't in the project root, it's in the gpproject directory. Therefore you need to add that directory to the Python path with the --pythonpath flag, so that the grad module inside it can be imported. web: gunicorn grad.wsgi:application --pythonpath gpproject --log-file - --log-level debug --preload --workers 1 A: Try updating your Procfile to read as: web: gunicorn grad.wsgi --log-file - --log-level debug --preload --workers 1 I have a few Heroku-Django apps that work with a similar setup. Let me know if it works for you.
doc_4304
"no RANGE key value present" error in a DynamoDB Spring Boot application, even after declaring @DynamoDBRangeKey(attributeName = ""). I tried using the Java Spring Boot AWS DynamoDB library and DynamoDBMapper to connect my Spring Boot application with AWS DynamoDB. Even after declaring @DynamoDBRangeKey(attributeName = "created_millis"), where "created_millis" is my sort key and "farm_id" is my partition key: if I remove the sort key, the data comes back from the DB, but the moment I add @DynamoDBRangeKey(attributeName = "created_millis") the code shows "There was an unexpected error (type=Internal Server Error, status=500). SolarFarmData[created_millis]: no RANGE key value present" @DynamoDBTable(tableName = "solar_farm_data") public class SolarFarmData { @Id private String farmId; private Double ambientTemperature; private Long createdMillis; @DynamoDBHashKey(attributeName = "farm_id") public String getFarmId() { return farmId; } public void setFarmId(String farmId) { this.farmId = farmId; } @DynamoDBRangeKey(attributeName = "created_millis") public Long getCreatedMillis() { return createdMillis; } public void setCreatedMillis(Long createdMillis) { this.createdMillis = createdMillis; } @DynamoDBAttribute(attributeName = "ambient_temperature") public Double getAmbientTemperature() { return ambientTemperature; } public void setAmbientTemperature(Double ambientTemperature) { this.ambientTemperature = ambientTemperature; } } A: This will work once you start passing your "created_millis", which is used here as the sort key. UserCrudDaoImpl class @Override public SolarFarmData readUser(String farmId, Long millis) { return dynamoDBMapper.load(SolarFarmData.class, farmId, millis); //added millis here for the query. } Along with this, please change the Dao, Service, ServiceImpl, and Controller accordingly.
doc_4305
The code looks like this: VStack { item1 item2 item3 Spacer() } I have tried to add padding() at the top of the VStack, but the app hangs without an error message: VStack { padding() item1 item2 item3 Spacer() } A: .padding() adds padding to the view it is attached to (and it has to be attached to one; it can't stand alone). What you want is padding on the VStack: VStack { item1 item2 item3 } .padding() or .padding(.top) A: You shouldn't need any kind of padding if you use a NavigationView and then a NavigationLink in a List. Example below. struct BillsView: View { @State var name: String = "" @State var value: Double = 0.0 var body: some View { List { TextField(text: $name, prompt: Text("Bill amount")) { Text("Bill Amount") } HStack { Text("Tip") Spacer() Slider(value: $value, in: 1...10) } } .navigationBarTitle("Bills", displayMode: .inline) } } struct ContentView: View { var body: some View { NavigationView { List { NavigationLink(destination: BillsView()) { Text("Navigation link here") } } .navigationBarTitle("Example", displayMode: .inline) } } }
doc_4306
I wanted to write a query on docsTopics that returns all the subTopics referencing each docsTopics entry: export const queryRefrence = groq` { "topics": *[_type == "subTopics"] { _id, topic, title, id, "topics": docsTopics[] -> {title} } }` A: After some research in the documentation, I was able to nest the subTopics inside the topics: "mainTopics":*[_type == "docsTopics"] | order(title) { title, "subTopics" : *[_type == "subTopics" && topic._ref == ^._id] | order(id) { title, slug, id } }
doc_4307
But I want to cache the webpage itself so that the tests do not have to load the webpage every time. Is there a way to cache the data and then scrape it? I am using microdata to scrape the data.
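One simple approach, independent of any particular scraping library, is to cache each page's HTML on disk keyed by a hash of the URL, so repeated test runs parse the cached copy instead of re-downloading. A minimal sketch; cached_fetch and the fetch callable are hypothetical names, not from the question:

```python
import hashlib
from pathlib import Path

def cached_fetch(url, fetch, cache_dir="page_cache"):
    """Return the HTML for `url`, calling `fetch` only on a cache miss.

    `fetch` is any callable mapping a URL to the page body as a string;
    in a real scraper it might wrap requests.get(url).text.
    """
    cache = Path(cache_dir)
    cache.mkdir(parents=True, exist_ok=True)
    # Hash the URL so the cache filename is filesystem-safe and unique per page.
    name = hashlib.sha256(url.encode("utf-8")).hexdigest() + ".html"
    path = cache / name
    if path.exists():
        return path.read_text(encoding="utf-8")
    html = fetch(url)
    path.write_text(html, encoding="utf-8")
    return html
```

The microdata extraction step then runs against the returned string either way, so the tests exercise the same parsing code whether the page came from the network or the cache.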
doc_4308
library(tidyverse) v <- c(1, 2, 3) w <- c(4, 4) i <- 1 # Does not work anymore df <- tibble(a = v, b = v, c = v) df[i, 2:3] <- w # This used to work with tibbles df.old <- data.frame(a = v, b = v, c = v) df.old[i, 2:3] <- w This is the error that I get with the tibble: Error: Assigned data `w` must be compatible with row subscript `i`. x 1 row must be assigned. x Assigned data has 2 rows. i Only vectors of size 1 are recycled. Thanks, A: In my R-devel version, the error message includes ℹ Row updates require a list value. Do you need `list()` or `as.list()`? So the canonical way in this version is probably df[i, 2:3] <- as.list(w), which works: library(tidyverse) v <- c(1, 2, 3) w <- c(4, 4) i <- 1 df <- tibble(a = v, b = v, c = v) df[i, 2:3] <- as.list(w) df #> # A tibble: 3 x 3 #> a b c #> <dbl> <dbl> <dbl> #> 1 1 4 4 #> 2 2 2 2 #> 3 3 3 3 Created on 2020-04-26 by the reprex package (v0.3.0)
doc_4309
public class Customer { public int CustomerId { get; set; } public string Name { get; set; } public string Address { get; set; } public string Phone { get; set; } public string Email { get; set; } } public class TestDbContext : DbContext { public DbSet<Customer> Customer { get; set; } } App.config: <connectionStrings> <add name="TestDbContext" connectionString="Data Source=XYZ-PC\MSSQLSERVER2014;Initial Catalog=SalesOrderManagement;Integrated Security=True;Connect Timeout=15;Encrypt=False;TrustServerCertificate=False;ApplicationIntent=ReadWrite;MultiSubnetFailover=False" providerName="System.Data.SqlClient" /> </connectionStrings> A: EF won't generate the database until you actually use the context. You can ensure this by adding a static constructor with your selected initializer: public class TestDbContext : DbContext { static TestDbContext() { Database.SetInitializer(new DropCreateDatabaseIfModelChanges<TestDbContext>()); } public DbSet<Customer> Customer { get; set; } }
doc_4310
in other words... colA colB row1 1 2 //this is ok row2 1 3 //this is ok row3 2 2 //this is ok row4 2 2 //this would NOT be ok, because this is just like row 3, and that combination should be unique. A: Sounds like you just want a UNIQUE constraint on the columns in question. In MySQL something like: ALTER TABLE MyTable ADD UNIQUE (colA, colB);
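The answer above uses MySQL syntax; the same composite UNIQUE constraint can be demonstrated end-to-end with SQLite (declared at table creation time here, rather than via ALTER TABLE), as a minimal sketch of the behavior described: the first three rows from the example insert fine, and the repeated (2, 2) pair is rejected.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The UNIQUE constraint covers the pair of columns, not each column alone.
conn.execute(
    "CREATE TABLE MyTable (colA INTEGER, colB INTEGER, UNIQUE (colA, colB))"
)
# Rows 1-3 from the example: all distinct (colA, colB) pairs, so they insert fine.
for pair in [(1, 2), (1, 3), (2, 2)]:
    conn.execute("INSERT INTO MyTable (colA, colB) VALUES (?, ?)", pair)
# Row 4 repeats (2, 2) and is rejected by the constraint.
try:
    conn.execute("INSERT INTO MyTable (colA, colB) VALUES (?, ?)", (2, 2))
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

Note that (1, 2) and (2, 2) coexist happily: uniqueness is enforced on the combination, which is exactly the row-3-vs-row-4 distinction in the question.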
doc_4311
I tried: * setting leave=False or disable=True * calling tqdm_iterator.close() but neither removes the broken progress bar. import time from tqdm.notebook import tqdm tqdm_iterator = tqdm(range(10000), leave=False) for i in tqdm_iterator: time.sleep(0.0001) if i>8000: tqdm_iterator.n = 10000 break (screenshot: broken progress bar) A: The leave=False parameter only removes the display bar after it finishes, e.g. when the counter reaches 10000. If you want to clear the progress bar output immediately, you can do tqdm_iterator.container.close(), like this: import time from tqdm.notebook import tqdm tqdm_iterator = tqdm(range(10000), leave=False) for i in tqdm_iterator: time.sleep(0.0001) if i > 8000: tqdm_iterator.container.close() break
doc_4312
function testScript() { var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("MY SHEET"); var range = sheet.getRange("A1"); var value = range.getValues(); } I called the function from the menu using this code: function onOpen() { var ui = SpreadsheetApp.getUi(); ui.createMenu('MY MENU') .addItem('TEST', 'testScript') .addToUi(); } I have nothing else in the script file. Concerning the spreadsheets: * The first one (less than 1 sec execution) is composed of an empty sheet called "MY SHEET". * The second one (more than 7 sec execution) is composed of 9 sheets with a lot of data and formulas inside + one empty sheet called "MY SHEET" as well (to test the simple code). Why is the execution time so different between the 2 spreadsheets? The code is exactly the same between them and I'm only dealing with the empty sheet "MY SHEET". Thanks a lot! :) A: I'm not sure a precise answer can be given for this, but the key factors here are that Apps Script is making remote API calls behind the scenes when you call the built-in Google services, and that Google's infrastructure is distributed and complex. Your calls to SpreadsheetApp functions (getActiveSpreadsheet(), getSheetByName(), getRange(), getValues()) are each making network requests to a Google service, which take time and are affected by a variety of conditions, just like any other network request. The time it takes each of those to complete can vary widely; in my experience they take from under a second up to about 2 seconds each, and this varies from execution to execution. As for why it's so different between your two documents, it's hard to say. There may be something about the amount of content in the sheets themselves that has an impact, but I would expect that to be very minor. More likely it's a quirk of the infrastructure itself, and it's also pretty likely to be a temporary condition causing the one document to perform more slowly than the other on a consistent basis. 
For example, those two files may be stored in entirely different data centres in entirely different places, and the amount of latency at one site may be more than the other. With complex distributed architecture like Google Drive there are many other scenarios that could account for the difference. The scripts themselves are separate files, and those could be stored in different locations from the sheets, etc.
doc_4313
=QUERY('data-campaigns-all'!A:H,"select B, count(A) where A > date '"&TEXT(DATEVALUE($S$1-14),"yyyy-mm-dd")&"' group by B order by B LABEL count(A) ''") So I have a list of active campaigns for the day. I import other values (such as number of clicks, conversion values and so on) based on campaign name, using SUMIFS or other formulas. But I manually add a column with a budget for every single campaign, and it's just a plain number. When a new campaign pops up in the list, it adds a new row and the whole dataset shifts. All other metrics are tied to the campaign name by some kind of formula (SUMIFS, VLOOKUP and so on), but the budget (a plain number) always stays in its place while the other data shift. So I want to tie the first column (with campaign names) to the budgets (which are added manually and are not in the source data). The only solution that came to my mind was to add a static sheet with campaign names and corresponding budget values and then pull it into my final sheet using VLOOKUP, but I hope there's an easier solution.
doc_4314
* Adds the header and the footer. * @see com.itextpdf.text.pdf.PdfPageEventHelper#onEndPage( * com.itextpdf.text.pdf.PdfWriter, com.itextpdf.text.Document) */ public void onEndPage(PdfWriter writer, Document document) { //Rectangle rect = writer.getBoxSize("art"); CalibrateItLogger.publish("Page Number :"+writer.getPageNumber()); PdfPTable footterPdfPTable = new PdfPTable(1); PdfPTable fotterLeftPdfPTable =new PdfPTable(1); PdfPTable fotterRightPdfPTable =new PdfPTable(1); //document.newPage(); document.add(Chunk.NEWLINE); footterPdfPTable = new PdfPTable(3); footterPdfPTable.setWidthPercentage(100); fotterLeftPdfPTable = new PdfPTable(1); fotterLeftPdfPTable.setWidthPercentage(100); PdfPCell planBExtendedPdfPCell = new PdfPCell(); planBExtendedPdfPCell.addElement(new Paragraph("PLAN B EXTENDED")); planBExtendedPdfPCell.setBorder(Rectangle.NO_BORDER); fotterLeftPdfPTable.addCell(planBExtendedPdfPCell); //Calibration Company Name PdfPCell comapnyNameLablePdfPCell=new PdfPCell (new Paragraph (companyname(),font4)); comapnyNameLablePdfPCell.setBorder(Rectangle.NO_BORDER); fotterLeftPdfPTable.addCell(comapnyNameLablePdfPCell); //Address PdfPCell addressPdfPCell=new PdfPCell (new Paragraph ("Address data",font3)); addressPdfPCell.setBorder(Rectangle.NO_BORDER); fotterLeftPdfPTable.addCell(addressPdfPCell); //Zipcode PdfPCell zipPdfPCell=new PdfPCell (new Paragraph ("Postal code",font3)); zipPdfPCell.setBorder(Rectangle.NO_BORDER); fotterLeftPdfPTable.addCell(zipPdfPCell); //Certificate and Serial Number PdfPCell cityStateAndZipPdfPCell=new PdfPCell (new Paragraph ("Serial No ",font3)); cityStateAndZipPdfPCell.setBorder(Rectangle.NO_BORDER); fotterLeftPdfPTable.addCell(cityStateAndZipPdfPCell); //Model Name and Asset No PdfPCell pipetteModelNamePdfPCell=new PdfPCell (new Paragraph ("Asset No",font3)); pipetteModelNamePdfPCell.setBorder(Rectangle.NO_BORDER); fotterLeftPdfPTable.addCell(pipetteModelNamePdfPCell); PdfPCell footerLeftPdfPCell = new 
PdfPCell(fotterLeftPdfPTable); footerLeftPdfPCell.setBorder(Rectangle.NO_BORDER); footterPdfPTable.addCell(footerLeftPdfPCell); PdfPCell emptyPdfPCell = getEmptyPdfPCell(); emptyPdfPCell.addElement(getEmptyParagraph()); PdfPTable pageNumberPdfPTable = new PdfPTable(1); pageNumberPdfPTable.addCell(emptyPdfPCell); pageNumberPdfPTable.addCell(emptyPdfPCell); pageNumberPdfPTable.addCell(emptyPdfPCell); pageNumberPdfPTable.addCell(emptyPdfPCell); pageNumberPdfPTable.addCell(emptyPdfPCell); Paragraph pageNumberParagraph = new Paragraph("Page "+pagenumber,font3); pageNumberParagraph.setAlignment(Element.ALIGN_CENTER); //pageNumberPdfPTable.addCell(pageNumberParagraph); PdfPCell pegeNumberPdfPCell = new PdfPCell(pageNumberParagraph); pegeNumberPdfPCell.setBorder(Rectangle.NO_BORDER); footterPdfPTable.addCell(pegeNumberPdfPCell); fotterRightPdfPTable = new PdfPTable(2); fotterRightPdfPTable.setWidthPercentage(100); //Date Paragraph dateParagraph = new Paragraph(new Phrase("Date :"+CurrentDate.getDateFormat(Date,"dd/MM/yyyy"),font3)); dateParagraph.setAlignment(Element.ALIGN_LEFT); PdfPCell datePdfPCell = new PdfPCell(dateParagraph); datePdfPCell.setBorder(Rectangle.NO_BORDER); fotterRightPdfPTable.addCell(datePdfPCell); //Next Due Data Paragraph nextDueParagraph = new Paragraph(new Phrase("Next Due :"+CurrentDate.getDateFormat(Date,"dd/MM/yyyy"),font3)); dateParagraph.setAlignment(Element.ALIGN_LEFT); PdfPCell nextDuePdfPCell = new PdfPCell(nextDueParagraph); nextDuePdfPCell.setBorder(Rectangle.NO_BORDER); fotterRightPdfPTable.addCell(nextDuePdfPCell); fotterRightPdfPTable.addCell(getEmptyPdfPCell()); fotterRightPdfPTable.addCell(getEmptyPdfPCell()); fotterRightPdfPTable.addCell(getEmptyPdfPCell()); fotterRightPdfPTable.addCell(getEmptyPdfPCell()); emptyPdfPCell = getEmptyPdfPCell(); emptyPdfPCell.setColspan(2); fotterRightPdfPTable.addCell(emptyPdfPCell); PdfPCell underlinePdfPCell = new PdfPCell(new Phrase(new Chunk("Sign :").setUnderline(+1f, -2f))); 
underlinePdfPCell.setBorder(Rectangle.NO_BORDER); underlinePdfPCell.setColspan(2); fotterRightPdfPTable.addCell(underlinePdfPCell); Paragraph employeeParagraph = new Paragraph("Govardhan Rao",font3); employeeParagraph.setAlignment(Element.ALIGN_CENTER); PdfPCell employeePdfPCell = new PdfPCell(employeeParagraph); employeePdfPCell.setBorder(Rectangle.NO_BORDER); employeePdfPCell.setColspan(2); fotterRightPdfPTable.addCell(employeePdfPCell); PdfPCell footerRightPdfPCell = new PdfPCell(fotterRightPdfPTable); footerRightPdfPCell.setBorder(Rectangle.NO_BORDER); footterPdfPTable.addCell(footerRightPdfPCell); //footterPdfPTable.writeSelectedRows(0, -1, 34, 803, writer.getDirectContent()); document.add(footterPdfPTable);
doc_4315
In Android's design docs, it seems like the right place is the "Help" section of the action overflow, but in most of the apps that I have, this kind of information usually falls into the "Settings" section. In my case, I won't have a Help section, so I thought about having my Imprint in the Settings, but then, how do I implement it? It seems to me that Settings are meant to... yeah, change the behavior of the app, and therefore all the classes of the Preference package (CheckBoxPreference, SwitchPreference, etc.) go in that direction and are not meant to just present information. Should I use something like a DialogPreference, for example, or are there better ways of doing this? A: Yes, it seems normal to use one entry on the Settings screen for your "about" information, as some Google apps do this as well (Maps, Play Store, etc.). You can assign an Intent to a preference: see setIntent in the documentation. It will be started when the preference is clicked. So in onCreate of your preference activity, you can do something similar to: findPreference("your_preference_id").setIntent(new Intent(this, AboutActivity.class));
doc_4316
Ex: Datetime | v1 | v2 | v3 131003044500 | 403 | 405 | 405 131003050000 | 406 | 408 | 407 131003051500 | 405 | 407 | 406 and so on... When I have a PLC failure, I lose my measurements. I have to insert ALL the missing rows in a day with the correct missing datetimes, with the other fields all 0. Ex: Missing datetimes from 131003080000 to 131003150000: 131003081500 | 0 | 0 | 0 131003083000 | 0 | 0 | 0 etc... This is for consumption analysis and a graph generator. Sorry for my bad English. Thanks in advance. Regards. A: Do you have voltage values somewhere independent of the spoiled data? If so, why not combine SELECT & INSERT queries in the usual way to obtain the desired result. Maybe there's some inconvenient file format with the values and you need PHP to process it? On the other hand, if you never made any backups, the data is lost, of course.
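Generating the zero-filled rows for an outage window is straightforward once the key format is fixed (yymmddHHMMSS at 15-minute steps, per the example above). A minimal Python sketch, assuming that format; each returned tuple could then drive one INSERT statement:

```python
from datetime import datetime, timedelta

FMT = "%y%m%d%H%M%S"  # matches keys like 131003081500 (2013-10-03 08:15:00)

def missing_rows(last_good, next_good, step_minutes=15):
    """Zero-filled (Datetime, v1, v2, v3) rows for every slot strictly
    between the last reading before the outage and the first one after."""
    start = datetime.strptime(last_good, FMT)
    end = datetime.strptime(next_good, FMT)
    step = timedelta(minutes=step_minutes)
    rows, t = [], start + step
    while t < end:
        rows.append((t.strftime(FMT), 0, 0, 0))
        t += step
    return rows
```

Whether the endpoints themselves need rows depends on whether the PLC logged them; the sketch treats both boundary readings as already present, which matches the example (131003081500 is the first inserted row after a last good reading at 131003080000).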
doc_4317
I always get this line in the log:

2021-09-23 08:16:13.526604+0200 Clima[1943:50811] Writing analzed variants. Optional("")

This is my JSON: JSON example

These are my structs for the API:

struct WeatherData: Decodable {
    let message: String?
    let list: [List]?
}

struct List: Decodable {
    let name: String?
}

And my function for parsing the JSON:

func parseJSON(weatherData: Data) {
    let decoder = JSONDecoder()
    do {
        let decodedData = try decoder.decode(WeatherData.self, from: weatherData)
        print(decodedData.list![0].name!)
    } catch {
        print(error)
    }
}

I need to fetch a name. I understand that I need to declare, in my main WeatherData struct, a property for another struct and then use it, e.g. decodedData.list.name, to fetch some data. One more question: should I always use ? (optional) when fetching data from JSON?
doc_4318
But, after I serialize my struct array, I can't get my data value (0x12, 0x34, 0x56) back; instead I get some rubbish data. What is wrong here?

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t length;
    uint8_t *data;
} Tx_Packet;
#pragma pack(pop)

static void create_tx_packet(uint8_t *packet, uint8_t *src, int length);

int main(void)
{
    uint8_t packet[32];
    uint8_t data[] = { 0x12, 0x34, 0x56 };
    create_tx_packet(packet, data, 3);
    // I checked using the debugger: I can't get the data value correctly,
    // but I could get the length value correctly
    return 0;
}

static void create_tx_packet(uint8_t *packet, uint8_t *src, int length)
{
    Tx_Packet *tx_packet = malloc(sizeof(*tx_packet));
    tx_packet->length = length;
    tx_packet->data = (uint8_t *)malloc(length);
    memcpy(tx_packet->data, src, length);
    memcpy(packet, tx_packet, sizeof(*tx_packet));
}

A: Right now, your create_tx_packet() function copies a Tx_Packet struct created in the function to a uint8_t array. That struct contains the length and a pointer to the data, but not the data itself. It's actually not necessary to use the struct as an intermediate step at all, particularly for such a simple packet, so you could instead do:

static void create_tx_packet(uint8_t *packet, uint8_t *src, int length)
{
    *packet = length;                /* set (first) uint8_t pointed to by packet to the length */
    memcpy(packet + 1, src, length); /* copy length bytes from src to the 2nd and subsequent bytes of packet */
}

You still need to make sure packet points to enough space (at least length + 1 bytes) for everything (which it does). Since the version above doesn't dynamically allocate anything, it also fixes the memory leaks in your original (which should have freed tx_packet->data and tx_packet before exiting).
If you do want to use a struct, you can (since the data is at the end) change your struct to use an array instead of a pointer for data; then extra space past the size of the struct can be used for the data, and accessed through the data array in the struct. The struct might be:

typedef struct {
    uint8_t length;
    uint8_t data[];
} Tx_Packet;

and the function becomes (if a temporary struct is used):

static void create_tx_packet(uint8_t *packet, uint8_t *src, int length)
{
    /* allocate the temporary struct, with extra space at the end for the data */
    Tx_Packet *tx_packet = malloc(sizeof(Tx_Packet) + length);
    /* fill the struct (set length, copy data from src) */
    tx_packet->length = length;
    memcpy(tx_packet->data, src, length);
    /* copy the struct and following data to the output array */
    memcpy(packet, tx_packet, sizeof(Tx_Packet) + length);
    /* and remember to free our temporary struct/data */
    free(tx_packet);
}

Rather than allocate a temporary struct, though, you could also use a struct pointer to access the byte array in packet directly and avoid the extra memory allocation:

static void create_tx_packet(uint8_t *packet, uint8_t *src, int length)
{
    /* Set a Tx_Packet pointer to point at the output array */
    Tx_Packet *tx_packet = (Tx_Packet *)packet;
    /* Fill out the struct as before, but this time directly into the output
       array so we don't need to allocate and copy so much */
    tx_packet->length = length;
    memcpy(tx_packet->data, src, length);
}

A: If you use memcpy(packet, tx_packet, sizeof(*tx_packet)); you are copying the memory representation of tx_packet into packet, starting with tx_packet->length. Additionally, when allocating tx_packet, the size should be sizeof(*packet) + sizeof(uint8_t) (length of packet plus length field). And again, when copying tx_packet back to packet, you are writing outside the boundaries of packet.
EDIT: I forgot to mention that, depending on your compiler's memory-alignment settings, the fields (including tx_packet->length) could be given any length to accelerate memory operations. On a 32-bit machine it could be 4, padded with rubbish.

A: When you serialize your struct with memcpy(packet, tx_packet, sizeof(*tx_packet)); you're copying the length and the pointer to the data, but not the data itself. You'll probably need two memcpy calls: one of sizeof(uint8_t) to copy the length field, and one of length to copy the data.

A: This line:

Tx_Packet *tx_packet = malloc(sizeof(*packet));

only allocates one byte for the packet header, which you then immediately write off the end of, causing undefined behavior. You probably meant

Tx_Packet *tx_packet = malloc(sizeof(*tx_packet));
doc_4319
the "_jihui" collection is what I'm referring to. A: Use the getCollection() method for that purpose: db.getCollection('_col').drop() A: You can enclose your collection name in square brackets when the collection contains some kind of special characters in its name, e.g.: db["my-db--collection"].drop()
doc_4320
I have searched a lot but cannot find anything. A: You cannot throw errors from the outside - they always happen from within the process model (e.g. Service Tasks or Throwing Error Events). If you want to influence process flow from the outside, use a message - there you will find the proper API method in the docs. Consider it as an "error message" :-) Cheers Bernd
doc_4321
A link to the table is in the code (url01). I want to know if it is possible to read the data vertically rather than horizontally from the table. Here is my code:

import requests
import json
from bs4 import BeautifulSoup
from itertools import islice

#URL declaration
url01 = 'https://www.statistik.at/web_de/statistiken/wirtschaft/preise/baukostenindex/030979.html'

#BeautifulSoup4
response = requests.get(url01, timeout=5)
content = BeautifulSoup(response.content, 'html.parser')

#deletes all the empty tags
empty_tags = content.find_all(lambda tag: not tag.contents)
[empty_tag.extract() for empty_tag in empty_tags]

#Find all td in class body in div table table-hover
data = content.find_all('td')
#print (data)
numbers = [d.text.encode('utf-8') for d in data]
#print (numbers)

#create string
str1 = ''.join(str(e) for e in numbers)
#print (str1)
str_splt = str1.split('b')
#print (str_splt)

#Split list into several sublists
length_to_split = [45, 45, 45, 110, 110, 110, 188, 188, 188, 253, 253, 253, 383, 383, 383]
Input = iter(str_splt)
Output = [list(islice(Input, elem)) for elem in length_to_split]
print (Output[3])

#Python dictionary
dataDict = {
    '2015 Lohn': None,
    '2015 Sonstiges': None,
    '2015 Insgesamt': None,
    'Insgesamt': None
}
dataDict['Insgesamt'] = str_splt
#print (dataDict)

#save dictionary in json file
with open('indexData.json', 'w') as f:
    json.dump(dataDict, f)

When I execute the program and print out my first sublist, these are the results.
It has the wanted length (45), but it was read horizontally out of the table, which makes it useless:

['', "'108,6'", "'110,8'", "'109,8'", "'122,1'", "'114,3'", "'118,0'", "'140,6'", "'131,9'", "'136,0'", "'162,0'", "'166,3'", "'165,2'", "'261,9'", "'189,8'", "'222,5'", "'108,6'", "'111,4'", "'110,1'", "'122,1'", "'115,0'", "'118,4'", "'140,6'", "'132,6'", "'136,4'", "'162,0'", "'167,2'", "'165,7'", "'261,9'", "'190,8'", "'223,1'", "'105,2'", "'111,9'", "'108,9'", "'118,2'", "'115,5'", "'117,1'", "'136,2'", "'133,2'", "'134,9'", "'157,0'", "'168,0'", "'163,9'", "'253,7'", "'191,7'"]

A: Using the pandas library:

* pd.read_html() - produces a list of dataframes (there could be multiple tables in an HTML source); get the desired one by index.
* df.to_csv() - saves the data into a csv file.

import pandas as pd

#read html page table data.
table = pd.read_html("https://www.statistik.at/web_de/statistiken/wirtschaft/preise/baukostenindex/030979.html")

#save data into csv file
print(table[0].to_csv("indexData.csv"))

A: One possible solution, without pandas.
The function get_column() returns column as tuple, indexed from 0: import requests import json from bs4 import BeautifulSoup from itertools import islice #URL declaration url01 = 'https://www.statistik.at/web_de/statistiken/wirtschaft/preise/baukostenindex/030979.html' #BeautifulSoup4 response = requests.get(url01, timeout=5) content = BeautifulSoup(response.content, 'html.parser') rows = [] for tr in content.select('tr')[:-1]: # [:-1] because we don't want the last info row data = [td.get_text(strip=True) for td in tr.select('td')] if data: rows.append(data) def get_column(rows, col_num): return [*zip(*rows)][col_num] print('2015 Lohn:') print(get_column(rows, 0)) print('2015 Sonstiges:') print(get_column(rows, 1)) print('2015 Insgesamt:') print(get_column(rows, 2)) Prints: 2015 Lohn: ('108,6', '108,6', '105,2', '105,2', '105,2', '105,2', '104,4', '105,2', '105,2', '105,2', '105,2', '105,2', '105,2', '105,2', '105,2', '102,9', '102,9', '102,9', '102,9', '102,6', '102,9', '102,9', '102,9', '102,9', '102,9', '102,9', '102,9', '102,9', '101,9', '101,9', '101,9', '101,9', '101,5', '101,9', '101,9', '101,9', '101,9', '101,9', '101,9', '101,9', '101,9', '100,8', '100,8', '100,8', '100,8', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', 
'', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '') 2015 Sonstiges: ('110,8', '111,4', '111,9', '111,0', '111,6', '112,4', '112,6', '113,1', '114,6', '114,8', '114,3', '113,8', '113,0', '113,3', '112,7', '111,4', '110,5', '109,9', '110,0', '106,3', '108,9', '108,9', '108,3', '107,3', '105,7', '105,0', '105,2', '106,1', '106,5', '105,1', '104,3', '104,1', '97,7', '101,6', '99,6', '99,1', '98,5', '98,5', '98,3', '98,9', '98,5', '96,2', '94,1', '93,9', '94,9', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', 
'', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '') 2015 Insgesamt: ('109,8', '110,1', '108,9', '108,4', '108,7', '109,1', '108,9', '109,5', '110,4', '110,4', '110,2', '109,9', '109,5', '109,6', '109,3', '107,6', '107,1', '106,8', '106,8', '104,6', '106,2', '106,2', '105,9', '105,4', '104,5', '104,1', '104,2', '104,7', '104,4', '103,6', '103,2', '103,1', '99,4', '101,7', '100,6', '100,4', '100,0', '100,0', '99,9', '100,2', '100,0', '98,2', '97,1', '97,0', '97,6', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', 
'', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '')
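The get_column() helper above hinges on the zip-transpose idiom, which is easy to miss inside the scraper; here it is isolated with toy data (three rows of three cells, values taken from the first rows above). Note that zip() truncates to the shortest row, so ragged rows would silently drop cells.

```python
# Transposing a list of row-lists into column tuples with zip(*rows).
rows = [['108,6', '110,8', '109,8'],
        ['108,6', '111,4', '110,1'],
        ['105,2', '111,9', '108,9']]

def get_column(rows, col_num):
    # zip(*rows) pairs up the n-th cell of every row; unpack into a list
    # of column tuples and pick the requested one.
    return [*zip(*rows)][col_num]

print(get_column(rows, 0))  # ('108,6', '108,6', '105,2'): first cell of every row
```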
doc_4322
Thanks to all! A: It really depends on what you intend to do, since there are plenty of ways to set up a Spring context. The simplest way is ClassPathXmlApplicationContext:

ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("root-context.xml");

Here root-context.xml is an XML bean configuration file searched for on the classpath root. Typically you'd put this in src/main/resources.

A: Spring AOP relies on the Spring framework and only works with Spring beans. So you cannot use it without the Spring framework. What you want to use is AspectJ. It only needs one runtime library and works with POJOs, in application servers, with EJBs, or even in combination with Spring. It does not need any dynamic proxies because it instruments your code directly during compilation or dynamically during class loading. It is faster, leaner and much more powerful than Spring AOP.
doc_4323
<app dir>
  - app.js
  - <modules>
    - module1.js
    - module2.js
  - <routes>
    - route1.js
    - route2.js
  - <utilities>
    - utility1.js
    - utility2.js
  - <authentication>
    - local.js
    - basic.js
    - index.js

My app.js is pretty messy, as I have dependencies between my files, i.e. I want to pass my models and authentication to my routes, and utilities can be used by a bunch of things.

var app = express();
... // do express setup

var authenticationStrategy = // get auth strategy from config
var auth = require('./auth')(authenticationStrategy);

var utility1 = require('./utilities/utility1.js');
var utility2 = require('./utilities/utility2.js');
var utilities = {
    utility1: utility1,
    utility2: utility2
}

var Model1 = require('./models/model1')(utilities);
var Model2 = require('./models/model2')(utility1);
var models = {
    Model1: Model1,
    Model2: Model2
}

// Dynamically import all routes
fs.readdirSync('routes').forEach(function(file) {
    if (file[0] == '.') return;
    var route = file.substr(0, file.indexOf('.'));
    require('./routes/' + route)(app, models, utilities);
});

... etc

I now know that I can put an index.js in each folder to clean things up, but that still leaves me with having to save off things like utilities and pass them into other require calls. I can put each require in the lower modules only where it's needed, but then I end up climbing the directory structure with my requires, which also seems messy, e.g. in model1.js:

var utility1 = require('../utilities/utility1.js');

Basically I think my problem is that modules in lower-level folders depend on other modules in other folders. I feel like I should pull in all dependencies in app.js and pass them to the required modules that need them. Any advice? I have been trying to restructure this for a couple of days and I just keep banging my head against the wall, as nothing I do is really making it any better. Best would be a good node project layout that uses things like mongoose, express, w/ custom modules.
and show a good way to handle interdependencies between modules.

A: Edit: https://github.com/gagle/node-getmod

I also had headaches trying to properly structure my first real and BIG node.js project. The biggest problem you have is the relative paths used in the require() calls. It's a pain because these paths are relative to the current file, so you typically have paths like these:

../../../a.js
./a/b/c/a.js
../../a/b/c/a.js

All of them pointing to the same file. This is inhuman. Hard to read, hard to maintain. You shouldn't put your modules inside the node_modules directory and simply require the module by its name (require("a")), because this directory is used by third-party modules. Maintenance tasks, like performing a full upgrade of all modules by deleting them and installing them again, are extremely easy and safe to do; if you put your own modules there, you'll have a big problem if you don't have good documentation and you work in a medium-big team. After a few hours I ended up with this technique: NodeJS local modules for complex application structures. With the mod() function you have good control over your modules. In my real and BIG project, my mod function is the following:

"use strict";

var serverType = require ("./server-type");

global.mod = function (file){
  var dir = file.substring (0, file.search (/\/|\\/));
  if (!dir) dir = file;

  if (dir === "browser" || dir === "common" || dir === "mobile" ||
      dir === "stream"){
    //Relative from the web directory
    return require ("../" + file);
  }

  if (dir === "api"){
    //Relative from the api directory
    return require ("../../" + (file === "api" || file === "api/latest"
        ? file
        : "api/" + APP.versions.api + "/" + file.substring (4)));
  }

  if (dir === "build"){
    //Relative from the build directory
    return require ("../../" + file);
  }

  //Tries to load the module inside browser or mobile directories depending on
  //the cwd
  return require ("../" + serverType () + "/" + file);
};

If I call mod("api") it returns my api module, the core of my website. If I call mod("common/db") it returns the db wrapper. If I call mod("build/...") it returns the module needed to build the static assets (compiles css and js), etc.

Now, you misunderstand how the modules must be used. Example:

var utility1 = require('./utilities/utility1.js');
var utility2 = require('./utilities/utility2.js');
var utilities = {
    utility1: utility1,
    utility2: utility2
}

var Model1 = require('./models/model1')(utilities);
var Model2 = require('./models/model2')(utility1);
var models = {
    Model1: Model1,
    Model2: Model2
}

Here you are passing the utilities module to another module, and this is not needed. You can get the utilities module from inside the model1 and model2 modules. Very basic example: A depends on B.

//app.js
require ("./a").a ();

//a.js
var b = require ("./b");
module.exports = {
  a: function (){
    b.b ("I'm A");
  }
};

//b.js
module.exports = {
  b: function (msg){
    console.log ("B says: " + msg);
  }
};

You don't need to do this:

//app.js
var b = require ("./b");
require ("./a").a (b);

//a.js
module.exports = {
  a: function (b){
    b.b ("I'm A");
  }
};

//b.js
module.exports = {
  b: function (msg){
    console.log ("B says: " + msg);
  }
};

Is this what you are looking for?
A: 1- Create a file in the project root; call it settings.js.

2- Inside this file add this code:

module.exports = {
    POST_MAX_SIZE : 40 , //MB
    UPLOAD_MAX_FILE_SIZE: 40, //MB
    PROJECT_DIR : __dirname
};

3- Inside node_modules create a new module, name it "settings", and inside the module's index.js write this code:

module.exports = require("../../settings");

4- Any time you want your project directory, just use:

var settings = require("settings");
settings.PROJECT_DIR;

In this way you will have all project directories relative to this file ;)
doc_4324
App.Person = DS.Model.extend({
    firstName: DS.attr('string'),
    surname: DS.attr('string'),
    email: DS.attr('string'),
    fullName: function(){
        return this.get('firstName') + " " + this.get('surname');
    }.property('firstName', 'surname'),
});

App.Contact = App.Person.extend({
});

App.User = App.Person.extend({
});

I want to somehow pass these objects to a new route that will allow me to email them automatically. I have a mail object that references the person as a polymorphic relationship:

App.Mail = DS.Model.extend({
    recipients: DS.hasMany('App.Person', {polymorphic: true}),
});

The problem I have is shown in this fiddle here. For some reason the model is not getting set in the App.MailPersonRoute route and I am mystified as to why.

A: Because your router has nested routes:

App.Router.map(function() {
    this.resource('mail', function(){
        this.route('person', {path: 'person/:person_id'});
    });
});

you are creating a {{linkTo}} passing a nested route name as mail.person:

<script type="text/x-handlebars" data-template-name="index">
    {{#each model}}
    <p>Mail to {{#linkTo mail.person this}}{{fullName}}{{/linkTo}}
    {{/each}}
</script>

That must also be reflected in your template names (as per conventions), particularly in the template related to that route. Currently you have:

<script type="text/x-handlebars" data-template-name="mail">
    in mail with {{email}}
</script>

It should be:

<script type="text/x-handlebars" data-template-name="mail/person">
    in mail with {{email}}
</script>

Nested routes carry the name of their parent resources in their key names, while resources don't have a parent resource in their names, even if they are declared under another resource.
Note: Not required, but maybe you want to change your serialize to something similar to, or a more elegant implementation of, the following:

serialize: function(model){
    var _personType = 'contact';
    if(model instanceof App.User) {
        _personType = 'user'
    }
    return {
        person_type: _personType,
        person_id: model.get('id')
    };
}

This change would also require the routes to be defined similar to the following:

App.Router.map(function() {
    this.resource('mail', function(){
        this.route('person', {path: ':person_type/:person_id'});
    });
});

Implementing it like this would prevent the hrefs of your links from being equal if you have both a User and a Contact with the same id. In the current state, if you visit one of these links the browser will think that both links were visited. Again, not a requirement or anything.
doc_4325
I have a file server that serves files, and that's fine, since it will serve based on the operating system in use. I keep track of certain paths (all folder paths with a specific file inside), and load a new instance of a class for each of these paths. My problem is that if someone is using an OS that is case-insensitive, it will load two instances for the same path, which is a no-no. And yet I don't want to just compare on toLowerCase(), because they could be on a case-sensitive file system. Is there a way to use the folder's Stat info to check whether I have the same folder? Or some other way?

A: You can use fs.stat to get an fs.Stats object: https://nodejs.org/api/fs.html#fs_class_fs_stats

For a regular file, util.inspect(stats) would return a string very similar to this:

Stats {
  dev: 2114,
  ino: 48064969,
  mode: 33188,
  nlink: 1,
  uid: 85,
  gid: 100,
  rdev: 0,
  size: 527,
  blksize: 4096,
  blocks: 8,
  atimeMs: 1318289051000.1,
  mtimeMs: 1318289051000.1,
  ctimeMs: 1318289051000.1,
  birthtimeMs: 1318289051000.1,
  atime: Mon, 10 Oct 2011 23:24:11 GMT,
  mtime: Mon, 10 Oct 2011 23:24:11 GMT,
  ctime: Mon, 10 Oct 2011 23:24:11 GMT,
  birthtime: Mon, 10 Oct 2011 23:24:11 GMT }

I believe ino is your inode number, which should be unique within the disk partition you are talking about. If the path identifies the partition, then for a particular path you should be good. This could also be interesting if you want to deal with symlinks and other filesystem goodies.
doc_4326
It's easy to set a cookie, but I'm unclear how to output specific values and test against them. This works fine:

Cookies.set('newsletter', 'show', { expires: 30, path: '/' });
var myCookie = Cookies.get('newsletter');

But this does not:

if(myCookie){
    var p = Cookies.get(['expires']);
    document.write(p);
}

How do I access the expires property?

A: According to this issue, it is not possible:

It's not possible.

The FAQ also states the same thing:

It's not possible. Once the cookie has been written, its attributes can't be read out. The underlying API we're constrained to use doesn't support this: https://developer.mozilla.org/en-US/docs/Web/API/Document/cookie
doc_4327
The term NEW is open to interpretation. It could mean "created this month" or newer relative to others in a set (these details are not important at the moment). Also, "Pages" refers to fan pages (not user profiles). I have identified 3 possible approaches, all of which I am hoping to get some input on, regarding feasibility and process.

Option 1) Somehow leverage Facebook's Graph API and develop some type of web application to generate a list of all Pages, then filter by city, then filter by date created.

Option 2) (Best Case) Write or generate a custom Graph Search URL with embedded search criteria and leverage FB's existing search feature to get results. A great example of this approach is used by the tool searchisback.com. Only, this tool is used to do advanced searches on People, but I need advanced searches on Pages.

Option 3) Locate a tool that already does this that I can use.

Again, I am hoping to get some input and possibly some direction/recommendations. I should also mention that I actually know very little about Facebook APIs and Facebook development. My position right now is that of someone who knows what they want to do, but has no idea how to do it.

A: Option 1: Not possible; you can only search for Pages by name, and the Graph Search is not available with the API. Those are all the options: https://developers.facebook.com/docs/graph-api/using-graph-api#search

Option 2: See the answer to option 1.

Option 3: There are tools that list Pages, but they all have to add them manually. So there is not really a tool that does what you want to achieve. In short: what you want to do is not possible.
doc_4328
The x-axis data will be in the form year.week, but I need it to be shown on the label as year | week. How can I format this data the way I want?

nv.addGraph(function() {
    var chart = nv.models.multiBarChart()
        //If 'false', every single x-axis tick label will be rendered
        .reduceXTicks(false)
        //Angle to rotate x-axis labels.
        .rotateLabels(0)
        //Allow user to switch between 'Grouped' and 'Stacked' mode
        .showControls(false)
        //Distance between each group of bars
        .groupSpacing(0.1)
        .stacked(true);

    chart.xAxis.tickFormat(d3.format(',f'));
    chart.yAxis.tickFormat(d3.format(',.1f'));

    d3.select('#chart1 svg').datum(exampleData()).call(chart);
    console.log(exampleData()[0]);
    d3.select('#chart2 svg').datum(exampleData()).call(chart);
    nv.utils.windowResize(chart.update);
    return chart;
});
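One way to get year | week labels is a custom tick formatter in place of d3.format(',f'). This is a sketch, not from the original code, and it assumes the week part is encoded as exactly two decimal digits (e.g. 2016.07 for week 7):

```javascript
// Splits a numeric year.week value such as 2016.07 into "2016 | 07".
function yearWeekLabel(d) {
  const parts = d.toFixed(2).split('.'); // "2016.07" -> ["2016", "07"]
  return parts[0] + ' | ' + parts[1];
}

// Plugged into the chart instead of d3.format(',f'):
//   chart.xAxis.tickFormat(yearWeekLabel);
console.log(yearWeekLabel(2016.07)); // "2016 | 07"
```

If the week is encoded with a single digit (e.g. 2016.7 meaning week 7, not week 70), the split logic would need to change accordingly.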
doc_4329
I succeeded in receiving a single event as follows:

rpc PostEvent(Event) returns (google.protobuf.Empty) {
  option (google.api.http) = {
    post: "/myEP"
    body: "*"
  };
}

and it works - it converts a JSON object to an Event{} struct. My question is, how do I do the same thing when I want to receive an array of Event{}s? This could work:

message EventsWrapper {
  repeated Event events = 1;
}

rpc PostEvents(EventsWrapper) returns (google.protobuf.Empty) {
  option (google.api.http) = {
    post: "/myEP"
    body: "*"
  };
}

But then it will expect JSON like:

{"events":[{},..,{}]}

While I receive only:

[{},..,{}]

I don't control the way I receive the call. Any ideas how I can tweak my code to handle such an array call?

A: If you need to accept a JSON array, and the gRPC transcoding implementation you use supports it, you can use the body attribute to specify a repeated field that is mapped to the request body, instead of using *. Quoting from the Google HttpRule API documentation:

If an API needs to use a JSON array for request or response body, it can map the request or response body to a repeated field. However, some gRPC Transcoding implementations may not support this feature.

body — The name of the request field whose value is mapped to the HTTP request body, or * for mapping all request fields not captured by the path pattern to the HTTP body, or omitted for not having any HTTP request body.

In other words, you would define your service as:

message EventsWrapper {
  repeated Event events = 1;
}

// ...

rpc PostEvents(EventsWrapper) returns (google.protobuf.Empty) {
  option (google.api.http) = {
    post: "/myEP"
    body: "events"
  };
}

I believe this is supported by at least gRPC-Gateway (since PR #712) and likely also Envoy Proxy's gRPC-JSON transcoder.
The latter also supports translating between JSON arrays and streaming gRPC methods, so if you define an rpc PostEvent(stream Event) returns (google.protobuf.Empty) method, Envoy will expect an array of Event objects in the request (and possibly even stream the translated messages as they arrive).
doc_4330
Get Valid Claims and Generate Jwt Token:

var claims = await GetValidClaims(users);
Token = GenerateJwtToken(users, claims);

GenerateJwtToken method:

private string GenerateJwtToken(ApplicationUser user, List<Claim> claims)
{
    var dicClaim = new Dictionary<string,object>();
    ...
    var tokenDescriptor = new SecurityTokenDescriptor
    {
        Claims = dicClaim, // <<<<<<<<<< this Claims is Dictionary<string,object>
        ...
    }
}

GetValidClaims method:

private async Task<List<Claim>> GetValidClaims(ApplicationUser user)
{
    IdentityOptions _options = new IdentityOptions();
    var claims = new List<Claim>
    {
        new Claim(JwtRegisteredClaimNames.Sub, user.UserName),
        new Claim(_options.ClaimsIdentity.UserIdClaimType, user.Id.ToString()),
        new Claim(_options.ClaimsIdentity.UserNameClaimType, user.UserName)
    };
    var userClaims = await _userManager.GetClaimsAsync(user);
    var userRoles = await _userManager.GetRolesAsync(user);
    claims.AddRange(userClaims);
    foreach (var userRole in userRoles)
    {
        claims.Add(new Claim(ClaimTypes.Role, userRole));
        var role = await _roleManager.FindByNameAsync(userRole);
        if (role != null)
        {
            var roleClaims = await _roleManager.GetClaimsAsync(role);
            foreach (Claim roleClaim in roleClaims)
            {
                claims.Add(roleClaim);
            }
        }
    }
    return claims;
}

At this line:

Claims = dicClaim, // <<<<<<<<<< this Claims is Dictionary<string,object>

I don't know how to convert the List<Claim> to a Dictionary<string,object>. I already tried something like this:

claims.ToDictionary(x=>x,x=>x.Value)
claims.ToDictionary(x=>x.Value,x=>x)

Lib SecurityTokenDescriptor
Assuming that * *We want our key to be say, Type or at least start from Type *Claims can be repeated we can solve it like this: * *Group all claims by Type (desired key) *If there's only 1 claim in a group, use Value as Key *If not, let's generate Type_1, Type_2, ..., Type_N Keys Code var dicClaim = claims .GroupBy(claim => claim.Type) // Desired Key .SelectMany(group => group .Select((item, index) => group.Count() <= 1 ? Tuple.Create(group.Key, item) // One claim in group : Tuple.Create($"{group.Key}_{index + 1}", item) // Many claims )) .ToDictionary(tuple => tuple.Item1, tuple => tuple.Item2); If in case of repeated claims you want to get, say, the Last one only, you can do it with the code: var dicClaim = claims .GroupBy(claim => claim.Type) // Desired Key .ToDictionary(group => group.Key, group => group.Last());
doc_4331
When I enter http://localhost/MyApp/public/index.php/ it routs OK to the index action of my index controller. But if I enter http://localhost/MyApp/public/index.php, changing only the ending slash, it doesn't work and shows 404 error: A 404 error occurred Page not found. The requested URL could not be matched by routing. No Exception available This is the code of my module.config.php routing: <?php return array( 'router' => array( 'routes' => array( 'home' => array( 'type' => 'Zend\Mvc\Router\Http\Literal', 'options' => array( 'route' => '/', 'defaults' => array( 'controller' => 'Application\Controller\Index', 'action' => 'index', ), ), ), // The following is a route to simplify getting started creating // new controllers and actions without needing to create a new // module. Simply drop new controllers in, and you can access them // using the path /application/:controller/:action 'application' => array( 'type' => 'Literal', 'options' => array( 'route' => '/application', 'defaults' => array( '__NAMESPACE__' => 'Application\Controller', 'controller' => 'Index', 'action' => 'index', ), ), 'may_terminate' => true, 'child_routes' => array( 'default' => array( 'type' => 'Segment', 'options' => array( 'route' => '/[:controller[/:action]]', 'constraints' => array( 'controller' => '[a-zA-Z][a-zA-Z0-9_-]*', 'action' => '[a-zA-Z][a-zA-Z0-9_-]*', ), 'defaults' => array( ), ), ), ), ), ), ), This is my Apache's .htaccess file: RewriteEngine On # The following rule tells Apache that if the requested filename # exists, simply serve it. RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] # The following rewrites all other queries to index.php. 
The # condition ensures that if you are using Apache aliases to do # mass virtual hosting, the base path will be prepended to # allow proper resolution of the index.php file; it will work # in non-aliased environments as well, providing a safe, one-size # fits all solution. RewriteCond %{REQUEST_URI}::$1 ^(/.+)(.+)::\2$ RewriteRule ^(.*) - [E=BASE:%1] RewriteRule ^(.*)$ %{ENV:BASE}index.php [NC,L] Could you please tell me where i'm going wrong? Thank you in advance, Host configurations: C:\Windows\System32\drivers\etc\hosts: # Copyright (c) 1993-2009 Microsoft Corp. # # This is a sample HOSTS file used by Microsoft TCP/IP for Windows. # # This file contains the mappings of IP addresses to host names. Each # entry should be kept on an individual line. The IP address should # be placed in the first column followed by the corresponding host name. # The IP address and the host name should be separated by at least one # space. # # Additionally, comments (such as these) may be inserted on individual # lines or following the machine name denoted by a '#' symbol. # # For example: # # 102.54.94.97 rhino.acme.com # source server # 38.25.63.10 x.acme.com # x client host # localhost name resolution is handled within DNS itself. # 127.0.0.1 localhost # ::1 localhost 127.0.0.1 localhost C:\xampp\apache\conf\extra\httpd-vhosts.conf: # Virtual Hosts # # Required modules: mod_log_config # If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.4/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # Use name-based virtual hosting. 
# ##NameVirtualHost *:80 # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for all requests that do not # match a ##ServerName or ##ServerAlias in any <VirtualHost> block. # ##<VirtualHost *:80> ##ServerAdmin webmaster@dummy-host.example.com ##DocumentRoot "C:/xampp/htdocs/dummy-host.example.com" ##ServerName dummy-host.example.com ##ServerAlias www.dummy-host.example.com ##ErrorLog "logs/dummy-host.example.com-error.log" ##CustomLog "logs/dummy-host.example.com-access.log" common ##</VirtualHost> ##<VirtualHost *:80> ##ServerAdmin webmaster@dummy-host2.example.com ##DocumentRoot "C:/xampp/htdocs/dummy-host2.example.com" ##ServerName dummy-host2.example.com ##ErrorLog "logs/dummy-host2.example.com-error.log" ##CustomLog "logs/dummy-host2.example.com-access.log" common ##</VirtualHost> #<VirtualHost *:80> # DocumentRoot "/xampp/htdocs/zend/public" # ServerName zend # ServerAlias www.zend # SetEnv APPLICATION_ENV "development" # <Directory "/xampp/htdocs/zend/public"> # DirectoryIndex index.php # AllowOverride All # Order allow,deny # Allow from all # </Directory> #</VirtualHost> Adding the following code didn't work either, it redirects to http: //coolapp.localhost/xampp/ and shows xampp's home page. It redirects to xamm's page whatever directory path I set. <VirtualHost *:80> ServerName coolapp.localhost DocumentRoot "C:\xampp\htdocs\CooltivateApp\public" SetEnv APPLICATION_ENV "development" <Directory "C:\xampp\htdocs\MyApp\public" DirectoryIndex index.php AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> Second test: <VirtualHost *:80> ServerName coolapp.localhost DocumentRoot "C:\xampfgdp\htdfgdfgocs\tefgdgdfgApp\public" SetEnv APPLICATION_ENVd "development" <Directory "C:\xamdfgdpp\hdgftdocs\dltieAppdfgd\ddfpublic" DirectoryIndex index.php AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost>
doc_4332
<?php foreach ($coaches as $key => $coach) {?> <a class="u-icon-v1" href="javascript:" onclick="openModal('<?php echo $coach['id']?>','<?php echo $coach['name']?>')" data-toggle="modal" data-target="#exampleModal" data-whatever="<?php echo $coach['id']?>"> <i class="icon-envelope-letter"></i></a> <?php } ?> bootstrap modal function function openModal(id,name){ $('#exampleModal').on('show.bs.modal', function (event) { var button = $(event.relatedTarget) var recipient = button.data('whatever') // How to call call function when user click on sendmail button? }); bootstrap modal <div class="modal fade" id="exampleModal" tabindex="-1" role="dialog" aria-labelledby="exampleModalLabel" aria-hidden="true"> <div class="modal-dialog" role="document"> <div class="modal-content"> <div class="modal-header"> <h5 class="modal-title" id="exampleModalLabel">New message</h5> <button type="button" class="close" data-dismiss="modal" aria-label="Close"> <span aria-hidden="true">&times;</span> </button> </div> <div class="modal-body"> <form> <div class="form-group"> <label for="recipient-name" class="col-form-label">Mail To:</label> <input type="text" name="mailto" id="mailto" class="form-control" id="recipient-name" readonly="readonly"> </div> <div class="form-group"> <label for="message-text" class="col-form-label">Message:</label> <textarea class="form-control" id="message_text" name="message_text"></textarea> </div> </form> </div> <div class="modal-footer"> <button type="button" class="btn btn-secondary" data-dismiss="modal">Close</button> <button type="button" class="btn btn-primary" onclick="sendMail()" id="sendmessage">Send Mail</button> </div> </div> </div> </div> A: There are several ways to do it. You have already called a method named sendMail(). 
You can retrieve your required values inside that method like below code - function sendMail(){ //var name = $("#mailto").val(); var description = $("#message_text").val(); $.ajax({ type:"POST", data:{mailto: mailto, description:description}, url: "your url", success: function(response){ } }); } When you are calling this openModal() function, you can pass your mailto value as 3rd argument like openModal(id, name, mailto) & update that input field value using new value & then open your modal. In this way, this info will be different for each entry. check below code for openModal() function function openModal(id,name, mailto){ $("#mailto").val(mailto); $('#exampleModal').on('show.bs.modal', function (event) { var button = $(event.relatedTarget) var recipient = button.data('whatever') // How to call call function when user click on sendmail button? }); Hope now you can do it using this dummy code. Now you can ask what will happen with my modal. Hide it when any get any positive response from your ajax call using $("#exampleModal").modal('hide');
doc_4333
At the moment, the value is going up $11 every second, although I would like to implement a change so that it goes up $11.43 every second. For example: 0, $11.43, $22.86 ... and then when it reaches $1000 it should have the comma ($1,024.56). You should get the drift. Here is my jsFiddle: http://jsfiddle.net/m9cey/14/ var millisecs = 1000; setInterval(function() { var $badge = $('#badge'); var num = parseInt($badge.text().replace(',', ''))+11; $badge.text(num.toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",")); }, millisecs); Any help will be much appreciated. Cheers A: It's easy enough to add $11.43. However, there are two problems: * *avoid inserting commas after the decimal point. *rounding errors due to floating point To address both of these, I suggest the following: * *Do everything in cents. So add 1143 (not 11.43) *To format, first convert the number to a string, then split the string into a prefix and the last two characters. Put commas into the prefix, then add a period and the last two characters. Oh, and you'll need to do a couple of other things: * *get rid of the badge-support-text span that you have in your fiddle, since you're going to be adding in the cents explicitly *strip out the decimal point as well as the commas before you parse the current amount. You might also consider creating a closure that maintains its own internal data that is incremented, so you don't have to parse. Here's a modification of your jsfiddle that does the above (except the closure): var millisecs = 1000; setInterval(function() { var $badge = $('#badge'); var num = (parseInt($badge.text().replace(/[,.]/g, ''))+1143).toString(); var dollars = num.substring(0, num.length-2); var cents = num.substr(-2); $badge.text(dollars.replace(/\B(?=(\d{3})+(?!\d))/g, ",") + '.' + cents); }, millisecs); A: here is a fiddle that shows how it should look. 
This is just one way to do it (note the increment is 11.43 to match the question, and the 10 ms interval only speeds up the demo; use 1000 for once per second):

var millisecs = 10;
setInterval(function() {
    var $badge = $('#badge');
    var num = parseFloat($badge.text().replace(',', '')) + 11.43;
    num = num.toFixed(2); // toFixed returns a string like "22.86"
    var numString = num.split('.');
    var dollars = numString[0].replace(/\B(?=(\d{3})+(?!\d))/g, ",");
    var cents = numString[1];
    var str = [dollars, cents].join('.');
    $badge.text(str);
}, millisecs);
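The first answer's "do everything in cents" idea can also be sketched on its own. Here it is in Python, just to show the arithmetic (the jQuery wiring stays the same):

```python
def format_cents(total_cents):
    """Format an integer number of cents as a dollar string
    with thousands separators, e.g. 102456 -> '1,024.56'."""
    dollars, cents = divmod(total_cents, 100)
    return f"{dollars:,}.{cents:02d}"

# Incrementing by the integer 1143 avoids float rounding entirely.
total = 0
for _ in range(3):
    total += 1143
print(format_cents(total))   # 34.29
print(format_cents(102456))  # 1,024.56
```

Because the running total is an integer, there is no drift to round away, which is exactly why the second answer keeps the value in cents until display time.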
doc_4334
"electron-build": "ng build && node ./bin/www" But when i tried this code "electron-build": "ng build && electron . && node ./bin/www" The build is successful and the app runs in electron but the server is not running & when i quit the electron app from GUI i got the successful connection log in console. Now when i tried this code "electron-build": "ng build && node ./bin/www && electron ." The server starts successfully but the electron app doesn't rendered. A: Problem is that two process (node ./bin/www and electron.) can't be execute in same Thread. You can do some like that const { app, BrowserWindow } = require('electron') const path = require('path'); function createWindow () { win2 = new BrowserWindow({ width: 800, height: 600 }) win2.loadURL( path.join('file://', __dirname, '/index.html') ); } app.on('ready', createWindow) And in index.html add the tag with you server.js or app.js (file with code of node/express). Like: <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Simple Mouse Control - Server</title> <script src="server.js"></script> </head> <body> Hola </body> </html> And finally: "electron-build": "ng build && electron ."
doc_4335
ERROR contentpump.DatabaseContentReader: RuntimeException reading /pdf/docIns/docIns- 222581.pdf :com.marklogic.xcc.exceptions.StreamingResultException: RequestException instantiating ResultItem 301805: Time limit exceeded 22/01/24 17:48:09 INFO contentpump.DatabaseContentReader: host name: xxx.us- central.compute.internal 22/01/24 17:48:09 INFO contentpump.DatabaseContentReader: Retrying connect 22/01/24 17:53:16 INFO contentpump.LocalJobRunner: completed 3% A: Thread count won't make a difference as each doc can only be read by one thread concurrently. The limiting factor is either network transfer time or time to read the file off MarkLogic's disk and into available memory (or some combination of these factors). You could try grabbing the document over REST (/v1/documents/ endpoint) and see if that is quicker. You could also use xdmp:zip-create to try and compress it within MarkLogic and see if downloading the compressed file is fast enough. Alternatively, consider using MarkLogic to store a URL alongside the searchable (meta)data to grab the document from something else (like a CDN or S3 for example). A: You could consider increasing the request time limit of the http server. This page explains the settings: https://docs.marklogic.com/admin-help/http-server If you are managing your cluster via the REST API, you can look here: https://docs.marklogic.com/REST/POST/manage/v2/servers Also .. there are other options for large Binary content.. you could also consider storing the PDF as a registered binary on a location with external access for clients such as S3.. then just return the reference and your clients could get the file directly assuming that they have credentials to read from the storage. For projects before, I have served large binary from S3 and other times from a different type of server as a proxy using a 1 time token.
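Whichever transport you end up with (XCC, the REST /v1/documents endpoint, S3 or a CDN), the common thread is reading the large binary in bounded chunks instead of buffering the whole PDF. A minimal, transport-agnostic sketch in Python (file-like objects stand in for the real source and sink):

```python
import io

CHUNK_SIZE = 64 * 1024  # 64 KiB per read keeps memory usage flat

def stream_copy(src, dst, chunk_size=CHUNK_SIZE):
    """Copy a large binary from one file-like object to another
    in fixed-size chunks, returning the number of bytes moved."""
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        total += len(chunk)
    return total

src = io.BytesIO(b"x" * 200_000)   # stand-in for a large PDF
dst = io.BytesIO()
print(stream_copy(src, dst))  # 200000
```

This is an illustration of the streaming pattern, not MarkLogic-specific code; the same shape applies when the source is an HTTP response body or an S3 object stream.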
doc_4336
Now my Sencha Cmd version is v6.7.0.63 and the app version is 6.2.167. When I try to create a new app with Cmd it successfully creates the new app, and sencha app build classic on this newly created app also works. But when I try to build the old app, it does not work. The old app's sencha app build ios also works, but not classic. Here is the error it gives when I run sencha app build classic

Exception information: Sencha/Cmd/6.7.0.63/plugin.xml:333: The following error occurred while executing this line: htdocs/pi-v3/public/App/.sencha/app/build-impl.xml:273: The following error occurred while executing this line: htdocs/pi-v3/public/App/.sencha/app/js-impl.xml:186: com.sencha.exceptions.BasicException at com.sencha.ant.AntScript.execute(AntScript.java:121)

and when I run sencha app build ios it runs without any error

A: The error was very unclear on Linux, but I was able to solve it. As you can see, the error doesn't say anything about the heap or memory, but it was actually caused by too small a Java heap size, so I had to increase the memory using:

export _JAVA_OPTIONS="-Xms1024m -Xmx2048m"
doc_4337
A: Refer to this link which talks about creating AWS Glue resources using CLI. This blog is in Japanese. Following is the sample to create a Glue job using CLI. aws glue create-job \ --name ${GLUE_JOB_NAME} \ --role ${ROLE_NAME} \ --command "Name=glueetl,ScriptLocation=s3://${SCRIPT_BUCKET_NAME}/${ETL_SCRIPT_FILE}" \ --connections Connections=${GLUE_CONN_NAME} \ --default-arguments file://${DEFAULT_ARGUMENT_FILE} A: Follow documentation and post error if any Link to docs https://docs.aws.amazon.com/cli/latest/reference/glue/create-job.html
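The CLI flags map onto the same CreateJob request shape that boto3 uses. A hedged Python sketch that only builds the request (the job name, role ARN, bucket and connection below are placeholders; with credentials configured you would pass the dict to boto3's Glue client):

```python
def build_create_job_request(name, role_arn, script_s3_uri, connection=None):
    """Assemble the keyword arguments for glue_client.create_job(**request)."""
    request = {
        "Name": name,
        "Role": role_arn,
        "Command": {
            "Name": "glueetl",                 # Spark ETL job type
            "ScriptLocation": script_s3_uri,
        },
    }
    if connection:
        request["Connections"] = {"Connections": [connection]}
    return request

req = build_create_job_request(
    "my-etl-job",
    "arn:aws:iam::123456789012:role/GlueRole",
    "s3://my-script-bucket/etl.py",
    connection="my-jdbc-conn",
)
print(req["Command"]["Name"])  # glueetl

# With AWS credentials configured you would then call:
#   import boto3
#   boto3.client("glue").create_job(**req)
```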
doc_4338
This is the hard coded service I have for the data @Service public class TodoHardcodedService { private static List<Todo> todos =new ArrayList<Todo>(); private static int idCounter=0; static { todos.add(new Todo(++idCounter,"in28minutes","learn to Dance",new Date(),false)); todos.add(new Todo(++idCounter,"in28minutes","learn about Microservices",new Date(),false)); todos.add(new Todo(++idCounter,"in28minutes","learn about AngularJS2",new Date(),false)); } public List<Todo> findAll(){ return todos; } public Todo save(Todo todo) { if(todo.getId()==-1) { todo.setId(++idCounter); todos.add(todo); }else{ deleteById(todo.getId()); todos.add(todo); } return todo; } public Todo deleteById(long id) { Todo todo = findById(id); if(todo ==null) return null; if(todos.remove(todo)) { return todo; } return null; } public Todo findById(long id) { for(Todo todo:todos) { if(todo.getId()==id) { return todo; } } return null; } } Below is the AOP configuration for central logging @Aspect @Component public class LoggingAdvice { Logger log = LoggerFactory.getLogger(LoggingAdvice.class); @Pointcut(value="execution(* com.in28minutes.rest.webservices.restfulwebservices.*.*.*(..) 
)") public void myPointcut() { } @Around("myPointcut()") public Object applicationLogger(ProceedingJoinPoint pjp) throws Throwable { ObjectMapper mapper = new ObjectMapper(); String methodName = pjp.getSignature().getName(); String className = pjp.getTarget().getClass().toString(); Object[] array = pjp.getArgs(); log.info("method invoked " + className + " : " + methodName + "()" + "arguments : " + mapper.writeValueAsString(array)); Object object = pjp.proceed(); log.info(className + " : " + methodName + "()" + "Response : " + mapper.writeValueAsString(object)); return object; } } Below is the output of the logging 2020-08-08 16:27:12.973 INFO 5610 --- [nio-8080-exec-6] c.i.r.w.r.todo.LoggingAdvice : method invoked class com.in28minutes.rest.webservices.restfulwebservices.todo.TodoHardcodedService : findById()arguments : [5] // This is the method name printed here 2020-08-08 16:27:12.975 INFO 5610 --- [nio-8080-exec-6] c.i.r.w.r.todo.LoggingAdvice : class com.in28minutes.rest.webservices.restfulwebservices.todo.TodoHardcodedService : findById()Response : {"id":5,"username":null,"description":"Edit made","targetDate":1596884036902,"done":false} // This is the method name printed here 2020-08-08 16:27:21.584 INFO 5610 --- [nio-8080-exec-7] c.i.r.w.r.todo.LoggingAdvice : method invoked class com.in28minutes.rest.webservices.restfulwebservices.todo.TodoHardcodedService : save()arguments : [{"id":5,"username":null,"description":"Edit made now","targetDate":1596884036902,"done":false}] 2020-08-08 16:27:21.585 INFO 5610 --- [nio-8080-exec-7] c.i.r.w.r.todo.LoggingAdvice : class com.in28minutes.rest.webservices.restfulwebservices.todo.TodoHardcodedService : save()Response : {"id":5,"username":null,"description":"Edit made now","targetDate":1596884036902,"done":false} 2020-08-08 16:27:21.604 INFO 5610 --- [nio-8080-exec-8] c.i.r.w.r.todo.LoggingAdvice : method invoked class com.in28minutes.rest.webservices.restfulwebservices.todo.TodoHardcodedService : findAll()arguments : 
[]
2020-08-08 16:27:21.606 INFO 5610 --- [nio-8080-exec-8] c.i.r.w.r.todo.LoggingAdvice : class com.in28minutes.rest.webservices.restfulwebservices.todo.TodoHardcodedService : findAll()Response : [{"id":1,"username":"in28minutes","description":"learn to Dance","targetDate":1596883914126,"done":false},{"id":2,"username":"in28minutes","description":"learn about Microservices","targetDate":1596883914126,"done":false},{"id":3,"username":"in28minutes","description":"learn about AngularJS2","targetDate":1596883914126,"done":false},{"id":4,"username":null,"description":"DRE","targetDate":1596884010658,"done":false},{"id":5,"username":null,"description":"Edit made now","targetDate":1596884036902,"done":false}]

2020-08-08 16:34:29.082 INFO 5610 --- [io-8080-exec-10] c.i.r.w.r.todo.LoggingAdvice : method invoked class com.in28minutes.rest.webservices.restfulwebservices.todo.TodoHardcodedService : findAll()arguments : []

2020-08-08 16:34:29.085 INFO 5610 --- [io-8080-exec-10] c.i.r.w.r.todo.LoggingAdvice : class com.in28minutes.rest.webservices.restfulwebservices.todo.TodoHardcodedService : findAll()Response : [{"id":1,"username":"in28minutes","description":"learn to Dance","targetDate":1596883914126,"done":false},{"id":2,"username":"in28minutes","description":"learn about Microservices","targetDate":1596883914126,"done":false},{"id":3,"username":"in28minutes","description":"learn about AngularJS2","targetDate":1596883914126,"done":false},{"id":4,"username":null,"description":"DRE","targetDate":1596884010658,"done":false},{"id":5,"username":null,"description":"Edit made now","targetDate":1596884036902,"done":false}]

When I save the method details to the database, I want to store some additional information about each method, e.g. for findById(): "User finds information by passing the id", and so on.
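The idea of the question (store each method's name together with a human-readable description, alongside the logged arguments) can be prototyped outside Spring. A rough Python sketch, where the description registry is a made-up stand-in for whatever table you persist to:

```python
import json
import functools

# Hypothetical mapping of method name -> human-readable description,
# i.e. the "additional information" to persist with each log entry.
METHOD_DESCRIPTIONS = {
    "find_by_id": "User finds information by passing the id",
}

LOG_RECORDS = []  # stand-in for a database table or log sink

def logged(func):
    """Around-advice analogue: record name, description and serialized args."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        record = {
            "method": func.__name__,
            "description": METHOD_DESCRIPTIONS.get(func.__name__, "?"),
            "args": json.dumps({"args": args, "kwargs": kwargs}, default=str),
        }
        LOG_RECORDS.append(record)
        return func(*args, **kwargs)
    return wrapper

@logged
def find_by_id(todo_id):
    return {"id": todo_id}

find_by_id(5)
print(LOG_RECORDS[0]["description"])  # User finds information by passing the id
```

In the Spring version, the same lookup (method name to description) would live inside GetLogDetail, keyed on invocation.Method.Name.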
doc_4339
currently this is what I am trying to do "buildFlags": "-ldflags \"-w -X xxxxxxx\"", However using the escape character gives me the error invalid value "\"-w" for flag -ldflags: missing =<value> in <pattern>=<value> usage: go build [-o output] [build flags] [packages] Run 'go help build' for details. (exit status 2) Any suggestions on how I can resolve this issue ? A: Per this issue launch.json buildFlags for multiple tags, the following config should work "buildFlags": "-ldflags '-w -X'" For more information, please refer to this doc VS Code debugging
doc_4340
If it passes 8:05 am I want to stop the process. How can I achieve this in PowerShell? Thanks in advance.

$time = get-date -Format hh:mm
$time
if ($time > 8:05) {
    Write-Output "passed 8:05 am"
    exit
}
else {
    Write-Output "too early"
}

A: The problem with date formats and date comparisons in PowerShell is that they are annoying as hell. The easiest way is to define two variables, one for the time at the moment:

$timenow = get-date

And one for the set time:

$timecomparison = Get-Date -Hour 8 -Minute 5

Then, for the actual comparison, use the TimeOfDay property of the DateTime objects:

If ($timenow.TimeOfDay -gt $timecomparison.TimeOfDay) { "passed 8:05 am" }

Also note that in PowerShell the symbols >, <, = are not comparison operators inside conditions (> is output redirection), which is why you must use the named operators: -lt (less than), -gt (greater than), -ge, -le, -eq and so on.

A: You need to convert the $time variable to a double, because Get-Date -Format returns a string. Use a dot instead of a colon in the format so the string parses as a number, and prefer HH (24-hour) over hh (12-hour), otherwise 8:05 pm would also read as 08.05:

$time = Get-Date -Format HH.mm
$double = [double]$time
if ($double -ge 8.05) {
    Write-Output "passed 8:05 am"
    exit
}
else {
    Write-Output "too early"
}

I also advise you to read up on the comparison syntax in PowerShell.
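The safe pattern in both answers (compare time-of-day values, not formatted strings) is the same in any language. For contrast, the equivalent with the Python standard library, using the 8:05 cutoff from the question:

```python
from datetime import datetime, time

CUTOFF = time(8, 5)  # 8:05 am

def is_past_cutoff(now=None):
    """True once the current time of day has passed 8:05 am."""
    if now is None:
        now = datetime.now().time()
    return now > CUTOFF

# Deterministic checks with explicit times:
print(is_past_cutoff(time(7, 59)))  # False
print(is_past_cutoff(time(8, 6)))   # True
```

Comparing `time` objects directly avoids both the 12-hour-format trap and string/number coercion entirely.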
doc_4341
Upgrading to 64-bit is not an option at the moment. Upon calling my function, i am getting a MemoryError def M2(): print ('Loading datasets...') e1 = pd.read_csv(working_dir+"E1.txt",sep=',') E1.txt is 300,000 kb. Is there a better way of reading in this data? Update * *I do not want to use chunksize as this will not read my data in a dataframe. *I have narrowed down my .txt file from 300k kb, to 50k kb and still memory issue. Traceback: Traceback (most recent call last): File "<ipython-input-99-99e71d524b4b>", line 1, in <module> runfile('C:/AppData/FinRecon/py_code/python3/DataJoin.py', wdir='C:/AppData/FinRecon/py_code/python3') File "C:\Users\stack\AppData\Local\Continuum\anaconda3\anaconda3_32bit\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile execfile(filename, namespace) File "C:\Users\stack\AppData\Local\Continuum\anaconda3\anaconda3_32bit\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/AppData/FinRecon/py_code/python3/DataJoin.py", line 474, in <module> M2() File "C:/AppData/FinRecon/py_code/python3/DataJoin.py", line 31, in M2 e1 = pd.read_csv(working_dir+"E1.txt",sep=',') File "C:\Users\stack\AppData\Local\Continuum\anaconda3\anaconda3_32bit\lib\site-packages\pandas\io\parsers.py", line 702, in parser_f return _read(filepath_or_buffer, kwds) File "C:\Users\stack\AppData\Local\Continuum\anaconda3\anaconda3_32bit\lib\site-packages\pandas\io\parsers.py", line 435, in _read data = parser.read(nrows) File "C:\Users\stack\AppData\Local\Continuum\anaconda3\anaconda3_32bit\lib\site-packages\pandas\io\parsers.py", line 1154, in read df = DataFrame(col_dict, columns=columns, index=index) File "C:\Users\stack\AppData\Local\Continuum\anaconda3\anaconda3_32bit\lib\site-packages\pandas\core\frame.py", line 392, in __init__ mgr = init_dict(data, index, columns, dtype=dtype) File 
"C:\Users\stack\AppData\Local\Continuum\anaconda3\anaconda3_32bit\lib\site-packages\pandas\core\internals\construction.py", line 212, in init_dict return arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype) File "C:\Users\stack\AppData\Local\Continuum\anaconda3\anaconda3_32bit\lib\site-packages\pandas\core\internals\construction.py", line 61, in arrays_to_mgr return create_block_manager_from_arrays(arrays, arr_names, axes) File "C:\Users\stack\AppData\Local\Continuum\anaconda3\anaconda3_32bit\lib\site-packages\pandas\core\internals\managers.py", line 1666, in create_block_manager_from_arrays blocks = form_blocks(arrays, names, axes) File "C:\Users\stack\AppData\Local\Continuum\anaconda3\anaconda3_32bit\lib\site-packages\pandas\core\internals\managers.py", line 1734, in form_blocks int_blocks = _multi_blockify(items_dict['IntBlock']) File "C:\Users\stack\AppData\Local\Continuum\anaconda3\anaconda3_32bit\lib\site-packages\pandas\core\internals\managers.py", line 1819, in _multi_blockify values, placement = _stack_arrays(list(tup_block), dtype) File "C:\Users\stack\AppData\Local\Continuum\anaconda3\anaconda3_32bit\lib\site-packages\pandas\core\internals\managers.py", line 1861, in _stack_arrays stacked = np.empty(shape, dtype=dtype) MemoryError
doc_4342
class Node {
    Node nextNode;
    char key;
    Node prevNode;

    /* Would like to initialize nextNode and prevNode to an instance of NullNode,
       something like this (I know what I am doing is wrong) */
    Node() {
        nextNode = new NullNode();
        prevNode = new NullNode();
    }
}

class NullNode extends Node {
    ....
}

With this code I get a StackOverflowError. How can I tackle this issue?

A: You are getting a StackOverflowError because the parent constructor is always called (see also: https://stackoverflow.com/a/527069/664108). In your case this results in endless recursion. To avoid that, you will have to add a check in the Node constructor and call it explicitly from the NullNode constructor:

public class Node {
    Node nextNode;
    char key;
    Node prevNode;

    Node() {
        this(true);
    }

    Node(boolean createNullNodes) {
        if (createNullNodes) {
            nextNode = new NullNode();
            prevNode = new NullNode();
        }
    }
}

public class NullNode extends Node {
    NullNode() {
        super(false);
    }
}

A better solution for the NullObject pattern is using interfaces. This eliminates the constructor problem and also allows removing the unneeded nextNode and prevNode variables from the NullNode. Example with an interface (note the fields are typed INode so they can hold a NullNode):

public interface INode {
    public char getKey();
    public INode getNext();
    public INode getPrev();
    // ...
}

public class Node implements INode {
    INode nextNode;
    char key;
    INode prevNode;

    Node() {
        nextNode = new NullNode();
        prevNode = new NullNode();
    }

    public char getKey() {
        return key;
    }

    public INode getNext() {
        return nextNode;
    }

    public INode getPrev() {
        return prevNode;
    }
}

public class NullNode implements INode {
    public char getKey() {
        return '\0'; // a char cannot be null, so return a sentinel
    }

    public INode getNext() {
        return this;
    }

    public INode getPrev() {
        return this;
    }
}

A: Normally we do not reference the subclass in the superclass; this somehow breaks the inheritance relation. In your code there is something even worse that will cause a StackOverflowError, because the superclass creates an object with the default constructor of the subclass, which in turn calls the default constructor of the superclass, and it will go on infinitely until your program crashes. You can see an implementation of the Null Object Pattern here

A: Try this

public class Node {
    Node nextNode;
    char key;
    Node prevNode;

    Node() {
        this(true);
    }

    Node(boolean createNullNodes) {
        if (createNullNodes) {
            nextNode = new NullNode();
            prevNode = new NullNode();
        }
    }
}

public class NullNode extends Node {
    NullNode() {
        super(false);
    }
}

To call one constructor from another constructor use this(args); you cannot call it directly.
doc_4343
After seeing Jonathan's comment, I want to ask: could the problem be the async methods?

Program.cs

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseServiceProviderFactory(new AutofacServiceProviderFactory())
            .ConfigureContainer<ContainerBuilder>(builder =>
            {
                builder.RegisterModule(new AutofacResolverModule());
            })
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            })
            .ConfigureLogging(logging =>
            {
                logging.ClearProviders();
                logging.SetMinimumLevel(LogLevel.Trace);
            });
}

AutofacResolverModule.cs

public class AutofacResolverModule : Module
{
    public AutofacResolverModule()
    {
    }

    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterGeneric(typeof(Repository<,>)).As(typeof(IRepository<,>));
        builder.RegisterGeneric(typeof(BaseService<,,>)).As(typeof(IBaseService<,,>));
        builder.RegisterType<FileLogger>();

        #region AutofacInterceptorHelper
        var assembly = Assembly.GetExecutingAssembly();
        builder.RegisterAssemblyTypes(assembly).AsImplementedInterfaces()
            .EnableInterfaceInterceptors(new ProxyGenerationOptions()
            {
                Selector = new AspectInterceptorSelector()
            }).SingleInstance().InstancePerDependency();
        #endregion
    }
}

AspectInterceptorSelector.cs

public class AspectInterceptorSelector : IInterceptorSelector
{
    public IInterceptor[] SelectInterceptors(Type type, MethodInfo method, IInterceptor[] interceptors)
    {
        var classAttributes = type.GetCustomAttributes<MethodInterceptorBaseAttribute>(true).ToList();
        var methodAttributes = type.GetMethod(method.Name)?.GetCustomAttributes<MethodInterceptorBaseAttribute>(true);
        if (methodAttributes != null)
        {
            classAttributes.AddRange(methodAttributes);
        }
        //classAttributes.Add(new LogAspect(typeof(FileLogger)));
        return classAttributes.OrderBy(x => x.Priority).ToArray();
    }
}

BaseService.cs This class is generic base class and my purpose is all
Post methods are logged into a .txt file; is there any problem with using this log aspect in a generic class?

public class BaseService<TEntity, TPrimaryKey, TEntityDto> : IBaseService<TEntity, TPrimaryKey, TEntityDto>
    where TEntity : BaseEntity<TPrimaryKey>, new()
    where TEntityDto : IDto
{
    private readonly IRepository<TEntity, TPrimaryKey> _repository;
    private readonly IMapper _mapper;

    public BaseService(IRepository<TEntity, TPrimaryKey> repository, IMapper mapper)
    {
        _repository = repository;
        _mapper = mapper;
    }

    [LogAspect(typeof(FileLogger))]
    public async Task<IResult> Add(TEntityDto entityDto)
    {
        var entity = _mapper.Map<TEntity>(entityDto);
        var result = await _repository.Add(entity);
        return result == null
            ? new Result(false, ErrorMessages.CreateMessage)
            : new Result(true, SuccessMessages.CreateMessage, result.Id);
    }

    public async Task<IResult> Find(Expression<Func<TEntity, bool>> predicate)
    {
        var result = await _repository.Find(predicate);
        return result == null
            ? new Result(true, ErrorMessages.GetMessage)
            : new Result(true, _mapper.Map<List<ExampleDto>>(result));
    }

    public async Task<IResult> GetAll(Expression<Func<TEntity, bool>> predicate = null)
    {
        var result = predicate == null ? await _repository.GetAll() : await _repository.GetAll(predicate);
        return result == null
            ? new Result(true, ErrorMessages.GetMessage)
            : new Result(true, _mapper.Map<List<ExampleDto>>(result));
    }

    public async Task<IResult> HardDelete(TPrimaryKey Id)
    {
        var entity = await _repository.Find(x => x.Id.Equals(Id));
        var result = await _repository.HardDelete(entity);
        return result == 0
            ? new Result(false, ErrorMessages.DeleteMessage)
            : new Result(true, SuccessMessages.DeleteMessage);
    }

    public async Task<IResult> Delete(TPrimaryKey Id)
    {
        var entity = await _repository.Find(x => x.Id.Equals(Id));
        var result = await _repository.Delete(entity);
        return result == 0
            ? new Result(false, ErrorMessages.DeleteMessage)
            : new Result(true, SuccessMessages.DeleteMessage);
    }

    public async Task<IResult> Update(TEntityDto entityDto)
    {
        var entity = _mapper.Map<TEntity>(entityDto);
        var result = await _repository.Update(entity);
        return result == null
            ? new Result(false, ErrorMessages.UpdateMessage)
            : new Result(true, SuccessMessages.UpdateMessage, result.Id);
    }
}

LogAspect.cs

public class LogAspect : MethodInterceptor
{
    private readonly LoggerServiceBase _loggerServiceBase;
    private readonly IHttpContextAccessor _httpContextAccessor;

    public LogAspect(Type loggerService)
    {
        if (loggerService.BaseType != typeof(LoggerServiceBase))
        {
            throw new ArgumentException("Wrong Type");
        }
        _loggerServiceBase = (LoggerServiceBase)ServiceTool.ServiceProvider.GetService(loggerService);
        _httpContextAccessor = ServiceTool.ServiceProvider.GetService<IHttpContextAccessor>();
    }

    protected override void OnBefore(IInvocation invocation)
    {
        _loggerServiceBase?.Info(GetLogDetail(invocation));
    }

    private string GetLogDetail(IInvocation invocation)
    {
        var logParameters = new List<LogParameters>();
        for (var i = 0; i < invocation.Arguments.Length; i++)
        {
            logParameters.Add(new LogParameters
            {
                Name = invocation.GetConcreteMethod().GetParameters()[i].Name,
                Value = invocation.Arguments[i],
                Type = invocation.Arguments[i].GetType().Name,
            });
        }
        var logDetail = new LogDetails
        {
            MethodName = invocation.Method.Name,
            Parameters = logParameters,
            User = (_httpContextAccessor.HttpContext == null || _httpContextAccessor.HttpContext.User.Identity.Name == null)
                ? "?"
                : _httpContextAccessor.HttpContext.User.Identity.Name
        };
        return JsonConvert.SerializeObject(logDetail);
    }
}

MethodInterceptor.cs

public abstract class MethodInterceptor : MethodInterceptorBaseAttribute
{
    public override void Intercept(IInvocation invocation)
    {
        var isSuccess = true;
        OnBefore(invocation);
        try
        {
            invocation.Proceed();
            var result = invocation.ReturnValue as Task;
            result?.Wait();
        }
        catch (Exception e)
        {
            isSuccess = false;
            OnException(invocation, e);
            throw;
        }
        finally
        {
            if (isSuccess)
            {
                OnSuccess(invocation);
            }
        }
        OnAfter(invocation);
    }

    protected virtual void OnBefore(IInvocation invocation) { }
    protected virtual void OnAfter(IInvocation invocation) { }
    protected virtual void OnException(IInvocation invocation, Exception e) { }
    protected virtual void OnSuccess(IInvocation invocation) { }
}

MethodInterceptorBaseAttribute.cs

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true, Inherited = true)]
public abstract class MethodInterceptorBaseAttribute : Attribute, IInterceptor
{
    public int Priority { get; set; }
    public virtual void Intercept(IInvocation invocation) { }
}

I have not been able to find a solution for almost a month. Any ideas?

A: Tarık, in my opinion the fault is not registering the generic types in the assembly configuration. You should register generic types in AutofacResolverModule.cs:

builder.RegisterAssemblyOpenGenericTypes(assembly).AsImplementedInterfaces()
    .EnableInterfaceInterceptors(new ProxyGenerationOptions()
    {
        Selector = new AspectInterceptorSelector()
    }).SingleInstance().InstancePerDependency();
doc_4344
>>> import socket
>>> socket.gethostbyname('http://yahoo.com')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
socket.gaierror: [Errno -2] Name or service not known

A: It is because, quite frankly, http://yahoo.com is in no way a domain name, which gethostbyname expects from you. http://yahoo.com is a URL.

>>> import socket
>>> socket.gethostbyname("yahoo.com")
'69.147.114.224'

A: Very simple. "http://yahoo.com" is not a host name. Try socket.gethostbyname('yahoo.com') :)
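If the input might arrive as a full URL, one way to follow the answers above is to strip the scheme and path with the standard library before resolving. A minimal sketch (the helper names are illustrative, not from the original post):

```python
import socket
from urllib.parse import urlparse


def extract_host(name_or_url):
    """Return the bare host name, stripping a URL scheme/path if one is present."""
    # urlparse only fills .netloc when a scheme like "http://" is present;
    # otherwise treat the whole string as already being a host name.
    return urlparse(name_or_url).netloc or name_or_url


def resolve(name_or_url):
    # gethostbyname expects a domain name, never a URL
    return socket.gethostbyname(extract_host(name_or_url))
```

With this, resolve("http://yahoo.com") looks up "yahoo.com" rather than failing on the full URL.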
doc_4345
volatile CString* a0;
CString* a1 = const_cast<CString *>(a0);

// error C2440: 'const_cast' : cannot convert from 'volatile CString' to 'CString'
volatile CString b0;
CString b1 = const_cast<CString>(b0);

I was wondering, why does const_cast only work for pointers? How can I make the second case compile?

A: const_cast acts on pointers and references, to remove const and volatile qualifiers. It doesn't make sense to use it to cast to an object type, since then you would be making a copy which need not have the same cv-qualifiers as the original anyway.

Your second example will compile if you cast to a non-volatile reference:

volatile CString b0;
CString & b1 = const_cast<CString &>(b0);

However, using that reference gives undefined behaviour, since the underlying object is itself volatile. You should only use const_cast to remove qualifications when you know that the underlying object does not have those qualifications (or in the case of removing const, when you know that the result won't be used to modify the object).

You can only get a copy if the object allows copying of volatile objects (perhaps with a copy constructor taking a reference-to-volatile or a volatile-qualified function or conversion operator). If CString doesn't provide any of these, then you can't safely copy a volatile object of that type.

A: because in the second case you are actually copying b0 and not referring to the original object. In that case you need a reference:

const CString &b1 = b0;
doc_4346
Oracle JInitiator: Version 1.3.1.22
Using JRE version 1.3.1.22-internal Java HotSpot(TM) Client VM
User home directory = C:\Documents and Settings\Administrator
Proxy Configuration: Browser Proxy Configuration
JAR cache enabled
    Location: C:\Documents and Settings\Administrator\Oracle Jar Cache
    Maximum size: 50 MB
    Compression level: 0
----------------------------------------------------
c:   clear console window
f:   finalize objects on finalization queue
g:   garbage collect
h:   display this help message
l:   dump classloader list
m:   print memory usage
q:   hide console
s:   dump system properties
t:   dump thread list
x:   clear classloader cache
0-5: set trace level to <n>
----------------------------------------------------
Loading http://192.168.10.110/forms/java/frmall_jinit.jar from JAR cache
Loading http://192.168.10.110/forms/java/frmwebutil.jar from JAR cache
Loading http://192.168.10.110/forms/java/jacob.jar from JAR cache
proxyHost=null
proxyPort=0
connectMode=HTTP, native.
Forms Applet version is : 10.1.2.0
Exception occurred during event dispatching:
java.lang.ExceptionInInitializerError: java.security.AccessControlException: access denied (java.lang.RuntimePermission loadLibrary.jacob)
    at java.security.AccessControlContext.checkPermission(Unknown Source)
    at java.security.AccessController.checkPermission(Unknown Source)
    at java.lang.SecurityManager.checkPermission(Unknown Source)
    at java.lang.SecurityManager.checkLink(Unknown Source)
    at java.lang.Runtime.loadLibrary0(Unknown Source)
    at java.lang.System.loadLibrary(Unknown Source)
    at com.jacob.com.Dispatch.<clinit>(Dispatch.java:537)
    at oracle.forms.webutil.ole.OleFunctions.create_obj(OleFunctions.java:398)
    at oracle.forms.webutil.ole.OleFunctions.getProperty(OleFunctions.java:189)
    at oracle.forms.handler.UICommon.onGet(Unknown Source)
    at oracle.forms.engine.Runform.onGetHandler(Unknown Source)
    at oracle.forms.engine.Runform.processMessage(Unknown Source)
    at oracle.forms.engine.Runform.processSet(Unknown Source)
    at oracle.forms.engine.Runform.onMessageReal(Unknown Source)
    at oracle.forms.engine.Runform.onMessage(Unknown Source)
    at oracle.forms.engine.Runform.processEventEnd(Unknown Source)
    at oracle.ewt.lwAWT.LWComponent.redispatchEvent(Unknown Source)
    at oracle.ewt.lwAWT.LWComponent.processEvent(Unknown Source)
    at java.awt.Component.dispatchEventImpl(Unknown Source)
    at java.awt.Container.dispatchEventImpl(Unknown Source)
    at java.awt.Component.dispatchEvent(Unknown Source)
    at java.awt.LightweightDispatcher.retargetMouseEvent(Unknown Source)
    at java.awt.LightweightDispatcher.processMouseEvent(Unknown Source)
    at java.awt.LightweightDispatcher.dispatchEvent(Unknown Source)
    at java.awt.Container.dispatchEventImpl(Unknown Source)
    at java.awt.Window.dispatchEventImpl(Unknown Source)
    at java.awt.Component.dispatchEvent(Unknown Source)
    at java.awt.EventQueue.dispatchEvent(Unknown Source)
    at java.awt.EventDispatchThread.pumpOneEventForHierarchy(Unknown Source)
    at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
    at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
    at java.awt.EventDispatchThread.run(Unknown Source)

How to solve this problem?
doc_4347
group = 'com.acme'
version = '1.0'

repositories {
    jcenter()
}

if (gradle.startParameter.taskNames.contains("release")) {
    def taskNames = gradle.startParameter.taskNames
    taskNames.add(0, "checkReleaseDoesntExist")
    gradle.startParameter.taskNames = taskNames
}

task checkReleaseDoesntExist() {
}

task release() {
    println "Building release"
}

checkReleaseDoesntExist.doLast {
    println "Checking repositories to make sure release hasn't already been built"
    // TODO What do I do here?
}

A: You could declare a configuration with only the artifacts for which you are trying to see if they are already published. Upon resolving this configuration, you would have an empty fileset or not. So something along the lines of:

configurations {
    checkRelease {
        transitive = false
    }
}

dependencies {
    checkRelease "$group:$name:$version"
}

task('checkReleaseDoesntExist') {
    doLast {
        println "Checking repositories to make sure release hasn't already been built"
        try {
            if (!configurations.checkRelease.files.isEmpty()) {
                // already exists
            }
        } catch (ResolveException e) {
            // doesn't exist
        }
    }
}
doc_4348
A: You need to make sure you have the canonical object first:

canonical = object.getCanonical()

You can then link your translation by calling addTranslationReference on the translation:

translatedObject.addTranslationReference(canonical)

That's all. The LinguaPlone API is rather under-documented; your best bet is to look at the I18NBaseObject source code. The most important thing to remember is that there is always one canonical object, with any translations linked to it via references, where the reference points from translations to the canonical object.

A: I think it's obj.addTranslationReference(tr_obj), the target language is taken from tr_obj. But that's only based on a brief look at I18NBaseObject.py: Plone is my day job and it's weekend. :)
doc_4349
I want to integrate a Facebook like button in the page. This works ONLY when I paste the entire iframe in a Razor macro and incorporate the macro on my master page. However, if I want to include some parameters and I take the iframe URL out of the code snippet to paste another parameter inside that URL, it stops working, even though after rendering both iframe URLs are 100% identical.

Anyone got an idea how I can make sure that the correct site URL in that Facebook iframe parameter can be changed? (depending on the hostname I bind in the Umbraco CMS)

Here is my Razor code that does not work:

@{
    var currentLangPath = "www.newurl.com";
    var iframeSource = "http://www.facebook.com/plugins/like.php?app_id=207125959336150&amp;href=" + currentLangPath + "&amp;send=false&amp;layout=button_count&amp;width=110&amp;show_faces=false&amp;action=like&amp;colorscheme=dark&amp;font=arial&amp;height=21";
}
<a href="#visitFanpage"><img src="/images/facebook_logo.jpg" alt="Solex Facebook" /></a>
<p>Become part of the community, and feel free to share your experience.</p>
<div class="social">
    <iframe src="@iframeSource" scrolling="no" frameborder="0" style="border:none; overflow:hidden; width:110px; height:25px;" allowTransparency="true"></iframe>
</div>

Here is the content of a Razor file that does work:

<a href="#visitFanpage"><img src="/images/facebook_logo.jpg" alt="Solex Facebook" /></a>
<p>Become part of the community, and feel free to share your experience.</p>
<div class="social">
    <iframe src="http://www.facebook.com/plugins/like.php?app_id=207125959336150&amp;href=www.newurl.com&amp;send=false&amp;layout=button_count&amp;width=110&amp;show_faces=false&amp;action=like&amp;colorscheme=dark&amp;font=arial&amp;height=21" scrolling="no" frameborder="0" style="border:none; overflow:hidden; width:110px; height:25px;" allowTransparency="true"></iframe>
</div>

Note that both these Razor files have EXACTLY the same output, so I find it very weird that the dynamic URL one does not work.

A: The output isn't actually the same; the iframeSource variable gets HTML encoded, and therefore an "&amp;" turns into: &amp;amp;

You could do two things:

* Remove "amp;" after each "&" in the iframeSource
* Instead of @iframeSource, use @Html.Raw(iframeSource)
doc_4350
$("#myform").validate({
    rules: {
        details: {
            required: "#other:checked"
        }
    },
    debug: true
});

$("#other").click(function() {
    $("#details").valid();
});

I'm trying to make the text input required if radio button #guide is selected in the example below:

<input type="radio" id="outfitter" name="memtype" value="Outfitter" />Outfitter $125
<input type="radio" id="guide" name="memtype" value="Guide" />Guide $75
<input type="text" id="sponout" name="sponout" size="75" />

I just don't know where to place

$("#other").click(function() {
    $("#details").valid();
});

within the rules validation coding.

A: Here's one way you could do this:

$("#myform").validate({
    rules: {
        sponout: {
            required: "#guide:checked"
        }
    }
});

$("input[name='memtype']").change(function () {
    $("#myform").valid();
});

Example: http://jsfiddle.net/kqczf/1/
doc_4351
I have Visual Studio 2017 Community edition and my C++ console application compiles fine. What settings should I change in Visual Studio 2017 so that the C++ compiler doesn't use the latest C++ version but "switches to" the C++03 standard?

(What I'm aiming at is: later, as a proof of concept, I need to copy these source files into a Linux box 'makefile project'. The Linux box doesn't conform to C++11 standards, so I need to test that it works.)

A: Visual Studio doesn't support this. Your options are C++latest, C++17, and C++14. C++17 mode is fairly well conforming, and C++14 mode is close.

Further reading on learn.microsoft.com: /std (Specify Language Standard Version)
doc_4352
The input I use is downloaded from Maya's "Getting started" section. The script is written in Python in order to launch in standalone mode: http://pastebin.com/3hptPYbB

The script works fine when launched within Maya. In standalone mode, the nCloth simulation is apparently not triggered, as the result is the scene before simulation and the script terminates quite quickly.

Does anyone know why the result is different between the two running modes and how to fix this problem? Maybe an nCloth plugin should be loaded beforehand?

Thanks,

A: In this case I think it's actually the flags on bakeResults. I was able to get this to work in a Maya standalone:

import maya.mel
import maya.cmds as cmds

cmds.file(new=True, f=True)
cmds.polyCube()
cmds.polyPlane(sx=21, sy=22)
cmds.xform(t=(.0005, .015, .0005))  # note units - my maya is working in meters...
maya.mel.eval("createNCloth 0;")
cmds.select('pCube1')
maya.mel.eval("makeCollideNCloth")
cmds.playbackOptions(animationStartTime=0)
cmds.playbackOptions(animationEndTime=100)
cmds.play(f=True)
cmds.bakeResults('pPlane1', simulation=True, t=(1, 20), disableImplicitControl=True, sb=1, shape=True, cp=True)

# use openMaya to set the frame - cmds.currentTime does not
# stick in standalone:
import maya.OpenMaya as om
om.MGlobal.viewFrame(20)

# delete the cloth solution
cmds.delete('nCloth1', 'nRigid1')
# delete the orphaned shape
cmds.delete('pPlaneShape1')

cmds.file(rename="C:/test/cloth.mb")
cmds.file(save=True)

The two issues I noticed were the need to set the shape and control point flags in bakeResults, and the need to separate out the baked shape from the original geometry after deleting the cloth solver. In this example, deleting the cloth and the rigid body without deleting pPlaneShape1 leaves a copy of the original plane hanging around in the air. You may need to take out the deletions and look at the results to know what to do in this scene.
doc_4353
The requirements are that there should be one single list, with one element on each row. Each element must be a custom control. The whole list should be dynamically resizable, so that when you make it longer/shorter the list adds a scroll bar when needed, and when you make it thinner/wider the custom controls resize to the width of the list.

We've tried using a FlowLayoutPanel but haven't gotten resizing to work the way we want. Preferably we should only have to set anchoring of the custom controls to Left & Right. We've also thought about using a TableLayoutPanel but found adding rows dynamically to be too big an overhead so far.

This must be quite a common use case, and it seems a bit weird to me that the FlowLayoutPanel has no intuitive way of doing this. Has anyone done something similar, or have tips or tricks to get us under way?

Cheers!
/Adam

A: If you don't want to use databinding (via the DataRepeater control, as mentioned above), you could use a regular Panel control and set its AutoScroll property to true (to enable scrollbars). Then, you could manually add your custom controls, and set the Dock property of each one to Top.

A: .NET 3.5 SP1 introduced a DataRepeater Windows Forms control which sounds like it'd do what you want. Bind it to the list of "downloads" (or whatever your list represents) and customize each item panel to include the controls you need.
doc_4354
For example, given the following function:

def fooFunc(hashMap: Map[A, Seq[B]]): Future[Either[Error, Unit]]

I tried something like:

def fooFunc(hashMap: Map[A, Seq[B]]): Future[Either[Error, Unit]] = {
  val result = for {
    _ <- hashMap.map(entry =>
      entry._2.map(value =>
        Future.sequence(futureFunc(value, entry._1)).liftF
      )
    )
  } yield ()
  result.value
}

which gives the compile error:

Type mismatch, expected: NotInferedM[Future[NotInferedA]], actual: [Future[Either[Error, Unit]]]

This is my first time dealing with futures and iterating over a hash map, so I'm confused how exactly to go about this and deal with the Iterable. Any tips would be appreciated.

A: We can map over a HashMap just like any other collection such as List:

m.flatMap({ case (key, values) => values.map(f(_, key)) })

where m: Map[A, Seq[B]] and f: (B, A) => Future[Unit]. This returns Iterable[Future[Unit]], so we can use Future.sequence to invert to Future[Iterable[Unit]]:

m
  .flatMap({ case (key, values) => values.map(f(_, key)) })
  .pipe(Future.sequence(_))

Futures internally represent success/failure using Try, which we can convert to Either using transform and toEither like so:

m                                                           // : Map[A, Seq[B]]
  .flatMap({ case (key, values) => values.map(f(_, key)) }) // : Iterable[Future[Unit]]
  .pipe(Future.sequence(_))                                 // : Future[Iterable[Unit]]
  .map(_ => ())                                             // : Future[Unit]
  .transform(tryResult => Success(tryResult.toEither))      // : Future[Either[Throwable, Unit]]

which returns the required Future[Either[Error, Unit]] type. The pipe method comes from import util.chaining._

Personally, the transformation from Map[A, Seq[B]] to Future[Either[Error, Unit]] feels a bit suspect.
doc_4355
I'm trying to implement the sticky drag in my layout. So I have something like this:

force = d3.layout.force()
    .charge(-120)
    .linkDistance(30)
    .size([width, height])
    .on("tick", tick);

var nodeDrag = force.drag()
    .on("dragstart", dragstart);

// then for drag I call nodeDrag on the node after I append circle:
.call(nodeDrag)

function dragstart(d) {
    d3.select(this).classed("fixed", d.fixed = true);
}

This returns the error:

TypeError: Cannot read property 'on' of undefined

which points at the line:

var nodeDrag = force.drag()

If I use d3.behavior.drag(), it loads the visualization but I can't drag, as I'm guessing it's not using my force layout. Any ideas?

A: I managed to solve it myself. Basically I had to work out the movement myself; that's why none of the nodes were working. So I implemented it like so:

var nodeDrag = d3.behavior.drag()
    .on("dragstart", dragstart) //-when you first click
    .on("drag", dragmove)       //-when you're dragging
    .on("dragend", dragend);    //-when the drag has ended

function dragstart(d, i) //-ability to move nodes to one place and keep them there
{
    force.stop(); //-stop the force layout as soon as you move nodes
}

function dragmove(d, i) //-updates the co-ordinates
{
    //d.px += d3.event.dx;
    //d.py += d3.event.dy;
    d.x += d3.event.dx;
    d.y += d3.event.dy;
    d3.select(this).attr("transform", function(d, i) {
        return "translate(" + [d.x, d.y] + ")";
    });
    tick(); //-updates positions
}

function dragend(d, i) //-when you stop dragging the node
{
    d.fixed = true; //-D3 giving the node a fixed attribute
    tick(); //-update positions
}

The main reason why the nodes weren't moving is because I wasn't telling them to move. Also, calling tick updated the positions of all the other nodes.

Hope that helps someone that runs across the same problem :)
doc_4356
I'm using pandas, but I should calculate the number of months using the following approach:

import pandas as pd
pd.period_range('2014-04', periods=<number-of-month>, freq='M')

Is there any way to create it automatically? I mean, for example, give it two arguments as the beginning and end of the interval and then it creates an index for each month; in other words:

pseudo-code:
pd.period_range(start='2014-04', end='2014-07', freq='M')

Expected output for the above pseudo-code is [0, 0, 0] because there are 3 months from 2014-04 to 2014-07.

Expected DataFrame to implement, where I want to access entries by index:

index  date     count
0      2014-04  0
1      2014-05  0
2      2014-06  0

At first the array places zero for all of the indices, and I call that column count. I want to increment the count column using the date. For example:

a = pd.period_range(start='2014-04', end='2014-07', freq='M')
a['2014-04'] += 1

index  date     count
0      2014-04  1
1      2014-05  0
2      2014-06  0

How can I implement it?

A: IIUC, make a pandas.Series with index = pd.date_range(...):

import pandas as pd
s = pd.Series(0, index=pd.date_range(start='2014-04', end='2019-08', freq="M"))
s['2014-04'] += 1
s.head()

Output:

2014-04-30    1
2014-05-31    0
2014-06-30    0
2014-07-31    0
2014-08-31    0
Freq: M, dtype: int64

A: You need to create a PeriodIndex by period_range and then, to add 1 to the count column, use loc:

a = pd.period_range(start='2014-04', end='2014-07', freq='M')
df = pd.DataFrame({'count': 0}, index=a)
df.loc['2014-04', 'count'] += 1
print (df)

         count
2014-04      1
2014-05      0
2014-06      0
2014-07      0

Solution with Series:

a = pd.period_range(start='2014-04', end='2014-07', freq='M')
s = pd.Series(0, index=a)
s['2014-04'] += 1
print (s)

2014-04    1
2014-05    0
2014-06    0
2014-07    0
Freq: M, dtype: int64
doc_4357
I always end up with a

Work [ id=****, tags={ **** } ] was cancelled

But if I use WorkManager#enqueue, the very same work request runs perfectly. I'm a bit clueless here. I tried to search for a similar problem elsewhere but my DDG-fu has failed me. I fear that this is tied to a coroutine problem, but I'm not comfortable enough with them to know where to search...

A: This is expected. As per the ExistingWorkPolicy.REPLACE Javadoc:

If there is existing pending (uncompleted) work with the same unique name, cancel and delete it. Then, insert the newly-specified work.

Therefore it is expected that any work that uses that same unique name will be cancelled. This is true for any type of work: a Worker will get a callback to onStopped(), which is the sign that it needs to cooperatively cancel itself. Coroutines support cancellation by default, so your CoroutineWorker handles cancellation automatically.
doc_4358
Below is the code I put together. However, if the List cell is drawn offscreen, the Button never shows up once it is scrolled into view. Can anyone replicate this? How do I fix it? Is it just a bug in SwiftUI?

EDIT: For clarity, this is macOS Catalyst.

Here's where the list is used:

List(folderurls, id: \.self) { url in
    ListImageViewer(url: url)
}

And here's the full code for ListImageViewer:

struct ListImageViewer: View {
    var url: URL

    @State var expanded: Bool = false {
        didSet {
            if expanded {
                self.coordinator.refreshListing(for: self.url)
            } else {
                self.coordinator.contents = []
            }
        }
    }

    @ObservedObject var coordinator = FolderListingCoordinator()

    var body: some View {
        VStack {
            HStack {
                Button(action: { self.expanded.toggle() }) {
                    Text(self.expanded ? "Collapse" : "Expand")
                }
                Text("\(url.lastPathComponent)")
            }
            HStack {
                ForEach(coordinator.contents, id: \.self) { url in
                    ImageView(url: url)
                }
                Spacer()
            }
        }
    }
}

A: I have submitted a bug for a similar issue. One workaround I discovered was adding a modifier to the VStack that applies to its subviews. For me I had issues drawing TextFields, so I changed the following:

var body: some View {
    Section(header: Text("Row \(row)")) {
        HStack(alignment: .center, spacing: 20) {
            Text("Name")
            Spacer()
            TextField("Name", text: $name)
                .frame(maxWidth: 120)
                .textFieldStyle(RoundedBorderTextFieldStyle())
        }
        HStack(alignment: .center, spacing: 20) {
            Toggle("Fixed", isOn: $isFixed)
        }
    }
}

to

var body: some View {
    Section(header: Text("Row \(row)")) {
        HStack(alignment: .center, spacing: 20) {
            Text("Name")
            Spacer()
            TextField("Name", text: $name)
                .frame(maxWidth: 120)
        }
        HStack(alignment: .center, spacing: 20) {
            Toggle("Fixed", isOn: $isFixed)
        }
    }.textFieldStyle(RoundedBorderTextFieldStyle())
}

You may find a similar workaround by adding some style to your containing VStack.

A: Welp, I stumbled upon an answer to this question somehow. Once I gave the items in the list the .id modifier, they stopped rendering incorrectly when offscreen.

ListImageViewer(url: url)
    .id(url)

The performance is poor, but it works as expected finally.
doc_4359
My question is: what are the business/technical reasons why you would disable TDE on an Azure SQL DB? I've done a bit of searching; people are disabling it, but I can't see why you would.

A: It has an impact on DTU usage. TDE has CPU overhead despite supporting the Intel AES-NI hardware acceleration of encryption. What I mean is that some DTUs are needed to encrypt and decrypt data, but I would say you won't notice the impact for the typical query. TDE's impact on DTU usage may force you to scale up the service tier and pay more.

A: Generally, you should not. The only reason it is even optional is that there is a small CPU overhead and that the service was running without it in the past. So, there is a desire from the service to not break anyone who was running close to the limits of their resource reservation size before and would see negative impact after turning it on. The possibility of this is remote, honestly, as the overhead is not large. There are always "outlier" application patterns, however, that may see enough impact to cause them problems in production if we just turned it on for everyone.
doc_4360
Se produjo un error durante el intento de conexión ya que la parte conectada no respondió adecuadamente tras un periodo de tiempo

Which is something like this:

An error occurred during the connection attempt because the connected party did not respond properly after a period of time

The error comes when I call the method Send(mailmessage). I'm pretty sure it's not a code problem, as it works fine on my PC; I think it's some kind of network configuration. But the network manager told my boss that I can use the port (587) and have no firewall restrictions, so I don't know what else to try. Thanks for your answers...

I don't know if this is useful, but I'm currently connected via TeamViewer to a PC at my job that is also connected via Remote Desktop to another PC there. I'm also working with C# in a Visual Studio (.NET) project.
doc_4361
pyinstaller.py [opts] nameofscript.py

The prompt then tells me:

Error: PyInstaller for Python 2.6+ on windows needs pywin32. Please install from http://sourceforge.net/projects/pywin32/

Why is it that PyInstaller can't "see" that I have already downloaded pywin32?

A: If you are using Python 2.7, the compat.py in C:/Python27/Lib/site-packages/PyInstaller needs to be changed to:

if is_win:
    try:
        #from win32ctypes.pywin32 import pywintypes  # noqa: F401
        #from win32ctypes.pywin32 import win32api
        import pywintypes
        import win32api
    except ImportError:
        # This environment variable is set by setup.py
        # - It's not an error for pywin32 to not be installed at that point
        if not os.environ.get('PYINSTALLER_NO_PYWIN32_FAILURE'):
            raise SystemExit('PyInstaller cannot check for assembly dependencies.\n'
                             'Please install PyWin32 or pywin32-ctypes.\n\n'
                             'pip install pypiwin32\n')

in order to work.

A: Got it! I found this useful tutorial: http://bojan-komazec.blogspot.ca/2011/08/how-to-create-windows-executable-from.html

The 3rd paragraph tells you how to get around the problem. The link he points to is tricky, though. You need to go here to get the pywin32 installer: http://sourceforge.net/projects/pywin32/files/pywin32/Build%20218/

A: You should install pywin32 to the Python path first and then verify that it has succeeded by running this Python command:

import win32com

If there is no error, pywin32 is installed.
doc_4362
Facing an issue in handling a date picker in an iOS app while automating the iOS application.

* The date picker always shows the current date, so it changes every day.
* The date format is MMM DD, YYYY.
* As it changes on a daily basis, it's hard to automate with rolling measurement.

Code:

List<WebElement> dateValue = driver.findElementsByClassName("XCUIElementTypePickerWheel");
for (int i = 0; i < dateValue.size(); i++) {
    System.out.println(dateValue.get(i).getText());
}
dateValue.get(0).sendKeys("23");
dateValue.get(0).sendKeys(Keys.TAB);
dateValue.get(1).sendKeys("December");
dateValue.get(1).sendKeys(Keys.TAB);
dateValue.get(2).sendKeys("2000");
dateValue.get(2).sendKeys(Keys.TAB);

A: Appium provides a mobile: method called selectPickerWheelValue that can be used for navigating a picker wheel using forward-and-back gestures:

HashMap<String, Object> params = new HashMap<>();
params.put("order", "next");
params.put("offset", 0.15);
params.put("element", ((RemoteWebElement) pickerWheelElement).getId());
driver.executeScript("mobile: selectPickerWheelValue", params);

This approach is more robust for selecting values. You can check the full example here.
doc_4363
It compiles without errors when I run the node-installed TypeScript from the Windows command line using:

npm run tsc

When I first tried building the project in Visual Studio 2015, the TypeScript didn't transpile to JavaScript, so I tried adding the following to the .csproj file (there were previously no instructions to include TypeScript files):

<ItemGroup>
    <TypeScriptCompile Include="**\*.ts" />
</ItemGroup>

Now I get the following build errors:

My project structure is like this:

And my tsconfig.json:

{
  "compilerOptions": {
    "target": "es5",
    "module": "system",
    "moduleResolution": "node",
    "sourceMap": true,
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "lib": [ "es2015", "dom" ],
    "noImplicitAny": false,
    "suppressImplicitAnyIndexErrors": true,
    "noStrictGenericChecks": true
  },
  "include": [ "**/*.ts" ]
}

What could be going on, and what might I do to get the project building successfully?

A: I discovered that the reason my project wasn't building successfully through Visual Studio is that I needed to include my tsconfig.json file in the solution. That causes Visual Studio to ignore the TypeScript configuration in the .csproj file and use tsconfig.json instead.

I had to unload and re-load the project a few times, saving in between, until the VS2015 project properties were recognised by Visual Studio as being disabled. Closing and re-opening Visual Studio, saving any changes to the project if prompted, would probably achieve the same thing:

The project now builds without having to use:

<TypeScriptCompile Include="**\*.ts" />

and I no longer have any errors in the solution.

(I also installed and uninstalled the 'TypeScript Compiler' using the NuGet Package Manager in Visual Studio, although I'm unsure if this had any effect, since I'd already installed the TypeScript Build Tools for VS2015 from https://www.microsoft.com/en-us/download/details.aspx?id=48593)
doc_4364
I'm successfully encrypting small data, but I don't know how to encrypt large data. Below is my source.

package com.exam.encrypttest;

import java.security.NoSuchAlgorithmException;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.NoSuchPaddingException;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

import android.app.Activity;
import android.os.Bundle;
import android.view.Menu;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;
import android.widget.RadioButton;
import android.widget.TextView;

public class MainActivity extends Activity {
    TextView tv1;
    Button but1;
    EditText edit1;
    String type = "AES";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        edit1 = (EditText) findViewById(R.id.edit01);
        edit1.setText("ABCDEFGH");
        tv1 = (TextView) findViewById(R.id.text01);
        but1 = (Button) findViewById(R.id.button01);
        but1.setOnClickListener(new ButtonClick());
    }

    class ButtonClick implements View.OnClickListener {
        public void onClick(View v) {
            int sel = v.getId();
            switch (sel) {
                case R.id.button01:
                    Encryption(edit1.getText().toString(), type);
                    break;
            }
        }

        private void Encryption(String text, String key) {
            long time1 = System.currentTimeMillis();
            Cipher cipher;
            SecretKeySpec skeySpec;
            KeyGenerator kgen;
            try {
                kgen = KeyGenerator.getInstance(key);
                kgen.init(128); // init before generating the key
                SecretKey skey = kgen.generateKey();
                byte[] raw = skey.getEncoded();
                skeySpec = new SecretKeySpec(raw, key);
                cipher = Cipher.getInstance(key);
                byte[] encrypted = Encrypt(text, skeySpec, cipher);
                String sendtext = Base64.encode(encrypted);
                long time2 = System.currentTimeMillis();
                byte[] abc = Base64.decode(sendtext);
                byte[] decrypted = Decrypt(abc, skeySpec, cipher);
                long time3 = System.currentTimeMillis();
                tv1.setText("Encrypt Time(ms) : " + (time2 - time1));
            } catch (NoSuchAlgorithmException e) {
                e.printStackTrace();
            } catch (NoSuchPaddingException e) {
                e.printStackTrace();
            } catch (ArrayIndexOutOfBoundsException e) {
                e.printStackTrace();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        public byte[] Encrypt(String data, SecretKeySpec keySpec, Cipher cipher)
                throws Exception, ArrayIndexOutOfBoundsException {
            cipher.init(Cipher.ENCRYPT_MODE, keySpec);
            byte[] encrypted = cipher.doFinal(data.getBytes());
            return encrypted;
        }

        public byte[] Decrypt(byte[] encrypted_data, SecretKeySpec keySpec, Cipher cipher)
                throws Exception, ArrayIndexOutOfBoundsException {
            cipher.init(Cipher.DECRYPT_MODE, keySpec);
            byte[] decrypted = cipher.doFinal(encrypted_data);
            return decrypted;
        }
    }
}

It works for small data, but it isn't working for large data. Please give me some advice.

A: Use multiple calls to Cipher.update(ByteBuffer, ByteBuffer) and play around with buffer sizes. Finally, perform a single call to doFinal. You need to define a clear set of parameters before performing any testing, otherwise you are comparing differently generated numbers. Also don't forget that Java relies on the JIT, so for long-running applications you may want to perform a number of iterations before measuring (due to JIT optimizations taking place while your application is running). Note that e.g. Guava has a Stopwatch class which is easier to use than System.currentTimeMillis().
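As a rough illustration of the update()-then-doFinal() pattern the answer describes, here is a sketch in plain Java SE (not the original Android activity; the ChunkedAes class name, chunk size, and default "AES" transformation are illustrative assumptions):

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class ChunkedAes {

    // Feed the data to the cipher in fixed-size chunks via update(),
    // then flush the final (padded) block with a single doFinal() call.
    public static byte[] process(Cipher cipher, byte[] data, int chunkSize) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int off = 0; off < data.length; off += chunkSize) {
            int len = Math.min(chunkSize, data.length - off);
            byte[] part = cipher.update(data, off, len);
            if (part != null) {
                out.write(part);
            }
        }
        out.write(cipher.doFinal());
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kgen = KeyGenerator.getInstance("AES");
        kgen.init(128);
        SecretKey key = kgen.generateKey();

        byte[] plain = new byte[1_000_000];   // stands in for "large data"
        Arrays.fill(plain, (byte) 'A');

        Cipher enc = Cipher.getInstance("AES");
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] encrypted = process(enc, plain, 8192);

        Cipher dec = Cipher.getInstance("AES");
        dec.init(Cipher.DECRYPT_MODE, key);
        byte[] decrypted = process(dec, encrypted, 8192);

        System.out.println(Arrays.equals(plain, decrypted));  // prints "true"
    }
}
```

For genuinely large inputs you would stream through CipherInputStream/CipherOutputStream instead of buffering everything in memory, and in production code you would name an explicit transformation (e.g. "AES/CBC/PKCS5Padding" with an IV) rather than relying on the provider default.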
doc_4365
I am taking a C++ course, so please go easy on me. My code runs without errors when compiled, but the problem I have is that the program goes into an infinite loop if you type a letter at the menu option or as your answer input. Why is this, and what can I do to make it run normally?

#include <iostream>
#include <iomanip>
#include <cstdlib>
#include <ctime>
using namespace std;

int main()
{
    int num1, num2, choice, studentAnswer, correctAnswer;
    srand(time(0));
    do
    {
        cout << "\n-----------------------------------\n"
             << " Math Tutor\n"
             << " M E N U\n"
             << "-----------------------------------\n";
        cout << "1. Addition problem\n";
        cout << "2. Subtraction problem\n";
        cout << "3. Multiplication problem\n";
        cout << "4. Division problem\n";
        cout << "5. Quit this program\n";
        cout << "------------------------------\n";
        cout << "Enter your choice (1-5): ";
        cin >> choice;
        while (choice < 1 || choice > 5)
        {
            cout << "The valid choices are 1, 2, 3, "
                 << "4, and 5. Please choose: ";
            cin >> choice;
        }
        switch (choice)
        {
        case 1:
            num1 = 1 + rand() % 500;
            num2 = 1 + rand() % 500;
            correctAnswer = num1 + num2;
            cout << "\n\n";
            cout << " " << setw(4) << num1 << endl;
            cout << " +" << setw(4) << num2 << endl;
            cout << " " << "----" << endl;
            cout << " ";
            break;
        case 2:
            num1 = 1 + rand() % 999;
            num2 = 1 + rand() % 999;
            while (num2 > num1)
                num2 = 1 + rand() % 999;
            correctAnswer = num1 - num2;
            cout << "\n\n";
            cout << " " << setw(4) << num1 << endl;
            cout << " -" << setw(4) << num2 << endl;
            cout << " " << "----" << endl;
            cout << " ";
            break;
        case 3:
            num1 = 1 + rand() % 100;
            num2 = 1 + rand() % 9;
            correctAnswer = num1 * num2;
            cout << "\n\n";
            cout << " " << setw(4) << num1 << endl;
            cout << " *" << setw(4) << num2 << endl;
            cout << " " << "----" << endl;
            cout << " ";
            break;
        case 4:
            num2 = 1 + rand() % 9;
            num1 = num2 * (rand() % 50 + 1);
            correctAnswer = num1 / num2;
            cout << "\n\n";
            cout << " " << num1 << " / " << num2 << " = ";
            break;
        case 5:
            cout << "Thank you for using Math Tutor.\n\n";
            break;
        }
        if (choice >= 1 && choice <= 4)
        {
            cin >> studentAnswer;
            if (studentAnswer == correctAnswer)
                cout << "\n\nCongratulations! That's right.\n\n";
            else
                cout << "\n\nSorry, the correct answer is "
                     << correctAnswer << ".\n\n";
        }
    } while (choice != 5);
    return 0;
}
doc_4366
I managed to retrieve emails through standard scopes from all major platforms (Google, Facebook, LinkedIn) except Twitter. Twitter requires special permission to allow "verifying" a user's details (e.g. email) through an additional Twitter API call. In order to perform such a call, we need to utilize the Twitter access token and Twitter access token secret that are passed through TwitterAuthenticatedContext.

Question: What would be the right place to perform such a call? Should it be the OnAuthenticated delegate in TwitterAuthenticationProvider? It might become a performance bottleneck, because it will run every time regardless of whether we have already obtained the user's email. What alternatives might be valid?

A: You could pass the Twitter token and secret via claims to your user service (in OnAuthenticated). Then in the user service you can determine whether you actually need to make that additional round trip to Twitter or not.
doc_4367
And if not, why not? Surely this functionality has been desired by DHCP administrators since the protocol was born. It makes changing a router address on a big network very hard without pervasive client automation.

A: Note: this only partially addresses the OP's question. To force all Wi-Fi DHCP clients to renew their DHCP lease, this is what you can do, assuming the Wi-Fi DHCP clients connect to your router's SSID and your router's management interface supports this functionality:

* disable the DHCP server
* turn off both the 2.4G and 5G Wi-Fi
* re-enable the Wi-Fi network
* re-enable the DHCP server

I used the web UI via a client connected to the router over Ethernet.

A: It is not possible without taking the network down and up at the client network interface. Theoretically, the DHCP server can send a FORCERENEW message to all connected clients. FORCERENEW is described in RFC 3203. (See also https://serverfault.com/a/569869/107832) Unfortunately, the famous ISC DHCP Server does not support it:

* no current plans for RFC 3203 support, also because of the lack of support for RFC 3118 authentication
* no real interest from users

RFC 3315 describes Reconfigure messages in section 19.4.1. This is, however, for IPv6 only. A message states that DOCSIS 3.0 "requires Reconfigure support (for DHCPv6) as specified in RFC3315".

A: It's not directly possible - DHCP issues an IP address with a lease. That lease tells the client how long it's valid for, after which it must renew. Usually it'll renew the same IP, but this is the point at which the DHCP negotiation occurs. Forcing every machine on a network segment to renew is a really good way of creating a denial-of-service attack. But instead you can turn down the lease time to a few minutes whilst you're transitioning to the new network topology. Practically speaking, this will mean a small outage, but you'd get that anyway with re-acquisition of multiple IPs.
A: I managed to remove all the leases by shutting down the dhcpd service, deleting the /var/lib/dhcpd/dhcpd.leases file, and restarting the daemon. Then you have to get each device to obtain an IP address from the DHCP server. For some devices a simple power cycle will work. A: psexec \\\computername ipconfig /renew
doc_4368
The error I'm getting in VS Code, by the way, is:

Import "requests" could not be resolved from source

And when I launch the program through cmd, the error is:

ModuleNotFoundError: No module named 'requests'

A: First, you should add more information so we know how your computer and IDE are configured. The first thing you should do is check that VS Code is using the Python version where you have pip-installed the modules: click the interpreter indicator at the bottom-left of the status bar, then check that the modules are within that path. Otherwise, check out virtualenv. With this tool you can create virtual environments within your project's folder, which makes it easier to manage packages.
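One quick way to see the mismatch the answer describes is to run this small diagnostic from both the VS Code terminal and the cmd window that fails. If the two runs print different interpreter paths, the module was installed into a different Python than the one running your script (this is only a sketch; it makes no changes to your system):

```python
import importlib.util
import sys

# Which interpreter is actually running this script?
print("interpreter:", sys.executable)

# Can *this* interpreter see the 'requests' package?
spec = importlib.util.find_spec("requests")
print("requests importable:", spec is not None)
if spec is not None:
    print("loaded from:", spec.origin)
```

If "requests importable" prints False, install into that exact interpreter with `<interpreter path shown above> -m pip install requests` rather than a bare `pip`, which may belong to a different installation.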
doc_4369
[1]: https://i.stack.imgur.com/58QQL.png

I want the multiline comment to appear like this:

[2]: https://i.stack.imgur.com/m2cFy.png

I don't know how to describe this problem.
doc_4370
Arrays.parallelSort(Integer[] array, Comparator<T> cmp);

Is there a way to use parallelSort with an array of primitive ints, int[], and a user-defined comparator? Instead, something like:

Arrays.parallelSort(int[] array, IntComparator cmp);

Performance is the issue (think 10 million ints).

A: If you want to use a custom comparator with the standard Java SE Arrays.parallelSort(...) methods on an array of integers, then it has to be an Integer[] rather than an int[]. Here is an example (untested / based on https://stackoverflow.com/a/33045466/139985)

int[] ia = {99, 11, 7, 21, 4, 2};
ia = Arrays.stream(ia).
         boxed().
         parallel().
         sorted((a, b) -> b.compareTo(a)).  // sort descending
         mapToInt(i -> i).
         toArray();

that converts from int[] to Integer and back using streams. There are various other alternatives using 3rd-party libraries:

* How to sort an array of ints using a custom comparator?

Unfortunately, none of the answers to the above use a parallel sort, but there may be alternatives that do.
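For the common special case of sorting descending, there is a dependency-free way to keep the primitive Arrays.parallelSort(int[]) and avoid boxing entirely: bitwise NOT (~x) is an order-reversing bijection on two's-complement ints (unlike negation, it cannot overflow on Integer.MIN_VALUE), so flipping before and after an ascending sort yields a descending one. A sketch (this trick only covers fixed orderings like descending, not arbitrary comparators):

```java
import java.util.Arrays;

public class DescendingParallelSort {

    // Sort an int[] in descending order without boxing:
    // ~x maps the ints onto themselves in exactly reversed order,
    // so an ascending parallel sort of the flipped values is a
    // descending sort of the originals.
    public static void sortDescending(int[] a) {
        for (int i = 0; i < a.length; i++) a[i] = ~a[i];
        Arrays.parallelSort(a);
        for (int i = 0; i < a.length; i++) a[i] = ~a[i];
    }

    public static void main(String[] args) {
        int[] a = {99, 11, 7, 21, 4, 2, Integer.MIN_VALUE};
        sortDescending(a);
        System.out.println(Arrays.toString(a));
        // prints [99, 21, 11, 7, 4, 2, -2147483648]
    }
}
```

The same two-pass idea works for other fixed key transforms (e.g. sorting by absolute value needs a different encoding), but anything that truly needs a general IntComparator still requires boxing or a third-party primitive sort.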
doc_4371
from datasets import load_dataset, load_metric
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
import wandb

wandb.init()

tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
dataset = load_dataset('glue', 'mrpc')
metric = load_metric('glue', 'mrpc')

def encode(examples):
    outputs = tokenizer(
        examples['sentence1'], examples['sentence2'], truncation=True)
    return outputs

encoded_dataset = dataset.map(encode, batched=True)

def model_init():
    return AutoModelForSequenceClassification.from_pretrained(
        'distilbert-base-uncased', return_dict=True)

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = predictions.argmax(axis=-1)
    return metric.compute(predictions=predictions, references=labels)

# Evaluate during training and a bit more often
# than the default to be able to prune bad trials early.
# Disabling tqdm is a matter of preference.
training_args = TrainingArguments(
    "test", eval_steps=500, disable_tqdm=True,
    evaluation_strategy='steps',)

trainer = Trainer(
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=encoded_dataset["train"],
    eval_dataset=encoded_dataset["validation"],
    model_init=model_init,
    compute_metrics=compute_metrics,
)

def my_hp_space(trial):
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-4, 1e-2, log=True),
        "weight_decay": trial.suggest_float("weight_decay", 0.1, 0.3),
        "num_train_epochs": trial.suggest_int("num_train_epochs", 5, 10),
        "seed": trial.suggest_int("seed", 20, 40),
        "per_device_train_batch_size": trial.suggest_categorical(
            "per_device_train_batch_size", [32, 64]),
    }

trainer.hyperparameter_search(
    direction="maximize",
    backend="optuna",
    n_trials=10,
    hp_space=my_hp_space
)

Trial 0 finishes successfully, but the next one, Trial 1, crashes with the following error:

File "/home/user123/anaconda3/envs/iza/lib/python3.8/site-packages/transformers/integrations.py", line 138, in _objective
    trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
File "/home/user123/anaconda3/envs/iza/lib/python3.8/site-packages/transformers/trainer.py", line 1376, in train
    self.log(metrics)
File "/home/user123/anaconda3/envs/iza/lib/python3.8/site-packages/transformers/trainer.py", line 1688, in log
    self.control = self.callback_handler.on_log(self.args, self.state, self.control, logs)
File "/home/user123/anaconda3/envs/iza/lib/python3.8/site-packages/transformers/trainer_callback.py", line 371, in on_log
    return self.call_event("on_log", args, state, control, logs=logs)
File "/home/user123/anaconda3/envs/iza/lib/python3.8/site-packages/transformers/trainer_callback.py", line 378, in call_event
    result = getattr(callback, event)(
File "/home/user123/anaconda3/envs/iza/lib/python3.8/site-packages/transformers/integrations.py", line 754, in on_log
    self._wandb.log({**logs, "train/global_step": state.global_step})
File "/home/user123/anaconda3/envs/iza/lib/python3.8/site-packages/wandb/sdk/lib/preinit.py", line 38, in preinit_wrapper
    raise wandb.Error("You must call wandb.init() before {}()".format(name))
wandb.errors.Error: You must call wandb.init() before wandb.log()

Any help is highly appreciated.

A: Check that you are running the code on the latest versions of wandb and transformers. It works fine for me with wandb 0.11.0 and transformers 4.9.0.
doc_4372
Here is what I have so far. The regex for a file path may not be perfect, but it seems to work OK.

regex: ^(?<path>[^\\/*?<>|]+)\\\\(?<filename>.+)\\.(?<ext>.mp4$)
file name match result name: $2

What this does is search a listing of files where the extension is mp4 and, using the configured match result, return that as a "file name". Target string example:

\\\\folder\music\hello.mp4  ->  filename = "hello"

What I would like to do is take the results from a regex match and replace the name of the file/extension/path with a configured setting. So if someone wanted to replace the file name with "goodbye" for all the matched results, how would I accomplish this? This is what I have now:

std::string sz_regex_pattern("^(?<path>[^\/*?<>|]+)\\(?<filename>.+)\.(?<ext>.mp4$)");
boost::cmatch rm;
boost::regex pattern(sz_regex_pattern, regex::icase|regex_constants::perl);
std::string complete_file_name_path = "\\folder\music\hello.mp4";
bool result = boost::regex_match(complete_file_name_path, rm, pattern);
std::string old_filename = rm.format("$2"); // returns the name of the file only

The following appears to work, but only when no folder shares the file's name (so \\folder\music\hello\hello.mp4 would have issues with the regex_replace below):

std::string new_filename = "goodbye";
std::string sz_new_file_name_path = boost::regex_replace(complete_file_name_path, old_filename, new_filename);

so that I can later call:

boost::filesystem::rename(complete_file_name_path, sz_new_file_name_path);

Any help would be appreciated.

A: Find and replace is completely unnecessary because you already have all of the components you need to build the new path.

REPLACE

std::string sz_new_file_name_path = boost::regex_replace(complete_file_name_path, old_filename, new_filename);

WITH

// path + newFileName + ext
std::string sz_new_file_name_path = rm.format("$1") + "\\" + new_filename + "." + rm.format("$3");

A: You could probably split out the components to see what you have with:

^(.*?)\\?([^\\]+)\.([a-zA-Z0-9]+)$

edit: or, even less specific (non-validating):

^(.*?)\\?([^\\]+)\.([^.]+)$

$1 = path
$2 = filename
$3 = extension

The separator between path, filename and extension is not captured. With this information you could construct your own new string. If you want to specifically search for, say, mp4's, something like this would work:

^(.*?)\\?([^\\]+)\.mp4$
doc_4373
Problem I want to transport 100 units to 10 points. I only have one vehicle with a capacity of 50, so I can't transport everything in one trip. The time and the distance that the vehicle travels are not limited. I have already looked up what can be done with such a problem. * *use more vehicles *use a bigger vehicle *define a second shift for the vehicle so that it drives twice. I would like to implement it as in solution 3). The number and the vehicles should not be changed. The vehicle should drive more often. But since I don't know how long the first trip will take. I cannot say when the second shift should start. Do you have an idea how to solve this best. A: Using option#3 can easily solve the problem: In the problem file define 2 time shifts for the vehicle, add 10 delivery jobs with 24hs pf time window. The solution will have all 10 jobs served by 1 vehicle. For the example: { "fleet": { "types": [ { "id": "vehicle_1", "profile": "car_1", "costs": { "fixed": 12.0, "distance": 0.003, "time": 0.008 }, "shifts": [ { "start": { "time": "2021-07-17T00:01:00Z", "location": { "lat": 52.530971, "lng": 13.384915 } }, "end": { "time": "2021-07-17T12:00:00Z", "location": { "lat": 52.530971, "lng": 13.384915 } } }, { "start": { "time": "2021-07-17T12:01:00Z", "location": { "lat": 52.530971, "lng": 13.384915 } }, "end": { "time": "2021-07-17T23:59:00Z", "location": { "lat": 52.530971, "lng": 13.384915 } } } ], "capacity": [ 50 ], "limits": { "maxDistance": 3000000, "shiftTime": 86400 }, "amount": 1 } ], "profiles": [ { "type": "car", "name": "car_1" } ] }, "plan": { "jobs": [ { "id": "job_1", "tasks": { "deliveries": [ { "places": [ { "times": [ [ "2021-07-17T00:01:00Z", "2021-07-17T23:59:00Z" ] ], "location": { "lat": 52.400971, "lng": 13.284915 }, "duration": 900 } ], "demand": [ 10 ] } ] } }, { "id": "job_2", "tasks": { "deliveries": [ { "places": [ { "times": [ [ "2021-07-17T00:01:00Z", "2021-07-17T23:59:00Z" ] ], "location": { "lat": 52.2346971, "lng": 13.274915 }, 
"duration": 900 } ], "demand": [ 10 ] } ] } }, { "id": "job_3", "tasks": { "deliveries": [ { "places": [ { "times": [ [ "2021-07-17T00:01:00Z", "2021-07-17T23:59:00Z" ] ], "location": { "lat": 52.660971, "lng": 13.314915 }, "duration": 900 } ], "demand": [ 10 ] } ] } }, { "id": "job_4", "tasks": { "deliveries": [ { "places": [ { "times": [ [ "2021-07-17T00:01:00Z", "2021-07-17T23:59:00Z" ] ], "location": { "lat": 52.244971, "lng": 13.344915 }, "duration": 900 } ], "demand": [ 10 ] } ] } }, { "id": "job_5", "tasks": { "deliveries": [ { "places": [ { "times": [ [ "2021-07-17T00:01:00Z", "2021-07-17T23:59:00Z" ] ], "location": { "lat": 52.150971, "lng": 13.214915 }, "duration": 900 } ], "demand": [ 10 ] } ] } }, { "id": "job_6", "tasks": { "deliveries": [ { "places": [ { "times": [ [ "2021-07-17T00:01:00Z", "2021-07-17T23:59:00Z" ] ], "location": { "lat": 52.430971, "lng": 13.3454915 }, "duration": 900 } ], "demand": [ 10 ] } ] } }, { "id": "job_7", "tasks": { "deliveries": [ { "places": [ { "times": [ [ "2021-07-17T00:01:00Z", "2021-07-17T23:59:00Z" ] ], "location": { "lat": 52.2344971, "lng": 13.224915 }, "duration": 900 } ], "demand": [ 10 ] } ] } } , { "id": "job_8", "tasks": { "deliveries": [ { "places": [ { "times": [ [ "2021-07-17T00:01:00Z", "2021-07-17T23:59:00Z" ] ], "location": { "lat": 52.330971, "lng": 13.354915 }, "duration": 900 } ], "demand": [ 10 ] } ] } }, { "id": "job_9", "tasks": { "deliveries": [ { "places": [ { "times": [ [ "2021-07-17T00:01:00Z", "2021-07-17T23:59:00Z" ] ], "location": { "lat": 52.250971, "lng": 13.316915 }, "duration": 900 } ], "demand": [ 10 ] } ] } }, { "id": "job_10", "tasks": { "deliveries": [ { "places": [ { "times": [ [ "2021-07-17T00:01:00Z", "2021-07-17T23:59:00Z" ] ], "location": { "lat": 52.420971, "lng": 13.344915 }, "duration": 900 } ], "demand": [ 10 ] } ] } } ] } }
doc_4374
I am trying do run a query on the database to show which items have not sold in a period of time. This is what I have but It's not working ( it brings up all the records) SELECT OrderItem.Name, OrderItem.SKU, [Order].OrderDate FROM [Order] INNER JOIN OrderItem ON [Order].OrderID = OrderItem.OrderID WHERE (OrderItem.SKU NOT IN (SELECT DISTINCT OrderItem.SKU WHERE ([Order].OrderDate BETWEEN '2014-09-08' AND '2014-01-01'))) A: You can actually do this with a having clause: SELECT oi.Name, oi.SKU, max(o.OrderDate) as lastOrderDate FROM [Order] o INNER JOIN OrderItem oi ON o.OrderID = oi.OrderID GROUP BY oi.Name, oi.SKU HAVING sum(case when o.OrderDate between '2014-01-01' and '2014-09-08' then 1 else 0 end) = 0; If you are just looking for orders before this year, it is easier to write the having clause as: HAVING max(o.OrderDate) < '2014-01-01' A: Flip your dates. It should be [Begin Date] Between [End Date] WHERE (OrderItem.SKU NOT IN (SELECT DISTINCT OrderItem.SKU WHERE ([Order].OrderDate BETWEEN '2014-01-01' AND '2014-09-08'))) A: How about the following: SELECT oi.Name, oi.SKU, o.OrderDate FROM [Order] o INNER JOIN OrderItem oi ON o.OrderID = oi.OrderID WHERE oi.SKU NOT IN ( SELECT os.SKU FROM [Order] os INNER JOIN OrderItem ois ON os.OrderID = ois.OrderID WHERE os.OrderDate BETWEEN '2014-01-01' AND '2014-09-08' ) You have to join to the OrderItem table in the sub query in order to get the SKU. A: Looks like your query was build 'upside down'. SELECT DISTINCT SKU FROM OrderItem oi WHERE NOT EXISTS (SELECT 1 FROM Order o JOIN OrderItem oi2 ON o.OrderID = oi2.OrderID WHERE oi2.SKU = oi.SKU AND o.OrderDate BETWEEN '2014-01-01' AND '2014-09-08' ); A: Ideally, you should have another table containing your distinct items, so you could write the following query to see which items were not sold during a certain period (and may have never been sold at all). 
select i.SKU from items where not exists ( select 1 from OrderItem oi join Order o on o.OrderID = oi.OrderID where oi.SKU = i.SKU and o.OrderDate BETWEEN '2014-01-01' and '2014-09-08' ) if you don't have such a table, you can select all products that have been ordered at some point, but not during another period select i.SKU from ( select distinct oi.SKU from OrderItem oi ) i where not exists ( select 1 from OrderItem oi join Order o on o.OrderID = oi.OrderID where oi.SKU = i.SKU and o.OrderDate BETWEEN '2014-01-01' and '2014-09-08' )
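The conditional-count-in-HAVING approach from the first answer can be sanity-checked with an in-memory SQLite database. This is only a sketch: the rows are made up and the real schema presumably has more columns, but the grouping logic is the same.

```python
import sqlite3

# Tiny in-memory demo: items whose orders all fall *outside* the window
# are exactly the items that did not sell during that period.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE "Order"(OrderID INTEGER PRIMARY KEY, OrderDate TEXT);
CREATE TABLE OrderItem(OrderID INTEGER, SKU TEXT, Name TEXT);
INSERT INTO "Order" VALUES (1,'2013-12-30'),(2,'2014-03-15'),(3,'2013-11-01');
INSERT INTO OrderItem VALUES (1,'A','Widget'),(2,'B','Gadget'),(3,'A','Widget');
""")

rows = con.execute("""
SELECT oi.Name, oi.SKU, MAX(o.OrderDate)
FROM "Order" o
JOIN OrderItem oi ON o.OrderID = oi.OrderID
GROUP BY oi.Name, oi.SKU
HAVING SUM(CASE WHEN o.OrderDate BETWEEN '2014-01-01' AND '2014-09-08'
                THEN 1 ELSE 0 END) = 0
""").fetchall()

print(rows)  # [('Widget', 'A', '2013-12-30')] -- Gadget sold inside the window
```

Note that, like the queries in the answers, this only finds items that were ordered at least once at some point; items that were never ordered at all need the separate items table described in the last answer.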
doc_4375
The .vs cache directory is marked to be ignored, but when I create a Task (right-click in Solution Explorer >> Configure Tasks), it creates a file named tasks.vs.json inside the .vs folder! I noticed I can move the file out of that directory and still have the tasks available, but 'Configure Tasks' will still create a new file inside .vs! I know I can "un-ignore" specific files in .gitignore - that's an option - but I seek a cleaner way. So my questions are:

* Can I tell VS that my tasks.vs.json belongs outside the .vs folder?
* Did MS intend the Tasks to be user-specific? (unlikely)
doc_4376
Can someone explain how function findRow(s) is "called"? I don't see what makes function findRow(s) run. I see how they've defined the var rA=findRow(resp.getResponseText()); It doesn't seem to make sense to me that this makes the function run. I'm obviously very green at this... does the act of just defining the variable make that next function "work"? Or is it the Logger.log part that starts the work? function findAString(){ var ss=SpreadsheetApp.getActive(); var sh1=ss.getActiveSheet(); var sh2=ss.getSheetByName('Completed'); var resp=SpreadsheetApp.getUi().prompt("Enter a String"); var s=resp.getResponseText(); if(s){ var rA=findRow(s); if (rA.length>0){ sh2.getRange(sh2.getLastRow()+1,1,rA.length,rA[0].length).setValues(rA); var rows='<br /><h3>Order details have been moved to the Completed tab</h3><br />'; rows+=Utilities.formatString(resp.getResponseText()); for(var i=0;i<rA.length;i++){ rows+=Utilities.formatString('<br /> %s',rA[i].join('<br />')); } rows+='<br /><br /><input type="button" value="Exit" onClick="google.script.host.close();" />'; if(rows){ var ui=HtmlService.createHtmlOutput(rows); SpreadsheetApp.getUi().showModelessDialog(ui, 'Search Results') } } }else{ throw('Error: Invalid Response'); } } function findRow(s) { ;// the actual search function var ss = SpreadsheetApp.getActiveSpreadsheet(); var sh = ss.getActiveSheet() var rA=[]; var vA=sh.getDataRange().getValues(); for(var i=0;i<vA.length;i++) { for(var j=0;j<vA[i].length;j++){ if(vA[i][j].toString().indexOf(s)>-1){ rA.push(vA[i]); break; } } } return rA; var sourcesheet = ss.getSheetByName('Form Responses 2'); var targetsheet = ss.getSheetByName('Completed'); var targetrange = targetsheet.getRange(targetsheet.getLastRow(), 1, sourcesheet.getLastRow(), sourcesheet.getLastColumn()); var rangeValues = sourcesheet.getRange(2, 1, sourcesheet.getLastRow(), sourcesheet.getLastColumn()).getValues(); targetrange.setValues(rangeValues); } A: As you already mentioned findRow function is called by 
var rA=findRow(resp.getResponseText());

Considering that a few code lines above there is var s=resp.getResponseText();, you could rewrite it to save some characters like this:

var rA=findRow(s);

Above that line is var resp=SpreadsheetApp.getUi().prompt("Enter a String");. This line assigns a PromptResponse object to resp. The PromptResponse object has the method getResponseText(), which returns a String object. The Logger.log(rA) just logs the String value to the log.

References

* Functions
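To make the calling mechanics concrete outside of Apps Script, here is a stripped-down plain-JavaScript stand-in (not the actual sheet code): defining a function does nothing by itself; the function body only runs at the moment its name is invoked with parentheses, and var rA = findRow(s) is exactly such an invocation, whose return value is then stored in rA.

```javascript
// Step 1: this definition alone executes nothing.
function findRow(s) {
  return ["row containing " + s];
}

// Step 2: the call on the right-hand side runs findRow *now*,
// and the assignment stores whatever it returned.
var rA = findRow("needle");

console.log(rA.length);  // 1
console.log(rA[0]);      // row containing needle
```

So it is neither the variable declaration nor Logger.log that "starts" findRow: the parentheses after the function name do.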
doc_4377
https://www.iitp.ac.in/~halder/Papers/Journal/ACSS_2016_2.pdf and on page number 7 Database table (a) loginfo and sailinfo are describe. Some queries are written 1. start; 2. Stmt = DriverManager.getConnection(” jdbc:mysql://172.16.1.16:1115/ demo”, ”scott”, ”tiger”).createStatement(); 3. Resultset rs1 = Stmt.executeQuery(“SELECT eid FROM loginfo WHERE login = ' "+ slogin +" ' AND pass = ’ ”+ spass +” ’ ” ); 4. Resultset rs2 = Stmt.executeQuery(“SELECT ename FROM salinfo WHERE B sal> GP + ' "+vsal+" ' ”); 5. Resultset rs3 = Stmt.executeQuery(“SELECT ename FROM salinfo WHERE GP > ' "+ com +" ' * ’ ”+ rate +” ’ ”); 6. stop; (b) I want to build and convert the query from table (a) to queries as shown in table (b). The way I wrote these queries for one example are as follows <?php $servername = "localhost"; $username = "root"; $password = ""; $dbname = "thesis"; // Create connection $conn = new mysqli($servername, $username, $password, $dbname); echo "\n"; // Check connection if ($conn->connect_error) { die("Connection failed: " . $conn->connect_error); } echo "<br>"; echo "<br>"; // sql input table queries echo "Output"; //$sql="SELECT loginfo.eid FROM loginfo WHERE loginfo.login = (SELECT input.u_input_1 FROM input WHERE input.QID = 6 AND input.IID_1 = 1) AND loginfo.pass = ( //SELECT input.u_input_2 FROM input WHERE input.QID = 6 AND input.IID_2 = 2);"; $sql = "SELECT eid FROM loginfo WHERE login= (SELECT u_input_1 FROM input WHERE QID = 6 AND IID_1 = 1) AND pass = (SELECT u_input_2 FROM input WHERE QID = 6 AND IID_2 = 2) ;"; echo "<br>"; echo "<br>"; $result = $conn->query($sql); if ($result->num_rows > 0) { // output data of each row while($row = $result->fetch_assoc()) { echo "\n \n eid: " . 
$row["eid"]."<br>"; } } else { echo "0 results"; } echo "<br>"; echo "<br>"; // sql input table queries echo "Output"; $sql1 = "SELECT ename FROM salinfo WHERE GP > (SELECT u_input_1 FROM INPUT WHERE QID = 10 AND IID_1 = 1);"; echo "<br>"; echo "<br>"; $result = $conn->query($sql1); if ($result->num_rows > 0) { // output data of each row while($row = $result->fetch_assoc()) { echo "\n \n ename: " . $row["ename"]."<br>"; } } else { echo "0 results"; } echo "<br>"; echo "<br>"; // sql input table queries echo "Output"; $sql2 = "SELECT ename FROM salinfo WHERE B_sal > GP + (SELECT u_input_1 FROM INPUT WHERE QID = 10 AND IID_1 = 1);"; echo "<br>"; echo "<br>"; $result = $conn->query($sql2); if ($result->num_rows > 0) { // output data of each row while($row = $result->fetch_assoc()) { echo "\n \n ename: " . $row["ename"]."<br>"; } } else { echo "0 results"; } echo "<br>"; echo "<br>"; $conn->close(); ?> This technique which I have shown is called Query Re writing approach. I want to know if I want to implement it for generic queries this was just for one example then what could be a better way to do the same. Please help me with this approach.
doc_4378
* Server to migrate to: Server version: 5.5.34-0ubuntu0.12.04.1
* Server to migrate from: Server version: 5.5.24-0ubuntu0.12.04.1

This is the error I can see through the Apache logs:

SQLSTATE[42S22]: Column not found: 1054 Unknown column 'v.id' in 'field list'

I just export from one side and import on the other side - what am I missing? What can cause the error?

PS: The PHP application on both sides is the same. Any advice?
doc_4379
However, if I try it on someone else's computer it does not work - to the point that almost the whole project is red. Yes, we are using the same JDK version; no, I did not add any external libs, at least not before things started going awry. A friend mentioned "build path problems" as a guess but did not know any further. I have absolutely no knowledge about fixing broken paths, especially not without an error message, as it DOES work on my computer. Does anyone have any idea how to fix this?

A: Have you also downloaded the same Android API version from the SDK Manager? There you can choose a version. You both have to download the same version that is used in the project (e.g. API 19).

A: There is a good chance it has to do with the build path problems your friend mentioned. I have faced this same issue several times, and the most consistent way to fix it is to right-click on your application and open Build Path > Configure Build Path, then go to the Libraries tab. Make sure everything in there is correct, including the android.jar for your version and android-support-v4.jar or whatever support version you are using. Remove anything you don't need for your project. Also go to the Android tab on the left and make sure you have the correct version selected and that your libraries, if you have any, have the correct path.
doc_4380
SQL Server Assertion: File: <"d:\b\s1\sources\sql\ntdbms\hekaton\engine\core\tx.cpp">, line=7434 Failed Assertion = '!(tx->ErrorObject != nullptr) || (err == nullptr || tx->TempTableTx)'. This error may be timing-related. If the error persists after rerunning the statement, use DBCC CHECKDB to check the database for structural integrity, or restart the server to ensure in-memory data structures are not corrupted.

Note: There is a predicate on the table for row-level security which uses memory-optimized tables.

Has anybody else encountered this or found a solution? Thanks!
doc_4381
data(mammals, package="MASS") model <- glm(log(brain) ~ log(body), data = mammals) result <- cv.glm(mammals, model, K=10) When I execute result$seed I get a vector of length 626. What seed have I exactly used here? Thanks a lot.
doc_4382
var result = new AutoFaker<MyModel>().RuleFor(x => x.AnotherModel, null).Generate();

public class MyModel
{
    public string Test { get; set; }
    public AnotherModel AnotherModel { get; set; }
}

public class AnotherModel
{
    public string Test1 { get; set; }
}

I get this message:

Severity Code Description Project File Line Suppression State
Error CS0121 The call is ambiguous between the following methods or properties: 'Faker<T>.RuleFor<TProperty>(Expression<Func<T, TProperty>>, Func<Faker, T, TProperty>)' and 'Faker<T>.RuleFor<TProperty>(Expression<Func<T, TProperty>>, TProperty)'

Why can't I assign null to that model?

A: The following should work:

void Main()
{
    var result = new AutoFaker<MyModel>()
        .RuleFor(x => x.AnotherModel, _ => null);

    result.Generate().Dump();
}

public class MyModel
{
    public string Test { get; set; }
    public AnotherModel AnotherModel { get; set; }
}

public class AnotherModel
{
    public string Test1 { get; set; }
}

The reason there's an ambiguous call is that you need to be a bit more specific about which RuleFor overload you want to use, e.g. .RuleFor(expr, value) or .RuleFor(expr, Func<T>), etc.

Thanks, hope that helps,
Brian Chavez
doc_4383
<!doctype html> <html xmlns:ng='http://angularjs.org'> <script src='lib/angular-0.9.18.min.js' ng:autobind></script> <script src='angular-controller.js'></script> <head> <title>My Page</title> <link rel='stylesheet' type='text/css' href='my.css'/> </head> <body> Template stuff used by angular. </body> </html> I would like something like: <!doctype html> <html xmlns:ng='http://angularjs.org'> <script src='../web/lib/angular-0.9.18.min.js' ng:autobind></script> <script src='angular-controller-stub.js'></script> <head> <title>My Page</title> <link rel='stylesheet' type='text/css' href='../web/my.css'/> </head> <body> Template stuff used by angular. </body> </html> Note that the only differences are the .js and .css paths. If I were dealing with a language like Java, I would extract out a method and pass in the filepaths as arguments (or extract out a class and set the fields to the filepaths). How do I achieve a similar effect in HTML? The second page is for testing the 'look' of the page. As such, allowing the page to be loaded via file:// allows for really quick turnaround. angular-controller.js does xhr stuff and fills in variables. angular-controller-stub.js just stubs those variables.
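HTML itself has no way to parameterize attribute values, so the usual workarounds are a templating step at build time or a few lines of script that derive the prefix at load time. A small sketch of the scripted approach (the assetTag helper and the '../web/' prefix are invented for illustration; in a browser you would write the returned tags into the page, e.g. with document.write while the document is parsing):

```javascript
// Build the <script>/<link> markup from a single configurable prefix,
// so the rest of the page stays identical in both copies.
function assetTag(base, file) {
  if (file.endsWith(".css")) {
    return '<link rel="stylesheet" type="text/css" href="' + base + file + '"/>';
  }
  return '<script src="' + base + file + '"></' + 'script>';
}

// The production page uses "", the look-test page uses "../web/".
console.log(assetTag("", "my.css"));
console.log(assetTag("../web/", "lib/angular-0.9.18.min.js"));
```

This mirrors the "extract a method and pass in the file paths" idea from Java: the prefix becomes the single argument that differs between the two pages.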
doc_4384
app.post('/login/google', express.urlencoded(), async(request, response, next) => { try { console.log(`${request.method} ${request.url} was called.`); let token: string = request.body.credential; let decoded = jwt.verify(token, Globals.GoogleSecret, { algorithms: ['RS256'], ignoreExpiration: false }); response.sendStatus(200); } catch (error) { next(error); } }); The token parsed from the body looks fine (I was able to decode it on jwt.io). The error it's catching is: code: 'ERR_OSSL_PEM_NO_START_LINE' function: 'get_name' library: 'PEM routines' reason: 'no start line' message: 'error:0909006C:PEM routines:get_name:no start line' Could anyone please shed some light as to what is causing this error and what I can do to fix it? Here is some further context: * *Globals.GoogleSecret is a string that is set to the Client secret string displayed under my OAuth 2.0 Client ID Credential in the API Console. *I have an Angular web application hosted at http://localhost:4200/. *The application has Google's OAuth post the credential to the Express server using data-login_uri="http://localhost:1337/login/google". *The application is running with debugging on Windows using VSCode. A: I got it working. This issue got me thinking. I was using the wrong key. When you go to the Credentials in the API Console, there is a small icon on the right-hand side of your Credential for the OAuth 2.0 Client ID that has a tooltip on it saying, Download OAuth client. Upon doing so, inspect the JSON file and you will see the following line: "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", If you browse to that URL, you will hit a JSON payload with certificates. Only one will work to decode and verify the JWT, so you must try them all until one succeeds. 
I also had to modify the certificate's text format for the verify function as follows: let certificate: string = `-----BEGIN CERTIFICATE----- MIIDJjCCAg6gAwIBAgIId3xPl4cPKh0wDQYJKoZIhvcNAQEFBQAwNjE0MDIGA1UE AwwrZmVkZXJhdGVkLXNpZ25vbi5zeXN0ZW0uZ3NlcnZpY2VhY2NvdW50LmNvbTAe Fw0yMjA2MjQxNTIxNTlaFw0yMjA3MTEwMzM2NTlaMDYxNDAyBgNVBAMMK2ZlZGVy YXRlZC1zaWdub24uc3lzdGVtLmdzZXJ2aWNlYWNjb3VudC5jb20wggEiMA0GCSqG SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDLFEiG1xHTWjedvqFec+PVTYVs2Do2gvYo aQkYHlAtYxXuox3G8f+g6w/+yrvUSc1fOeZRzsW8r1F+hDHLUkYUzqArOQj7CpfA VWGkLNapbWlxzOgnw5Ne2bWW1Y7rcoXKHY2knooU5Uiceo2g/z9BbAITX+p8RCvJ 6yG/mEE8aC7d7oO4P1LMnSFMYeuKdsRHx3GasZwGup7K+ox0PECKxho/E0Q4BOFI igaTkm5D10dZC1hkp+jL293SRUWfIemyzemATDiufR5+v8aa8XlX8kasyQ5omynw 3+qm6da0f8Dteg+uMjDYDY1T9k56+3Tt/MpmPCzV3QceaccDs9azAgMBAAGjODA2 MAwGA1UdEwEB/wQCMAAwDgYDVR0PAQH/BAQDAgeAMBYGA1UdJQEB/wQMMAoGCCsG AQUFBwMCMA0GCSqGSIb3DQEBBQUAA4IBAQBUYM/QEuMEjqq/NHtd6w5tOL8FOkat +2d3txRwIhDWaMOyDeM53Tufp1yhRpp3K46NnTkZRE6h4mGN7VPJWSED6s1FQGxA 2C6WkjnDshNxVzOh8+eZt3l8/gfzaR6lfMNH6NYoInl22GoS/46XRE3qY7RO9uVk j8Uou1L6YdOPFA9buTjLHbJViGpz2vTt67C6ZMRC/exWINs8914buqXH2T99xJJM T1FVInIpj+AROcjCnONerT/M0hrhTqGZy0WHsEXy7fZX/8EsJ79LXHkcR/tooO1s ygZ79Xxy/2JDCH3QouXQJOs8iV697+3macsmzm9g/xBKXyllkEq3Q1xh -----END CERTIFICATE-----`; The key here is to replace all of the \n characters with a real newline. Notice that I am using the specific backtick template literal `. You will also need to get rid of whitespace in the literal, especially if you copy and paste the certificate you get from that endpoint into a code editor. Finally, I was able to pass this in to decode the JWT: let decoded = jwt.verify(token, certificate, { algorithms: ['RS256'], ignoreExpiration: false }); I am not sure if this is the right approach is because I am not a huge fan of pinned certificates, but maybe its more or less correct. Feel free to shed some light on this matter. I am not sure what existing implementations that verify JWT's usually do, i.e. 
if they first attempt to fetch the certificates directly from the auth_provider_x509_cert_url or not. Maybe pinning is better in case that URL goes down or is unreachable. However, certificates can expire, and if so the downside is that your OAuth client goes down until you update the pinned certificate, so it may be better to fetch them as needed. If there is a better way, please let me know in the comments or post an answer.
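For what it's worth, the newline repair described above is the step that is easiest to get wrong. A small illustrative sketch of the transformation (Python, with a shortened stand-in certificate; the helper name is my own):

```python
def normalize_pem(raw):
    """Turn a certificate that arrived as a JSON-style string (with
    literal '\\n' escapes and stray indentation) into a well-formed PEM
    block. OpenSSL's "no start line" error usually means the BEGIN line
    is missing, indented, or glued to the base64 payload."""
    text = raw.replace("\\n", "\n")  # literal \n escapes -> real newlines
    # strip per-line indentation/whitespace picked up from copy-paste
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    return "\n".join(lines)

# Shortened, illustrative payload only -- not a real certificate.
escaped = ("-----BEGIN CERTIFICATE-----\\n"
           "MIIDJjCCAg6gAwIBAgIId3xPl4cPKh0w\\n"
           "-----END CERTIFICATE-----\\n")
pem = normalize_pem(escaped)
```

If the BEGIN line is still indented or contains literal \n escapes, OpenSSL reports exactly the 'no start line' error quoted above.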
doc_4385
<div class="row success">
  <div class="col-sm-4 no-pad-r img-toggle">
    <?php the_post_thumbnail(); ?>
  </div>
  <div class="col-sm-6 col-sm-offset-1 no-pad-l">
    <h4 class="heading-toggle"><?php the_title(); ?></h4>
    <div class="line"></div>
    <p class="intro">"<?php the_field('first_line'); ?>."</p>
    <div class="toggle-content" style="display: none">
      <div class="main-copy"><?php the_content(); ?></div>
      <h5><?php the_field('author_name'); ?></h5>
      <p class="position"><?php the_field('author_position'); ?></p>
    </div>
    <button class="btn toggle btn-danger"></button>
  </div>
</div>

So I want to add some classes and slideToggle on click, but I can't select the proper elements. This is my attempt:

$( ".heading-toggle" ).click(function() {
  $(this).next( ".toggle-content" ).slideToggle();
  $(this).closest( ".success" ).toggleClass( "highlight" );
  $(this).closest( ".toggle" ).toggleClass( "open" );
});

The only thing that works here is adding the highlight class to the .success div; the other two calls don't work. What am I doing wrong here?

A: If you want to select an element on the same level, use the .siblings method: http://api.jquery.com/siblings/

So your code will look like this:

$( ".heading-toggle" ).click(function() {
  $(this).siblings( ".toggle-content" ).slideToggle();
  $(this).closest( ".success" ).toggleClass( "highlight" );
  $(this).siblings('.btn').toggleClass( "open" );
});
doc_4386
class GetGovtData():
    def get_data_1(arg1=0, arg2=1):
        df = conduct_some_operations
        return df

    def get_data_2(arg1=4, arg2=5):
        df = conduct_some_operations_two
        return df

I'm mostly using a class here for organization purposes. For instance, there might be a dozen different methods from one class that I need to use. I find it more aesthetically pleasing / easier to type out this:

from data.get_govt_data import GetGovtData
df1 = GetGovtData.get_data_1()
df2 = GetGovtData.get_data_2()

Rather than:

from data import get_govt_data
df1 = get_govt_data.get_data_1()
df2 = get_govt_data.get_data_2()

Which just has a boatload of underscores. So I'm just curious if it would be considered bad code to use a class like this, without bothering with 'self'? Or should I just eliminate the classes and use a bunch of functions in my files instead?

A: Within a Python class you have two ways of defining a function: one with self as the first parameter, and one without. So, what is the difference between the two?

Function with self

The first one is a method, which is able to access content within the created object. This allows you to access the internal state of an individual object, e.g., a counter of some sort. These are the methods you usually use when doing object oriented programming. A short intro can be found here [External Link]. These methods require you to create new instances of the given class.

Function without self

Functions without self work without initialising an instance of the class. This is why you can call them directly on the imported class.

Alternative solution

This is based on the comment of Tom K. Instead of just leaving out self, you can also use the decorator @staticmethod to indicate the role of the method within your class. Some more info can be found here [External link].

Final thought

To answer your initial question: you do not need to use self. In your case you do not need self, because you do not share the internal state of an object.
Nevertheless, if you are using classes you should think about an object oriented design.

A: I suppose you have a file called data/get_govt_data.py that contains your first code block. You can just rename that file to data/GetGovtData.py, remove the class line and not bother with classes at all, if you like. Then you can do

from data import GetGovtData
df1 = GetGovtData.get_data_1()

Depending on your setup you may need to create an empty file data/__init__.py for Python to see data as a module.

EDIT: Regarding the file naming, Python does not impose any overly tight restrictions here. Note however that many projects conventionally use camelCase or CapitalCase to distinguish function, class and module names. Using CapitalCase for a module may confuse others for a second into assuming it's a class. You may choose not to follow this convention if you do not want to use classes in your project.

A: To answer the question in the title first: the exact string 'self' is a convention (that I can see no valid reason to ignore, BTW), but the first argument in a class method is always going to be a reference to the class instance.

Whether you should use a class or flat functions depends on whether the functions have shared state. From your scenario it sounds like they may have a common base URL, authentication data, database names, etc. Maybe you even need to establish a connection first? All those would be best held in the class and then used in the functions.
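A quick sketch of the @staticmethod variant mentioned in the answers (the list-building bodies are stand-ins for conduct_some_operations, which the question does not show):

```python
class GetGovtData:
    """Namespace-style class: no per-instance state, so no 'self'."""

    @staticmethod
    def get_data_1(arg1=0, arg2=1):
        # stand-in for conduct_some_operations
        return [arg1 + i * arg2 for i in range(3)]

    @staticmethod
    def get_data_2(arg1=4, arg2=5):
        # stand-in for conduct_some_operations_two
        return [arg1 * arg2]

# Callable directly on the class, exactly as in the question:
df1 = GetGovtData.get_data_1()
# ...and, unlike the bare no-self version, also safe on an instance
# (without @staticmethod, the instance would silently be passed as arg1):
df2 = GetGovtData().get_data_2()
```

The decorator costs nothing at call sites and documents that the methods deliberately ignore instance state.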
doc_4387
For example, I am setting the build version based on the date, like 03.16, and the build number starts from 1. Once the date changes (for example to 17th March), the build version will change to 03.17 and the build number will start from 1 again.

I have a script to set the build version and build number. I can generate the build version from a shell command, but I have to store the build number: every build triggered on the same date must increase the build number by one. For example:

first build on 16th March: version would be 03.16(1)
second build on 16th March: version would be 03.16(2)
.
.
first build on 17th March: version would be 03.17(1)
second build on 17th March: version would be 03.17(2)
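One way to store the build number is to keep a tiny state file holding the last date stamp and counter, resetting the counter whenever the stamp changes. A sketch of the idea (the file name and JSON format are my own choices, not part of the question):

```python
import datetime
import json
import os

COUNTER_FILE = "build_counter.json"  # hypothetical state file

def next_build_version(today=None, path=COUNTER_FILE):
    """Return a version like '03.16(2)': month.day plus a build number
    that resets to 1 whenever the date changes."""
    today = today or datetime.date.today()
    stamp = today.strftime("%m.%d")
    state = {"date": None, "number": 0}
    if os.path.exists(path):
        with open(path) as fh:
            state = json.load(fh)
    # same date as the previous build -> increment; new date -> reset
    number = state["number"] + 1 if state["date"] == stamp else 1
    with open(path, "w") as fh:
        json.dump({"date": stamp, "number": number}, fh)
    return "%s(%d)" % (stamp, number)
```

A build script would call this once per build and inject the returned string into the bundle's version field.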
doc_4388
namespace DrawMe.Core.ViewModels
{
    public class MenuBarViewModel : Conductor<IMenuBarItem>
    {
        [Import(typeof(IMenuBarItem))]
        public IMenuBarItem Item { get; set; }

        public MenuBarViewModel()
        {
            MefInjector.Inject(this);
        }
    }
}

namespace DrawMe.MenuBarItems.ViewModels
{
    [Export(typeof(IMenuBarItem))]
    public class FileViewModel : Screen, IMenuBarItem
    {
        public string Name { get; set; }
        public List<IMenuBarOption> Options { get; set; }
    }
}

When I export the FileViewModel class I get this error:

The export 'DrawMe.MenuBarItems.ViewModels.FileViewModel (ContractName="DrawMe.Api.Models.MenuBar.IMenuBarItem")' is not assignable to type 'DrawMe.Api.Models.MenuBar.IMenuBarItem'.

I don't understand how this is possible. Most information I found is about assembly versions not matching, but I have a very basic solution and every project has the same version. I tried using Lazy<IMenuBarItem> Item, but that imports null. Also, I use MefInjector so I can do imports from classes other than MainViewModel:

public static readonly string ExtensionFolderPath = Path.GetFullPath(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location) + "\\Extensions");
private static readonly DirectoryCatalog DirectoryCatalog = new DirectoryCatalog(ExtensionFolderPath);
private static readonly CompositionContainer Container = new CompositionContainer(DirectoryCatalog);

public static void Inject<T>(T obj) where T : class
{
    Container.ComposeParts(obj);
}

And the Bootstrapper:

protected override IEnumerable<Assembly> SelectAssemblies()
{
    var assemblies = Directory.GetFiles(MefInjector.ExtensionFolderPath, "*.dll", SearchOption.AllDirectories).Select(Assembly.LoadFrom).ToList();
    assemblies.Add(Assembly.GetExecutingAssembly());
    return assemblies;
}

Any suggestions on what else I can try to fix this?

A: Did you tell CM about the assembly in the SelectAssemblies property override? Merely referencing the assembly in the application doesn't tell CM about it.
http://caliburnmicro.com/documentation/bootstrapper

protected override IEnumerable<Assembly> SelectAssemblies()
{
    var assemblies = base.SelectAssemblies();
    var directory = new DirectoryInfo(AppDomain.CurrentDomain.BaseDirectory);
    var files = directory.GetFiles("*.dll", SearchOption.TopDirectoryOnly);
    var modules = files.Where(f => f.Name.Contains("SomeAssemblyNamespacePrefixorProjectName"))
        .Select(f => Assembly.LoadFile(f.FullName));
    return assemblies.Concat(modules);
}

A: So, I guess I fixed it. To be honest, I don't know what was wrong. I just removed my project and created a new one. I did everything exactly the same way, and now it works. Maybe it was an issue with mismatched assemblies, but I don't really know how that would be possible, taking into account that I cleaned my folder many times...
doc_4389
Now I managed to build a wheel for my package that includes a Fortran extension module. The computer on which I built has Windows7 64, and Python 3.6. In order to get everything running, I followed this very helpful guideline (many thanks to Michael Hirsch). One of the steps was to install MinGW-64 with the following settings: Architecture: x86_64, Threads: posix, Exception: seh. I then installed the Python package on another testing machine (Win10 64, Python 3.6) from that wheel: D:\dist2>pip install SMUTHI-0.2.0a0-cp36-cp36m-win_amd64.whl Processing d:\dist2\smuthi-0.2.0a0-cp36-cp36m-win_amd64.whl Requirement already satisfied: scipy in c:\programdata\anaconda3\lib\site-packages (from SMUTHI==0.2.0a0) Requirement already satisfied: sympy in c:\programdata\anaconda3\lib\site-packages (from SMUTHI==0.2.0a0) Requirement already satisfied: argparse in c:\programdata\anaconda3\lib\site-packages (from SMUTHI==0.2.0a0) Requirement already satisfied: numpy in c:\programdata\anaconda3\lib\site-packages (from SMUTHI==0.2.0a0) Requirement already satisfied: matplotlib in c:\programdata\anaconda3\lib\site-packages (from SMUTHI==0.2.0a0) Requirement already satisfied: pyyaml in c:\programdata\anaconda3\lib\site-packages (from SMUTHI==0.2.0a0) Requirement already satisfied: six>=1.10 in c:\programdata\anaconda3\lib\site-packages (from matplotlib->SMUTHI==0.2.0a0) Requirement already satisfied: python-dateutil in c:\programdata\anaconda3\lib\site-packages (from matplotlib->SMUTHI==0.2.0a0) Requirement already satisfied: pytz in c:\programdata\anaconda3\lib\site-packages (from matplotlib->SMUTHI==0.2.0a0) Requirement already satisfied: cycler>=0.10 in c:\programdata\anaconda3\lib\site-packages (from matplotlib->SMUTHI==0.2.0a0) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=1.5.6 in c:\programdata\anaconda3\lib\site-packages (from matplotlib->SMUTHI==0.2.0a0) Installing collected packages: SMUTHI Successfully installed SMUTHI-0.2.0a0 However, when I started a 
test run of the program, I encountered the following error:

D:\dist2>smuthi example_input.dat
Traceback (most recent call last):
  File "c:\programdata\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\programdata\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\ProgramData\Anaconda3\Scripts\smuthi.exe\__main__.py", line 5, in <module>
  File "c:\programdata\anaconda3\lib\site-packages\smuthi\__main__.py", line 4, in <module>
    import smuthi.read_input
  File "c:\programdata\anaconda3\lib\site-packages\smuthi\read_input.py", line 3, in <module>
    import smuthi.simulation
  File "c:\programdata\anaconda3\lib\site-packages\smuthi\simulation.py", line 8, in <module>
    import smuthi.t_matrix as tmt
  File "c:\programdata\anaconda3\lib\site-packages\smuthi\t_matrix.py", line 6, in <module>
    import smuthi.nfmds.t_matrix_axsym as nftaxs
  File "c:\programdata\anaconda3\lib\site-packages\smuthi\nfmds\t_matrix_axsym.py", line 11, in <module>
    import smuthi.nfmds.taxsym
ImportError: DLL load failed: Das angegebene Modul wurde nicht gefunden. ("The specified module could not be found.")

The extension .pyd file (taxsym.cp36-win_amd64.pyd) was in its place - just Python couldn't load it. Next, I uninstalled MinGW from the testing machine and reinstalled MinGW-64 with the same settings that I had used on the building machine (see above). Afterwards, I could run the program, and Python was able to correctly load the extension module.

My question is: does anybody have an idea why the error occurred in the first place? And how can I avoid that the user of my Python package has to have a specific version of MinGW installed (or even any) for the package to work properly?
Edit: A small example that reproduces the error:

Minimal example

File structure:

setup.py
example/
    __init__.py
    run_hello.py
    extension_package/
        __init__.py
        fortran_hello.f90

The setup.py reads:

import setuptools
from numpy.distutils.core import Extension
from numpy.distutils.core import setup

setup(
    name="example",
    version="0.1",
    author="My Name",
    author_email="my@email.com",
    description="Example package to demonstrate wheel issue",
    packages=['example', 'example.extension_package'],
    ext_modules=[Extension('example.extension_package.fortran_hello',
                           ['example/extension_package/fortran_hello.f90'])],
)

The run_hello.py reads:

import example.extension_package.fortran_hello
example.extension_package.fortran_hello.hello()

The fortran_hello.f90 reads:

subroutine hello
    print *,"Hello World!"
end subroutine hello

Creation of the wheel

I ran

python setup.py bdist_wheel

which resulted in the file example-0.1-cp36-cp36m-win_amd64.whl

Installation of the package on machine with correct MinGW version

D:\dist>pip install example-0.1-cp36-cp36m-win_amd64.whl
Processing d:\dist\example-0.1-cp36-cp36m-win_amd64.whl
Installing collected packages: example
Successfully installed example-0.1

D:\dist>python
Python 3.6.0 |Anaconda 4.3.1 (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import example.run_hello
Hello World!
>>> exit()

This is as it should be.
Installation of the package on machine without correct MinGW version To reproduce the error, I renamed the MinGW folder on the testing machine to some other name and then: D:\dist>pip install example-0.1-cp36-cp36m-win_amd64.whl Processing d:\dist\example-0.1-cp36-cp36m-win_amd64.whl Installing collected packages: example Successfully installed example-0.1 D:\dist>python Python 3.6.0 |Anaconda 4.3.1 (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import example.run_hello Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\ProgramData\Anaconda3\lib\site-packages\example\run_hello.py", line 1, in <module> import example.extension_package.fortran_hello ImportError: DLL load failed: Das angegebene Modul wurde nicht gefunden. A: I recently ran into this issue by writing my own f2py building tool chain by compiling and linking all the components individually. The script was finding or installing required compilers automatically if they werent already found on the path. For cases where the gfortran tools werent on the path, but were present on the machine, I was able to inject the correct environment variables to os.environ and spawn compiler calls using Popen and the set of environment variables so that the pyd would compile. But outside of that python instance the environment variables were not correct for the pyd to run, I was getting the same DLL load failed error even on the same computer that compiled the pyds but which didnt have the correct paths setup. So, since I'm compiling all steps separately, only using f2py to generate the f and c wrappers, I simply added -static -static-libgfortran -static-libgcc to my link step, and this causes the pyd to include the required libraries to run on those machines without the correct environment variables. 
Achieving the same using numpy.distutils is possible (thanks to https://github.com/numpy/numpy/issues/3405): from numpy.distutils.core import Extension, setup if __name__ == "__main__": setup( name="this", ext_modules=[ Extension("fortmod_nostatic", ["src/code.f90"], ), Extension("fortmod_withstatic", ["src/code.f90"], extra_link_args=["-static", "-static-libgfortran", "-static-libgcc"] ) ] ) I put the above in a file test.py and built with python test.py build_ext --inplace --compiler=mingw32 --fcompiler=gnu95 -f For comparison there is a clear size difference. Inspecting the pyd's with dependency walker shows the nostatic one depends on libgfortran-4.dll whereas the extra flags generate a pyd that does not depend on this library. In my case after adding the static flags the machine without correct environment variables is able to run the pyds, and I suspect this case will be similar to yours since the dependency on libgfortran is removed. Hope that helps! my first SO post.. A: I can confirm that the steps described in rmar_'s answer create a pyd file that can be imported from a machine that has no MinGW installed. This is really helpful! In order to create a binary wheel, I do the following: * *Follow the steps described in rmar_'s answer in order to generate statically linked pyd files in place. *Run python setup.py bdist_wheel afterwards. For the second step, I switch off the compilation of the pyd file (e.g., by removing the ext_modules keyword from the setup() call), because I want that the pyd file created in the first step is used (and not a newly created, which might be not statically linked). I don't know if this makes sense at all, but it seems to work ...
doc_4390
stream = new StreamReader("configuration.xml");

Standalone it works, but when I start it as a child process it looks for the configuration.xml file in the parent application's path. Is that normal behavior?

C:\My simple Tool\tool.exe
C:\My simple Tool\configuration.xml
C:\ParentTool.exe

As a subprocess of ParentTool.exe, the tool.exe application looks for configuration.xml in C:\ but not in "C:\My simple Tool\", as I would expect.

A: You could try changing:

stream = new StreamReader("configuration.xml");

To:

stream = new StreamReader(@"C:\My simple Tool\configuration.xml");
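The hard-coded path in the answer works, but it ties the tool to one install location. The underlying issue is that a child process inherits the parent's current working directory, so relative paths resolve against it; the general fix is to resolve the file against the executable's own location (in C#, for example, via AppDomain.CurrentDomain.BaseDirectory). The same idea as a small Python sketch:

```python
import os
import sys

def config_path(name="configuration.xml"):
    """Resolve a data file next to the program itself, not against the
    current working directory (which a parent process controls)."""
    base = os.path.dirname(os.path.abspath(sys.argv[0]))
    return os.path.join(base, name)

# Always yields C:\My simple Tool\configuration.xml-style results,
# no matter which directory the parent launched us from.
print(config_path())
```

This keeps the tool relocatable: moving the install folder moves the config lookup with it.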
doc_4391
Authentication plays a crucial role, as a bug might allow a malicious user to control others' phones. The Android client is written and works. It authenticates the user using this method. Basically, this gives me a so-called SACSID token I can store in a cookie. When this cookie is set, App Engine recognizes my user. The server application can then call the UserService to get a User on which to call getUserId(). getUserId() returns me an opaque string that identifies my user. So far so good.

Now I am writing the JS and CLI clients. Because the CLI client has no reliable way of displaying a CAPTCHA or opening a browser, I suppose that it needs to use the Device API ("Client login" is not an option). This API requires using OAuth 2 for authentication. Also, I want the JS client to access the user's contacts, which also seems to indicate that OAuth 2 would be suitable.

If my user authenticates using OAuth 2, will I be able to somehow transform this Google OAuth 2 token into the same opaque String that I get when the Android client connects? If not, can I modify my Android application so that it uses OAuth instead of a SACSID token? To be more specific, I see three things that would solve my problem:

* A way of getting an OAuth 2 token from the Account Manager
* A way of exchanging the OAuth 2 token for a SACSID token
* A way of getting the same opaque UserID both with the SACSID token and with OAuth 2, but only if I can have both authentication systems on the same application

Something that seems similar to the third possible solution is to get the user's email address from both OAuth and the SACSID token, and to use that as the user ID. This however looks a bit clumsy to me:

* Every time I receive an OAuth 2 request, I would need to call Google APIs to retrieve the user's email address (or build my own system of tokens, which seems insecure and introduces many other difficulties).
*The email address of a given user can change, making me lose the association between the user and his previous data. A: Use End Points instead: https://developers.google.com/appengine/docs/java/endpoints/ They use oauth2, they are easy to implement and have support for android IOS and Web.
doc_4392
columns: 70px 2;
-webkit-columns: 70px 2; /* Safari and Chrome */
-moz-columns: 70px 2;

Fiddle here

I would like the column on the left to be text-aligned right and the column on the right to be text-aligned left. I know I could perhaps build my columns by hand as two divs with paragraphs, add display: inline-block and set the sizes and style, but I was wondering if there was a way to target a specific column via the method used above.

A: Short answer, no you can't! The CSS division that creates the columns doesn't break the element into two separate elements, so any other properties applied are shared between the columns.

Here are the properties you can use on CSS3 columns: http://davidwalsh.name/css-columns

Similar question: Is there a way to specify different width to columns in CSS3?
doc_4393
#container { width: 1200px; display: grid; grid-gap: 10px; grid-template-columns: 2fr 1fr 1fr; grid-template-rows: 225px 175px 225px 160px; } .feature { grid-row: 1 / 5; } .vertical { grid-row-end: span 2; } .im_picture { background-color: #336; } .im_picture img {} .button { background-color: #808000; grid-column: 2 / 4; } <div id="container"> <div class="im_picture feature"></div> <div class="im_picture vertical"></div> <div class="im_picture"></div> <div class="im_picture vertical"></div> <div class="im_picture"></div> <div class="button">book a session</div> A: Nothing wrong with the code, it's an issue with a combination of Chrome and Windows settings https://superuser.com/questions/1139259/how-to-adjust-ui-scaling-for-chrome
doc_4394
echo '<p>Testing SiteLink...</p>'; define( 'SITELINK_URL', "https://www.smdservers.net/CCWs_3.5/CallCenterWs.asmx?WSDL"); define( 'SITELINK_CORP_CODE', "CCTST" ); define( 'SITELINK_LOC_CODE', "Demo" ); define( 'SITELINK_CORP_LOGIN', "Administrator" ); define( 'SITELINK_CORP_PASS', "Demo" ); $client = new SoapClient( SITELINK_URL ); $params->sCorpCode = SITELINK_CORP_CODE; $params->sLocationCode = SITELINK_LOC_CODE; $params->sCorpUserName = SITELINK_CORP_LOGIN; $params->sCorpPassword = SITELINK_CORP_PASS; try { $units = $client->SiteInformation( $params ); $result = $units->SiteInformationResult; } catch (Exception $e ) { die( 'Error: '.$e->getMessage().'<br>'.$e ); } echo htmlentities( $result->any ); But I am not able to understand how to fecth the data and process it using php. I have to Fetch unit sizes and their respective prices. Any help is greatly appreciated. Thanks in advance. A: SiteInformation is expecting array not Object and your $params was never declared define('SITELINK_URL', "http://www.smdservers.net/CCWs_3.5/CallCenterWs.asmx?WSDL"); define('SITELINK_CORP_CODE', "CCTST"); define('SITELINK_LOC_CODE', "Demo"); define('SITELINK_CORP_LOGIN', "Administrator"); define('SITELINK_CORP_PASS', "Demo"); $client = new SoapClient(SITELINK_URL); $params = array("sCorpCode" => SITELINK_CORP_CODE,"sLocationCode" => SITELINK_LOC_CODE,"sCorpUserName" => SITELINK_CORP_LOGIN,"sCorpPassword" => SITELINK_CORP_PASS); try { $units = $client->SiteInformation($params); header("Content-Type: text/xml"); print($units->SiteInformationResult->any); } catch ( Exception $e ) { die('Error: ' . $e->getMessage() . '<br>' . 
$e); } See Live DEMO A: You are not trying to get Units you are getting SiteInformation add type for $params first: $params = new stdClass; // as stdClass $params->sCorpCode = SITELINK_CORP_CODE; $params->sLocationCode = SITELINK_LOC_CODE; $params->sCorpUserName = SITELINK_CORP_LOGIN; $params->sCorpPassword = SITELINK_CORP_PASS; then: $units = $client->UnitTypePriceList($params); $result = $units->UnitTypePriceListResult->any; and you have to parse the data with DOMDocument, check the tag name you want to display and print it out: $dom = new DOMDocument; $dom->loadXML($result); $xpath = new DOMXPath($dom); $el = $xpath->query('//Table'); foreach($el as $units){ $UnitID = $xpath->query('UnitID_FirstAvailable', $units)->item(0)->nodeValue; echo $UnitID; } I don't know what are your desired values, but I took UnitID_FirstAvailable as example. A: I know this is kind of old, but I've been doing some extensive work with the API. First off, you'll want this: SiteLink API Documentation To get all available units, you could do something like this: // define API connection credentials define('SITELINK_URL', "http://www.smdservers.net/CCWs_3.5/CallCenterWs.asmx?WSDL"); define('SITELINK_CORP_CODE', "CCTST"); define('SITELINK_LOC_CODE', "Demo"); define('SITELINK_CORP_LOGIN', "Administrator"); define('SITELINK_CORP_PASS', "Demo"); $client = new SoapClient( SITELINK_URL ); $params->sCorpCode = SITELINK_CORP_CODE; $params->sLocationCode = SITELINK_LOC_CODE; $params->sCorpUserName = SITELINK_CORP_LOGIN; $params->sCorpPassword = SITELINK_CORP_PASS; $params->lngLastTimePolled = 0; $params->bTestMode = true; try { $units = $client->UnitsInformationAvailableUnitsOnly_v2($params); $result = $units->UnitsInformationAvailableUnitsOnly_v2Result; } catch (Exception $e) { die( 'Error: '.$e->getMessage().'<br />'.$e ); } echo '<table>'; $formatUnits = new SimpleXMLElement($result->any); foreach($formatUnits->NewDataSet->Table as $unit){ echo "<tr>\r\n"; echo "<td><a href='#' 
data-unit-number='".$unit->sUnitName."' data-unit-id='".$unit->UnitID."' data-rate='".$rate."' class='res-unit-link'>".$unit->sUnitName."</a></td>\r\n"; echo "<td>".$unit->sTypeName."</td>\r\n"; echo "</tr>\r\n"; } echo '</table>'; You could do var_dump($formatUnits) to see all available data in the object that gets returned by SiteLink. The above code is an example from something I made where I used jQuery to grab all of the data- attributes from the link to use for the rental/reservation process. I hope this helps somebody. I really could have used it when I first started using the API.
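The XML-walking pattern that the last two answers share, loading the inner XML of the SOAP result and iterating over NewDataSet/Table rows to pull out unit sizes and prices, can be outlined as follows (illustrative Python; the payload and the dcStdRate field name are invented for the example, only sTypeName appears in the answers above):

```python
import xml.etree.ElementTree as ET

# Stand-in for $units->...Result->any: the inner XML returned by
# SiteLink, with rows nested under NewDataSet/Table as iterated above.
payload = """
<NewDataSet>
  <Table><sTypeName>5x5</sTypeName><dcStdRate>45.00</dcStdRate></Table>
  <Table><sTypeName>10x10</sTypeName><dcStdRate>95.00</dcStdRate></Table>
</NewDataSet>
"""

units = []
for table in ET.fromstring(payload).iter("Table"):
    size = table.findtext("sTypeName")          # unit size label
    rate = float(table.findtext("dcStdRate"))   # hypothetical price field
    units.append((size, rate))
```

The PHP equivalents are SimpleXMLElement (third answer) and DOMDocument/DOMXPath (second answer); the structure of the loop is the same in all three.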
doc_4395
Because of the large amount of data, it is very slow to read the global data again and again just to detect what was changed, deleted or added. For reading the globals I use GlobalsFramework.dll and InterSystems.CacheExtreme.dll. So I need a way of tracking data changes without the Audit Log.

A: No way, no real-time notifications. The only way to get this information is from the Journal files.
doc_4396
r1 = rand(100,1); r1 = array2table(r1); r1.Properties.VariableNames = {'Random1'}; r2 = rand(54,1); r2 = array2table(r2); r2.Properties.VariableNames = {'Random2'}; Is there a way, for example, for me to add a column named 'Time' that contains the character 'hours' for both based on the number of rows of the table? A: You can just add the new array to your tables: r1.Time = repmat('hours', [length(r1.Random1), 1]); r2.Time = repmat('hours', [length(r2.Random2), 1]);
doc_4397
I have tried exporting the moment function, but there are still run-time problems and some compile errors. Has anyone successfully compiled moment.js with advanced optimizations, or does anyone know how to do so? The only solution I can come up with is to concatenate the minified file to the compiled source and use externs for every function I use from moment.js. But this is not an ideal solution.

A: I saw two issues with the code which would have to be corrected before momentjs would be compatible with ADVANCED_OPTIMIZATIONS. There may be more, but these were the glaring ones:

* Using an alias for the prototype: all references to .fn would need to be replaced with .prototype.
* Using a helper function to add methods: the extend method hides definitions from the compiler. All uses of the extend helper function would have to be refactored so that they do not hide the property assignments from the compiler.

A: I can't get it to work either as of 26 March 2015, but the existence of this suggests that it's possible. Here are the externs
/** * @fileoverview Externs for moment * * @externs */ /** * @param {string=} date * @constructor * @return {!moment} */ function moment(date) {} /** * @param {!moment} m * @param {string} unit * @return {number} */ moment.diff = function (m, unit) {}; moment.prototype.diff = moment.diff; Now clearly that description of the moment function isn't perfect; it's missing some parameters that the moment function has, but I'm not using them so it doesn't matter to me. But that's how I start my externs. I start basic as the need arises and then I continue to grow the externs file with the more functions I need from a library. And don't forget to tell Closure Compiler where your extern is located with the flag --externs 'externs/moment.js'.
doc_4398
My tab menu has a style like below:

<ul class="nav nav-tabs">
  <li class="active"><a data-toggle="tab" href="#home">Name1</a></li>
  <li><a data-toggle="tab" href="#menu1">Name2</a></li>
  <li><a data-toggle="tab" href="#menu2">Name3</a></li>
  <li><a data-toggle="tab" href="#menu3">Name4</a></li>
</ul>
<div class="tab-content">
  <br /><br />
  <div id="home" class="tab-pane fade in active">
    <div class="col-sm-4">
      <ul class="nav navbar-link navbar-right">
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
      </ul>
    </div>
    <div class="col-sm-4">
      <ul class="nav navbar-link navbar-right">
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
      </ul>
    </div>
    <div class="col-sm-4">
      <ul class="nav navbar-link navbar-right">
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
      </ul>
    </div>
  </div>
  <div id="menu1" class="tab-pane fade">
    <div class="col-sm-4">
      <ul class="nav navbar-link navbar-right">
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
      </ul>
    </div>
    <div class="col-sm-4">
      <ul class="nav navbar-link navbar-right">
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
      </ul>
    </div>
    <div class="col-sm-4">
      <ul class="nav navbar-link navbar-right">
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" />...</a></li>
      </ul>
    </div>
  </div>
  <div id="menu2" class="tab-pane fade">
  </div>
  <div id="menu3" class="tab-pane fade">
  </div>
</div>
</div>

I want to connect this tab menu to a database with a Repeater or any other useful control. How can I do this?
A: Wrap the column markup in a Repeater's ItemTemplate and replace the static text with Eval expressions:

<asp:Repeater ID="rep" runat="server">
  <ItemTemplate>
    <div class="col-sm-4">
      <ul class="nav navbar-link navbar-right">
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" /><%# Eval("FName") %></a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" /><%# Eval("ParentID").ToString() %></a></li>
        <li><a href="#"><img src="images/list/tik.png" style="width:20px; height:20px;" /><%# Eval("Active").ToString() %></a></li>
      </ul>
    </div>
  </ItemTemplate>
</asp:Repeater>

Then bind it from code-behind by setting rep.DataSource to your query result and calling rep.DataBind().
doc_4399
class MyClass {
    static func + <T>(el: T, arr: [T]) -> [T] {
        var ret = arr
        ret.insert(el, at: 0)
        return ret
    }
}

The compiler has this to say about it:

Member operator '+' must have at least one argument of type 'MyClass'

I updated the method to constrain the generic parameter, as follows:

static func + <T: Sequence>(el: T, arr: [T]) -> [T] {
    var ret = arr
    ret.insert(el, at: 0)
    return ret
}

That doesn't get rid of the compiler error. I tried Googling it, but I'm not coming up with any short, simple answers. Any ideas how I might quell this error?

A: As explained by @MartinR in the comments, you need to put this function at the top level (NOT inside a class).

import Foundation

// Will compile fine from here as is
func + <T>(el: T, arr: [T]) -> [T] {
    var ret = arr
    ret.insert(el, at: 0)
    return ret
}

class TableGenerator {
    // A member operator here would require at least one argument
    // of the function to be of `TableGenerator` type
}
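To illustrate, here is a minimal self-contained sketch showing the same generic prepend operator compiling and working once it is declared at file scope (the sample values are made up for the example):

```swift
// Declared at file scope, so the "member operator" rule does not apply:
// a member operator must mention its enclosing type somewhere in its
// signature, but a free operator function can be fully generic.
func + <T>(el: T, arr: [T]) -> [T] {
    var ret = arr
    ret.insert(el, at: 0)  // prepend the element
    return ret
}

// Usage: prepend a single element to an array of the same element type.
let numbers = 1 + [2, 3, 4]
print(numbers)  // [1, 2, 3, 4]

let words = "hello" + ["world"]
print(words)    // ["hello", "world"]
```

Note that the generic overload does not clash with the standard library's `+` for arrays, because the left operand here is a single element rather than a sequence.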