doc_3700
It is empty only on the client side. See the following which has been stripped back: MyCollection = new Meteor.Collection("mycollection"); if (Meteor.isServer) { var result = MyCollection.find({name: 'MyName'}, {limit: 25}).fetch(); console.log(result); } if (Meteor.isClient) { var result = MyCollection.find({name: 'MyName'}, {limit: 25}).fetch(); console.log(result); } I can see the correct result from the server code but not the client. What am I missing? A: Assuming you haven't removed autopublish or you're correctly publishing and subscribing, you're probably running the client code before it has received the data from the server. Try this: if (Meteor.isClient) { Deps.autorun(function() { var result = MyCollection.find({name: 'MyName'}, {limit: 25}).fetch(); console.log(result); }); } You may get one empty result logged on the client, shortly followed by the correct result (after the client receives the data and reruns the autorun function).
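If autopublish has in fact been removed, the publish/subscribe side the answer mentions has to be set up explicitly. A rough sketch — the publication name "myNamedDocs" and the hard-coded selector are just for illustration:

MyCollection = new Meteor.Collection("mycollection");

if (Meteor.isServer) {
  // Publish only the documents the client is interested in.
  Meteor.publish("myNamedDocs", function () {
    return MyCollection.find({name: 'MyName'}, {limit: 25});
  });
}

if (Meteor.isClient) {
  // Ask the server for the published documents.
  Meteor.subscribe("myNamedDocs");

  // Reruns automatically once the data arrives on the client.
  Deps.autorun(function () {
    console.log(MyCollection.find({name: 'MyName'}, {limit: 25}).fetch());
  });
}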
doc_3701
Uncaught TypeError: Object function (a,b){return new p.fn.init(a,b,c)} has no method 'curCSS' Can anyone tell me how to get rid of it? I get my autocomplete using: $("#state_auto").autocomplete({ source: site_url + "content/user/state", minLength: 1, select: function(event, ui) { $("#iStateId").val(ui.item.id); //console.log(ui.item); } }); A: It looks like from this bug report, that some versions have this problem where others do not. You may want to read through it and see if there's a version that will work out for you.
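If upgrading or downgrading isn't practical, a workaround that was commonly suggested at the time — assuming the error comes from pairing jQuery 1.8+ (which dropped $.curCSS) with an older jQuery UI build — is to restore the removed alias before the autocomplete code runs:

// $.curCSS was removed in jQuery 1.8; older jQuery UI versions still call it.
// Restoring it as an alias of $.css is a stop-gap until jQuery UI is upgraded.
if (!$.curCSS) {
    $.curCSS = $.css;
}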
doc_3702
Background: Some old mobile devices (e.g. an HTC Desire with Android 2.3.4) refuse to display a page containing named entities: This page contains the following errors ... Entity 'auml' not defined. The page has an HTML5 doctype and according to the specification auml is a valid predefined character reference. So I think this is a browser bug, but that does not help me here. What I tried is to replace special characters with their Unicode representation. But if I place an ä or even an &#228; in the view, JSF will render me an &auml;. If I place the entity mapping in the doctype (I know that this should not be done in an HTML5 doctype) the behaviour gets really strange: <!DOCTYPE html [ <!ENTITY auml "&#228;"> <!ENTITY mdash "&#8212;"> ... ]> This will result in a correct HTML5 doctype without the mapping. But an &mdash; will be replaced by &#8212; while an &auml; won't be replaced. Does anybody have an explanation for that, or is there a way to configure JSF to always render Unicode-escaped entities? A: This will happen if you've set <f:view encoding> to a non-Unicode compatible encoding. Fix it accordingly: <f:view encoding="UTF-8"> Since JSF2 on Facelets, this is the default value already, by the way, so you can safely omit it if you're indeed using JSF2 on Facelets.
doc_3703
Where should I put the dataTask? Should it go in cellForRowAtIndexPath or should it be in the UITableViewCell subclass? Either way it seems to fire off tens of connections when I scroll quickly. How would I get around this and cancel the download when the cell is scrolled off the screen?
doc_3704
A: The contents of that URL look like URL parameters. You could use urlparse.parse_qs to parse them into a dict: import urllib2 import urlparse url = 'http://www.tip.it/runescape/gec/price_graph.php?avg=1&start=1327715574&mainitem=10350&item=10350' response = urllib2.urlopen(url) content = response.read() params = urlparse.parse_qs(content) print(params['values']) A: You may want to look into the re module (although if you do eventually move to HTML, regex is not the best solution). Here is a basic example that grabs the text after &values and returns the following number/comma/space combinations: >>> import re >>> import urllib2 >>> url = 'http://www.tip.it/runescape/gec/price_graph.php?avg=1&start=1327715574&mainitem=10350&item=10350' >>> contents = urllib2.urlopen(url).read() >>> values = re.findall(r'&values=([\d,\s]*)', contents) >>> values[0].split(',') ['33900000', '33900000', '33900000', #continues....]
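For reference, the same parse-the-response-as-a-query-string idea under Python 3 would look roughly like this (a sketch assuming the endpoint still returns data in that format):

from urllib.request import urlopen
from urllib.parse import parse_qs

url = ('http://www.tip.it/runescape/gec/price_graph.php'
       '?avg=1&start=1327715574&mainitem=10350&item=10350')

# The body itself looks like URL-encoded key=value pairs, so parse_qs works on it.
content = urlopen(url).read().decode('utf-8')
params = parse_qs(content)
print(params.get('values'))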
doc_3705
strTxt = strTxt & txt.ReadAll How can I solve it, that it doesn't stop if a file is empty? Sub MergeTxtFiles() Dim fso As Object, txt As Object, strTxt As String Dim strParentFldr As String, strFile As String Dim iFreeFile As Integer, strOutPutFile As String KillProperly "D:\users\gf05856\Documents\TEst\RVA002ALL.txt" Call DeleteLine1 Call DeleteLine2 strOutPutFile = "D:\users\gf05856\Documents\TEst\RVA002ALL.txt" strParentFldr = "D:\users\gf05856\Documents\TEst" Set fso = CreateObject("Scripting.FileSystemObject") strFile = Dir(strParentFldr & "\Register van aankomst 002*.txt") If Len(strFile) > 0 Then Do Set txt = fso.OpenTextFile(strParentFldr & "\" & strFile) strTxt = strTxt & txt.ReadAll strFile = Dir Loop Until Len(strFile) = 0 If Left(strTxt, 2) = vbCrLf Then strTxt = Mid(strTxt, 2) End If iFreeFile = FreeFile Open strOutPutFile For Output As #iFreeFile Print #iFreeFile, strTxt Close #iFreeFile Set txt = Nothing Set fso = Nothing End Sub Public Sub KillProperly(Killfile As String) If Len(Dir$(Killfile)) > 0 Then SetAttr Killfile, vbNormal Kill Killfile End If End Sub Sub DeleteLine1() Const ForReading = 1 Const ForWriting = 2 Set objFSO = CreateObject("Scripting.FileSystemObject") Set objFile = objFSO.OpenTextFile("D:\users\gf05856\Documents\TEst\Register van aankomst 002 - D (Histo).txt", ForReading) Do Until objFile.AtEndOfStream strLine = objFile.ReadLine If InStr(strLine, "CLIFOUPAYCODE") = 0 Then strNewContents = strNewContents & strLine & vbCrLf End If Loop objFile.Close Set objFile = objFSO.OpenTextFile("D:\users\gf05856\Documents\TEst\Register van aankomst 002 - D (Histo).txt", ForWriting) objFile.Write strNewContents objFile.Close End Sub Sub DeleteLine2() Const ForReading = 1 Const ForWriting = 2 Set objFSO = CreateObject("Scripting.FileSystemObject") Set objFile = objFSO.OpenTextFile("D:\users\gf05856\Documents\TEst\Register van aankomst 002 - D.txt", ForReading) Do Until objFile.AtEndOfStream strLine = objFile.ReadLine If InStr(strLine, "CLIFOUPAYCODE") = 0 Then strNewContents = strNewContents & strLine & vbCrLf End If Loop objFile.Close Set objFile = objFSO.OpenTextFile("D:\users\gf05856\Documents\TEst\Register van aankomst 002 - D.txt", ForWriting) objFile.Write strNewContents objFile.Close End Sub A: .ReadAll() fails on empty/zero length/size files: >> WScript.Echo oFS.GetFile("x").Size >> Set f = oFS.OpenTextFile("x") >> s = f.ReadAll() >> 0 Error Number: 62 Error Description: Eingabe hinter Dateiende. >> So you should check the .Size to determine whether the file should be opened at all. (BTW: What's the real world problem you try to solve?)
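A minimal sketch of that size check, guarding the ReadAll call inside the merge loop (variable names follow the question's code):

If Len(strFile) > 0 Then
    Do
        ' Only open and read files that actually contain data;
        ' .ReadAll raises error 62 on zero-byte files.
        If fso.GetFile(strParentFldr & "\" & strFile).Size > 0 Then
            Set txt = fso.OpenTextFile(strParentFldr & "\" & strFile)
            strTxt = strTxt & txt.ReadAll
            txt.Close
        End If
        strFile = Dir
    Loop Until Len(strFile) = 0
End If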
doc_3706
In the Movies table I got three columns for Actors: Actor_1, Actor_2, Actor_3. In these fields, I only write numbers which correspond to a row in the Actors table. Each actor has these columns: Actor_ID, Firstname, Surname Now if I do this query: SELECT movie.titel, firstname + ' ' + surname AS name FROM Movie INNER JOIN Actors ON movie.actor_1=actor.actor_id Then I get almost what I want. I get the movie titles but only one actor per movie. I don't know what I am supposed to do for the movies where I have two or three actors. I had to translate to code so it would be more understandable. The code in its original shape works, so don't mind if it's not 100% here. I would just appreciate some pointers on how I could do this. A: You need to change your table design to include a junction table. So you will have Movies ID Etc Actors ID Etc MoviesActors MovieID ActorID Etc Your query might be: SELECT m.MovieTitle, a.ActorName FROM Actors a INNER JOIN (Movies m INNER JOIN MoviesActors ma ON m.ID = ma.MovieID) ON a.ID = ma.ActorID Relational database design Junction tables
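A sketch of what that junction-table design could look like as DDL (column names are illustrative, not taken from the question):

CREATE TABLE Movies (
    ID INT PRIMARY KEY,
    MovieTitle VARCHAR(200) NOT NULL
);

CREATE TABLE Actors (
    ID INT PRIMARY KEY,
    Firstname VARCHAR(100) NOT NULL,
    Surname   VARCHAR(100) NOT NULL
);

-- One row per movie/actor pairing; a movie can have any number of actors.
CREATE TABLE MoviesActors (
    MovieID INT NOT NULL REFERENCES Movies(ID),
    ActorID INT NOT NULL REFERENCES Actors(ID),
    PRIMARY KEY (MovieID, ActorID)
);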
doc_3707
Here is a SQL Fiddle with the data * *Tables TABLEA visited_states_time AL= Alabama,2, AK=Alaska,5 AR=Arkansas,6 AZ=Arizona,10 CA=California, 10,CT=Connecticut,20 TABLEB CRITERIA AL HI CA CT AK *Desired Result visited_states ................................... total_time_spent AL= Alabama, AK=Alaska ............................ 7 CA=California, CT=Connecticut................... 30 A: That's a terrible data model. also you didn't say the condition for tableb. if any state matches, or if all? as we need to split the rows up (to sum()) and then recombine them you can use: SQL> with v as (select rownum r, 2 ','||visited_states_time||',' visited_states_time, 3 length( 4 regexp_replace(visited_states_time, '[^,]', '') 5 )+1 fields 6 from tablea) 7 select trim(both ',' from visited_states_time) visited_states_time, 8 sum(total_time_spent) total_time_spent 9 from (select * 10 from v 11 model 12 partition by (r) 13 dimension by (0 as f) 14 measures (visited_states_time, cast('' as varchar2(2)) state, 15 0 as total_time_spent, fields) 16 rules ( 17 state[for f from 0 to fields[0]-1 increment 2] 18 = trim( 19 substr(visited_states_time[0], 20 instr(visited_states_time[0], ',', 1, cv(f)+1)+1, 21 instr(visited_states_time[0], '=', 1, (cv(f)/2)+1) 22 - instr(visited_states_time[0], ',', 1, cv(f)+1)-1 23 )), 24 visited_states_time[any]= visited_states_time[0], 25 total_time_spent[any] 26 = substr(visited_states_time[0], 27 instr(visited_states_time[0], ',', 1, (cv(f)+2))+1, 28 instr(visited_states_time[0], ',', 1, (cv(f)+3)) 29 - instr(visited_states_time[0], ',', 1, (cv(f)+2))-1 30 ) 31 )) 32 where state in (select criteria from tableb) 33 group by visited_states_time; VISITED_STATES_TIME TOTAL_TIME_SPENT ------------------------------------- ---------------- CA=California, 10,CT=Connecticut,20 30 AL=Alabama,2, AK=Alaska,5 7 but seriously, rewrite that data model to store them separately to start with.
doc_3708
Divs: <div class="container1"> <a href="#cont1"> <img src="down.png"></img> </a> <h1>Day</h1> <p> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Mauris gravida ultricies suscipit. Integer in luctus enim, id varius velit. Suspendisse potenti. Quisque feugiat lectus eget est suscipit, eget aliquam mauris pharetra. Fusce aliquet dui nec mi pulvinar, eu volutpat diam volutpat. Integer eget neque facilisis, ornare felis ac, vulputate eros. Etiam et accumsan erat. Aenean porttitor egestas justo et vestibulum. Donec gravida dignissim neque id vehicula. Ut non nunc ut lectus placerat tempor. Sed porttitor ullamcorper eros, sed eleifend felis. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Quisque mauris erat, consequat sed nulla et, volutpat accumsan leo. Mauris cursus aliquet magna, eu facilisis velit scelerisque vitae. Aliquam tristique id nisl in pulvinar. Vestibulum non adipiscing dui, a commodo lorem. </p> </div> <div class="container2"> <a href="#cont"> <img src="up.png"></img> </a> <h2>Night</h2> <p name=cont1>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Mauris gravida ultricies suscipit. Integer in luctus enim, id varius velit. Suspendisse potenti. Quisque feugiat lectus eget est suscipit, eget aliquam mauris pharetra. Fusce aliquet dui nec mi pulvinar, eu volutpat diam volutpat. Integer eget neque facilisis, ornare felis ac, vulputate eros. Etiam et accumsan erat. Aenean porttitor egestas justo et vestibulum. Donec gravida dignissim neque id vehicula. Ut non nunc ut lectus placerat tempor. Sed porttitor ullamcorper eros, sed eleifend felis. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Quisque mauris erat, consequat sed nulla et, volutpat accumsan leo. Mauris cursus aliquet magna, eu facilisis velit scelerisque vitae. Aliquam tristique id nisl in pulvinar. Vestibulum non adipiscing dui, a commodo lorem. </p> </div> Css: body { background-image: url('http://media-cache-ec0.pinimg.com/736x/b8/02/64/b80264c12c88eac19d5e4c8597d051e1.jpg'); background-attachment: fixed; background-size: cover; font-family: 'Roboto', sans-serif; text-align: center; color: white; height: 100%; text-shadow: black 0 0 4px; } p { height: 100%; } .container1 { position: absolute; height: 100%; } .container2 { width: 100%; background-image: url(http://ak7.picdn.net/shutterstock/videos/5200997/preview/stock-footage-blurred-background-of-moving-lights-from-a-road-of-traffic-at-night-time.jpg); background-attachment: fixed; background-size: cover; margin: 0; position: absolute; left: 0; height: 100%; } img { max-width: 15vw; height: auto; } .container1 img { position: absolute; top: 0; left: 0; } .container2 img{ position: absolute; left: 0; } h1 { font-size: 15vw; } h2{ font-size: 15vw; } Please ignore any compatability issues or wrong use of certain tags, im working on it. Broken Result: Im wanting the day part to be full height A: Not only body, but html tag should be set to height: 100% as well. Percentage height property is calculated relative to the parent, and, as it turns out, body isn't the highest one, html is.
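A minimal version of that fix — the html element needs the rule as well, so the body's percentage height has something to resolve against:

html, body {
    height: 100%;
}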
doc_3709
Is there any easy way of reverting to my previous pods? I'm thinking of hacking it -- I have my Podfile.lock file under source control, so I can grab the version numbers off that, then lock down the Podfile to those numbers e.g.: pod 'AFNetworking', '2.6.3' But is there a simple command like pod revert or pod undo or something? Otherwise I'll probably just suck it up and update my project code to be compatible to the new pods. A: You can go through the version changes in the pod install's output and rewrite your podfile with the desired versions. For example: In pod install output below, SwiftSpinner was updated from 0.9.5 to 1.0.2 Therefore, in the podfile change the line pod 'SwiftSpinner' to pod 'SwiftSpinner', '0.9.5' and run pod update again. Now, you might get errors reverting certain pods: [!] Unable to satisfy the following requirements: - `Realm (= 1.0.2)` required by `Podfile` - `Realm (= 1.1.0)` required by `Podfile.lock` In such a case, close Xcode, delete your podfile and run pod install again. A: The best way is to remove the pods directory from your project and update and install it again. rm -rf Pods bundle exec pod repo update bundle exec pod install A: If you use git or other version control system, just reset changes in Podfile.lock and run $ pod install in terminal. A: Remove the following files and folders: - xcworkspace - Podfile.lock - Pods (Folder) After, run: $ pod install This seems like something risky, but really it is not. I do it all the time
doc_3710
async function CompleteUpload(){ await LoadValues(); await UploaderMethod(); alert("Product Added Successfully!"); location.reload(); } The alert just pops before the 2 await function calls, and the page also gets reloaded before these methods are executed. async function LoadValues(){ prName = document.getElementById('NAMEbox').value; proc = document.getElementById('PROCbox').value; } async function UploaderMethod(){ var uploadTask = firebase.storage().ref("Laptops/Img" + id+".png").put(files[0]); uploadTask.on('state_changed', function(snapshot){ }, .... //firebase upload data function The upload in CompleteUpload() works perfectly if I don't put the alert() and reload at the end. UPDATED** (after someone answered about returning a promise) At the end of the upload task I wrote this: return new Promise(function(resolve, reject) { resolve("yeah"); } I changed CompleteUpload to: function CompleteUpload(){ LoadValues(); UploaderMethod().then(Reeeload()); } function Reeeload(){ alert("Product Added Successfully!"); location.reload(); } A: This has absolutely nothing to do with the alert. Your UploaderMethod is defined as async so it always returns a promise, but that promise resolves before the uploadTask is complete (so it continues to the next statement (the alert followed by the reload) immediately). You should: * *Remove the async keyword from it (because it isn't awaiting any promises) *Return a promise created with the Promise constructor *Resolve that promise inside the state_changed event handler (when everything is resolved). See How do I convert an existing callback API to promises?. Aside: LoadValues does nothing except entirely synchronous DOM accesses. It shouldn't be marked as async and you shouldn't await the result. A: Is it specifically alert() that doesn't work with async/await? What if you replace the alert() and location.reload() lines with something like console.log to see if that executes first as well. It might be your LoadValues() and UploaderMethod() causing the problems. A: await won't work in top-level code, so instead of function(){ } use ()=>{} and it will work. I made the screenshot from this site; I had the same trouble (https://javascript.info/async-await).
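The first answer's suggestion, sketched with the question's names (only the promise wiring is shown; the upload configuration stays as in the question and error handling is minimal):

function UploaderMethod() {
  return new Promise(function (resolve, reject) {
    var uploadTask = firebase.storage().ref("Laptops/Img" + id + ".png").put(files[0]);

    uploadTask.on('state_changed',
      function (snapshot) { /* progress updates, if needed */ },
      reject,                     // upload failed
      function () { resolve(); }  // upload finished: safe to continue
    );
  });
}

async function CompleteUpload() {
  LoadValues();              // synchronous DOM reads, no await needed
  await UploaderMethod();    // now this really waits for the upload to finish
  alert("Product Added Successfully!");
  location.reload();
}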
doc_3711
<Languages> <Language Type="Subbed">EN</Language> <Language Type="Dubbed">FR</Language> </Languages> Here is the XSD I currently have -- how would I add in the "subbed|dubbed" restriction? <xs:element name="Languages"> <xs:complexType> <xs:sequence> <xs:element name="Language" maxOccurs="unbounded" minOccurs="0"> <xs:complexType> <xs:simpleContent> <xs:extension base="xs:string"> <xs:attribute type="xs:string" name="Type" use="optional"/> </xs:extension> </xs:simpleContent> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> A: You can achieve your goal with enumerations: Replace <xs:attribute type="xs:string" name="Type" use="optional"/> with <xs:attribute type="LanguageType" name="Type" use="optional"/> and add <xs:simpleType name="LanguageType"> <xs:restriction base="xs:string"> <xs:enumeration value="Subbed"/> <xs:enumeration value="Dubbed"/> </xs:restriction> </xs:simpleType> to restrict Language/@Type to be one of Subbed or Dubbed. Here is the above adjustment applied to a complete XSD: <?xml version="1.0" encoding="UTF-8"?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" version="1.0"> <xs:element name="Languages"> <xs:complexType> <xs:sequence> <xs:element name="Language" maxOccurs="unbounded" minOccurs="0"> <xs:complexType> <xs:simpleContent> <xs:extension base="xs:string"> <xs:attribute type="LanguageType" name="Type" use="optional"/> </xs:extension> </xs:simpleContent> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> <xs:simpleType name="LanguageType"> <xs:restriction base="xs:string"> <xs:enumeration value="Subbed"/> <xs:enumeration value="Dubbed"/> </xs:restriction> </xs:simpleType> </xs:schema> This will validate your XML, as requested.
doc_3712
/** * Gets the uptime of the application since the JVM launch * @return The uptime in a formatted String (DAYS, MONTHS, MINUTES, SECONDS) */ public static String getGameUptime() { RuntimeMXBean mxBean = ManagementFactory.getRuntimeMXBean(); DateFormat dateFormat = new SimpleDateFormat("dd:HH:mm:ss"); dateFormat.setTimeZone(TimeZone.getTimeZone("GMT+0")); return dateFormat.format(new Date(mxBean.getUptime())); } But when the application has only been running for 3 minutes and 50 seconds, it will return a "01:00:03:50". The 01 should be 00 in this case. What is the cause of this? How do I fix this? A: Try this DateFormat dateFormat = new SimpleDateFormat("HH:mm:ss"); dateFormat.setTimeZone(TimeZone.getTimeZone("GMT")); long uptime = mxBean.getUptime(); String d = uptime / (3600 * 1000 * 24) + ":" + dateFormat.format(uptime); Note that DateFormat accepts milliseconds as well
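An alternative sketch that avoids treating the uptime as a calendar date altogether, using java.time.Duration (Java 8+):

import java.lang.management.ManagementFactory;
import java.time.Duration;

public class Uptime {
    /** Formats the JVM uptime as dd:HH:mm:ss without involving any time zone. */
    public static String getGameUptime() {
        Duration d = Duration.ofMillis(ManagementFactory.getRuntimeMXBean().getUptime());
        return String.format("%02d:%02d:%02d:%02d",
                d.toDays(), d.toHours() % 24, d.toMinutes() % 60, d.getSeconds() % 60);
    }
}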
doc_3713
(I need this specific piece as a patch for an image gallery. I'm having trouble using the title attribute for specific content for different images in the gallery.) Pseudo code: If user clicks <a> with href="#" Show div #video A: $('a[href="#"]').click(function(){ $("#video").show(); }); A: $('a[href="#"]').click(function(){ $("#videoDiv").show(); }); A: $('a[href="#"]') Then do your show div! A: you can use $('a[href=#]').click(myFunction); A: Try this: $('a[href$=#]').click(function(e){ $("#video").show(); e.preventDefault(); return false; });
doc_3714
System.DateTime expiration_date = newVer.License.Status.Expiration_Date; DateTime currentDateTime = DateTime.Now; currentDateTime.ToString("MM/dd/yyyy HH:mm:ss"); int a = expiration_date.ToString("MM/dd/yyyy HH:mm:ss") .CompareTo(currentDateTime.ToString("MM/dd/yyyy HH:mm:ss")); //MessageBox.Show("int a is :" + a); if (expiration_date.ToString("MM/dd/yyyy HH:mm:ss") .CompareTo(currentDateTime.ToString("MM/dd/yyyy HH:mm:ss")) < 1) { crossDate = 1; MessageBox.Show("Cross Date Alert"+ " Expiry Date Is :"+ expiration_date.ToString("MM/dd/yyyy HH:mm:ss") + " "+"Current Date Is :"+ currentDateTime.ToString("MM/dd/yyyy HH:mm:ss")); } A: Compare dates example: DateTime d1 = DateTime.Now; DateTime d2 = DateTime.Now.AddDays(1); if ( d2.CompareTo(d1)>0 ) Console.WriteLine("d2>d1"); else Console.WriteLine("d2<=d1"); A: Your question has a two-part answer. There may be something easier, but: First, convert your string to a DateTime object. The DateTime class has several methods to help with this. Try ParseExact. Then, convert the DateTime object to a Unix timestamp. Now, you have two long ints, that you can compare, and convert the int comparison to another DateTime, and take things from there. A: don't convert to strings DateTime expiration_date = newVer.License.Status.Expiration_Date; if (expiration_date.CompareTo(DateTime.Now) < 1) { MessageBox.Show("Cross Date Alert"+ " Expiry Date Is :"+ expiration_date.ToString("MM/dd/yyyy HH:mm:ss") + " "+"Current Date Is :"+ currentDateTime.ToString("MM/dd/yyyy HH:mm:ss")); } A: Compare datetime as you would compare numbers such as DateTime expiration_date = newVer.License.Status.Expiration_Date; DateTime currentDateTime = DateTime.Now; if( expiration_date < currentDateTime) { // expired } If you need only date and not time then use DateTime expiration_date = newVer.License.Status.Expiration_Date.Date; DateTime currentDateTime = DateTime.Now.Date; You can also use day difference of two date. int daydiff = (int)((currentDateTime - expiration_date).TotalDays) A: .NET provides a great method for this to compare two datetime objects using the DateTime.Compare() method. The DateTime.Compare() method takes 2 datetime objects and compares both date and time or either one and returns an integer value. DateTime.Compare() I have demonstrated the same with a sample piece of code here : Comparing 2 DateTime objects in C#.
doc_3715
I'm trying to figure out what has happened over the last couple of weeks (it seems it started when bus traffic increased, to maybe 10-15 messages per second). I have automatic creation of subscriptions using subscriptionOpts.AutoDeleteOnIdle = TimeSpan.FromHours(3); Starting in the last weeks (when traffic increased), sometimes our subscription clients stopped receiving messages, and after 3 hours the subscriptions get deleted. var messageOptions = new MessageHandlerOptions(args => { Emaillog.Warn(args.Exception, $"Client ExceptionReceived: {args.Exception}"); return Task.CompletedTask; }) { AutoComplete = true }; _subscriptionClient.RegisterMessageHandler(async (message, token) => await OnMessageReceived(message, $"{_subscriptionClient.SubscriptionName}", token), messageOptions); Is it possible that a subscription client gets disconnected and doesn't connect anymore? I have 4-5 client processes that connect to this topic, each one with its own subscription. When I find one of these subscriptions deleted, sometimes they have all been deleted, sometimes only some of them have been deleted. Is it a bug? The only method call I make on the subscriptionClient is RegisterMessageHandler. I don't manually manage anything else... Thank you in advance A: The property AutoDeleteOnIdle is used to delete the Subscription when there is no message processing within the Subscription for the specified time span. Since you mentioned that the message flow increased to 15 messages per second, there is no chance that the Subscription is left empty (without message flow), so there is no reason for the Subscriptions to be deleted. The idleness of the Subscription is decided by both incoming and outgoing messages. It can happen that, due to heavy message traffic, the downstream application processing the messages went offline, leaving the messages unprocessed; eventually, when the message flow reduced, there was no receiver to process the messages, leaving the Subscription idle for 3 hours and causing it to be deleted.
doc_3716
Edit: Sorry, I assumed way to much in the asking of this question. I meant something along the lines of pushbuttonengine.com or similar type flash powered games. A: Flash itself is setup in a way that, with a small amount of programming and art skill, you can put together pretty simple games. If you don't have these skills (and aren't interested in spending the time to acquire them) you're looking for something wherein you can drag and drop pregenerated components and click together a game like legos. The types of games that you can make with these 'engines' will be pretty basic, and you'll most likely exhaust their potential fairly quickly, leaving you to have to learn how to do it 'the right way' anyways... With that rather lengthy caveat... You can start by just googling for 'flash game creator' and get a few promising looking items. Sploder is at the top of the list, and that looks kind of interesting, although I haven't tried it myself. Also, the advantage of learning Flash/Actionscript rather than a proprietary 'engine' is that, when you inevitably run into a problem, there exists much more documentation and community help for Flash and Actionscript than for whatever niche product you'll find. A: Agreed with JStriedl, better learn AS3 and write our own code, you can use some existing framework for speedy development, but learning AS3 will help in long run. A: Depends on what type of Flash games you are referring to. There are some specialized game creation kits that work with and generate flash files such as LASSIE for point-and-click adventure games. A: All those link leads to high quality game framework * *FlashPunk, https://github.com/Draknek *Flixel, http://flixel.org/ *Citrus Engine, http://citrusengine.com/ *Ash Framework (entity based) https://github.com/richardlord/Ash *Starling (2d GPU accelerated), http://gamua.com/starling/features/ each page describe feature to help you to choose the good one for your game project :) A: Quite a broad question. From a point and click angle this is one I have started to tinker with http://www.alpacaengine.com/ This offers the complete engine/framework for making a point and click game in flash using AS3. The save function, character, stage etc are pre-made for you. You just need design/make the artwork and sfx and everything else. Alternatively, if you want to start from scratch there a numerous tuts online. Here's one example. http://monkeypro.net/rosedragon/node/701 Thanks
doc_3717
Requirement: Here we need to instead re-trigger the past day's failed tasks, followed by running the current day's DAG. Any ideas of how we can achieve this? A: When specifying the default_args you have the ability to state how many retries you want. For example: default_args = { 'owner': 'ANDY', 'depends_on_past': True, 'start_date': datetime(2016, 1, 1), 'email': ['ANDY@email.com'], 'email_on_failure': True, 'email_on_retry': False, 'retries': 3, 'retry_delay': timedelta(minutes=1)} The subsequent tasks will run after it finally completes as long as you set the proper upstream/downstream dependencies. I hope this helps.
doc_3718
here's the onCreate() code for my TimerActivity.class super.onCreate(savedInstanceState); setContentView(R.layout.activity_timer); createComponents(); setFont(); setTimes(); buttonFunction(); My guess is that the setTimes() function might be resetting the screen to look as if the timer isn't running, but I don't know how to fix it. It's just a hunch, anyway. Here's the code for that: public void setTimes() { min1s = String.format("%02d", min1); sec1s = String.format("%02d", sec1); min2s = String.format("%02d", min2); sec2s = String.format("%02d", sec2); min3s = String.format("%02d", min3); sec3s = String.format("%02d", sec3); min4s = String.format("%02d", min4); sec4s = String.format("%02d", sec4); min5s = String.format("%02d", min5); sec5s = String.format("%02d", sec5); min6s = String.format("%02d", min6); sec6s = String.format("%02d", sec6); study1textM.setText(min1s); break1textM.setText(min2s); study2textM.setText(min3s); study1textS.setText(sec1s); break1textS.setText(sec2s); study2textS.setText(sec3s); break2textM.setText(min4s); study3textM.setText(min5s); break3textM.setText(min6s); break2textS.setText(sec4s); study3textS.setText(sec5s); break3textS.setText(sec6s); } min1-6 and sec1-6 are just ints with the default timer values, and the `textM` and `textS` are just EditTexts. As you can probably tell, I'm not sure how to go about fixing this. Any help would be appreciated. A: You should save your specific data by implementing onSaveInstanceState(Bundle savedInstanceState) in your activity. And restore saved data during onCreate() if 'savedInstanceState' you receive is'n null. See docs for details http://developer.android.com/training/basics/activity-lifecycle/recreating.html#SaveState
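A sketch of that approach applied to the timer values (only two of the counters are shown; the bundle keys are made up for illustration):

@Override
protected void onSaveInstanceState(Bundle outState) {
    super.onSaveInstanceState(outState);
    // Persist the current timer values before the activity is destroyed.
    outState.putInt("min1", min1);
    outState.putInt("sec1", sec1);
    // ... repeat for the remaining min/sec fields
}

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_timer);
    createComponents();
    setFont();
    if (savedInstanceState != null) {
        // Restore the saved values instead of the defaults before setTimes() runs.
        min1 = savedInstanceState.getInt("min1", min1);
        sec1 = savedInstanceState.getInt("sec1", sec1);
    }
    setTimes();
    buttonFunction();
}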
doc_3719
Is there a way to do the same thing with a one-liner? Like from the command line or from another .py file? A: If you want your plots/charts to be saved in your HTML as well, unfortunately there is no straightforward solution, and there are no commands in the IPython console for that. You have to do it manually with Ctrl + S or Right Click ==> Save as HTML/XML, or you can simulate clicking Ctrl + S using some script. But if you want to save text logs only (without plots), try the command %logstart testingToLog.html in the IPython console of Spyder to redirect your logging to a file named testingToLog.html (or any other extension). Here is the documentation for %logstart. Now, the obvious question is how to stop the logging you started. This can be done using the %logstop command in the same IPython console. You can also use the %logstate command to print the status of the logging system.
doc_3720
I've tried researching and have found similar problems but haven't found a working solution yet. I have tried 'for each' and 'while' loops, and 'if' statements, but haven't been able to get the code working correctly. the_window.counter = 0 if the_window.counter == 0: top_label['text'] = words [0] the_window.counter + 1 elif the_window.counter == 1: top_label['text'] = words [1] the_window.counter + 1 the code shown above produces the first word in the list only, and multiple clicks don't have any effect. does anyone have any ideas? Thanks. A: You need need to keep a global counter, and update it each time it is clicked. The following code illustrates the technique: # initialized to -1, so that the first time it is called # it gets set to zero the_window_counter = -1 def handle_click(): global the_window_counter the_window_counter += 1 try: top_label.configure(text=words[the_window_counter]) except IndexError: top_label.configure(text="no more words")
doc_3721
This code works flawless but I want to put this code in a class so I can call it to do the same job but instead of copying this code in every view I want it, I want to call it instead from which view I need for. This is the code import android.graphics.Typeface; import android.os.Bundle; import android.support.design.widget.FloatingActionButton; import android.support.v4.app.Fragment; import android.text.Layout; import android.util.Log; import android.view.Gravity; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.ImageView; import android.widget.LinearLayout; import android.widget.TextView; public class tabTueActivity extends Fragment { int i; LinearLayout layoutNewEvent, linearLayoutEventText, linearLayoutEventVoyage, linearLayoutEventIndicator, eventLayout; ImageView imageViewEventFrom, imageViewEventTo, imageViewSearch, imageViewStart; TextView textViewEventFrom, textViewEventTo; FloatingActionButton createEvent; @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View rootView = inflater.inflate(R.layout.tab_tue_frag, container, false); return rootView; } @Override public void onViewCreated(final View rootView, Bundle savedInstanceState) { super.onViewCreated(rootView, savedInstanceState); eventLayout = (LinearLayout) rootView.findViewById(R.id.event_layout); createEvent = (FloatingActionButton) rootView.findViewById(R.id.add_event_button); final LinearLayout.LayoutParams layoutNewEventParams = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.WRAP_CONTENT, LinearLayout.LayoutParams.WRAP_CONTENT); layoutNewEventParams.setMargins(40, 20, 40, 10); layoutNewEventParams.height = 150; final LinearLayout.LayoutParams linearLayoutEventTextParams = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.WRAP_CONTENT, LinearLayout.LayoutParams.WRAP_CONTENT); final LinearLayout.LayoutParams linearLayoutEventIndicatorParams = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT); final LinearLayout.LayoutParams imageViewEventFromParams = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.WRAP_CONTENT); imageViewEventFromParams.setMargins(0,0,0,10); imageViewEventFromParams.weight = (float) 0.5; final LinearLayout.LayoutParams imageViewEventToParams = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.WRAP_CONTENT); imageViewEventToParams.weight = (float) 0.5; final LinearLayout.LayoutParams linearLayoutEventVoyageParams = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT); linearLayoutEventVoyageParams.weight = 1; final LinearLayout.LayoutParams textViewEventFromParams = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.WRAP_CONTENT, LinearLayout.LayoutParams.WRAP_CONTENT); final LinearLayout.LayoutParams textViewEventToParams = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.WRAP_CONTENT, LinearLayout.LayoutParams.WRAP_CONTENT); final LinearLayout.LayoutParams imageViewSearchParams = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT); final LinearLayout.LayoutParams imageViewStartParams = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT); createEvent.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View 
view) { final LinearLayout layoutNewEvent = new LinearLayout(getActivity()); final LinearLayout linearLayoutEventText = new LinearLayout(getActivity()); final LinearLayout linearLayoutEventIndicator = new LinearLayout(getActivity()); final LinearLayout linearLayoutEventVoyage = new LinearLayout(getActivity()); final ImageView imageViewEventFrom = new ImageView(getActivity()); final ImageView imageViewEventTo = new ImageView(getActivity()); final ImageView imageViewSearch = new ImageView(getActivity()); final ImageView imageViewStart = new ImageView(getActivity()); final TextView textViewEventTo = new TextView(getActivity()); final TextView textViewEventFrom = new TextView(getActivity()); i++; Log.e("EVENT BUTTON", "New event created....."); layoutNewEvent.setId(i); layoutNewEvent.setBackgroundColor(getResources().getColor(R.color.eventColor)); layoutNewEvent.setOrientation(LinearLayout.HORIZONTAL); linearLayoutEventText.setId(i); linearLayoutEventText.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { Log.e("LAYOUT EVENT","Layout Event Pressed....."); } }); linearLayoutEventText.setOrientation(LinearLayout.HORIZONTAL); linearLayoutEventText.setLayoutParams(linearLayoutEventTextParams); linearLayoutEventIndicator.setOrientation(LinearLayout.VERTICAL); linearLayoutEventIndicator.setLayoutParams(linearLayoutEventIndicatorParams); linearLayoutEventVoyage.setOrientation(LinearLayout.VERTICAL); linearLayoutEventVoyage.setPadding(10,5,10,5); linearLayoutEventVoyage.setLayoutParams(linearLayoutEventVoyageParams); imageViewEventFrom.setBackgroundColor(getResources().getColor(R.color.backgroundColor3)); imageViewEventFrom.setScaleType(ImageView.ScaleType.CENTER_CROP); imageViewEventFrom.setImageResource(R.drawable.arrow_up_white); imageViewEventFrom.setLayoutParams(imageViewEventFromParams); imageViewEventTo.setBackgroundColor(getResources().getColor(R.color.backgroundColor2)); imageViewEventTo.setScaleType(ImageView.ScaleType.CENTER_CROP); imageViewEventTo.setImageResource(R.drawable.arrow_down_white); imageViewEventTo.setLayoutParams(imageViewEventToParams); textViewEventFrom.setText("San Pawl Il Bahar"); textViewEventFrom.setTypeface(Typeface.defaultFromStyle(Typeface.BOLD)); textViewEventFrom.setGravity(Gravity.CENTER | Gravity.LEFT); textViewEventFrom.setTextSize(20); textViewEventFrom.setMaxLines(1); textViewEventFrom.setLayoutParams(textViewEventFromParams); textViewEventTo.setText("Zonqor"); textViewEventTo.setTypeface(Typeface.defaultFromStyle(Typeface.BOLD)); textViewEventTo.setGravity(Gravity.CENTER | Gravity.LEFT); textViewEventTo.setTextSize(20); textViewEventTo.setMaxLines(1); textViewEventTo.setLayoutParams(textViewEventToParams); imageViewSearch.setId(i); imageViewSearch.setBackgroundColor(getResources().getColor(R.color.backgroundColor2)); imageViewSearch.setScaleType(ImageView.ScaleType.CENTER_CROP); imageViewSearch.setImageResource(R.drawable.search_white); imageViewSearch.setPadding(5,5,5,5); imageViewSearch.setLayoutParams(imageViewSearchParams); imageViewSearch.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { Log.e("SEARCH EVENT","Search Button Pressed....."); } }); imageViewStart.setId(i); imageViewStart.setBackgroundColor(getResources().getColor(R.color.backgroundColor4)); imageViewStart.setScaleType(ImageView.ScaleType.CENTER_CROP); imageViewStart.setImageResource(R.drawable.start_white); imageViewStart.setPadding(5,5,5,5); imageViewStart.setLayoutParams(imageViewStartParams); 
imageViewStart.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { Log.e("START EVENT", "Start Event Pressed....."); } }); linearLayoutEventIndicator.addView(imageViewEventFrom); linearLayoutEventIndicator.addView(imageViewEventTo); linearLayoutEventVoyage.addView(textViewEventFrom); linearLayoutEventVoyage.addView(textViewEventTo); linearLayoutEventText.addView(linearLayoutEventIndicator); linearLayoutEventText.addView(linearLayoutEventVoyage); layoutNewEvent.addView(linearLayoutEventText); layoutNewEvent.addView(imageViewSearch); layoutNewEvent.addView(imageViewStart); eventLayout.addView(layoutNewEvent, layoutNewEventParams); } }); } } Any clue on how to do this ? Below is the code I want to put in a separate class createEvent.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { final LinearLayout layoutNewEvent = new LinearLayout(getActivity()); final LinearLayout linearLayoutEventText = new LinearLayout(getActivity()); final LinearLayout linearLayoutEventIndicator = new LinearLayout(getActivity()); final LinearLayout linearLayoutEventVoyage = new LinearLayout(getActivity()); final ImageView imageViewEventFrom = new ImageView(getActivity()); final ImageView imageViewEventTo = new ImageView(getActivity()); final ImageView imageViewSearch = new ImageView(getActivity()); final ImageView imageViewStart = new ImageView(getActivity()); final TextView textViewEventTo = new TextView(getActivity()); final TextView textViewEventFrom = new TextView(getActivity()); i++; Log.e("EVENT BUTTON", "New event created....."); layoutNewEvent.setId(i); layoutNewEvent.setBackgroundColor(getResources().getColor(R.color.eventColor)); layoutNewEvent.setOrientation(LinearLayout.HORIZONTAL); linearLayoutEventText.setId(i); linearLayoutEventText.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { Log.e("LAYOUT EVENT","Layout Event Pressed....."); } }); linearLayoutEventText.setOrientation(LinearLayout.HORIZONTAL); linearLayoutEventText.setLayoutParams(linearLayoutEventTextParams); linearLayoutEventIndicator.setOrientation(LinearLayout.VERTICAL); linearLayoutEventIndicator.setLayoutParams(linearLayoutEventIndicatorParams); linearLayoutEventVoyage.setOrientation(LinearLayout.VERTICAL); linearLayoutEventVoyage.setPadding(10,5,10,5); linearLayoutEventVoyage.setLayoutParams(linearLayoutEventVoyageParams); imageViewEventFrom.setBackgroundColor(getResources().getColor(R.color.backgroundColor3)); imageViewEventFrom.setScaleType(ImageView.ScaleType.CENTER_CROP); imageViewEventFrom.setImageResource(R.drawable.arrow_up_white); imageViewEventFrom.setLayoutParams(imageViewEventFromParams); imageViewEventTo.setBackgroundColor(getResources().getColor(R.color.backgroundColor2)); imageViewEventTo.setScaleType(ImageView.ScaleType.CENTER_CROP); imageViewEventTo.setImageResource(R.drawable.arrow_down_white); imageViewEventTo.setLayoutParams(imageViewEventToParams); textViewEventFrom.setText("San Pawl Il Bahar"); textViewEventFrom.setTypeface(Typeface.defaultFromStyle(Typeface.BOLD)); textViewEventFrom.setGravity(Gravity.CENTER | Gravity.LEFT); textViewEventFrom.setTextSize(20); textViewEventFrom.setMaxLines(1); textViewEventFrom.setLayoutParams(textViewEventFromParams); textViewEventTo.setText("Zonqor"); textViewEventTo.setTypeface(Typeface.defaultFromStyle(Typeface.BOLD)); textViewEventTo.setGravity(Gravity.CENTER | Gravity.LEFT); textViewEventTo.setTextSize(20); textViewEventTo.setMaxLines(1); 
textViewEventTo.setLayoutParams(textViewEventToParams); imageViewSearch.setId(i); imageViewSearch.setBackgroundColor(getResources().getColor(R.color.backgroundColor2)); imageViewSearch.setScaleType(ImageView.ScaleType.CENTER_CROP); imageViewSearch.setImageResource(R.drawable.search_white); imageViewSearch.setPadding(5,5,5,5); imageViewSearch.setLayoutParams(imageViewSearchParams); imageViewSearch.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { Log.e("SEARCH EVENT","Search Button Pressed....."); } }); imageViewStart.setId(i); imageViewStart.setBackgroundColor(getResources().getColor(R.color.backgroundColor4)); imageViewStart.setScaleType(ImageView.ScaleType.CENTER_CROP); imageViewStart.setImageResource(R.drawable.start_white); imageViewStart.setPadding(5,5,5,5); imageViewStart.setLayoutParams(imageViewStartParams); imageViewStart.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { Log.e("START EVENT", "Start Event Pressed....."); } }); linearLayoutEventIndicator.addView(imageViewEventFrom); linearLayoutEventIndicator.addView(imageViewEventTo); linearLayoutEventVoyage.addView(textViewEventFrom); linearLayoutEventVoyage.addView(textViewEventTo); linearLayoutEventText.addView(linearLayoutEventIndicator); linearLayoutEventText.addView(linearLayoutEventVoyage); layoutNewEvent.addView(linearLayoutEventText); layoutNewEvent.addView(imageViewSearch); layoutNewEvent.addView(imageViewStart); eventLayout.addView(layoutNewEvent, layoutNewEventParams); } }); A: Instead of creating an anonymous class createEvent.setOnClickListener(new View.OnClickListener() you can create the class explicitly. A more informative answer would require more details about what kinds of views you want to pass in. If they all have the same format than it isn't really a problem but if some have variable numbers of subviews then you might want to look into creating an abstract view and subclassing it when needed and altering the shared logic for the view. Really though you should be looking to do this in XML and not programatically.
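A rough sketch of what "creating the class explicitly" could look like — a small factory class that any fragment can call. The name EventViewFactory and the trimmed-down body are invented for illustration; the original listeners, colors and LayoutParams would be moved over in the same way:

import android.content.Context;
import android.widget.LinearLayout;
import android.widget.TextView;

public class EventViewFactory {

    private final Context context;

    public EventViewFactory(Context context) {
        this.context = context;
    }

    /** Builds one "event" row; the caller decides where to add it. */
    public LinearLayout createEventView(int id, String from, String to) {
        LinearLayout layoutNewEvent = new LinearLayout(context);
        layoutNewEvent.setId(id);
        layoutNewEvent.setOrientation(LinearLayout.HORIZONTAL);

        TextView textViewEventFrom = new TextView(context);
        textViewEventFrom.setText(from);

        TextView textViewEventTo = new TextView(context);
        textViewEventTo.setText(to);

        // ... apply the same LayoutParams, backgrounds and click listeners as in the fragment

        layoutNewEvent.addView(textViewEventFrom);
        layoutNewEvent.addView(textViewEventTo);
        return layoutNewEvent;
    }
}

// In the fragment, the click listener then shrinks to something like:
// eventLayout.addView(new EventViewFactory(getActivity())
//         .createEventView(++i, "San Pawl Il Bahar", "Zonqor"), layoutNewEventParams);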
doc_3722
Call this value K. Your program will then read any number of input data items from standard input ubtil end-of-file. Once end-of-file hase been reached, your program will then print out the largest K values it captured from the input data elements. Keep in mind that your program may be given very large amounts of data, so it will not be practical for your program to try to hold onto all of the input data until end-of-file has been reached. class MinHeap: def _init_(self): self.list = [ ] def swap(self, i, j): temp = self.list[i] self.list[i] = self.list[j] self.list[j] = temp def siftUp(self, pos): i = pos; while i > 0: parent = (i - 1) // 2 if self.list[parent] > self.list[i]: self.swap(parent, i) else: break i = parent return self def siftDown(self, lo, hi): i = lo child = 2*i + 1 while child <= hi: rchild = child + 1 if rchild <= hi and self.list[child] > self.list[rchild]: child = rchild if self.list[i] > self.list[child]: self.swap(i, child) else: break i = child child = 2*i +1 return self def getMin(self): if len(self.list) <= 0: return None else: return self.list[0] def removeMin(self): if len(self.list) <= 0: return None else: self.swap(0, -1)
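For the surrounding assignment (keep only the K largest values seen so far), the standard-library heapq module expresses the same min-heap idea compactly. A sketch, assuming one number per line on standard input and taking K from the command line for simplicity (the original program asks the user for K):

import heapq
import sys

def k_largest(lines, k):
    heap = []  # min-heap holding the k largest values seen so far
    for line in lines:
        line = line.strip()
        if not line:
            continue
        value = int(line)
        if len(heap) < k:
            heapq.heappush(heap, value)
        elif value > heap[0]:
            # Replace the smallest retained value with the larger newcomer.
            heapq.heappushpop(heap, value)
    return sorted(heap, reverse=True)

if __name__ == "__main__":
    k = int(sys.argv[1]) if len(sys.argv) > 1 else 3
    for value in k_largest(sys.stdin, k):
        print(value)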
doc_3723
There is a webpage whose HTML contents come after script execution. I used PhantomJS with different methods. 1-) Checking with document.ready var page = require('webpage').create(); console.log('The default user agent is ' + page.settings.userAgent); page.settings.userAgent = 'SpecialAgent'; page.open('http://sosyal.hurriyet.com.tr/yazar/niobe_141/seni-unutmuyoruz-pasam_40011882', function(status) { function checkReadyState() { setTimeout(function () { var readyState = page.evaluate(function () { return document.readyState; }); if ("complete" === readyState) { onPageReady(); } else { checkReadyState(); } }); } checkReadyState(); }); function onPageReady() { var htmlContent = page.evaluate(function () { return document.body.textContent; }); console.log(htmlContent); phantom.exit(); } Result: the script was not loaded, so the unloaded HTML was returned. 2-) Setting a long timeout page.open(address, function (status) { if (status !== 'success') { console.log('Unable to load the address!'); phantom.exit(); } else { window.setTimeout(function () { var htmlContent = page.evaluate(function () { return document.getElementsByClassName('hsaalicc-text').textContent; }); console.log(htmlContent); }, 1000); // Change timeout as required to allow sufficient time } }); Result: the script was not loaded, so the unloaded HTML was returned. So although I'm an Android developer and don't have much jQuery knowledge, I looked at the page code with the Chrome developer console... and I see that all the data that should be loaded is in a script, in window.articleDetailData. Moreover, I found the function that loads the data content. ('#templateArticleDetail').tmpl(data).appendTo('#articleDetailContainer'); There is no time parameter, but on a mobile device it takes time. But from the code I understand that when the page is loaded it should be copied to #articleDetailContainer. So my questions: 1-) Why do document ready and a high timeout not return the script-loaded page with PhantomJS? 2-) Is there a way to parse the window data under the script tag? If I cannot find an easy way, I will use a regex to parse the script. A: You can use PhantomJsCloud.com to do this, though in the answers below, I'll try to explain the process if you want to try to use your own phantomjs.exe instance. (Disclosure: I wrote PhantomJsCloud) Docs for PhantomJsCloud here: http://api.phantomjscloud.com/ 1) Waiting for AJAX? Using PhantomJsCloud: Just a normal request, everything is automatically loaded properly. Here is your page rendered as a PNG: http://api.phantomjscloud.com/api/browser/v2/a-demo-key-with-low-quota-per-ip-address/?request={url:%22http://sosyal.hurriyet.com.tr/yazar/niobe_141/seni-unutmuyoruz-pasam_40011882%22,renderType:%22png%22} Using PhantomJs.exe: If you do it yourself, you need to make sure all ajax resource requests are complete before rendering. (see the WebPage.onResourceReceived() API) 2) Parse windows.data? Using PhantomJsCloud: Set pageRequest.requestType="script", Use pageRequest.scripts to execute your script, and return the data you want. For example: http://api.phantomjscloud.com/api/browser/v2/a-demo-key-with-low-quota-per-ip-address/?request={url:%22http://example.com%22,renderType:%22script%22,scripts:{loadFinished:[%22return%20{hello:%27world%27,host:document.location.host};%22]}} Using PhantomJs.exe: You need to use http://phantomjs.org/api/webpage/method/include-js.html or injectJs and load a script that will do your parsing, and then send it back to your phantomjs.exe code via http://phantomjs.org/api/webpage/handler/on-callback.html
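For the do-it-yourself route with phantomjs.exe, a common pattern is to count outstanding resource requests and only evaluate the page once they have all come back. A rough sketch — the property name window.articleDetailData comes from the question; the polling interval and the 10-second cap are arbitrary:

var page = require('webpage').create();
var pending = 0;

page.onResourceRequested = function () { pending++; };
page.onResourceReceived = function (response) {
    if (response.stage === 'end') { pending--; }
};

page.open('http://sosyal.hurriyet.com.tr/yazar/niobe_141/seni-unutmuyoruz-pasam_40011882', function (status) {
    if (status !== 'success') { phantom.exit(1); }

    var waited = 0;
    var timer = setInterval(function () {
        waited += 250;
        if (pending <= 0 || waited > 10000) {   // all AJAX done, or give up after 10 s
            clearInterval(timer);
            var data = page.evaluate(function () {
                // The article content is already present as a JS object,
                // so return it directly instead of scraping the DOM.
                return window.articleDetailData;
            });
            console.log(JSON.stringify(data));
            phantom.exit();
        }
    }, 250);
});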
doc_3724
Example: 2015-05-13 23:12:11 2015-05-14 00:13:23 2015-05-14 07:12:13 2015-05-14 08:34:45 2015_05-14 19:39:44 I have to write a bash script that re-calculates these time-stamps to UTC (CEST=UTC+2hrs). The expected output file: 2015-05-13 21:12:11 2015-05-13 22:13:23 2015-05-14 05:12:13 2015-05-14 06:34:45 2015_05-14 17:39:44 I used the date command with many options (-d , -u) with no effect. I'd be grateful for any suggestions. A: I can generate your example via exactly the options you've tried: $ date +"%F %T" -d "2015-05-13 23:12:11 CEST" -u 2015-05-13 21:12:11 Stick that in a loop and you'll be able to operate on the whole file.
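A sketch of that loop, reading the CEST timestamps line by line from a file and printing the UTC equivalents (GNU date assumed):

#!/bin/bash
# Usage: ./to_utc.sh timestamps.txt > timestamps_utc.txt
while IFS= read -r line; do
    # Interpret each line as CEST and print it converted to UTC.
    date -u +"%F %T" -d "$line CEST"
done < "$1"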
doc_3725
First, I reserved a static IP. Then create a vm. Then tried set-azurestaticvnetip. It complaint that subnetnames could not be null. So I created a virtual network in Azure portal and created a subnet. Then used set-azuresubnet -subnetnames subnet-1. Now it complaint that virtual network name could not be null. Problem is that none of the commands in the sequence takes -vnetname as a parameter. I then found that this parameter could be passed in new-azurevm. So I deleted (what a nice workaround. i am glad I did not spend time configuring software in the vm) the VM and tried to create using this command. (image parameter is not specified in this command. I was entering that on prompt). new-azurevmconfig -name myvm -instancesize Basic_A2| add-azureprovisioningconfig -adminusername "myvmadmin" -windows -password "myvmP123"| set-azuresubnet -subnetnames subnet-1 | set-azurestaticvnetip -ipaddress 23.101.39.28 | new-azurevm -servicename myvm -vnetname mynet –Location "East US" -waitforboot Throws error- new-azurevm : BadRequest : The static address 23.101.39.28 doesn't belong to the address space defined by the role's subnets. What is wrong here? Looks to me, these instructions are for assigning a private IP to VM. It is so easy to assign a static IP to a VM created through Web Role (just put public IP name in the config file, thats it). How do I assign a Public IP to a VM. Microsoft documentation merely states that public IP can be used for a VM but did they think everybody would know how to do that? A: I think you need to create a regional vnet. This worked for me: 1) Get your current Network Config Get-AzureVNetConfig -ExportToFile "c:\temp\MyAzNets.netcfg" 2) Open the MyAzNets.netcfg and edit/(add?) a VirtualNetworkSite. I think the key here is Location and not Affinity Group. Your reserved IP/VM will need to be in the same. You should have something like this: <VirtualNetworkSites> <VirtualNetworkSite name="yourvnet" Location="West US"> <AddressSpace> <AddressPrefix>192.168.50.0/24</AddressPrefix> </AddressSpace> <Subnets> <Subnet name="yoursubnet"> <AddressPrefix>192.168.50.0/24</AddressPrefix> </Subnet> </Subnets> </VirtualNetworkSite> </VirtualNetworkSites> 3) Send it back into Azure: Set-AzureVNetConfig -ConfigurationPath "C:\temp\MyAzNets.netcfg" 4) Add/Get your IP in the same location as your vnet. Get-AzureReservedIP / New-AzureReservedIP 5) Create your VM. When creating or moving a VM make sure the cloud service doesn't exist. To move a VM just hit the capture button in the management portal and give it a friendly name then delete both the VM AND cloud service. New-AzureVMConfig -Name "my-vm01" -InstanceSize Basic_A2 -ImageName "someimage" -Label "my-vm" | Set-AzureSubnet "**yoursubnet**" | Add-AzureEndpoint -LocalPort 3389 -Name 'RDP' -Protocol tcp -PublicPort 61030 | Add-AzureEndpoint -LocalPort 80 -Name 'HTTP' -Protocol tcp -PublicPort 80 | Add-AzureEndpoint -LocalPort 443 -Name 'HTTPS' -Protocol tcp -PublicPort 443| New-AzureVM -ServiceName "my-vm" -ReservedIPName "**reservedipname**" -Location "West US" -VNetName "**yourvnet**" 6) The IP should be assigned. If you run Get-AzureReservedIP it should now show something like this: ReservedIPName : reservedipname Address : 127.0.0.1 Id : xxx Label : Location : West US State : Created InUse : True ServiceName : my-vm DeploymentName : my-vm A: You should be able to dynamically assign a reserved IP to a VM (or Cloud Service Web Role) after the VM is created and without any tear down. Works for me. 
Just do this: New-AzureReservedIP -ReservedIPName MyReservedIP -Location "East US" Set-AzureReservedIPAssociation -ReservedIPName MyReservedIP -ServiceName MyVMName First command reserves the IP. Second command assigns it to the VM. The assignment is immediate and the server will reboot A: After many trials with different options in the last two days, finally this worked: New-AzureVMConfig -Name "mysite" -InstanceSize Basic_A2 -Label "mysite" | Set-AzureSubnet "subnet-1" | add-azureprovisioningconfig -adminusername "myuser" -windows -password "mypwd"| Add-AzureEndpoint -LocalPort 80 -Name 'HTTP' -Protocol tcp -PublicPort 80 | Add-AzureEndpoint -LocalPort 443 -Name 'HTTPS' -Protocol tcp -PublicPort 443| New-AzureVM -ServiceName "mysite" -ReservedIPName "mysiteip" -Location "East US" -VNetName "mysite" Don't know why but it did. I had tried this before without any luck.
doc_3726
If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur. Notice how the documentation says rehash, not resize - even if a rehash will only happen when a resize will; that is when the internal size of buckets gets twice as big. And of course HashMap provides such a constructor where we could define this initial capacity. Constructs an empty HashMap with the specified initial capacity and the default load factor (0.75). OK, seems easy enough: // these are NOT chosen randomly... List<String> list = List.of("DFHXR", "YSXFJ", "TUDDY", "AXVUH", "RUTWZ", "DEDUC", "WFCVW", "ZETCU", "GCVUR"); int maxNumberOfEntries = list.size(); // 9 double loadFactor = 0.75; int capacity = (int) (maxNumberOfEntries / loadFactor + 1); // 13 So capacity is 13 (internally it is 16 - next power of two), this way we guarantee that documentation part about no rehashes. Ok let's test this, but first introduce a method that will go into a HashMap and look at the values: private static <K, V> void debugResize(Map<K, V> map, K key, V value) throws Throwable { Field table = map.getClass().getDeclaredField("table"); table.setAccessible(true); Object[] nodes = ((Object[]) table.get(map)); // first put if (nodes == null) { // not incrementing currentResizeCalls because // of lazy init; or the first call to resize is NOT actually a "resize" map.put(key, value); return; } int previous = nodes.length; map.put(key, value); int current = ((Object[]) table.get(map)).length; if (previous != current) { ++HashMapResize.currentResizeCalls; System.out.println(nodes.length + " " + current); } } And now let's test this: static int currentResizeCalls = 0; public static void main(String[] args) throws Throwable { List<String> list = List.of("DFHXR", "YSXFJ", "TUDDY", "AXVUH", "RUTWZ", "DEDUC", "WFCVW", "ZETCU", "GCVUR"); int maxNumberOfEntries = list.size(); // 9 double loadFactor = 0.75; int capacity = (int) (maxNumberOfEntries / loadFactor + 1); Map<String, String> map = new HashMap<>(capacity); list.forEach(x -> { try { HashMapResize.debugResize(map, x, x); } catch (Throwable throwable) { throwable.printStackTrace(); } }); System.out.println(HashMapResize.currentResizeCalls); } Well, resize was called and thus entries where rehashed, not what the documentation says. As said, the keys were not chosen randomly. These were set-up so that they would trigger the static final int TREEIFY_THRESHOLD = 8; property - when a bucket is converted to a tree. Well not really, since we need to hit also MIN_TREEIFY_CAPACITY = 64 for the tree to appear; until than resize happens, or a bucket is doubled in size; thus rehashing of entries happens. I can only hint to why HashMap documentation is wrong in that sentence, since before java-8, a bucket was not converted to a Tree; thus the property would hold, from java-8 and onwards that is not true anymore. Since I am not sure about this, I'm not adding this as an answer. A: The line from the documentation, If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur. indeed dates from before the tree-bin implementation was added in JDK 8 (JEP 180). You can see this text in the JDK 1.6 HashMap documentation. In fact, this text dates all the way back to JDK 1.2 when the Collections Framework (including HashMap) was introduced. 
You can find unofficial versions of the JDK 1.2 docs around the web, or you can download a version from the archives if you want to see for yourself. I believe this documentation was correct up until the tree-bin implementation was added. However, as you've observed, there are now cases where it's incorrect. The policy is not only that resizing can occur if the number of entries divided by the load factor exceeds the capacity (really, table length). As you noted, resizes can also occur if the number of entries in a single bucket exceeds TREEIFY_THRESHOLD (currently 8) but the table length is smaller than MIN_TREEIFY_CAPACITY (currently 64). You can see this decision in the treeifyBin() method of HashMap. if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY) resize(); else if ((e = tab[index = (n - 1) & hash]) != null) { This point in the code is reached when there are more than TREEIFY_THRESHOLD entries in a single bucket. If the table size is at or above MIN_TREEIFY_CAPACITY, this bin is treeified; otherwise, the table is simply resized. Note that this can leave bins with rather more entries than TREEIFY_THRESHOLD at small table sizes. This isn't terribly difficult to demonstrate. First, some reflective HashMap-dumping code: // run with --add-opens java.base/java.util=ALL-UNNAMED static Class<?> classNode; static Class<?> classTreeNode; static Field fieldNodeNext; static Field fieldHashMapTable; static void init() throws ReflectiveOperationException { classNode = Class.forName("java.util.HashMap$Node"); classTreeNode = Class.forName("java.util.HashMap$TreeNode"); fieldNodeNext = classNode.getDeclaredField("next"); fieldNodeNext.setAccessible(true); fieldHashMapTable = HashMap.class.getDeclaredField("table"); fieldHashMapTable.setAccessible(true); } static void dumpMap(HashMap<?, ?> map) throws ReflectiveOperationException { Object[] table = (Object[])fieldHashMapTable.get(map); System.out.printf("map size = %d, table length = %d%n", map.size(), table.length); for (int i = 0; i < table.length; i++) { Object node = table[i]; if (node == null) continue; System.out.printf("table[%d] = %s", i, classTreeNode.isInstance(node) ? "TreeNode" : "BasicNode"); for (; node != null; node = fieldNodeNext.get(node)) System.out.print(" " + node); System.out.println(); } } Now, let's add a bunch of strings that all fall into the same bucket. These strings are chosen such that their hash values, as computed by HashMap, are all 0 mod 64. public static void main(String[] args) throws ReflectiveOperationException { init(); List<String> list = List.of( "LBCDD", "IKBNU", "WZQAG", "MKEAZ", "BBCHF", "KRQHE", "ZZMWH", "FHLVH", "ZFLXM", "TXXPE", "NSJDQ", "BXDMJ", "OFBCR", "WVSIG", "HQDXY"); HashMap<String, String> map = new HashMap<>(1, 10.0f); for (String s : list) { System.out.println("===> put " + s); map.put(s, s); dumpMap(map); } } Starting from an initial table size of 1 and a ridiculous load factor, this puts 8 entries into the lone bucket. Then, each time another entry is added, the table is resized (doubled) but all the entries end up in the same bucket. This eventually results in a table of size 64 with one bucket having a linear chain of nodes ("basic nodes") of length 14, before adding the next entry finally converts this to a tree. 
Output of the program is as follows: ===> put LBCDD map size = 1, table length = 1 table[0] = BasicNode LBCDD=LBCDD ===> put IKBNU map size = 2, table length = 1 table[0] = BasicNode LBCDD=LBCDD IKBNU=IKBNU ===> put WZQAG map size = 3, table length = 1 table[0] = BasicNode LBCDD=LBCDD IKBNU=IKBNU WZQAG=WZQAG ===> put MKEAZ map size = 4, table length = 1 table[0] = BasicNode LBCDD=LBCDD IKBNU=IKBNU WZQAG=WZQAG MKEAZ=MKEAZ ===> put BBCHF map size = 5, table length = 1 table[0] = BasicNode LBCDD=LBCDD IKBNU=IKBNU WZQAG=WZQAG MKEAZ=MKEAZ BBCHF=BBCHF ===> put KRQHE map size = 6, table length = 1 table[0] = BasicNode LBCDD=LBCDD IKBNU=IKBNU WZQAG=WZQAG MKEAZ=MKEAZ BBCHF=BBCHF KRQHE=KRQHE ===> put ZZMWH map size = 7, table length = 1 table[0] = BasicNode LBCDD=LBCDD IKBNU=IKBNU WZQAG=WZQAG MKEAZ=MKEAZ BBCHF=BBCHF KRQHE=KRQHE ZZMWH=ZZMWH ===> put FHLVH map size = 8, table length = 1 table[0] = BasicNode LBCDD=LBCDD IKBNU=IKBNU WZQAG=WZQAG MKEAZ=MKEAZ BBCHF=BBCHF KRQHE=KRQHE ZZMWH=ZZMWH FHLVH=FHLVH ===> put ZFLXM map size = 9, table length = 2 table[0] = BasicNode LBCDD=LBCDD IKBNU=IKBNU WZQAG=WZQAG MKEAZ=MKEAZ BBCHF=BBCHF KRQHE=KRQHE ZZMWH=ZZMWH FHLVH=FHLVH ZFLXM=ZFLXM ===> put TXXPE map size = 10, table length = 4 table[0] = BasicNode LBCDD=LBCDD IKBNU=IKBNU WZQAG=WZQAG MKEAZ=MKEAZ BBCHF=BBCHF KRQHE=KRQHE ZZMWH=ZZMWH FHLVH=FHLVH ZFLXM=ZFLXM TXXPE=TXXPE ===> put NSJDQ map size = 11, table length = 8 table[0] = BasicNode LBCDD=LBCDD IKBNU=IKBNU WZQAG=WZQAG MKEAZ=MKEAZ BBCHF=BBCHF KRQHE=KRQHE ZZMWH=ZZMWH FHLVH=FHLVH ZFLXM=ZFLXM TXXPE=TXXPE NSJDQ=NSJDQ ===> put BXDMJ map size = 12, table length = 16 table[0] = BasicNode LBCDD=LBCDD IKBNU=IKBNU WZQAG=WZQAG MKEAZ=MKEAZ BBCHF=BBCHF KRQHE=KRQHE ZZMWH=ZZMWH FHLVH=FHLVH ZFLXM=ZFLXM TXXPE=TXXPE NSJDQ=NSJDQ BXDMJ=BXDMJ ===> put OFBCR map size = 13, table length = 32 table[0] = BasicNode LBCDD=LBCDD IKBNU=IKBNU WZQAG=WZQAG MKEAZ=MKEAZ BBCHF=BBCHF KRQHE=KRQHE ZZMWH=ZZMWH FHLVH=FHLVH ZFLXM=ZFLXM TXXPE=TXXPE NSJDQ=NSJDQ BXDMJ=BXDMJ OFBCR=OFBCR ===> put WVSIG map size = 14, table length = 64 table[0] = BasicNode LBCDD=LBCDD IKBNU=IKBNU WZQAG=WZQAG MKEAZ=MKEAZ BBCHF=BBCHF KRQHE=KRQHE ZZMWH=ZZMWH FHLVH=FHLVH ZFLXM=ZFLXM TXXPE=TXXPE NSJDQ=NSJDQ BXDMJ=BXDMJ OFBCR=OFBCR WVSIG=WVSIG ===> put HQDXY map size = 15, table length = 64 table[0] = TreeNode LBCDD=LBCDD IKBNU=IKBNU WZQAG=WZQAG MKEAZ=MKEAZ BBCHF=BBCHF KRQHE=KRQHE ZZMWH=ZZMWH FHLVH=FHLVH ZFLXM=ZFLXM TXXPE=TXXPE NSJDQ=NSJDQ BXDMJ=BXDMJ OFBCR=OFBCR WVSIG=WVSIG HQDXY=HQDXY
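One practical consequence of the policy shown above (this note and snippet are an addition, not part of the quoted answer): if the map is created with an initial capacity of at least MIN_TREEIFY_CAPACITY (64), treeifyBin() converts an overlong bucket to a tree right away instead of falling back to resize(). A minimal sketch, reusing the same pathologically colliding keys from the demo:
// Assumes `list` is the list of colliding keys used in the demo above.
// With a table length of 64 or more, the long bucket is treeified instead of
// triggering the early, treeify-driven resizes shown in the output.
HashMap<String, String> bigEnough = new HashMap<>(64);
for (String s : list) {
    bigEnough.put(s, s);
}
The normal, documented load-factor-driven resizes still apply once size exceeds capacity * loadFactor; only the extra resizes caused by a single overfull bucket go away.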
doc_3727
- (void)note:(id)sender{ NSLog(@"Note"); vc=[[UIView alloc]initWithFrame:CGRectMake(0, 0,320, 568)]; [self.view addSubview:vc]; } A: Add this code as it is just chane the background color of your newely create view and check it. - (void)note:(id)sender{ NSLog(@"Note"); vc=[[UIView alloc]initWithFrame:CGRectMake(0, 0,320, 568)]; vc.backgroundColor=[UIColor redColor]; [self.view addSubview:vc]; } try this code.. A: I'm going to try to help. First thing: your example code is demonstrating a perfect answer to your question already, it creates and shows a view which is what you're asking for. However, your example code is using "vc" which I am assuming is short for "view controller" as in UIViewController. And the code in your example doesn't declare it first, so I'm guessing it's an instance variable (ivar) in your class. So if that's correct (and I may be totally wrong), then the solution is more like: - (void)note:(id)sender{ // Load a view con NSString *nibfilenamewithoutextension = @"MyAwesomeViewControllerLayoutNib"; vc = [[UIViewController alloc] initWithNibName:nibfilenamewithoutextension bundle:nil]; [self.view addSubview:vc.view]; } PLEASE NOTE: this is crossing the line of child view controllers and not including the full proper code for handling child view controllers. There is also the option of presenting a view controller. Please see the following: https://developer.apple.com/library/ios/featuredarticles/ViewControllerPGforiPhoneOS/CreatingCustomContainerViewControllers/CreatingCustomContainerViewControllers.html https://developer.apple.com/library/ios/featuredarticles/ViewControllerPGforiPhoneOS/ModalViewControllers/ModalViewControllers.html
doc_3728
source 'https://rubygems.org' gem 'sinatra', '1.0' gem 'rails', '4.2.6' gem 'sqlite3' gem 'sass-rails', '~> 5.0' gem 'uglifier', '>= 1.3.0' gem 'coffee-rails', '~> 4.1.0' gem 'jquery-rails' gem 'turbolinks' gem 'jbuilder', '~> 2.0' gem 'sdoc', '~> 0.4.0', group: :doc group :development, :test do gem 'byebug' end group :development do gem 'web-console', '~> 2.0' gem 'spring' end Help is much appreciated, thanks. A: Heroku doesn't support running rails apps using sqlite3 so you should remove that and use pg instead. Or you could set that for your development env only and set pg for production. But it's always recommended to run your local env as similar as possible to your production, so that you won't get any surprises when you go live.
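A minimal sketch of the second suggestion (keep sqlite3 for local development and test, use pg in production); the exact version constraints are up to you:
# Gemfile: sqlite3 locally, PostgreSQL on Heroku
group :development, :test do
  gem 'sqlite3'
end

group :production do
  gem 'pg'
end
Heroku installs gems with the development and test groups excluded, so the sqlite3 gem is never built there, while your local environment keeps working unchanged.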
doc_3729
#include <iostream> class B { public: __device__ __host__ virtual void test() = 0; }; class A: public B { public: __device__ __host__ A(int x) {number = x;}; __device__ __host__ void test() {printf("test called!\n");} int number; }; int main(int argc, char const *argv[]) { // Size of array. static const int count = 2; // Create array of pointers to A objects in memmory. B** list; // = new B*[count]; cudaMallocManaged(&list, count*sizeof(B*)); // Create objects for in array. for (int i = 0; i < count; i++) { A* tempPointer; cudaMallocManaged(&tempPointer, sizeof(A)); *tempPointer = A(500); list[i] = tempPointer; } // Gives a segmentation fault. for (int i = 0; i < count; i++) list[i]->test(); // Free memmory. for (int i = 0; i < count; i++) cudaFree(list[count]); cudaFree(list); } Using this for loop instead will result in working code, but I really need to use cudaMallocManaged so this is not an option: for (int i = 0; i < count; i++) { A* tempPointer = new A(500); list[i] = tempPointer; } A: The problem here is that the way to initialize an object of a class containing virtual methods, and therefore a virtual function pointer table: class B { public: __device__ __host__ virtual void test() = 0; }; class A: public B { public: __device__ __host__ A(int x) {number = x;}; __device__ __host__ void test() {printf("test called!\n");} int number; }; is not via object-copy: *tempPointer = A(500); That method will not initialize the virtual function pointer table in the object. instead, for this particular case, my recommendation would be to use placement new: $ cat t1674.cu #include <iostream> #include <stdio.h> class B { public: __device__ __host__ virtual void test() = 0; }; class A: public B { public: __device__ __host__ A(int x) {number = x;}; __device__ __host__ void test() {printf("test called!\n");} int number; }; int main(int argc, char const *argv[]) { // Size of array. static const int count = 2; // Create array of pointers to A objects in memmory. B** list; // = new B*[count]; cudaMallocManaged(&list, count*sizeof(B*)); // Create objects for in array. for (int i = 0; i < count; i++) { A* tempPointer; cudaMallocManaged(&tempPointer, sizeof(A)); // *tempPointer = A(500); list[i] = new(tempPointer) A(500); } // Gives a segmentation fault. for (int i = 0; i < count; i++) list[i]->test(); // Free memmory. for (int i = 0; i < count; i++) cudaFree(list[count]); cudaFree(list); } $ nvcc -o t1674 t1674.cu $ cuda-memcheck ./t1674 ========= CUDA-MEMCHECK test called! test called! ========= ERROR SUMMARY: 0 errors $ Note in the above I have also fixed another error in the code, specifically that you are attepting to free the pointer list[0] more than once, that obviously cannot be correct. I have changed it to list[count] which I assume was your intent. Having said all that, I suspect you may run into a problem with this approach, shortly. CUDA has a limitation around objects with virtual function pointer tables. In particular, the object must be created in the domain that is going to be used. If you intend to use it on the host only, initialize the object on the host. If you intend to use it on the device only, initialize the object on the device. Objects (with virtual function pointer tables) initialized in one domain cannot be safely used in the other.
doc_3730
//Get Pins counter let pathRef = 'PlaceOne/'+this.Place2; var pinDocRef = this.afs.doc(pathRef); //Run Transaction return this.afs.firestore.runTransaction(function(transaction){ return transaction.get(pinDocRef).then(function(pinDoc){ if(!pinDoc.exists){ throw "Document does not exist!" } var newPinScore = pinDoc.data().pins + 1; transaction.update(pinDocRef, { pins: newPinScore }); }); }) Gives me this error: A: You can achieve it without angularfire using firebase native method. import * as firebase from 'firebase'; Then inside your function let pinDocRef = firebase.firestore().collection('PlaceOne').doc(this.Place2); return firebase.firestore().runTransaction(function(transaction) { // This code may get re-run multiple times if there are conflicts. return transaction.get(pinDocRef).then(function(pinDoc) { if(!pinDoc.exists){ throw "Document does not exist!" } let newPinScore = pinDoc.data().pins + 1; transaction.update(pinDocRef, { pins: newPinScore }); }); }).then(function() { console.log("Transaction successfully committed!"); }).catch(function(err) { console.log("Transaction failed: ", err); }); If you want to use Angularfire way try var pinDocRef = this.afs.doc(pathRef).ref; i am not sure about the second way.
doc_3731
The problem is that I can't get anymore information than that, and I have no idea how any of it ties together. If anyone can shine some light on this, it would be great. A: I've had similar problems with crashing after dimissing the MFMailComposer. After removing the [myMailComposer release] everything is fine. I'm sure I'm following the rules for memory management since it's fine all over in the app except at this specific place. Now my "Build & Analyze" nags about it, but the app is perfectly stable. A: Please try this code that works for me. - (void)mailComposeController:(MFMailComposeViewController*)controller didFinishWithResult:(MFMailComposeResult)result error:(NSError*)error { switch (result) { case MFMailComposeResultCancelled: { break; } case MFMailComposeResultSaved: { break; } case MFMailComposeResultSent: { UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Email" message:@"Email Sent" delegate:self cancelButtonTitle:@"OK" otherButtonTitles: nil]; [alert show]; [self performSegueWithIdentifier:@"backHome" sender: self]; break; } case MFMailComposeResultFailed: { NSLog(@" Failed"); break; } default: { UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Email" message:@"Email Failed" delegate:self cancelButtonTitle:@"OK" otherButtonTitles: nil]; [alert show]; } break; } }
doc_3732
Please let me know how to do this.
A: libcurl provides the curl_easy_getinfo() function, which can be used with the CURLINFO_CERTINFO argument to read information about the certificates returned by the HTTPS server. See this page for more information about that function. This page shows a simple example (the certificate.c file) of how to print information about certificates. If you need to work with the content of the certificate(s), you probably need to extract the content between the "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----" markers.
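A minimal sketch of that approach (error handling mostly omitted; the URL is a placeholder and the build needs a libcurl with CURLINFO_CERTINFO support):
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    struct curl_certinfo *ci = NULL;

    curl_easy_setopt(curl, CURLOPT_URL, "https://www.example.com/");
    curl_easy_setopt(curl, CURLOPT_CERTINFO, 1L);   /* ask libcurl to collect certificate info */
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);     /* only the TLS handshake matters here */

    if (curl_easy_perform(curl) == CURLE_OK &&
        curl_easy_getinfo(curl, CURLINFO_CERTINFO, &ci) == CURLE_OK && ci) {
        int i;
        for (i = 0; i < ci->num_of_certs; i++) {
            struct curl_slist *slist;
            /* each node is a "Name:value" string, e.g. "Subject:..." or
               "Cert:-----BEGIN CERTIFICATE-----..." holding the PEM block */
            for (slist = ci->certinfo[i]; slist; slist = slist->next)
                printf("%s\n", slist->data);
        }
    }
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
The "Cert:" entries are where the PEM content between the BEGIN/END markers mentioned above can be extracted from.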
doc_3733
int my_func(int x, int y, int XMAX, int YMAX){
    return x + y*XMAX;
}
Here is a 2D example, but I can make something generic thanks to variadic templates quite easily. However, I am stuck when I want to make the same function without taking the max value for each coordinate as parameters. Something like this:
template<int XMAX, int YMAX>
int my_func(int x, int y){
    return x + y*XMAX;
}
Here it works in 2D, but I want to generalize that from 1 to N dimensions and I don't know how I could achieve that. I was thinking of passing an int N which is the number of dimensions and an std::array<N, int>::iterator which is an iterator on the std::array containing the actual max values, but it does not compile. Here is the code:
template <int N, std::array<size_t, N>::iterator it>
void foo(){...}
It says ’std::array<long unsigned int, N>::iterator’ is not a type. If I just pass the std::array, I get the following error: ’struct std::array<long unsigned int, N>’ is not a valid type for a template non-type parameter
Does someone have an idea on how to solve such a problem? I am using C++11 (G++ 5.4.0).
A: First of all, I suppose you made a little mistake in your function, because if you need to linearize the access to the array you need to multiply y by XMAX
int my_func(int x, int y, int XMAX, int YMAX){
    return x + y*XMAX;
}
because each row is composed of XMAX items. To answer your question, I used a template parameter pack
template <int N>
int my_func(int x) {
    assert(x < N);
    return x;
}

template <int N, int... Ns, typename ARG, typename... ARGS>
ARG my_func (ARG x, ARGS... args) {
    assert(x < N);
    return x + N*my_func<Ns...>(args...);
}

int main() {
    int a = 1;
    int b = 2;
    int c = my_func<10, 3>(a, b);
}
The first function is the base case for the recursion; the second function uses two parameter packs but also two explicit template parameters to make the recursion possible.
doc_3734
TypeError: Failed to fetch And the following stacktrace (which to me seems rather vague in nature): _onError @ GLTFLoader.js:77 Promise.catch (async) parse @ GLTFLoader.js:1823 parse @ GLTFLoader.js:275 (anonymous) @ GLTFLoader.js:95 (anonymous) @ three.min.js:6 load (async) load @ three.min.js:6 load @ GLTFLoader.js:91 make_3d @ main.js?m=1642396149.9939363:1646 The corresponding line 1646 in my main.js file under the make_3d function is associated with the first loader.load() invocation: const loader = new THREE.GLTFLoader(); // Throws error loader.load('assets/my_object/scene.gltf', function (gltf) { my_obj = gltf.scene; my_obj.scale.set(2, 2, 2) my_obj.position.set(1, 1, 1) scene.add(my_obj); }) // Does not throw error loader.load('assets/my_object2/scene.gltf', function (gltf) { my_obj2 = gltf.scene; my_obj2.scale.set(.1, .1, .1) scene.add(my_obj2); }) The CDN references I'm using are below: https://cdnjs.cloudflare.com/ajax/libs/three.js/r128/three.min.js https://unpkg.com/three@0.128.0/examples/js/loaders/GLTFLoader.js https://threejs.org/examples/js/controls/OrbitControls.js As I mentioned above, I am loading additional objects (albeit slightly smaller in size, scene.bin of size 6318KB vs 187KB), and these other objects loaded without any errors being thrown. There is a sizable amount of code associated with the overall project, so I've tried to narrow it down here. I hope I'm not missing something obvious here. UPDATE I've since been able to identify that the source of the error is stemming from parser.getDependencies('scene') inside of GLTFLoader.js: parse( onLoad, onError ) { const parser = this; const json = this.json; const extensions = this.extensions; // Clear the loader cache this.cache.removeAll(); // Mark the special nodes/meshes in json for efficient parse this._invokeAll( function ( ext ) { return ext._markDefs && ext._markDefs(); } ); Promise.all( this._invokeAll( function ( ext ) { return ext.beforeRoot && ext.beforeRoot(); } ) ).then( function () { console.log(parser.getDependencies('scene')); return Promise.all( [ parser.getDependencies( 'scene' ), parser.getDependencies( 'animation' ), parser.getDependencies( 'camera' ) ] ); } ).then( function ( dependencies ) { const result = { scene: dependencies[ 0 ][ json.scene || 0 ], scenes: dependencies[ 0 ], animations: dependencies[ 1 ], cameras: dependencies[ 2 ], asset: json.asset, parser: parser, userData: {} }; addUnknownExtensionsToUserData( extensions, result, json ); assignExtrasToUserData( result, json ); Promise.all( parser._invokeAll( function ( ext ) { return ext.afterRoot && ext.afterRoot( result ); } ) ).then( function () { onLoad( result ); } ); } ).catch( onError ); } A: As mentioned by @emackey in the comments, it appears that the root of the issue is grounded in a JSON parsing issue in the caching logic of GLTFLoader.js, which leads to a rejected promise in parser.getDependencies( 'scene' ). The solution for the time being is to clear your browser cached images and files and the loaded asset will appear in the scene without issue.
doc_3735
When compiling, I get error C4996: 'std::_Fill_n': Function call with parameters that may be unsafe Other people suggest either using #pragma warning( disable : 4996 ) which doesn't seem to change anything, or turning off SDL checks via properties, which turns the error into a warning, but gives me many more errors, mostly LNK2005. Any ideas how to get the code running? Additional info: Types of error when turning off SDL checks are (my project is BoostExample): error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MTd_StaticDebug' doesn't match value 'MDd_DynamicDebug' in BoostExample.obj error LNK2005: "public: __thiscall std::_Container_base12::_Container_base12(void)" (??0_Container_base12@std@@QAE@XZ) already defined in opencv_ts300d.lib(ts_perf.obj) error LNK2005: ___crtSetUnhandledExceptionFilter already defined in MSVCRTD.lib(MSVCR110D.dll) and finally fatal error LNK1169: one or more multiply defined symbols found I guess this means that Boost is interacting with OpenCV and other DLLs by redefining something. Is it possible that I installed the wrong boost version? I just grabbed the main one. A: The linker error tells basically the boost and the OpenCV were compiled off using different runtime settings, one for static lib and the other for DLL, and cannot be mixed used. you need to rebuild your boost and OpenCV to use same runtime setting.
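To make "same runtime setting" concrete (an illustrative sketch, not taken from the answer): since opencv_ts300d.lib and MSVCRTD point at the dynamic debug CRT (MDd_DynamicDebug), the usual route is to set Project Properties > C/C++ > Code Generation > Runtime Library to Multi-threaded Debug DLL (/MDd) for your project, and to rebuild Boost against the shared CRT as well, for example:
rem Hypothetical Boost.Build invocation: static Boost libraries linked against the shared (DLL) CRT
b2 toolset=msvc variant=debug,release link=static runtime-link=shared stage
Adjust the toolset and variants to your Visual Studio version; the key property is runtime-link=shared so that Boost, OpenCV and your own project all agree on the CRT.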
doc_3736
String strQuery = "Insert Into cust_subs (CustomerId,SubscriptionId) Values (?,?)"; PreparedStatement objPreparedStatement = Utils.getPreparedStatement(objConnection, strQuery); objPreparedStatement.setInt(2, currentSubscriptions.get(0) ); where currentSubscriptions is: List<Integer> currentSubscriptions; I get this error even though it is Integer list:- SEVERE: java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer Assume that connection object already exists. And i am very sure that currentSubscriptions is not null else i wouldn't have got this error. If instead of using List i hardcode like this: objPreparedStatement.setInt(2,1); It works. I have even printed the values of List using System.out.println and it's perfectly fine. They are integers only. Don't know why is it treating them as Strings. I have even tried Integer.parseInt on list's item. Still it gives me the same error. This is one of the funniest errors I have ever faced. Thanks in advance :) EDIT :- Atleast this should work. But even this is not working :- int intSubscriptionId = Integer.parseInt( currentSubscriptions.get(0).toString()); objPreparedStatement.setInt(2, intSubscriptionId ); EDIT 2: Posting whole code :- package beans; import entities.Customer; import entities.Subscription; import java.io.IOException; import java.io.Serializable; import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; import java.sql.Savepoint; import java.util.ArrayList; import java.util.List; import javax.faces.bean.ManagedBean; import javax.faces.bean.ViewScoped; import javax.faces.context.FacesContext; import javax.servlet.http.HttpServletRequest; import misc.Utils; @ManagedBean @ViewScoped public class AddSubscriptionBean implements Serializable { private Customer customer; private List<Integer> currentSubscriptions; private List<Subscription> subscriptionList; public List<Subscription> getSubscriptionList() { return subscriptionList; } public void setSubscriptionList(List<Subscription> subscriptionList) { this.subscriptionList = subscriptionList; } public List<Integer> getCurrentSubscriptions() { return currentSubscriptions; } public void setCurrentSubscriptions(List<Integer> currentSubscriptions) { this.currentSubscriptions = currentSubscriptions; } public Customer getCustomer() { return customer; } public void setCustomer(Customer customer) { this.customer = customer; } /** Creates a new instance of AddSubscriptionBean */ public AddSubscriptionBean() throws IOException, SQLException { Connection objConnection = null; try { HttpServletRequest objHttpServletRequest = (HttpServletRequest) FacesContext.getCurrentInstance().getExternalContext().getRequest(); int intCustomerId = Integer.parseInt(objHttpServletRequest.getParameter("cid")); String strQuery = "Select * from customer Where CustomerID = " + intCustomerId; ResultSet objResultSet = Utils.executeResultSet(objConnection, strQuery); if (objResultSet.next()) { String strFirstName = objResultSet.getString("FirstName"); String strLastName = objResultSet.getString("LastName"); customer = new Customer(intCustomerId, strFirstName, strLastName); } currentSubscriptions = new ArrayList<Integer>(); for (Subscription objSubscription : customer.getSubscriptionList()) { currentSubscriptions.add(objSubscription.getSubscriptionId()); } subscriptionList = new ArrayList<Subscription>(); strQuery = "Select * from subscription"; objResultSet = Utils.executeResultSet(objConnection, strQuery); while 
(objResultSet.next()) { int intSubscriptionId = objResultSet.getInt("SubscriptionId"); String strSubsriptionTitle = objResultSet.getString("Title"); String strSubsriptionType = objResultSet.getString("Type"); Subscription objSubscription = new Subscription(intSubscriptionId, strSubsriptionTitle, strSubsriptionType); subscriptionList.add(objSubscription); } } catch (Exception ex) { ex.printStackTrace(); FacesContext.getCurrentInstance().getExternalContext().redirect("index.jsf"); } finally { if (objConnection != null) { objConnection.close(); } } } public void save() throws SQLException { Connection objConnection = null; Savepoint objSavepoint = null; try { objConnection = Utils.getConnection(); objConnection.setAutoCommit(false); objSavepoint = objConnection.setSavepoint(); String strQuery = "Delete From cust_subs Where CustomerId = " + customer.getCustomerId(); if (!Utils.executeQuery(objConnection, strQuery)) { throw new Exception(); } strQuery = "Insert Into cust_subs (CustomerId,SubscriptionId) Values (?,?)"; int intCustomerId = customer.getCustomerId(); PreparedStatement objPreparedStatement = Utils.getPreparedStatement(objConnection, strQuery); for (int intIndex = 0; intIndex < currentSubscriptions.size(); intIndex++) { objPreparedStatement.setInt(1, intCustomerId); int intSubscriptionId = Integer.parseInt( currentSubscriptions.get(0).toString()); objPreparedStatement.setInt(2, intSubscriptionId ); objPreparedStatement.addBatch(); } objPreparedStatement.executeBatch(); objConnection.commit(); } catch (Exception ex) { ex.printStackTrace(); if (objConnection != null) { objConnection.rollback(objSavepoint); } } finally { if (objConnection != null) { objConnection.close(); } } } } This is my JSF page :- <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core" xmlns:msc="http://mscit/jsf"> <h:head> <title>Facelet Title</title> </h:head> <h:body> <center> <h:form> <h1>Add Subscription</h1> <b> Customer Name :</b> <h:outputText value="#{addSubscriptionBean.customer.firstName} #{addSubscriptionBean.customer.lastName}"/> <h:selectManyCheckbox value="#{addSubscriptionBean.currentSubscriptions}"> <f:selectItems value="#{addSubscriptionBean.subscriptionList}" var="row" itemLabel="#{row.title}" itemValue="#{row.subscriptionId}" /> </h:selectManyCheckbox> <h:commandButton value="Save" actionListener="#{addSubscriptionBean.save}"/> </h:form> </center> </h:body> </html> Please look at the h:selectManyCheckbox of JSF. JSF internally passes all the checkboxes that i have checked to my List. I think JSF is converting my integer list to string. A: You need to instruct h:selectManyCheckbox to convert the values to Integer by specifying javax.faces.Integer as converter. Generic types are namely unknown in EL and it treats the parameters by default as String. <h:selectManyCheckbox converter="javax.faces.Integer"> No need to use List<String> instead which would only lead to more weaktype clutter in the bean. A: It's possible to put Strings into a List<Integer> if you use unsafe operations: List<Integer> intList = new ArrayList<Integer>(); List list = intList; list.add("1"); intList and list hold references to the same list, which now contains a string. As you've seen, you get a ClassCastException upon trying to extract the element from intList. 
Java generics work using hidden casting, and you can defeat the type-checking. To test this, assign to a List, then print the class of every element: List currentSubscriptionsUnsafe = currentSubscriptions; for(Object o : currentSubscriptionsUnsafe) { System.out.println(o.getClass()); } EDIT: I'm not familiar with JSF, but I think your guess is correct. One solution is to make currentSubscriptions a List<String> everywhere (which JSF seems to expect). Then, get(0) will return a String, which you can parse into an Integer. There may be a cleaner method, but this should work.
doc_3737
On page ProductGrid I ve returned all my restaurants already stored in firestore then from each name of restaurant I should go to ProductDetail which will contain the details of each restaurant(dynamic routing) I ve created the routes but no page is returned I can't figure out what is exactly the problem!! This is the link leading to the productDetail page ProductGrid.js <h2> <Link to={`ecom-product-detail/${data.id}`}>{data.data.name_restaurant}</Link> </h2><< ProductDetail.js import { useParams } from "react-router-dom"; import { Link } from "react-router-dom"; import { firestore } from "../../../../../fire"; function ProductDetail(){ const {productId}= useParams(); const thisProduct = firestore.collection("Restaurants").doc().get.find((prod) => prod.id === productId); return( <div className="col-12"> <div className="card"> <h1>{thisProduct.name_restaurant}</h1> <p> Price:${thisProduct.Currency}</p> <p> {thisProduct.email} </p> </div> </div> ) ; } export default ProductDetail; Route.js const Routes = () => { const routes = [ { url: "ecom-product-detail/:productId", component: ProductDetail }, ]; return ( <Fragment> <Switch> {routes.map((data, i) => ( <Route key={i} exact path={`/${data.url}`} component={data.component} /> ))} </Switch> {/* <Footer /> */} </Fragment> ); }; export default Routes; This is my Restaurants List retreived from firestore This the page rendered when I click on the name of the restaurant A: You should call the thisProduct inside a useEffect hook and bind the result to a useState hook. Something like this : const {productId}= useParams(); const [thisProduct, setThisProduct] = useState({}); useEffect(() => { const productDetails = firestore.collection("Restaurants").doc().get.find((prod) => prod.id === productId); setThisProduct(productDetails); });
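For completeness, here is a hedged sketch of what the fetch inside the hook could look like with the v8/compat-style Firestore export that the question's fire module appears to provide (doc() needs the document id, and get() returns a promise of a snapshot):
// ProductDetail.js (sketch) -- assumes `firestore` is the v8/compat-style export from ../fire
const { productId } = useParams();
const [thisProduct, setThisProduct] = useState(null);

useEffect(() => {
  firestore
    .collection("Restaurants")
    .doc(productId)            // look the document up by its id
    .get()                     // returns a Promise<DocumentSnapshot>
    .then((snapshot) => {
      if (snapshot.exists) {
        setThisProduct({ id: snapshot.id, ...snapshot.data() });
      }
    });
}, [productId]);
With that in place, the JSX can render thisProduct?.name_restaurant and the other fields once the snapshot has arrived, instead of calling find on a collection reference.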
doc_3738
<defs> <g id="frame" > <rect width = '200' height = '200' /> <image href= ""/> </g> </defs> <use *ngFor = "let data in dataList" xlink:href = "#frame" [attr.x] = "1000" [attr.y] = "400" [attr.href] = "data.img" transform = "scale(0.5 0.5)" transform-origin = "1100 500"/>
doc_3739
Is it possible?
doc_3740
If that's true, I'm confused by these Scala Play JSON docs: So what’s interesting there is that JsResult[A] is a monadic structure and can be used with classic functions of such structures: flatMap[X](f: A => JsResult[X]): JsResult[X] etc But, then the docs go on to say: Please note that JsResult[A] is not just Monadic but Applicative because it cumulates errors. This cumulative feature makes JsResult[T] makes it not very good to be used with for comprehension because you’ll get only the first error and not all. Since, as I understand, a for-comprehension is syntactic sugar for flatMap, how can JsResult be both a Applicative and Monad? A: Monad is a subclass of an Applicative. Applicative's apply is weaker operation than flatMap. Thus apply could be implemented in terms of flatMap. But, in the JsResult (or actually Reads) case, it has special implementation which exploits Applicative computation's static form. E.g. the two definitions below behave equivalently with correct JSON, yet Applicative (which uses and) have better error messages in erroneous cases (e.g. mentions if both bar and quux are invalid): val applicativeReads: Reads[Foo] = ( (__ \ "bar").read[Int] and (__ \ "quux").read[String] )(Foo.apply _) val monadicReads: Reads[Foo] = for { bar <- (__ \ "bar").read[Int] quux <- (__ \ "quux").read[String] } yield Foo(bar, quux)
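A small sketch of the difference the answer describes, validating a JSON object in which both fields have the wrong type (illustrative only; the exact JsError payloads depend on the Play JSON version, and it assumes the case class Foo(bar: Int, quux: String) behind the answer's Reads):
import play.api.libs.json._

val bad = Json.obj("bar" -> "oops", "quux" -> 42)

// applicative combination: the JsError carries problems at both /bar and /quux
val e1: JsResult[Foo] = bad.validate(applicativeReads)

// monadic / for-comprehension version: the JsError only reports the first failing path, /bar
val e2: JsResult[Foo] = bad.validate(monadicReads)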
doc_3741
However, I'm stuck with the conversion from C char to keycodes handled by the keyboard. I'm not using the Macros defined in event-codes.h because I want my solution to work on the locale of the computer, and the macros are defined around an US Keyboard. Here's the device created using uinput : int setup_uinput_device(){ /* Temporary variable */ int i=0; /* Open the input device */ uinp_fd = open("/dev/uinput", O_WRONLY | O_NDELAY); if (fcntl(uinp_fd, F_GETFD) == -1) { printf("Unable to open /dev/uinput\n"); return -1; } memset(&uinp,0,sizeof(uinp)); /* Intialize the uInput device to NULL */ strncpy(uinp.name, "Custom Keyboard", UINPUT_MAX_NAME_SIZE); uinp.id.bustype = BUS_USB; // Setup the uinput device ioctl(uinp_fd, UI_SET_EVBIT, EV_KEY); ioctl(uinp_fd, UI_SET_EVBIT, EV_REL); ioctl(uinp_fd, UI_SET_EVBIT, EV_REP); for (i=0; i < 256; i++) { ioctl(uinp_fd, UI_SET_KEYBIT, i); } /* Create input device into input sub-system */ write(uinp_fd, &uinp, sizeof(uinp)); if (ioctl(uinp_fd, UI_DEV_CREATE)) { printf("Unable to create UINPUT device.\n"); return -1; } return 0; } I've already tried solutions using the X11 library, as depicted in this link : Convert ASCII character to x11 keycode Unfortunately, the keyboard i've managed to create using uinput takes differents keycodes than the ones X11 uses. (I think my keyboard takes the same keycodes that I can get with using the dumpkeys command). It's surely possible to convert X11 keycodes into (kernel ?) keycodes that my keyboard correctly inteprets, but I'd like to keep the numbers of dependencies low. I'm now trying to use EVIOCGKEYCODE as depicted in linux.h, but I have difficulties to understand how it works, and I think it does the inverse of what I really want. Here's an example : int main(int argc, char *argv[]) { setup_uinput_device(); struct input_keymap_entry mapping; int i =0; /* Set the max value at 130 just for the purpose of testing */ for (i=0; i<130; i++) { mapping.scancode[0] = i; if(ioctl(fd, EVIOCGKEYCODE_V2, mapping)) { perror("evdev ioctl"); } printf("Scancode= %d, Keycode = %d\n", mapping.scancode[0], mapping.keycode); } /* Simple function to destroy the device */ destroy_uinput_device(); return 0; } I get the following error : "evdev ioctl: Invalid argument". I've read somewhere that it's an old method used with PS2 keyboard, so it's probably one of the many reasons it's not working. The last solution I consider is to parse the result of dumpkeys in a table or a map that I could use later, but I think I will get performance issues, and I don't want to recreate something that perhaps already exists. Any Idea ? A: So after a lot of tryout, I finally managed to understand that the Keycode used by the kernel are the same that the one used by X11 minus 8. I'd first had to manage encoding. 
I've used the following code to manage multi bytes encoded characters (like €) : char *str = "Test €"; size_t mbslen; /* Number of multibyte characters in source */ wchar_t *wcs; /* Pointer to converted wide character string */ wchar_t *wp; setlocale(LC_ALL, ""); mbslen = mbstowcs(NULL, str, 0); if (mbslen == (size_t) -1) { perror("mbstowcs"); exit(ERROR_FAILURE); } wcs = calloc(mbslen + 1, sizeof(wchar_t)); if (wcs == NULL) { perror("calloc"); exit(ERROR_FAILURE); } /* Convert the multibyte character string in str to a wide character string */ if (mbstowcs(wcs, str, mbslen + 1) == (size_t) -1) { perror("mbstowcs"); exit(ERROR_FAILURE); } Then using this conversion table from ucs to keysym, I managed to translate a widechar array to its corresponding sequences of Keycode, based on the example i've provided in my original question. The last step was to feed my uinput keyboard with the X11 Keycode minus 8.
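To round this out, here is a minimal sketch of the final step described above (injecting one key through the uinput device); it assumes uinp_fd is the descriptor set up in the question and that x11_keycode already holds the keycode obtained from the keysym, and it ignores modifiers such as Shift:
#include <linux/input.h>
#include <string.h>
#include <unistd.h>

static void emit(int fd, int type, int code, int value)
{
    struct input_event ev;
    memset(&ev, 0, sizeof(ev));
    ev.type = type;
    ev.code = code;
    ev.value = value;
    write(fd, &ev, sizeof(ev));
}

/* Press and release one key; kernel keycodes are the X11 keycodes minus 8. */
static void send_key(int uinp_fd, int x11_keycode)
{
    int code = x11_keycode - 8;
    emit(uinp_fd, EV_KEY, code, 1);        /* key press */
    emit(uinp_fd, EV_SYN, SYN_REPORT, 0);
    emit(uinp_fd, EV_KEY, code, 0);        /* key release */
    emit(uinp_fd, EV_SYN, SYN_REPORT, 0);
}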
doc_3742
Array[(String, String)] = Array((http://code.google.com/webtoolkit/doc/latest/DevGuideOptimizing.html,{(https://www.google.com/accounts/Login?continue=http%3A%2F%2Fcode.google.com%2Fwebtoolkit%2Fdoc%2Flatest%2FDevGuideOptimizing.html&amp;followup=http%3A%2F%2Fcode.google.com%2Fwebtoolkit%2Fdoc%2Flatest%2FDevGuideOptimizing.html)})) In the value part, the values are of this form: {} or {(value1)} or {(value1), (value2), (value3)}. I am not able to figure out how to parse these values and make a list of them and then map it to the key. Because {} is not a Array or List. A: If you have a data of type RDD[Array[(String, String)]] then you can do rdd.map(x => x.flatMap(y => y._1.replaceAll("[{()}]", "").split(",") ++ y._2.replaceAll("[{()}]", "").split(","))) to get RDD[Array[String]] where each of the (String, String) tuples are separated and collected in an Array[String] Updated Your comment below says The data type is org.apache.spark.rdd.RDD[(String, String)] and not RDD[Array[(String, String)]] So for that case, inner map of array can be neglected and you can do as below rdd.map(x => x._1.replaceAll("[{()}]", "").split(",") ++ x._2.replaceAll("[{()}]", "").split(",")) You should get the same result as above.
doc_3743
Dim query As IQueryable(Of someObject) = New ObjectQuery(Of someObject)(queryString, db, MergeOption.NoTracking) .Where(CType(Function(x) x.Publish = True, Expression(Of Func(Of someObject, Boolean)))) And it gives me an error that says: Cannot convert expression of type Func(someObject) System.Nullable(Of Boolean) to type System.Linq.Expressions.Expression(Of System.Func(someObject, Boolean)). I have also tried: .Where(CType(Function(x) x.Publish = True, Expression(Of Func(Of someObject, Nullable(Of Boolean))))) which doesn't work either. If I don't have the CType my where comes up with a narrowing conversion error from IQueryable and IEnumerable, so I need that there, but I am not sure how to write that where parameter as an expression to so it can be converted. Any help? A: Don't use casting with CType here - create the expression directly: Dim expr As Expression(Of Func(Of someObject, Nullable(Of Boolean))) = Function(x) x.Publish = True Dim query As IQueryable(Of someObject) = New ObjectQuery(Of someObject)(queryString, db, MergeOption.NoTracking) .Where(expr) Expressions need to be created as such. It's not possible to convert existing Func into an expression.
doc_3744
In order to do that I want to install differnt things on it etc. What I'm trying to achieve here is to have a setting at the top of the bash script which will make apt accept all [y/n] questions asked during the execution of the script Question example I want to automatically accept: After this operation, 1092 kB of additional disk space will be used. Do you want to continue? [Y/n] I just started creating the file so here is what i have so far: #!/bin/bash # Constants # Set apt to accept all [y/n] questions >> some setting here << # Update and upgrade apt apt update; apt full-upgrade; # Install terminator apt install terminator A: With apt: apt -o Apt::Get::Assume-Yes=true install <package> See: man apt and man apt.conf A: If you indeed want to set it up once at the top of the file as you say and then forget about it, you can use the APT_CONFIG environment variable. See apt.conf. echo "APT::Get::Assume-Yes=yes" > /tmp/_tmp_apt.conf export APT_CONFIG=/tmp/_tmp_apt.conf apt-get update apt-get install terminator ... A: apt is meant to be used interactively. If you want to automate things, look at apt-get, and in particular its -y option: -y, --yes, --assume-yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. If an undesirable situation, such as changing a held package, trying to install an unauthenticated package or removing an essential package occurs then apt-get will abort. Configuration Item: APT::Get::Assume-Yes. See also man apt-get for many more options. A: You can set up API assume yes permanently as follow: echo "APT::Get::Assume-Yes \"true\";\nAPT::Get::allow \"true\";" | sudo tee -a /etc/apt/apt.conf.d/90_no_prompt A: Another easy way to set it at the top of the your script is to use the command alias apt-get="apt-get --assume-yes", which causes all subsequent invocations of apt-get to include the --assume-yes argument. For example apt-get upgrade would automatically get converted to apt-get --assume-yes upgrade" by bash. Please note, that this may cause errors, because some apt-get subcommands do not accept the --assume-yes argument. For example apt-get help would be converted to apt-get --assume-yes help which returns an error, because the help subcommand can't be used together with --assume-yes.
doc_3745
Because my entity Site can use Languages for 2 different usages, I use 2 join tables. So the schema is the following Relationship Join Table Fields Sites belongsToMany Vislanguages sites_vislanguages.id, sites_vislanguages.language_id, sites_vislanguages.site_id Relationship Join Table Fields Sites belongsToMany Reclanguages sites_reclanguages.id, sites_reclanguages.language_id, sites_reclanguages.site_id So the Table classes are: class VislanguagesTable extends Table { public function initialize(array $config) { parent::initialize($config); $this->table('languages'); $this->displayField('name_fr'); $this->primaryKey('id'); $this->belongsToMany('Sites', [ 'foreignKey' => 'language_id', 'targetForeignKey' => 'site_id', 'joinTable' => 'sites_vislanguages', ]); } } class SitesTable extends Table { public function initialize(array $config) { parent::initialize($config); $this->belongsToMany('Reclanguages', [ 'joinTable' => 'sites_reclanguages', 'className' => 'Languages', 'propertyName' => 'reclanguages' ]); $this->belongsToMany('Vislanguages', [ 'joinTable' => 'sites_vislanguages', 'className' => 'Languages', 'propertyName' => 'vislanguages' ]); } class SitesVislanguagesTable extends Table { public function initialize(array $config) { parent::initialize($config); $this->table('sites_vislanguages'); $this->displayField('id'); $this->primaryKey('id'); $this->belongsTo('Sites', [ 'foreignKey' => 'site_id', ]); $this->belongsTo('Languages', [ 'foreignKey' => 'language_id', ]); } I of course have the problem for add and edit forms, but I here take the example of edit. If I find() a ready made site, the data structure is: object(App\Model\Entity\Site) { 'id' => (int) 23098, 'Vislanguages' => [ (int) 0 => object(App\Model\Entity\Language) { 'id' => (int) 1, '_joinData' => object(App\Model\Entity\SitesVislanguage) { 'id' => (int) 4409, 'site_id' => (int) 23098, 'language_id' => (int) 1, ..., '[repository]' => 'SitesVislanguages' }, ..., '[repository]' => 'Vislanguages' }, (int) 1 => object(App\Model\Entity\Language) { 'id' => (int) 9, '_joinData' => object(App\Model\Entity\SitesVislanguage) { 'id' => (int) 4410, 'site_id' => (int) 23098, 'language_id' => (int) 9, ..., '[repository]' => 'SitesVislanguages' }, ..., '[repository]' => 'Vislanguages' } ], ..., '[repository]' => 'Sites' } And my corresponding ctp file is: <?= $this->Form->control('vislanguages._ids', ['options' => $languages, 'label' => __('Spoken languages:'), 'multiple' => true]); ?> The languages are correctly preselected in the input. 
If I submit it without any change, the patched entity is:
object(App\Model\Entity\Site) { 'id' => (int) 23098, ..., 'vislanguages' => [ (int) 0 => object(App\Model\Entity\Language) { 'id' => (int) 1, ..., '[repository]' => 'Vislanguages' }, (int) 1 => object(App\Model\Entity\Language) { 'id' => (int) 9, ..., '[repository]' => 'Vislanguages' } ], '[repository]' => 'Sites' }
Which seems to be correct, but when I save it, I get the following error:
Error: SQLSTATE[42S22]: Column not found: 1054 Unknown column 'vislanguage_id' in 'where clause'
The query is:
(SELECT SitesVislanguages.id AS `SitesVislanguages__id`, SitesVislanguages.site_id AS `SitesVislanguages__site_id`, SitesVislanguages.language_id AS `SitesVislanguages__language_id` FROM sites_vislanguages SitesVislanguages WHERE (site_id = :c0 AND vislanguage_id = :c1)) UNION (SELECT SitesVislanguages.id AS `SitesVislanguages__id`, SitesVislanguages.site_id AS `SitesVislanguages__site_id`, SitesVislanguages.language_id AS `SitesVislanguages__language_id` FROM sites_vislanguages SitesVislanguages WHERE (site_id = :c2 AND vislanguage_id = :c3))
Why do we see vislanguage_id in the WHERE clause whereas it correctly uses language_id in the SELECT clause? At the same time, I don't really understand the UNION here.
A: To get it working, I had to specify the targetForeignKey:
class SitesTable extends Table {
    public function initialize(array $config) {
        parent::initialize($config);
        $this->belongsToMany('Reclanguages', [
            'targetForeignKey' => 'language_id',
            'joinTable' => 'sites_reclanguages',
            'className' => 'Languages',
            'propertyName' => 'reclanguages'
        ]);
        $this->belongsToMany('Vislanguages', [
            'targetForeignKey' => 'language_id',
            'joinTable' => 'sites_vislanguages',
            'className' => 'Languages',
            'propertyName' => 'vislanguages'
        ]);
    }
doc_3746
import time a = 'a' start = time.time() for _ in range(1000000): a += 'a' end = time.time() print(a[:5], (end-start) * 1000) The older version executes in 187ms, Python 3.11 needs about 17000ms. Does 3.10 realize that only the first 5 chars of a are needed, whereas 3.11 executes the whole loop? I confirmed this performance difference on godbolt. A: TL;DR: you should not use such a loop in any performance critical code but ''.join instead. The inefficient execution appears to be related to a regression during the bytecode generation in CPython 3.11 (and missing optimizations during the evaluation of binary add operation on Unicode strings). General guidelines This is an antipattern. You should not write such a code if you want this to be fast. This is described in PEP-8: Code should be written in a way that does not disadvantage other implementations of Python (PyPy, Jython, IronPython, Cython, Psyco, and such). For example, do not rely on CPython’s efficient implementation of in-place string concatenation for statements in the form a += b or a = a + b. This optimization is fragile even in CPython (it only works for some types) and isn’t present at all in implementations that don’t use refcounting. In performance sensitive parts of the library, the ''.join() form should be used instead. This will ensure that concatenation occurs in linear time across various implementations. Indeed, other implementations like PyPy does not perform an efficient in-place string concatenation for example. A new bigger string is created for every iteration (since strings are immutable, the previous one may be referenced and PyPy does not use a reference counting but a garbage collector). This results in a quadratic runtime as opposed to a linear runtime in CPython (at least in past implementation). Deep Analysis I can reproduce the problem on Windows 10 between the embedded (64-bit x86-64) version of CPython 3.10.8 and the one of 3.11.0: Timings: - CPython 3.10.8: 146.4 ms - CPython 3.11.0: 15186.8 ms It turns out the code has not particularly changed between CPython 3.10 and 3.11 when it comes to Unicode string appending. See for example PyUnicode_Append: 3.10 and 3.11. A low-level profiling analysis shows that nearly all the time is spent in one unnamed function call of another unnamed function called by PyUnicode_Concat (which is also left unmodified between CPython 3.10.8 and 3.11.0). This slow unnamed function contains a pretty small set of assembly instructions and nearly all the time is spent in one unique x86-64 assembly instruction: rep movsb byte ptr [rdi], byte ptr [rsi]. This instruction is basically meant to copy a buffer pointed by the rsi register to a buffer pointed by the rdi register (the processor copy rcx bytes for the source buffer to the destination buffer and decrement the rcx register for each byte until it reach 0). This information shows that the unnamed function is actually memcpy of the standard MSVC C runtime (ie. CRT) which appears to be called by _copy_characters itself called by _PyUnicode_FastCopyCharacters of PyUnicode_Concat (all the functions are still belonging to the same file). However, these CPython functions are still left unmodified between CPython 3.10.8 and 3.11.0. The non-negligible time spent in malloc/free (about 0.3 seconds) seems to indicate that a lot of new string objects are created -- certainly at least 1 per iteration -- matching with the call to PyUnicode_New in the code of PyUnicode_Concat. 
All of this indicates that a new bigger string is created and copied as specified above. The thing is calling PyUnicode_Concat is certainly the root of the performance issue here and I think CPython 3.10.8 is faster because it certainly calls PyUnicode_Append instead. Both calls are directly performed by the main big interpreter evaluation loop and this loop is driven by the generated bytecode. It turns out that the generated bytecode is different between the two version and it is the root of the performance issue. Indeed, CPython 3.10 generates an INPLACE_ADD bytecode instruction while CPython 3.11 generates a BINARY_OP bytecode instruction. Here is the bytecode for the loops in the two versions: CPython 3.10 loop: >> 28 FOR_ITER 6 (to 42) 30 STORE_NAME 4 (_) 6 32 LOAD_NAME 1 (a) 34 LOAD_CONST 2 ('a') 36 INPLACE_ADD <---------- 38 STORE_NAME 1 (a) 40 JUMP_ABSOLUTE 14 (to 28) CPython 3.11 loop: >> 66 FOR_ITER 7 (to 82) 68 STORE_NAME 4 (_) 6 70 LOAD_NAME 1 (a) 72 LOAD_CONST 2 ('a') 74 BINARY_OP 13 (+=) <---------- 78 STORE_NAME 1 (a) 80 JUMP_BACKWARD 8 (to 66) This changes appears to come from this issue. The code of the main interpreter loop (see ceval.c) is different between the two CPython version. Here are the code executed by the two versions: // In CPython 3.10.8 case TARGET(INPLACE_ADD): { PyObject *right = POP(); PyObject *left = TOP(); PyObject *sum; if (PyUnicode_CheckExact(left) && PyUnicode_CheckExact(right)) { sum = unicode_concatenate(tstate, left, right, f, next_instr); // <----- /* unicode_concatenate consumed the ref to left */ } else { sum = PyNumber_InPlaceAdd(left, right); Py_DECREF(left); } Py_DECREF(right); SET_TOP(sum); if (sum == NULL) goto error; DISPATCH(); } //---------------------------------------------------------------------------- // In CPython 3.11.0 TARGET(BINARY_OP_ADD_UNICODE) { assert(cframe.use_tracing == 0); PyObject *left = SECOND(); PyObject *right = TOP(); DEOPT_IF(!PyUnicode_CheckExact(left), BINARY_OP); DEOPT_IF(Py_TYPE(right) != Py_TYPE(left), BINARY_OP); STAT_INC(BINARY_OP, hit); PyObject *res = PyUnicode_Concat(left, right); // <----- STACK_SHRINK(1); SET_TOP(res); _Py_DECREF_SPECIALIZED(left, _PyUnicode_ExactDealloc); _Py_DECREF_SPECIALIZED(right, _PyUnicode_ExactDealloc); if (TOP() == NULL) { goto error; } JUMPBY(INLINE_CACHE_ENTRIES_BINARY_OP); DISPATCH(); } Note that unicode_concatenate calls PyUnicode_Append (and do some reference counting checks before). In the end, CPython 3.10.8 calls PyUnicode_Append which is fast (in-place) and CPython 3.11.0 calls PyUnicode_Concat which is slow (out-of-place). It clearly looks like a regression to me. People in the comments reported having no performance issue on Linux. However, experimental tests shows a BINARY_OP instruction is also generated on Linux, and I cannot find so far any Linux-specific optimization regarding string concatenation. Thus, the difference between the platforms is pretty surprising. Update: towards a fix I have opened an issue about this available here. One should not that putting the code in a function is significantly faster due to the variable being local (as pointed out by @Dennis in the comments). Related posts: * *How slow is Python's string concatenation vs. str.join? *Python string 'join' is faster (?) than '+', but what's wrong here? *Python string concatenation in for-loop in-place? 
A: As mentioned in the other answer, this is indeed a regression but it will be fixed in Python 3.12, from the GitHub issue: FTR, the linear time behavior will be restored for globals (and nonlocals) in 3.12, as a side-effect of the register VM.
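To make the workarounds mentioned in the accepted answer concrete, here is a small sketch of both the PEP 8 ''.join form and the function-local variant (timings will vary by machine and interpreter version):
import time

def with_join(n=1_000_000):
    # linear-time accumulation: collect the parts, join once at the end
    parts = ['a'] * (n + 1)
    return ''.join(parts)

def with_local_concat(n=1_000_000):
    # the original loop, but on a local variable inside a function, which the
    # answer's update notes is significantly faster than the global-variable version
    a = 'a'
    for _ in range(n):
        a += 'a'
    return a

for fn in (with_join, with_local_concat):
    start = time.time()
    s = fn()
    print(fn.__name__, s[:5], (time.time() - start) * 1000)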
doc_3747
But in a situation like this where there are no factors how do we fill it with transparent. library("ggplot2") vec1 <- data.frame(x=rnorm(2000, 0, 1)) vec2 <- data.frame(x=rnorm(3000, 1, 1.5)) ggplot() + geom_density(aes(x=x), fill="red", data=vec1) + geom_density(aes(x=x), fill="blue", data=vec2) I tried adding geom_density(alpha=0.4) but it didn't do any good. A: Like this? ggplot() + geom_density(aes(x=x), fill="red", data=vec1, alpha=.5) + geom_density(aes(x=x), fill="blue", data=vec2, alpha=.5) EDIT Response to OPs comment. This is the idiomatic way to plot multiple curves with ggplot. gg <- rbind(vec1,vec2) gg$group <- factor(rep(1:2,c(2000,3000))) ggplot(gg, aes(x=x, fill=group)) + geom_density(alpha=.5)+ scale_fill_manual(values=c("red","blue")) So we first bind the two datasets together, then add a grouping variable. Then we tell ggplot which is the grouping variable and it takes care of everything else.
doc_3748
VisitID PtCls CC          DX
A       E     NULL        NULL
A       E     CP          NULL
A       E     CP          NULL
A       I     CP          HEART ATTACK
A       I     CP          HEART ATTACK
B       E     shortbreath NULL
B       E     shortbreath NULL
B       E     shortbreath NULL
B       E     shortbreath NULL
C       I     CHECKUP     DEFICIENT FE
C       I     CHECKUP     DEFICIENT FE
D       U     NULL        NULL
E       E     NULL        NULL
E       E     CP          NULL
E       O     CP          POOR SURGERY
E       O     CP          POOR SURGERY
E       O     CP          POOR SURGERY
F       E     NULL        NULL
F       E     NULL        NULL
F       E     NULL        NULL
With each unique VisitID being a single patient visit (so 6 visits total in this set), I need to count the number of visits where:
* DX and CC are always null (never have a real value in there) - 2
* CC is null, but DX is NOT null - 0
* PtCls is 'E' at least once within the visit - 4
* PtCls is NEVER 'E' within the visit - 2
Plus, how to remove a group where PtCls is never 'E'? Any ideas? I don't even know where to start!
A: You can do this using proc sql and nested aggregation. First define the conditions at the visit level:
select VisitID,
       (case when max(DX) is null and max(CC) is null then 1 else 0 end) as flag1,
       (case when max(DX) is not null and max(CC) is null then 1 else 0 end) as flag2,
       max(case when PtCls = 'E' then 1 else 0 end) as flag3,
       max(case when PtCls = 'E' then 0 else 1 end) as flag4
from table t
group by VisitID;
Next, re-aggregate this:
select sum(flag1) as cnt1, sum(flag2) as cnt2, sum(flag3) as cnt3, sum(flag4) as cnt4
from (select VisitID,
             (case when max(DX) is null and max(CC) is null then 1 else 0 end) as flag1,
             (case when max(DX) is not null and max(CC) is null then 1 else 0 end) as flag2,
             max(case when PtCls = 'E' then 1 else 0 end) as flag3,
             max(case when PtCls = 'E' then 0 else 1 end) as flag4
      from table t
      group by VisitID
     ) v;
You can remove a group where PtCls is never 'E' by using exists:
select t.*
from table t
where exists (select 1
              from table t2
              where t2.visitId = t.visitId and t2.PtCls = 'E'
             );
doc_3749
Some context Given: I have some code performing a depthwise convolution, i.e. I have a 3-dim input matrix {channels, w_in, h_in} and a 3-dim 3-by3 kernel {channels, 3, 3} to produce a 3-dim output {channels, w_out, h_out}. Depthwise convolution now means that the first slice of the kernel is applied on the first slice of the input to produce the first slice of the output, and so on. What I need: I want to rebuild this type of convolution using a standard convolution which needs a 4-dim kernel. In python (pytorch) I could do something like: kernel4d = torch.tensor((), dtype=torch.float32) kernel4d = kernel4d.new_zeros((channels, channels, 3, 3)) for i in range(channels): kernel4d[i, i, :, :] = kernel3d[i, :, :] How can I re-write this code efficiently in C++ using cv::Mat? EDIT int size3d[] = {3, 5, 5}; cv::Mat kernel3d = cv::Mat(3, size3d, CV_32F, cv::Scalar(0)); int size4d[] = {3, 3, 5, 5}; cv::Mat kernel4d = cv::Mat(4, size4d, CV_32F, cv::Scalar(1)); // Try 1 for (int i = 0; i < 3; i++){ kernel4d.at<float>(i, i) = kernel3d.at<float>(i); } // Result: Sets only single values at (i, i, 0, 0) // Try 2 for (int i = 0; i < 3; i++){ for (int k = 0; k < 5; k++){ for (int l = 0; l < 5; l++){ kernel4d.at<float>(i, i, k, l) = kernel3d.at<float>(i, k, l); } } } // Result: at does not exept 4 indexes
doc_3750
worksheet.Columns.AutoFit() and this line of code does not even compile: worksheet.Columns.[1].Columnwidth <- 15.0 Any suggestions? A: To begin with, working with Excel from F# is much easier with the help of ExcelProvider. However, given you realize the intricacies of dealing with COM from F#, operating with bare Excel is not something too complicated. As you did not provide enough details for pointing out what exactly is wrong with your own attempts, here is a self-contained snippet demonstrating the manipulation with a worksheet column width, including visual demo: #r "Microsoft.Office.Interop.Excel" open Microsoft.Office.Interop.Excel open System let ex = new ApplicationClass(Visible=true) let exO = ex.Workbooks.Add() let exOWs = exO.Worksheets.[1] :?> Worksheet exOWs.Name <- "TestSheet" let col1 = exOWs.Columns.[1] :?> Range let oldW = col1.ColumnWidth |> unbox printfn "Current width of column 1 is %f" oldW col1.ColumnWidth <- 15.0 let newW = col1.ColumnWidth |> unbox printfn "New width of column 1 is %f" newW printfn "Press any key to exit" Console.ReadLine() |> ignore exO.Close(false, false, Type.Missing) ex.Quit() Running the above fsx script with FSI on a box having Excel installed should produce the output similar to one below: --> Referenced 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\Visual Studio Tools for Office\PIA\Office15\Microsoft.Office.Interop.Excel.dll' Current width of column 1 is 8.430000 New width of column 1 is 15.000000 Press any key to exit Before stopping the script you may want to locate the opened Excel application window and observe that the first column width has really changed. UPDATE: Covering Autofit part of the original question: the important tidbit to recognize is that fitting of range's cell widths and heights may be done only after the range is filled. Applying Autofit() to an empty range is of no use, while for an already filled range it works as expected. Let's fill the first 4 columns of our sample worksheet's row 1 with strings of different length and then apply Autofit by placing the following after the line showing newW value: let rowContents = "Our widths are flexible".Split(' ') in (exOWs.Rows.[1] :?> Range).Resize(ColumnSize=rowContents.Length).Value2 <- rowContents exOWs.Columns.AutoFit() |> ignore and observe the visual effect:
doc_3751
/locations?within=40.766159%2C-73.989786%2C40.772781%2C-73.979905&per_page=500 then in my model I have a scope to handle this, but can't figure out how to get the query right: scope :within, ->(box_string) { sw = box_string.split(",")[0..1].reverse.map {|c| c.to_f} ne = box_string.split(",")[2..3].reverse.map {|c| c.to_f} box = "BOX3D(#{sw[0]} #{sw[1]}, #{ne[0]} #{ne[1]})" where( ***WHAT DO I DO HERE?*** ) } A: Using the rgeo gem: Gemfile: gem 'rgeo' model.rb: def self.within_box(sw_lat, sw_lon, ne_lat, ne_lon) factory = RGeo::Geographic.spherical_factory sw = factory.point(sw_lon, sw_lat) ne = factory.point(ne_lon, ne_lat) window = RGeo::Cartesian::BoundingBox.create_from_points(sw, ne).to_geometry where("your_point_column && ?", window) end Note that the argument order for the factory point method is (lon, lat). You may want to use the activerecord-postgis-adapter gem (which includes rgeo).
doc_3752
"object apache is not a member of package org". I have used these import statement in the code : import org.apache.spark.SparkContext import org.apache.spark.SparkContext._ import org.apache.spark.SparkConf The above import statement is not running on sbt prompt too. The corresponding lib appears to be missing but I am not sure how to copy the same and at which path. A: Make sure you have entries like this in SBT: scalaVersion := "2.11.8" libraryDependencies ++= Seq( "org.apache.spark" %% "spark-core" % "2.1.0", "org.apache.spark" %% "spark-sql" % "2.1.0" ) Then make sure IntelliJ knows about these libraries by either enabling "auto-import" or doing it manually by clicking the refresh-looking button on the SBT panel. A: It is about 5 years since the previous answer, but I had the same issue and the answer mentioned here did not work. So, hopefully this answer works for those who find themselves in the same position I was in. I was able to run my scala program from sbt shell, but it was not working in Intellij. This is what I did to fix the issue: * *Imported the build.sbt file as a project. File -> Open -> select build.sbt -> choose the "project" option. *Install the sbt plugin and reload Intellij. File -> settings -> Plugins -> search and install sbt. *Run sbt. Click "View" -> Tool Windows -> sbt. Click on the refresh button in the SBT window. Project should load successfully. Rebuild the project. *Select your file and click "Run". It should ideally work.
doc_3753
Below is the function i'm using to get the base64encoded string of a message digest. -(NSString*) sha1:(NSString*)input //sha1- Digest { NSData *data = [input dataUsingEncoding:NSUTF8StringEncoding]; uint8_t digest[CC_SHA1_DIGEST_LENGTH]; CC_SHA1(data.bytes, data.length, digest); NSMutableString* output = [NSMutableString stringWithCapacity:CC_SHA1_DIGEST_LENGTH * 2]; for(int i = 0; i < CC_SHA1_DIGEST_LENGTH; i++){ [output appendFormat:@"%02x", digest[i]];//digest } return [NSString stringWithFormat:@"%@",[[[output description] dataUsingEncoding:NSUTF8StringEncoding]base64EncodedStringWithOptions:0]]; //base64 encoded } Here is my sample input string - '530279591878676249714013992002683ec3a85216db22238a12fcf11a07606ecbfb57b5' When I use this string either in java or python I get same result - '5VNqZRB1JiRUieUj0DufgeUbuHQ=' But in IOS I get 'ZTU1MzZhNjUxMDc1MjYyNDU0ODllNTIzZDAzYjlmODFlNTFiYjg3NA==' Here is the code I'm using in python: import hashlib import base64 def checkForDigestKey(somestring): msgDigest = hashlib.sha1() msgDigest.update(somestring) print base64.b64encode(msgDigest.digest()) Let me know if there is anyway to get the same result for IOS. A: You are producing a binary digest in Python, a hexadecimal digest in iOS. The digests are otherwise equal: >>> # iOS-produced base64 value ... >>> 'ZTU1MzZhNjUxMDc1MjYyNDU0ODllNTIzZDAzYjlmODFlNTFiYjg3NA=='.decode('base64') 'e5536a65107526245489e523d03b9f81e51bb874' >>> # Python-produced base64 value ... >>> '5VNqZRB1JiRUieUj0DufgeUbuHQ='.decode('base64') '\xe5Sje\x10u&$T\x89\xe5#\xd0;\x9f\x81\xe5\x1b\xb8t' >>> from binascii import hexlify >>> # Python-produced value converted to a hex representation ... >>> hexlify('5VNqZRB1JiRUieUj0DufgeUbuHQ='.decode('base64')) 'e5536a65107526245489e523d03b9f81e51bb874' Use base64.b64encode(msgDigest.hexdigest()) in Python to produce the same value, or Base-64 encode the digest bytes instead of hexadecimal characters in iOS.
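A small Objective-C sketch (not from the original post) of the fix this answer implies on the iOS side: Base-64 encode the raw 20-byte digest instead of the hexadecimal string. It is meant to replace the final return inside the question's sha1: method, where the digest buffer is in scope.
NSData *digestData = [NSData dataWithBytes:digest length:CC_SHA1_DIGEST_LENGTH];
return [digestData base64EncodedStringWithOptions:0]; // now matches the Java/Python output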
doc_3754
I'm having to do a restore packages, and under VS2017 even though the file is there in the correct folder, and VS2017 is configured to use it as a repository, I keep getting an error "Unable to find version 'x.x.xxxx.xxxxx' of package 'y'." It then shows all the locations it says it didn't find it, including the folder it is in. Has anyone seen this type of problem? A: We solved the problem by installing IIS and the Nuget app, and then pointing VS2017 to the app.
doc_3755
So I will have : @Input() isLocal= false; private service: Service1|Service2; constructor(private injector: Injector) { if (this.isLocal) { this.service = injector.get(Service1); } else { this.service = injector.get(Service2); } } My problem is that I can't access my input in the constructor and I can't init my service in ngOnInit. How can I achieve this? A: It would be best if you implement a Factory pattern as is described here: angular.service vs angular.factory But for this you can use the Injector class from Angular core: import { Injector } from '@angular/core' ... @Input() isLocal= false; ... constructor(private injector: Injector){ if(this.isLocal) { this.oneService = <OneService>this.injector.get(Service1); } else { this.twoService = <TwoService>this.injector.get(Service2); } }
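A hedged TypeScript sketch of one way to combine the answer's Injector lookup with the @Input timing problem from the question: defer the lookup to ngOnInit, where input bindings are already set (Service1 and Service2 are the question's placeholder names).
@Input() isLocal = false;
private service: Service1 | Service2;

constructor(private injector: Injector) {}

ngOnInit(): void {
  // @Input values are bound before ngOnInit runs, so isLocal is usable here.
  this.service = this.isLocal
    ? this.injector.get(Service1)
    : this.injector.get(Service2);
}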
doc_3756
Lets say the visitor checks whether 'www.stackoverflow.com' is available. When it's available, there's no problem and the user can go order it. When it's not available, i want it to do suggestions for other extensions. Like: www.stackoverflow.com is not available, The following domains are available: www.stackoverflow.net www.stackoverflow.co.uk www.stackoverflow.info This is my current file: <?php if(isset($_POST['check'])) { if (!empty($_POST['domain_name'])){ $domain = trim($_POST['domein_naam']).$_POST['domain_list']; $result = @dns_get_record($domain, DNS_ALL); if(empty($result)) { echo "<H2 style='color:green;' >Domain $domain is available.</H2>"; } else { echo "<H2 style='color:red;'>Domain $domain is not available.</H2>"; } } else { echo "<H2 style='color:red;'>Fout: Domein kan niet leeg zijn.</H2>"; } } ?> A: Without knowing all of your code structure it's hard to give you the best method. But a simple idea: Say the site the user entered ($_POST/$_GET/etc) is stored in $strUserSite variable. Have an array ($aryFurtherChecks or whatever) with all extensions in (.com, .net, etc). Loop the array checking if each domain for what user entered is avail or not, by appending your $strUserSite var to it. If domain + extension from array is available, either echo it out (depending on your code setup/framework etc) or add to new array, then loop second array with "These are also available". Which methods you use depend on if you're using procedural all-in-one-file code, or classes etc. If the latter then setting a new array would be preferred, looping it in your view/template/whatever file and echoing out each one with your HTML and styling etc. A: dns_get_record() cannot be used to determine whether a domain is available for registration, because not all registered domains have DNS records. For instance, example.info is registered, but has no DNS records. Since it sounds as though you are planning to use this as part of a domain registration system, you presumably have access to a domain registration API. Most providers of such APIs have a call to generate suggestions - try using that. Failing that, you will need to remove the TLD from the domain input by the user and replace it successively with the alternatives you want to try.
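A rough PHP sketch of the loop the first answer describes, meant to sit inside the existing isset($_POST['check']) branch after the single-domain check, reusing the question's dns_get_record test purely for illustration (the second answer's caveat, that missing DNS records do not prove availability, still applies); the extension list and variable names are assumptions.
$strUserSite = trim($_POST['domein_naam']); // the name the user typed, without extension
$aryFurtherChecks = array('.com', '.net', '.co.uk', '.info');
$aryAvailable = array();
foreach ($aryFurtherChecks as $ext) {
    $candidate = $strUserSite . $ext;
    $records = @dns_get_record($candidate, DNS_ALL);
    if (empty($records)) {
        $aryAvailable[] = 'www.' . $candidate; // looks unregistered: no DNS records found
    }
}
if (!empty($aryAvailable)) {
    echo "<p>The following domains are available:<br>" . implode('<br>', $aryAvailable) . "</p>";
}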
doc_3757
The df looks like this currently ID Weight Date 1 200 1/1 1 201 1/2 1 199 1/3 1 50 1/4 2 170 1/1 2 177 1/2 2 40 1/3 2 175 1/4 I would also want to explore the possibility of adding another condition where I want the same criteria applied with a time factor. In other words, I want to exclude any measurements that have more than 10% deviation from the starting weight over a period of time (i.e. over 10 days). The df should look like this ID Weight Date 1 200 1/1 1 201 1/2 1 199 1/3 2 170 1/1 2 180 1/2 2 175 1/4 The last row from ID 1 and the 3rd row from ID 2 were excluded. A: An approach using dplyr library(dplyr) df %>% group_by(ID) %>% filter(!abs(1 - (Weight / Weight[1])) > .1) %>% ungroup() # A tibble: 6 × 3 ID Weight Date <int> <dbl> <chr> 1 1 200 1/1 2 1 201 1/2 3 1 199 1/3 4 2 170 1/1 5 2 177 1/2 6 2 175 1/4
doc_3758
The usual files are not in the top directory - make.py and mkparse.py. Neither of them seems to do much. It seems like it needs a makefile, but there isn't one in any part of the distro. > python make.py build make.py[0]: Entering directory '/Users/ron/lib/pymake-default' No makefile found Any hints? A: pymake is a make utility, and running make.py looks for a Makefile (that you've created, for your own project). There's no build step specifically required for pymake itself.
doc_3759
bigip_provider: bigip1: "10.0.0.16" bigip2: "10.0.0.18" bigip3: "10.0.0.17" bigip4: "10.0.0.19" bigip5: "10.0.0.27" bigip6: "10.0.0.23" bigip7: "10.0.0.25" bigip8: "10.0.0.28" And I wanted to get the values converting them into a string: {{ bigip_provider.values() | list | join('/24 ') +'/24' }} but with the values sorted by the key: ie {{ bigip_provider | dictsort ... } I've tried to extract only the values output by dictsort with map and selectattr but I haven't found a way of doing this. Many thanks A: As you can access list elements with dot-notation (like mylist.1), so you can use map(attribute='1') to reduce lists (kind of hack, but it's ok): - hosts: localhost gather_facts: no vars: bigip_provider: bigip9: "10.0.0.16" bigip2: "10.0.0.18" bigip3: "10.0.0.17" tasks: - debug: msg: "{{ bigip_provider | dictsort | map(attribute='1') | map('regex_replace','$','/24') | list }}" Result: ok: [localhost] => { "msg": [ "10.0.0.18/24", "10.0.0.17/24", "10.0.0.16/24" ] }
doc_3760
The generator is quite complete but I would like to know if there are any conventions about some issues such as: * *how handle angle brackets in strings or regular expressions? *how to translate if-then-else (e.g. will the else node be inside the if one or not)? More generally: does such a translator already exist? Is there any existing XSD for this XML-based language? EDIT I am currently interested in free tools only. A: Parsers which perform code generation from XML and generate XML from code are readily available: * *Custom PMD Rules - O'Reilly Media *estools/esvalid: confirm that a SpiderMonkey format AST represents an ECMAScript program *jantimon/handlebars-to-ecmascript: Convert handlebars to an ecmascript AST *estools/estemplate: Proper (AST-based) JavaScript code templating with source maps support. *estools/esrecurse: AST recursive visitor *estools/estraverse: ECMAScript JS AST traversal functions *estools/escope: Escope: ECMAScript scope analyzer *estools/esquery: ECMAScript AST query library. You ask, "does such a translator exist?". If the question is, "does this exist in ANTLR?" then I suspect you'll find the answer at the ANTLR.org site. My company has exactly such a translator with all the issues resolved; if you'd like to see an output sample as an answer here I'll produce one for you specifically for JavaScript. Here's a link to an XML output for Java: What would an AST (abstract syntax tree) for an object-oriented programming language look like? References * *Parsers, Part IV: A Java Cross-Reference Tool (pdf) *o:XML Development Tool Chain *Command line rendering | Highcharts *Parser API - Mozilla | MDN
doc_3761
The current app that I am working on is an internal app for a small company that will very likely never need to run in another country. As such, it is my opinion that I do not need to set these at all. On the other hand, doing so would not be such a big deal, but it seems like it is unnecessary and could hinder readability to a degree. I understand that Microsoft's contention is to use it if it's there, period. Well, I'm technically supposed to call Dispose() on every object that implements IDisposable, but I don't bother doing that with Datasets and Datatables. I wonder what the practice in regards to globalization and localization on small-scale internal apps is "in the wild." A: I usually ignore those kinds of warnings for small internal apps. Remember that FxCop is meant to make sure that your code is good for a framework; not all of its rules might be relevant to you, and I always disable various rules that I don't think fit the applications as I build them. Though I would call Dispose on any class that implements IDisposable; it doesn't matter if they don't do anything now, an upgraded version of the class might start leaking something essential, and it's a good habit to get into.
doc_3762
We are using the built-in remote desktop server to allow multiple users to log in. This allows for a nice collaborative working environment. Is there a way to forcibly disconnect a remote user, or even stop the remote desktop service? I've tried VBoxManage but it mostly just complains the session is locked, like below. C:\Program Files\Oracle\VirtualBox>VBoxManage modifyvm "ubuntu_20_LTS" --vrdemulticon off VBoxManage.exe: error: The machine 'ubuntu_20_LTS' is already locked for a session (or being unlocked) VBoxManage.exe: error: Details: code VBOX_E_INVALID_OBJECT_STATE (0x80bb0007), component MachineWrap, interface IMachine, callee IUnknown VBoxManage.exe: error: Context: "LockMachine(a->session, LockType_Write)" at line 554 of file VBoxManageModifyVM.cpp A: VBoxManage controlvm VMname vrde off sleep 1 VBoxManage controlvm VMname vrde on A: It seems that this can be done. From the running machine, using the normal VirtualBox display, type Host+F until you see the VirtualBox menu, then select View -> Remote Display. This will disconnect any active VirtualBox RDP sessions to the client machine.
doc_3763
Here's what I've done (the code is illustrative, but this approach works on the real algorithm). It feels a little clunky. Is there a more elegant way? class Kaggle(): """ An algorithm """ def __init__( self ): self.bar = 1 def step_one( self, some_text_data ): self.bar = 1 ** 2 # Do some data cleaning # return processed data def step_two( self ): foo = step_one(baz) # do some more processing def step_three( self ): bar = step_two() # output results def run( self ): self.step_one() self.step_two() self.step_three() if __name__ == "__main__": kaggle = Kaggle() kaggle.run() A: If your goal is for the object to be "automatically executed upon class instantiation", just put self.run() in the init: def __init__(self): self.bar = 1 self.run() As an aside, one should try to keep the __init__ method lightweight and just use it to instantiate the object. Although "clunky", your original Kaggle class is how I would design it (i.e. instantiate the object and then have a separate run method to run your pipeline). I might rename run to run_pipeline for better readability, but everything else looks good to me. A: Put all the calls in your __init__ method. Is this not what you wanted to achieve? You could add a flag with a default value, that allows you to not run the tests if you want. def __init__( self, runtests=True ): self.bar = 1 if runtests: self.step_one() self.step_two() self.step_three() A: Old thread, but was working with dataclasses and found this to work as well from dataclasses import dataclass @dataclass class MyClass: var1: str var2: str def __post_init__(self): self.func1() self.func2() def func1(self): print(self.var1) def func2(self): print(self.var2) a = MyClass('Hello', 'World!') Hello World!
doc_3764
Can someone help me with adding a .xib (layout for my cameraOverlayView) as cameraOverlayView? Inside -> ViewDidLoad() base.ViewDidLoad(); var imagePickerControl = new UIImagePickerController(); imagePickerControl.SourceType = UIImagePickerControllerSourceType.Camera; imagePickerControl.ShowsCameraControls = true; //Code lines needed to add the xib "CameraOverlayView.xib" as uiview to pass next line imagePickerControl.CameraOverlayView = ; var imagePickerDelegate = new CameraImagePicker(this); imagePickerControl.Delegate = imagePickerDelegate; NavigationController.PresentModalViewController(imagePickerControl, true); I was referring to https://developer.xamarin.com/recipes/ios/general/templates/using_the_ios_view_xib_template/ But I was unable to add the xib as my CameraOverlay. Ask for related code if needed. Thanks in advance.
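A hedged C# (Xamarin.iOS) sketch of one way to load a view from CameraOverlayView.xib and assign it as the overlay; it assumes the xib's root object is a plain UIView that UINib can find by name, and the full-screen frame is also an assumption.
// Load the first top-level object from CameraOverlayView.xib as a UIView.
var nibObjects = UINib.FromName("CameraOverlayView", NSBundle.MainBundle).Instantiate(null, null);
var overlayView = (UIView)nibObjects[0];
overlayView.Frame = UIScreen.MainScreen.Bounds;

imagePickerControl.CameraOverlayView = overlayView;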
doc_3765
abstract sig Node{ arc: set Node} Is it possible in a way to specify the arc relation as a Connector in the concrete syntax relation? sig P extends Node{token:Int}{tokens>=0} It could also help me for the case above, the tokens field. Best, A: In F-Alloy, you can map a relation to a signature by defining a mapping from a pair of signatures (typing the relation) to the target signature. This means that any combination of atoms in this pair of signatures are to be mapped to a new atom typed by the target signature. In order to enforce the combinations for which an atom is created to be the ones of the relation you want to represent, you need to write a specific constraint in the guard predicate. For your example, you would thus have the mapping: mapArc: Node -> Node -> CONNECTOR and the following guard: pred guard_mapArc(n1:Node,n2:Node){ // the image of n1 via the arc relation is n2 } Also don't forget that the value predicate should keep a reference of the combinations of atoms. pred value_mapArc(n1:Node,n2:Node,c:CONNECTOR){ // state properties of c in function of the given n1 and n2 } Hope it helps
doc_3766
Let's say I'm working on branch master and try to git pull and receive the error that my local changes would be overwritten and need to be stashed or committed. If I haven't staged any of my changes and run git stash, then do a git pull and and update successfully, what happens when I git stash apply? In general, If someone else modifies files and I run git pull, what happens when I run git stash apply? does it overwrite the files that were just updated, regardless of whether or not they were staged when I stashed them? Does it overwrite every file that I just updated with git pull, with the files that were stashed? A: Quick "TL;DR" take-away version, so one can come back later and study more git stash hangs a stash-bag—this is a peculiar form of a merge commit that is not on any branch—on the current HEAD commit. A later git stash apply, when you're at any commit—probably a different commit—then tries to restore the changes git computes by looking at both the hanging stash-bag and the commit it hangs from. When you're done with the changes, you should use git stash drop to let go of the stash-bag from the commit it was "stashed on". (And, git stash pop is just shorthand for "apply, then automatically drop". I recommend keeping the two steps separate, though, in case you don't like the result of "apply" and you want to try again later.) The long version git stash is actually fairly complex. It's been said that "git makes much more sense once you understand X", for many different values of "X", which generalizes to "git makes much more sense once you understand git". :-) In this case, to really understand stash, you need to understand how commits, branches, the index/staging-area, git's reference name space, and merges all work, because git stash creates a very peculiar merge commit that is referred-to by a name outside the usual name-spaces—a weird kind of merge that is not "on a branch" at all—and git stash apply uses git's merge machinery to attempt to "re-apply" the changes saved when the peculiar merge commit was made, optionally preserving the distinction between staged and unstaged changes. Fortunately, you don't actually need to understand all of that to use git stash. Here, you're working on some branch (master) and you have some changes that aren't ready yet, so you don't want to commit them on the branch.1 Meanwhile someone else put something good—or at least, you hope it's good—into the origin/master over on the remote repo, so you want to pick those up. Let's say that you and they both started with commits that end in - A - B - C, i.e., C is the final commit that you had in your repo when you started working on branch master. The new "something good" commits, we'll call D and E. In your case you're running git pull and it fails with the "working directory not clean" problem. So, you run git stash. This commits your stuff for you, in its special weird stash-y fashion, so that your working directory is now clean. Now you can git pull. In terms of drawing of commits (a graph like you get with gitk or git log --graph), you now have something like this. The stash is the little bag-of-i-w dangling off the commit you were "on", in your master branch, when you ran git stash. (The reason for the names i and w is that these are the "i"ndex / staging-area and "w"ork-tree parts of the stash.) - A - B - C - D - E <-- HEAD=master, origin/master |\ i-w <-- the "stash" This drawing is what you get if you started working on master and never did any commits. 
The most recent commit you had was thus C. After making the stash, git pull was able to add commits D and E to your local branch master. The stashed bag of work is still hanging off C. If you made a few commits of your own—we'll call them Y, for your commit, and Z just to have two commits—the result of the "stash then pull" looks like this: .-------- origin/master - A - B - C - D - E - M <-- HEAD=master \ / Y - Z |\ i-w <-- the "stash" This time, after stash hung its stash-bag off Z, the pull—which is just fetch then merge—had to do a real merge, instead of just a "fast forward". So it makes commit M, the merge commit. The origin/master label still refers to commit E, not M. You're now on master at commit M, which is a merge of E and Z. You're "one ahead" of origin/master. In either case, if you now run git stash apply, the stash script (it's a shell script that uses a lot of low level git "plumbing" commands) effectively2 does this: git diff stash^ stash > /tmp/patch git apply /tmp/patch This diffs stash, which names w—the "work tree" part of the stash—against the correct3 parent. In other words, it finds out "what you changed" between the proper parent commit (C or Z, as appropriate) and the stashed work-tree. It then applies the changes to the currently-checked-out version, which is either E or M, again depending on where you started. Incidentally, git stash show -p really just runs that same git diff command (with no > /tmp/patch part of course). Without -p, it runs the diff with --stat. So if you want to see in detail what git stash apply will merge in, use git stash show -p. (This won't show you what git stash apply can attempt to apply from the index part of the stash, though; this is a minor gripe I have with the stash script.) In any case, once the stash applies cleanly, you can use git stash drop to remove the reference to the stash-bag, so that it can be garbage-collected. Until you drop it, it has a name (refs/stash, aka stash@{0}) so it sticks around "forever" ... except for the fact that if you make a new stash, the stash script "pushes" the current stash into the stash reflog (so that its name becomes stash@{1}) and makes the new stash use the refs/stash name. Most reflog entries stick around for 90 days (you can configure this to be different) and then expire. Stashes don't expire by default, but if you configure this otherwise, a "pushed" stash can get lost, so be careful about depending on "save forever" if you start configuring git to your liking. Note that git stash drop "pops" the stash stack here, renumbering stash@{2} to stash@{1} and making stash@{1} become plain stash. Use git stash list to see the stash-stack. 1It's not bad to go ahead and commit them anyway, and then do a later git rebase -i to squash or fixup further second, third, fourth, ..., nth commits, and/or rewrite the temporary "checkpoint" commit. But that's independent of this. 2It's a fair bit more complex because you can use --index to try to keep staged changes staged, but in fact, if you look in the script, you'll see the actual command sequence git diff ... | git apply --index. It really does just apply a diff, in this case! Eventually it invokes git merge-recursive directly, though, to merge in the work tree, allowing the same changes to have been brought in from elsewhere. A plain git apply would fail if your patch does something the "good stuff" commits D and E also does. 3This uses git's parent-naming magic syntax, with a little advance planning inside the stash script. 
Because the stash is this funky merge commit, w has two or even three parents, but the stash script sets it up so that the "first parent" is the original commit, C or Z, as appropriate. The "second parent" stash^2 is the index state at the time of the commit, shown as i in the little hanging stash-bag, and the "third parent", if it exists, is unstaged-and-maybe-ignored files, from git stash save -u or git stash save -a. Note that I assume, in this answer, that you have not carefully staged part of your work-tree and that you are not using git stash apply --index to restore the staged index. By not doing any of this, you render the i commit pretty much redundant, so that we need not worry about it during the apply step. If you are using apply --index or equivalent, and have staged items, you can get into a lot more corner cases, where the stash won't apply cleanly. These same caveats apply, with yet more corner cases, to stashes saved with -u or -a, that have that third commit. For these extra-hard cases, git stash provides a way to turn a stash into a full-fledged branch, but I'll leave all that to another answer. A: The git stash command remembers where the stash comes from: git stash list output: stash@{0}: WIP on master.color-rules.0: 35669fb [NEW] another step toward initial cube There you can see on which SHA1 it was made. So if you git stash, git pull, git stash apply and you got a conflict, the stash is not dropped (it is only dropped if you drop it explicitly or if the apply was successful). So you can always get the SHA1 from git stash list and run git checkout 35669fb followed by git stash apply, and it is guaranteed to work. I recommend using the -b option and providing a branch name for that recovery. That being said, my favorite workflow is to ALWAYS check out under a new "personal" name to avoid such problems. A: Generally uncommitted changes are always bad. Either your changes are good, then commit them, or they are bad, then discard them. Doing any git operations while having uncommitted changes tends to cause trouble and git will not be able to help you, as git does not know about anything you did not commit. Having said that, back to your question. ;) Git is generally pretty smart. When you apply your stash, it tries to merge your changes with the other changes. Most of the time this just works. If the changes really conflict, because you changed the same lines in a different way, git will tell you, and you will have to resolve the conflict by yourself. Even in this case git will help you by having git mergetool, which will launch a suitable command to show you the conflicts and allows you to resolve them one by one.
doc_3767
from dash import dcc dcc.Dropdown( ['New York City', 'Montreal', 'San Francisco'], ['Montreal', 'San Francisco'], multi=True ) This is the code given on their website. https://dash.plotly.com/dash-core-components/dropdown If you go to the link and read the section under the heading Multi-Value Dropdown, if one option is selected, then you have just 2 options remaining, and if you select all 3 options, then there aren't any more options to select. Is there an option to disable this, i.e. select New York City as many times as you like? Thanks
doc_3768
I've got a Windows Form with a Tab Control with several tabs. Each tab contains arbitrary content which is added by other classes upon startup or during runtime. I want to set up the tabs in a way that scrollbars appear automatically as soon as the Form is too small for the tab's panel to display everything. What I've tried so far is setting the tab page's AutoScroll = true and setting the AutoScrollMinSize property to the size of the panel. This did not work as expected as the panel's Size always seems to be (200, 100) independent of its contents. I've created a small example application (code below) which demonstrates the issue. If you resize the form, you'll see that scroll bars only appear if the Form gets smaller than the panel (default size of (200, 100)) rather than the text box in the panel (size of 300, 150). If you set AutoScrollMinSize manually (uncomment line 34), it behaves as expected. The question is: How can the tab page retrieve the actual size of what is displayed in it? I could probably recurse through all controls and try calculating the size myself - but this feels really bad. PS: Please do not suggest setting the size of the panel to the size of the label, as the actual panels are much more complex than that. ;-) Code: Simply create an Application in Visual Studio and override Program.cs with the following code: using System; using System.Windows.Forms; namespace ScrollbarTest { static class Program { [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); var sampleForm = CreateSampleForm(); Application.Run(sampleForm); } private static Form CreateSampleForm() { var sampleForm = new Form() { }; var tabControl = new TabControl() { Dock = DockStyle.Fill }; var tabPage = new TabPage("Test") { AutoScroll = true }; sampleForm.Controls.Add(tabControl); tabControl.TabPages.Add(tabPage); var samplePanel = CreateSamplePanel(); tabPage.Controls.Add(samplePanel); // this does not provide the right size tabPage.AutoScrollMinSize = samplePanel.Size; // uncomment this to make it work //tabPage.AutoScrollMinSize = new System.Drawing.Size(300, 150); return sampleForm; } private static Control CreateSamplePanel() { // As an example, create a panel with a text box with a fixed size. var samplePanel = new Panel() { Dock = DockStyle.Fill }; var sampleSize = new System.Drawing.Size(300, 150); var textBox = new TextBox() { Dock = DockStyle.Fill, MinimumSize = sampleSize, MaximumSize = sampleSize, Size = sampleSize }; samplePanel.Controls.Add(textBox); return samplePanel; } } } A: The samplePanel.Size returns (200,100). In your CreateSamplePanel method, if you set samplePanel.MinimumSize = sampleSize; then your code will work. Panels don't calculate their size properties (e.g. Size, MinimumSize, PreferredSize) based on their child controls. You will have to subclass Panel and provide that behavior. Even TableLayoutPanel and FlowLayoutPanel don't correctly calculate the PreferredSize property, which is surprising. At the very least, normally you override the GetPreferredSize(Size proposedSize) method, and optionally have the MinimumSize property return the PreferredSize property. It's worth noting that DockStyle.Fill and MinimumSize are at odds with each other. TabPage controls are inherently DockStyle.Fill mode, which is why you have to set the AutoScrollMinSize property. Edit: Isn't there any existing function which retrieves the total required size of a list of controls (recursively), e.g. through their X/Y and Size? 
It's up to the host container itself (e.g. TableLayoutPanel) to calculate its PreferredSize correctly because only it knows the exact details of how its layout is performed. You can set the AutoSize property to true and then hope that GetPreferredSize(...)/PreferredSize calculates the right size. For TableLayoutPanel, I recall there was a case where it wasn't calculating correctly and I had to subclass it and override the GetPreferredSize(...) method. GetPreferredSize(...) won't be called unless AutoSize is true. If you're talking about a plain Panel or UserControl, by default these use the WYSIWYG LayoutEngine, and do not calculate the PreferredSize. You could subclass and then calculate maximum control.X + control.Width and same thing for height, and use that as the preferred size. First try setting AutoSize to true and see if that works for you. If not, you might have to override the GetPreferredSize(...) method. Here is a crude example: [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); var sampleForm = new Form() { AutoScroll = true }; var panel = new MyPanel() { AutoSize = true, AutoSizeMode = AutoSizeMode.GrowAndShrink, BackColor = Color.LightYellow }; for (int i = 0; i < 6; i++) { for (int j = 0; j < 3; j++) { Button b = new Button { Text = "Button" + panel.Controls.Count, AutoSize = true }; b.Click += delegate { MessageBox.Show("Preferred Size: " + panel.PreferredSize); }; panel.Controls.Add(b, j, i); } } sampleForm.Controls.Add(panel); Application.Run(sampleForm); } private class MyPanel : TableLayoutPanel { public override Size MinimumSize { get { return PreferredSize; } set { } } public override Size GetPreferredSize(Size proposedSize) { Size s = new Size(); int[] harr = new int[100];//this.RowCount]; int[] warr = new int[100];//this.ColumnCount]; foreach (Control c in this.Controls) { var cell = this.GetPositionFromControl(c); var ps = c.PreferredSize; Padding m = c.Margin; int w = ps.Width + m.Horizontal; int h = ps.Height + m.Vertical; if (w > warr[cell.Column]) warr[cell.Column] = w; if (h > harr[cell.Row]) harr[cell.Row] = h; } foreach (int w in warr) s.Width += w; foreach (int h in harr) s.Height += h; return s; } }
doc_3769
1. | int a = 10; 2. | int& b = a; // b is an "alias" of a or say, b is a reference of a 3. | int* c = &a; // c is a pointer to the mem location where it stores a * *pointer and reference are different, so I read here; *why on line 3 can a pointer be assigned with a reference? 2.1. or is &a not a reference at all? *if &a is not a reference, what makes &b a reference? 3.1. And why is &b == &a true? 3.2. And why do we need * to dereference (since pointer is not reference) in order to use its value? *if &a is a reference, what does it refer to? a? 4.1. if &a is a reference, does it mean a reference is essentially a pointer (or vice versa)? 4.2. if not, why can a pointer be assigned with a reference since they are different types? 4.3. if 4.1 is true, what are the particular situations that we do need both reference and pointer? Isn't a dereferenced pointer a reference to the variable it points to, that is: *c == b; //true, the value of the variable c == &b; //true, the address that stores the variable *&b == a; //true, dereference a pointer gives the value and *c == &a; //illegal, why? Since at line 3 *c was just assigned with &a I can keep going but I think the rest of my questions will be answered if I can get my head around the above questions.
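A small illustrative C++ snippet (not part of the original post) showing the two meanings of & that these questions circle around: in a declaration it forms a reference type, in an expression it is the address-of operator.
#include <cassert>

int main() {
    int a = 10;
    int& b = a;    // '&' in a declaration: b is a reference, i.e. another name for a
    int* c = &a;   // '&' in an expression: address-of operator, yielding a pointer value

    assert(&a == &b);  // taking the address of b gives the address of a, because b aliases a
    assert(*c == a);   // dereferencing the pointer reads a's value
    // *c == &a would not compile: *c is an int and &a is an int*, so the types don't match
    return 0;
}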
doc_3770
A: Manually. In the "After Simulation run" code section of your opt-experiment, you can access your output: If it lives on Main, you would be able to access it via root.outputMAPE. So you need to check after each individual sim run, if you currently have best iteration. If so, you update a local variable in the optimization experiment and plot that:
doc_3771
Bassically: I would like to build something similar like this: I know how to bind one slider into one textbox, but then I don't know how to display values\ from different slider in same textbox in time format. xaml: <Calendar Margin="448,220,369,39" HorizontalContentAlignment="Center" Visibility="Visible" Name="calendarMain" SelectedDatesChanged="calendarMain_SelectedDatesChanged"/> <TextBox HorizontalAlignment="Left" Text="{Binding Path=Value, ElementName=sliderMinutes, UpdateSourceTrigger=PropertyChanged}" Visibility="Visible" Name="txtboxCal" Height="23" TextWrapping="Wrap" VerticalAlignment="Top" Width="120" Margin="321,223,0,0"/> <Slider HorizontalAlignment="Left" IsSnapToTickEnabled="True" Name="sliderHours" AutoToolTipPlacement="TopLeft" Minimum="0" Maximum="24" Margin="321,254,0,0" VerticalAlignment="Top" Width="120" Height="28"/> <Slider HorizontalAlignment="Left" IsSnapToTickEnabled="True" Name="sliderMinutes" AutoToolTipPlacement="TopLeft" Minimum="0" Maximum="60" Margin="321,287,0,0" VerticalAlignment="Top" Width="120"/> EDIT I managed to do that using Multibinding like this <TextBlock Margin="836,423,107,25" Name="txtBlockTime"> <TextBlock.Text> <MultiBinding StringFormat=" {0}:{1}"> <Binding ElementName="sliderHours" Path="Value"/> <Binding ElementName="sliderMinutes" Path="Value"/> </MultiBinding> </TextBlock.Text> </TextBlock> Thank you for tip :) A: I suggest you use converter in this behavior. <Window.Resources> <local:TimeToStringMulti x:Key="TimeToStringMulti" /> </Window.Resources> <TextBlock Margin="836,423,107,25" Name="txtBlockTime"> <TextBlock.Text> <MultiBinding Converter="{StaticResource TimeToStringMulti}" Mode="TwoWay"> <Binding ElementName="sliderHours" Path="Value"/> <Binding ElementName="sliderMinutes" Path="Value"/> </MultiBinding> </TextBlock.Text> </TextBlock> And Converter.cs public class TimeToStringMulti: IMultiValueConverter { public object Convert(object[] values, Type targetType, object parameter, System.Globalization.CultureInfo culture) { return string.Format("{0}:{1}", values[0], values[1] ); } public object[] ConvertBack(object value, Type[] targetTypes, object parameter, System.Globalization.CultureInfo culture) { string[] param = (value as string).Split(':'); return new Object[]{double.Parse(param[0]), double.Parse(param[1])}; } }
doc_3772
The class of the view I am wanting to use drop down list in: namespace Fake.Models { public class Fake { public int Id { get; set; } public List<SelectList> Categories { get; set; } } } Model of the Category class pulling from database: namespace Fake.Models { public class Categories { public int Id { get; set; } public string CategoryName { get; set; } } } controller of the view intended to use dropdown list: namespace Fake.Controllers { public class FakeController : Controller { FakeDb _db = new FakeDb(); public ActionResult Index() { var model = from a in _db.Fake1 orderby a.Date ascending select a; ViewBag.Categories = new SelectList(_db.Categories, "Id", "CategoryName"); return View(model); } The view I am trying to get dropdown list to work in: @model fakename.Models.Modelname @using (Html.BeginForm()) { @Html.AntiForgeryToken() <div class="form-group col-md-offset-2 col-md-10"> @Html.DropDownList("Categories", "Select Category") // Omitted Info not related </div> } Sorry the Snippets here dont exatly match up as I intended with this form. So I am including a window snip of the code as jpeg here Code in Notepad Snip Just in case. A: Well the answer was simple. But still complicated to a newbie like me...needed ado.net entity for it to work.
doc_3773
Is there a way to solve this without loss of my operator -? A: Prolog has some complex syntax rules around operators to avoid ambiguities. In some cases you have to insert spaces or parentheses to make clear what you want. This works (and is the form I prefer): ?- X = -(-a). X = - -a. ?- proposition(-(-a)). true. This works as well: ?- X = - -a. X = - -a. ?- proposition(- -a). true. If you find this inconvenient, one thing you could do would be to define --, ---, etc. as operators analogously to -: ?- op(200, fy, --). true. ?- op(200, fy, ---). true. ?- X = --a, Y = ---a. X = --a, Y = ---a. ?- --a = -(-a). false. Then every time you accept user input, you could first run a "preprocessor" that translates a term like --a into -(-a). Personally I think this would not be worth it, but it might be a fun exercise.
doc_3774
The Web Service works successfully when I consume it from Java too. I want to consume it from PHP; how can I do that? NewWebService.java package NewWeb; import javax.jws.WebService; import javax.jws.WebMethod; import javax.jws.WebParam; @WebService(serviceName = "NewWebService") public class NewWebService { @WebMethod(operationName = "hello") public String hello(@WebParam(name = "name") String txt) { return "Hello " + txt + " !"; } } Client.java package client; public class Client { public static void main(String[] args) { // TODO code application logic here System.out.println(hello("Name")); } private static String hello(java.lang.String name) { newweb.NewWebService_Service service = new newweb.NewWebService_Service(); newweb.NewWebService port = service.getNewWebServicePort(); return port.hello(name); } }
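No answer is recorded here, so a hedged PHP sketch of the usual way to call a JAX-WS service like this one: PHP's built-in SoapClient pointed at the service's WSDL. The WSDL URL below is an assumption (adjust host, port and context path to wherever NewWebService is deployed); JAX-WS wraps the result, so the value is read from the ->return property.
<?php
// Point SoapClient at the published WSDL of the JAX-WS service (URL is a guess).
$client = new SoapClient("http://localhost:8080/NewWebService/NewWebService?wsdl");

// The "hello" operation takes a "name" parameter, matching @WebParam(name = "name").
$response = $client->hello(array("name" => "World"));

echo $response->return; // "Hello World !"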
doc_3775
Is it possible to compile the full CSS (components, typography, etc.) from the Angular Material 2 project, without needing AngularJS? It seems like there are styles being inserted by JS using the <style> tag. Here's a basic demo of the SCSS file I've tried to build to load the Material styles: @import "~@angular/material/theming"; @include mat-core(); This works for adding things like typography and the class names, but it doesn't add all of the possible styles for those classes. For example, the card component only gets very basic styles—it's missing the other styles such as padding that seem to be injected via JS. I also tried the material-components-web project, which has compiled CSS here. It comes very close to doing the job. But unfortunately just renaming .mdc- to .mat- is not enough. The component names and usage aren't quite the same. For example, material-components-web uses BEM: mdc-card__actions. Whereas the Angular versions calls it .mat-card-actions.
doc_3776
Maybe I am doing something wrong, but is it possible that the browser changes the encoding of the data it sends? A: You can always force UTF-8. Then you can send, receive, and store data in UTF-8 and cover most human languages without having to change character sets. <meta http-equiv="Content-type" content="text/html; charset=utf-8"/> A: If you want to be sure of what character set you are accepting, set it in your form <form method="post" action="/your/url/" accept-charset="UTF-8"> </form> You can see all the acceptable character sets here: Character Sets A: But... check before encoding whether the string is already UTF-8. Else you double-encode it. function str_to_utf8 ($string) { if (mb_detect_encoding($string, 'UTF-8', true) === false) { $string = utf8_encode($string); } return $string; } Or use $string = utf8_encode(utf8_decode($string)); So you do not double-encode a string. A: I solved this problem by changing mbstring.http_input = pass in my php.ini file A: You could encode the $_POST data into UTF-8 using PHP's utf8_encode function. Something like: $_POST['comments'] = utf8_encode( $_POST['comments'] );
doc_3777
Is this possible in any way? For example this: `font = pygame.font.SysFont(None,int(conf[4])) digi_clock = font.render('text', True, conf[1],conf[2]) looking for something like digi_clock.text('new value')` Is this possible in some way?
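There is no recorded answer here, so a short hedged Python sketch of the usual pygame pattern: a Surface returned by font.render() cannot be edited afterwards, so you re-render whenever the text changes and draw the new Surface (the font size and colours below are placeholders standing in for the question's conf values).
import pygame

pygame.init()
font = pygame.font.SysFont(None, 48)

def make_clock_surface(text):
    # render() returns a brand-new Surface each time; there is no way to change
    # the text of an already-rendered Surface in place.
    return font.render(text, True, (255, 255, 255), (0, 0, 0))

digi_clock = make_clock_surface("12:00")
# later, when the value changes, just re-render and blit the new surface:
digi_clock = make_clock_surface("12:01")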
doc_3778
I was unable to find any answers online, and when I did, none of them seem to work. Database example: ID | Name 1 | Name 2 ----+---------+-------- 1 | john 1 | doe 2 | jane | doe 3 | john | doe 4 | john | doe 1 My variable which I ought to match to a row is $var = "john doe 1";, and splitting my variable is not an option. I expect the output of said SQL statement to be the row with ID = 4. EDIT: I do not believe this to be a duplicate. I'm not having any problems with the CONCAT clause itself, I was consulting about which clause to use and how to use it, not how to fix my current one. A: You seem to want: where concat(name1, ' ', name2) = 'john doe 1' Note: It is tempting to write: where concat(name1, ' ', name2) = '$var' However, you should learn to use parameters when passing constant values into a query.
doc_3779
cordova platform add ubuntu Using cordova-fetch for ubuntu@^2.0.0 Adding ubuntu project... Unable to load PlatformApi from platform. Error: Cannot find module '/home/adam/projects/helloworld/node_modules/ubuntu' (node:25739) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Uncaught, unspecified "error" event. (The platform "ubuntu" does not appear to be a valid cordova platform. It is missing API.js. ubuntu not supported.) What am I missing? Thanks. I am running Ubuntu 17.10 and Cordova 8.0.0. A: As per Cordova's blog, they are deprecating ubuntu as a supported platform from version 8 onwards: https://cordova.apache.org/blog/
doc_3780
launch: function () { Ext.define("User", { extend: "Ext.data.Model", config: { fields: [{name: "title", type: "string"}] } }); var myStore = Ext.create("Ext.data.Store", { model: "User", proxy: { type: "ajax", url : "http://www.imdb.com/xml/find?json=1&nr=1&tt=on&q=twilight", reader: { type: "json", rootProperty: "title_popular" } }, autoLoad: true }); var view = Ext.Viewport.add({ xtype: 'navigationview', //we only give it one item by default, which will be the only item in the 'stack' when it loads items: [{ xtype:'formpanel', title: 'SEARCH IMDB MOVIES ', padding: 10, items: [{ xtype: 'fieldset', title: 'Search Movies from IMDB', items: [{ xtype: 'textfield', name : 'Movie Search', label: 'Search Movie' }, { xtype: 'button', text: 'Submit', handler: function () { view.push({ //this one also has a title title: 'List of Movies', padding: 10, //once again, this view has one button items: [{ xyz.show(); }] }); } }] }] }] }); var xyz = new Ext.create("Ext.List", { fullscreen: true, store: myStore, itemTpl: "{title}" }); } }); error is with xyz.show(); it will work properly if i remove xyz.show(); but i want to show list after clicking on buttton This is a navigation view on click of button i want to show list A: Try this : handler: function () { view.push({ //this one also has a title title: 'List of Movies', padding: 10, //once again, this view has one button items: [ xyz ] }); } A: Change your xyz list as follows: var xyz = new Ext.create("Ext.List", { //this one also has a title title: 'List of Movies', padding: 10, xtype:'xyz', alias:'xyz', hidden:true, fullscreen: true, store: myStore, itemTpl: "{title}" }); Then just below it write view.add(xyz); Then in your handler handler: function () { xyz.show(); } Not tested but it should work with possible adjustments.
doc_3781
I've got an asmx asp.net website (4.5). I have running applications (iOS/Android) that are consuming it. I want to improve my chat mechanism, and after a short research I've come across SignalR. My question is whether I can work in the same website with both SignalR and asmx web services without them interfering. Thanks. A: Yes, you can. SignalR provides a "hub" class which you can add to the same project as your asmx services. This hub then provides a connection to the server to send and receive SignalR events. I recommend that you get started by checking out http://signalr.net/ and playing with some samples there.
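A minimal C# sketch (hypothetical class and method names) of what such a hub looks like with SignalR 2.x on .NET 4.5; it sits in the same web project, next to the existing .asmx services, and is wired up once in an OWIN Startup class.
// ChatHub.cs
using Microsoft.AspNet.SignalR;

public class ChatHub : Hub
{
    public void Send(string name, string message)
    {
        // Broadcast to every connected client; the asmx endpoints are untouched.
        Clients.All.addMessage(name, message);
    }
}

// Startup.cs
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(Startup))]
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.MapSignalR(); // exposes /signalr alongside the existing .asmx endpoints
    }
}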
doc_3782
<form id="msform"> <!-- progressbar --> <ul id="progressbar"> <li class="active">Account Setup</li> <li>Social Profiles</li> <li>Personal Details</li> </ul> <!-- fieldsets --> <fieldset id="form_1"> <h2 class="fs-title">Personal Details</h2> <h3 class="fs-subtitle">Step 1</h3> <input type="text" name="firstname" placeholder="First Name" /> <input type="text" name="lastname" placeholder="Last Name" /> <input type="text" name="email" id="email" placeholder="Email Address" /> <input type="button" name="next" id="next_btn1" class="next action-button" value="Next" /> </fieldset> <fieldset id="form_2"> <h2 class="fs-title">More Details</h2> <h3 class="fs-subtitle">Step 2</h3> <input type="text" name="Phone" placeholder="Phone" /> <input type="text" id="dob" name="DOB" placeholder="Date of Birth" /> <input type="text" name="gender" placeholder="Gender" /> <input type="button" name="previous" class="previous action-button" value="Previous" /> <input type="button" name="next" class="next action-button" value="Next" /> </fieldset> <fieldset id="form_3"> <h2 class="fs-title">Create Your Account</h2> <h3 class="fs-subtitle">Step 3</h3> <input type="text" name="username" id="username" placeholder="UserName" /> <input type="password" name="password" placeholder="password" /> <input type="password" name="passwordR" placeholder="Confirm Password" /> <input type="button" name="previous" class="previous action-button" value="Previous" /> <input type="submit" name="submit" class="submit action-button" value="Submit" /> </fieldset> </form> I want to add jquery validations using validation plugin to this form. If validations are valid then only can go to next step, and if go to previous step reset field values. Here is jquery code, var current_fs, next_fs, previous_fs; var left, opacity, scale; var animating; $(".next").click(function(){ if(animating) return false; animating = true; current_fs = $(this).parent(); next_fs = $(this).parent().next(); //activate next step on progressbar using the index of next_fs $("#progressbar li").eq($("fieldset").index(next_fs)).addClass("active"); //show the next fieldset next_fs.show(); //hide the current fieldset with style current_fs.animate({opacity: 0}, { step: function(now, mx) { //as the opacity of current_fs reduces to 0 - stored in "now" //1. scale current_fs down to 80% scale = 1 - (1 - now) * 0.2; //2. bring next_fs from the right(50%) left = (now * 50)+"%"; //3. increase opacity of next_fs to 1 as it moves in opacity = 1 - now; current_fs.css({'transform': 'scale('+scale+')'}); next_fs.css({'left': left, 'opacity': opacity}); }, duration: 800, complete: function(){ current_fs.hide(); animating = false; }, //this comes from the custom easing plugin easing: 'easeInOutBack' }); }); $(".previous").click(function(){ if(animating) return false; animating = true; current_fs = $(this).parent(); previous_fs = $(this).parent().prev(); //de-activate current step on progressbar $("#progressbar li").eq($("fieldset").index(current_fs)).removeClass("active"); //show the previous fieldset previous_fs.show(); //hide the current fieldset with style current_fs.animate({opacity: 0}, { step: function(now, mx) { //as the opacity of current_fs reduces to 0 - stored in "now" //1. scale previous_fs from 80% to 100% scale = 0.8 + (1 - now) * 0.2; //2. take current_fs to the right(50%) - from 0% left = ((1-now) * 50)+"%"; //3. 
increase opacity of previous_fs to 1 as it moves in opacity = 1 - now; current_fs.css({'left': left}); previous_fs.css({'transform': 'scale('+scale+')', 'opacity': opacity}); }, duration: 800, complete: function(){ current_fs.hide(); animating = false; }, //this comes from the custom easing plugin easing: 'easeInOutBack' }); }); $(".submit").click(function(){ return false; }) Here is demo - http://codepen.io/atakan/pen/gqbIz A: You can try this javascript project I wrote that can be found at: https://www.npmjs.com/package/multi-step-form-js or https://github.com/mgildea/Multi-Step-Form-Js All you need to do is put each form step in divs with the appropriate classes as described in the README and call the javascript function on the form object: $(".msf:first").multiStepForm(); This will use jquery unobtrusive validation or you can pass in the validation object as a parameter to use jquery validation without the unobtrusive project. Each step will be validated with jquery validation before moving to the next step.
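A hedged sketch of the wiring the question asks for, assuming the jQuery Validation plugin is loaded: validate only the inputs of the current fieldset before allowing the step change, and clear the abandoned step's values when going back (the rules shown are placeholders; the selectors reuse the markup above).
// run once, before the .next/.previous handlers are attached
$("#msform").validate({
    rules: {
        firstname: "required",
        lastname: "required",
        email: { required: true, email: true }
    }
});

// inside $(".next").click(...), before the animation starts:
var stepValid = true;
$(this).parent().find("input:text, input:password").each(function () {
    if (!$(this).valid()) stepValid = false; // .valid() comes from the validation plugin
});
if (!stepValid) return false;

// inside $(".previous").click(...), before the animation starts:
$(this).parent().find("input:text, input:password").val("");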
doc_3783
class A { struct B { constexpr B(uint8_t _a, uint8_t _b) : a(_a), b(_b) {} bool operator==(const B& rhs) const { if((a == rhs.a)&& (b == rhs.b)) { return true; } return false; } uint8_t a; uint8_t b; }; constexpr static B b {B(0x00, 0x00)}; }; But g++ says error: field initializer is not constant Can't figure out where I'm wrong. A: This will work: #include <cstdint> #include <iostream> class A { struct B { bool operator==(const B& rhs) const { if((a == rhs.a)&& (b == rhs.b)) { return true; } return false; } uint8_t a; uint8_t b; }; public: constexpr static B b {0x61, 0x62}; }; int main() { std::cout << '{' << A::b.a << ',' << A::b.b << '}' << std::endl; } Removing the constructor from the struct will allow the braces initializer to work. This won't really help you if you were planning on doing something funky in the constructor. A: Clang is more helpful: 27 : error: constexpr variable 'b' must be initialized by a constant expression constexpr static B b {B(0x00, 0x00)}; ^~~~~~~~~~~~~~~~ 27 : note: undefined constructor 'B' cannot be used in a constant expression constexpr static B b {B(0x00, 0x00)}; ^ 8 : note: declared here B(uint8_t _a, uint8_t _b) : ^ Within a brace-or-equal-initializer of a member variable, constructors (including constructors of nested classes) are considered undefined; this is because it is legitimate for a constructor to refer to the values of member variables, so the member variables must be defined first even if they are lexically later in the file: struct A { struct B { int i; constexpr B(): i{j} {} }; constexpr static int j = 99; }; The workaround is to place B outside A, or perhaps within a base class.
doc_3784
The problem seems to have happened after I did some editing in Visual Studio Code to that same Makefile. That's something I do regularly - and PyCharm is usually "good" with it: it will see those as External Changes and update in the Local History appropriately. Any ideas how to get Pycharm back on track with this one file? Note that other files in the project that I have looked at still have correct Local History. I am on Pycharm Professsional 2022.3.1
doc_3785
<div class="wrapper" style="display: flex; flex-direction: column; padding: 0" > <app-header style="flex: none;"></app-header> <div style="flex: 1 0; position: relative;"> <router-outlet (activate)="onActivate()" ></router-outlet> <app-footer *ngIf="showFooter"></app-footer> </div> </div> I have an "activate" event which should check the 'token' every route changing. The problem start when token in the localStorage is expired and before its update him on the "activate" event another OnInit post request on the new page send a request with the old header which cause an error. For example I have this function on my home-component this.serverRequest.getTrainingsFromTo(weekBoundry.from.getTime(), weekBoundry.to.getTime()).subscribe((trainings: ITrainingInfo[]) => { console.log(`trainings : ${trainings}`) if (trainings) { // Ensure server sent just this week's trainings var filteredTrainings: ITrainingInfo[] = trainings.filter((ti) => { return ti.finishTime >= weekBoundry.from.getTime() && ti.finishTime < weekBoundry.to.getTime(); }); filteredTrainings.forEach((ct: ITrainingInfo) => { var index: number = new Date(ct.finishTime).getDay(); this.weekDaysWithTrainings[index] = true; this.trainingsThisWeek = this.weekDaysWithTrainings.reduce(function (total, x) { return x == true ? total + 1 : total }, 0) }); } }, error => { }); And because the header is with a wrong token , the request failed. I read a few answers , and tried to work with async await but nothing helped A: You can use RouteGuard and call API and return true or false. canActivate() { // call API to validate Token // If Validate token runs correctly return true // else return false } Make sure to include Guard to route module as well with key canActivate Example { path: 'home', component: HomeComponent, canActivate: [Guard] },
doc_3786
I want to know how to convert it to the JDBC API, or to see another sample example which is developed using the plain JDBC API. Thanks A: I think this can be achieved quite easily. All you need is to write an implementation of the "GenericDao" interface. AppFuse provides a Hibernate implementation of GenericDao called "GenericDaoHibernate" out of the box, which I encourage you to use instead. Anyway, this is what I suggest: * *Create a package called ...dao.jdbc *Create a JDBC implementation class for the GenericDao interface, called "GenericDaoJdbc", in the above package. It may initially look like the code below. *Then you can continue implementing the rest of the interface methods with the JdbcTemplate instance obtained via getJdbcTemplate() public class GenericDaoJdbcTemplate<T, PK extends Serializable> implements GenericDao<T, PK> { @Autowired private DataSource dataSource; private JdbcTemplate jdbcTemplate; protected final Log log = LogFactory.getLog(getClass()); private Class<T> persistentClass; public GenericDaoJdbcTemplate(final Class<T> persistentClass) { this.persistentClass = persistentClass; } protected JdbcTemplate getJdbcTemplate(){ if (jdbcTemplate == null) return new JdbcTemplate(dataSource); return jdbcTemplate; } @Override public List<T> getAll() { // TODO Auto-generated method stub return null; } ... }
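A hedged Java sketch of what one of those methods could look like with Spring's JdbcTemplate; deriving the table name from the entity class and using BeanPropertyRowMapper are illustration-only assumptions, not part of AppFuse itself.
@Override
public List<T> getAll() {
    // Assumes a table named after the entity and columns matching its bean properties.
    String table = persistentClass.getSimpleName().toLowerCase();
    return getJdbcTemplate().query(
            "select * from " + table,
            new org.springframework.jdbc.core.BeanPropertyRowMapper<T>(persistentClass));
}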
doc_3787
I had to create SHA-1 from signing cert of my app (currently using debug cert for development) so I don't get why google needs SHA-1 of my cert, in web browser it doesn't need any thing like that. Is it related to some security like redirection url in web browser? Thanks!
doc_3788
www.website-one.com and www.website-two.com are set up for Roll-Up reporting in a GA360 account. Scenario: a user navigates to website-one.com from a search result, and a Client ID is generated. The same user goes back to the search result page and then, from the result list, lands on website-two.com. In this case, will a new Client ID be generated? Will separate visits to these two sites by the same user be de-duplicated, or will they show up in the Roll-Up report as two different users?
doc_3789
The Go compiler doesn't seem to like it.

var whatAmI = func(i interface{}) {
    a := reflect.TypeOf(i)
    //var typ reflect.Type = a
    b := make(a, 10) //10 elem with type of i
    //b := new(typ)
    fmt.Printf("a: %v b: %v", a, b)
}

prog.go:21:14: a is not a type

I tried various combinations of reflect but no help so far. This seems to me like it could be a common problem to run into. How can I solve or work around this?

A: Get the type for a slice given a value of the element type, v:

sliceType := reflect.SliceOf(reflect.TypeOf(v))

Create a slice with length and capacity (both 10 here):

slice := reflect.MakeSlice(sliceType, 10, 10)

Depending on what you are doing, you may want to get the actual slice value by calling Interface() on the reflect.Value:

s := slice.Interface()

Run it on the playground.

A: Just make it like this:

b := make([]interface{}, 10)
for i := range b {
    b[i] = reflect.Zero(a).Interface()
}
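Putting the first answer's approach back into the original function, a small self-contained sketch (the element types and the length of 10 are just illustrative):

package main

import (
	"fmt"
	"reflect"
)

func whatAmI(i interface{}) {
	elemType := reflect.TypeOf(i)
	// Build []T where T is the dynamic type of i, with length and capacity 10.
	slice := reflect.MakeSlice(reflect.SliceOf(elemType), 10, 10)
	// Interface() unwraps the reflect.Value back into an ordinary Go value.
	fmt.Printf("a: %v b: %v\n", elemType, slice.Interface())
}

func main() {
	whatAmI(42)      // prints a slice of ten zero ints
	whatAmI("hello") // prints a slice of ten empty strings
}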
doc_3790
<div class="layout"> <aside>leftnav</aside> <section>content fluid here</content> </div> aside { width: 200px; position: absolute; left: 0; top: 0; } section { position: absolute; top: 0; left: 200px; //fluid width to fill window } A: Edit: With absolute positionning, just add: section { position: absolute; top: 0; left: 200px; right: 0px; // this line } End of edit. Use float on the aside tag and add a margin-left to the section: aside { width: 200px; float: left; } section { margin-left: 200px; }
doc_3791
I have a for loop which loops through about 5000 records; each iteration calls ajax to send data to a web service. Up to now it works fine, but the process takes some time, 60 seconds or more, so I need to show a progress popup window. I created the jQuery popup window and I built the progress bar. My code first opens the progress window, then starts the loop; within the loop I update the progress, and when the loop finishes I close the progress window. When I run the code, the progress window does not appear; it only appears after the loop finishes, and then it closes right away. How do I fix this? Sample code:

function AShowProgressDialog(dialog_name, CtrlListBox, CtrlFromDate, CtrlToDate, title){
  $("#" + dialog_name ).dialog({
    dialogClass: "no-close",
    title: title,
    scrollable: false,
    height: 120,
    width: 550,
    resizable: false,
    modal: true,
    show: 'fade',
    hide: 'fade',
    closeOnEscape: false,
    autoOpen: true,
    stack: true,
    open: function(event, ui) {
    }
  });
  $("#" + dialog_name ).show();
};

function SubmitProcessRequest(CtrlListBox, CtrlFromDate, CtrlToDate, prProgressTitle){
  AShowProgressDialog("pnlProgressCtrl", CtrlListBox, CtrlFromDate, CtrlToDate, prProgressTitle);

  var strEmpList = ($.map( $("#" + CtrlListBox + " option"), function (item, i) {
    return $.trim($(item).val()) ;// + "|" + $.trim($(item).text());
  } )).join(", ");

  var FromDate = $('#' + CtrlFromDate).val();
  var ToDate = $('#' + CtrlToDate).val();

  var arrayEmpList = strEmpList.split(',');
  var BatchSize = 30;
  var count = Math.ceil(arrayEmpList.length / BatchSize);

  for (var i = 0; i < count; i++) {
    var EmpDataToSend = strEmpList.split(',').slice(i*BatchSize, (i+1)*BatchSize);

    SEND DATA TO AJAX;

    $("#ctrlProgressBar").width( ((i / count) * 100) + '%');
    $("#progressMessage").text( (i*BatchSize) + " of " + arrayEmpList.length);
  }

  closeDialog("pnlProgressCtrl");
}

this is the jsfiddle https://jsfiddle.net/kifahnajem/f5yb3e8o/
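There is no answer in the thread, but as a hedged sketch of one common pattern (not part of the original post; the web service URL is a placeholder and the element IDs are the ones used above): the synchronous for loop blocks the UI thread, so the browser never gets a chance to paint the dialog or the bar. Sending one batch at a time and scheduling the next only after the previous request settles lets the browser repaint between batches:

function sendBatches(arrayEmpList, batchSize) {
  var count = Math.ceil(arrayEmpList.length / batchSize);

  function sendBatch(i) {
    if (i >= count) {
      closeDialog("pnlProgressCtrl"); // all batches done
      return;
    }
    var batch = arrayEmpList.slice(i * batchSize, (i + 1) * batchSize);

    $.ajax({ url: "/your/webservice", method: "POST", data: { emps: batch } }) // placeholder URL
      .always(function () {
        // update progress after each batch, then yield to the browser before the next one
        $("#ctrlProgressBar").width(((i + 1) / count) * 100 + '%');
        $("#progressMessage").text(Math.min((i + 1) * batchSize, arrayEmpList.length) + " of " + arrayEmpList.length);
        setTimeout(function () { sendBatch(i + 1); }, 0);
      });
  }

  sendBatch(0);
}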
doc_3792
But as the cross product becomes pretty huge, I was wondering what the options are to tackle a join between a large fact table (257 billion rows, 37 TB) and a relatively smaller (8.7 GB) dimension table. In the case of an equi join I can make it work easily with proper bucketing on the join column(s), using the same number of buckets for a sort-merge bucket map join, which practically converts it to a map join. But this won't be of any advantage when it's a non-equi join, because the matching values will be in other buckets, practically triggering a shuffle, i.e. a reduce phase. If anyone has any thoughts on how to overcome this, please suggest.

A: If the dimension table fits in memory, you can create a Custom User Defined Function (UDF) as stated here, and perform the inequi-join in memory.
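A rough sketch of one way such a UDF could look (the file name, its layout, and the range-lookup semantics are all assumptions for illustration; the real dimension and join condition will differ). The dimension data is shipped to every task with ADD FILE and loaded lazily into memory, so each fact row is matched without a reduce-side join:

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hive.ql.exec.UDF;

// Hypothetical usage from HiveQL:
//   ADD JAR range_key_udf.jar;
//   ADD FILE dim_ranges.tsv;   -- dimension exported as "low<TAB>high<TAB>dim_key"
//   CREATE TEMPORARY FUNCTION range_key AS 'RangeKeyLookup';
//   SELECT f.*, range_key(f.some_value) FROM fact f;
public class RangeKeyLookup extends UDF {

    private List<long[]> ranges;  // loaded lazily, once per task JVM

    public Long evaluate(Long factValue) {
        if (factValue == null) {
            return null;
        }
        if (ranges == null) {
            ranges = new ArrayList<long[]>();
            // Files registered with ADD FILE are available in the task's working directory.
            try (BufferedReader reader = new BufferedReader(new FileReader("dim_ranges.tsv"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] parts = line.split("\t");
                    ranges.add(new long[] {
                        Long.parseLong(parts[0]), Long.parseLong(parts[1]), Long.parseLong(parts[2])
                    });
                }
            } catch (Exception e) {
                throw new RuntimeException("Could not load dimension file", e);
            }
        }
        // The non-equi condition itself: low <= value < high.
        for (long[] range : ranges) {
            if (factValue >= range[0] && factValue < range[1]) {
                return range[2];
            }
        }
        return null;
    }
}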
doc_3793
I would like to see the possibility of that value being looked up in a range of cells in a Google Sheet (instead of a single cell), and to check whether the data captured in the input matches any of the values in that range of cells. I can't find how to identify the individual value entered in the input, but I have seen that if I enter in the input all the values in the range of the Google Sheet, separated by commas (example: Peter,George,Sophie,Neil,Carl), the code runs fine. The point is that only one value should be captured in the input field. How do I enter a single value in the input field and have it searched and checked against the range of values of the Google Sheet using only JavaScript?

HTML

<!DOCTYPE html>
<html>
  <head>
    <base target="_top">
  </head>
  <br>
  NAME:<br>
  <input type="text" id="name">
  <script>
    function checkUser(user_name) {
      if (document.getElementById("name").value == user_name) {
        alert("correct");
      } else {
        alert("incorrect");
      }
    }

    function runCheck() {
      google.script.run.withSuccessHandler(checkUser).fetchUserValues1()
    }
  </script>
  <input type='button' value='VALIDATE' onclick="runCheck()">
</html>

GS

function doGet() {
  var template = HtmlService.createTemplateFromFile("page1")
  return template.evaluate().setSandboxMode(HtmlService.SandboxMode.IFRAME);
  return HtmlService.createHtmlOutputFromFile('page1');
}

function fetchUserValues1(){
  var ss = SpreadsheetApp.openByUrl("Google Sheet URL");
  var sheetNames = ss.getSheetByName("Sheet 1");
  var user_name = sheetNames.getRange("A2:A6").getValues();
  return user_name;
}

A: In your script, how about modifying it as follows?

From:

if (document.getElementById("name").value == user_name) {
  alert("correct");
} else {
  alert("incorrect");
}

To:

user_name = user_name.flat();
const ar = document.getElementById("name").value.split(",").filter(e => user_name.includes(e.trim()));
console.log(ar); // Here, you can check the matched values in the log.
if (ar.length > 0) {
  alert("correct");
} else {
  alert("incorrect");
}

or, I think that you can also modify it as follows.

const v = document.getElementById("name").value.split(",").map(e => e.trim());
const ar = user_name.filter(([e]) => v.includes(e));
console.log(ar); // Here, you can check the matched values in the log.
if (ar.length > 0) {
  alert("correct");
} else {
  alert("incorrect");
}

* *By this modification, when one of the inputted values is matched in user_name, alert("correct") is run.
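If only a single name will ever be typed in the box, a minimal variant of the same check (a sketch building on the answer's user_name.flat(); not part of the original answer):

function checkUser(user_name) {
  const value = document.getElementById("name").value.trim();
  // user_name arrives as [["Peter"], ["George"], ...]; flatten and compare directly.
  if (user_name.flat().includes(value)) {
    alert("correct");
  } else {
    alert("incorrect");
  }
}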
doc_3794
* *I have one view with several block displays.

*I have three templates I'm using to override the defaults:
* *views-view-unformatted-my-list-blocks.tpl.php (styles template)

*views-view-my-list-blocks.tpl.php (rows template)

*views-view-fields-my-list-blocks-block-1.tpl.php (an example display template)

Now, if I just copy the default code into the templates, everything seems to display as it should. But I noticed that in my styles template the title variable is there, yet nothing comes out. Even when I set my display title in the basic settings, nothing shows. I would also like to access my CSS class variable from the basic settings, but I'm not sure what variable to use for that either. Can anyone shed some light on these?

A: Here is an example of doing this:

http://www.expresstut.com/content/themeing-views-drupal-6

http://www.expresstut.com/content/themeing-views-drupal-6-part-2
doc_3795
For example within Products is:

* *Hi Fi Components -- Amplifiers

*Home Theater -- AV Receivers -- Stereo Receivers -- Systems

*Portable audio -- Digital audio players

What I would like to do is call the first subcategory as well as the next level with a list of that level's posts. So for example:

Home Theater
- AV Receivers
-- list of AV Receiver Posts
- Stereo Systems
-- List of Stereo Systems Posts
- Systems
-- List of Systems Posts

I have run up against two problems with the code I have found:

1. For whatever reason, when I define a custom post type of Products it pulls every category within the WordPress database. Or

*When it does pull in just the specific Products categories, they are all out of order and not in any parent structure anymore.

I have the post type registered and working properly elsewhere. I have included the post type registration from functions.php as well. Any help would be greatly appreciated.

register_post_type( 'Products',
    // CPT Options
    array(
        'labels' => array(
            'name' => __( 'Products' ),
            'singular_name' => __( 'Product' )
        ),
        'public' => true,
        'has_archive' => true,
        'supports' => array( 'title', 'editor', 'thumbnail' ),
        'taxonomies' => array('category'),
        'rewrite' => array('slug' => 'product'),
    )
);
}

Here's an example of code I tried from here that displays the subcategories, but there is no parent/child relationship and it's pulling categories like NEWS, which is not in Products, nor are there posts.

<?php
$args = array(
    'type'         => 'products',
    'child_of'     => 0,
    'parent'       => '',
    'orderby'      => 'name',
    'order'        => 'ASC',
    'hide_empty'   => 1,
    'hierarchical' => 1,
    'pad_counts'   => false
);

$categories = get_categories($args);
echo '<ul>';
foreach ($categories as $category) {
    $url = get_term_link($category);?>
    <li><a href="<?php echo $url;?>"><?php echo $category->name; ?></a></li>
<?php }
echo '</ul>';
?>

A: What you are looking for is get_terms(). This allows you to configure which taxonomy you want to retrieve terms for.

$terms = get_terms( array(
    'taxonomy'   => 'Products',
    'hide_empty' => true,
) );

The returned Term objects will include parent term info and give you the IDs and slugs needed to query for posts that are in each term.

Full resource: https://developer.wordpress.org/reference/functions/get_terms/

A:

register_post_type( 'Products',
    // CPT Options
    array(
        'labels' => array(
            'name' => __( 'Products' ),
            'singular_name' => __( 'Product' )
        ),
        'public' => true,
        'has_archive' => true,
        'supports' => array( 'title', 'editor', 'thumbnail' ),
        'rewrite' => array('slug' => 'product'),
    )
);
}

// Hooking up our function to theme setup
add_action( 'init', 'create_posttype' );

function my_taxonomies_product() {
    $labels = array(
        'name'              => _x( 'Product Categories', 'taxonomy general name' ),
        'singular_name'     => _x( 'Product Category', 'taxonomy singular name' ),
        'search_items'      => __( 'Search Product Categories' ),
        'all_items'         => __( 'All Product Categories' ),
        'parent_item'       => __( 'Parent Product Category' ),
        'parent_item_colon' => __( 'Parent Product Category:' ),
        'edit_item'         => __( 'Edit Product Category' ),
        'update_item'       => __( 'Update Product Category' ),
        'add_new_item'      => __( 'Add New Product Category' ),
        'new_item_name'     => __( 'New Product Category' ),
        'menu_name'         => __( 'Product Categories' ),
    );
    $args = array(
        'labels'       => $labels,
        'hierarchical' => true,
    );
    register_taxonomy( 'product_category', 'products', $args );
}
add_action( 'init', 'my_taxonomies_product', 0 );

That's what helped me. Once I actually defined the custom taxonomy as the categories, I can now distinguish between the two types of posts.
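To actually render each subcategory's posts (the second half of the question), a hedged sketch combining get_terms() with WP_Query; the taxonomy name 'product_category' is taken from the answer above, so adjust it to whatever you registered:

<?php
// Top-level terms of the taxonomy (parent => 0).
$parents = get_terms( array(
    'taxonomy'   => 'product_category',
    'parent'     => 0,
    'hide_empty' => false,
) );

foreach ( $parents as $parent ) {
    echo '<h2>' . esc_html( $parent->name ) . '</h2>';

    // Children of this term, e.g. "AV Receivers" under "Home Theater".
    $children = get_terms( array(
        'taxonomy'   => 'product_category',
        'parent'     => $parent->term_id,
        'hide_empty' => false,
    ) );

    foreach ( $children as $child ) {
        echo '<h3>' . esc_html( $child->name ) . '</h3><ul>';

        // Posts of the custom post type assigned to this child term.
        $query = new WP_Query( array(
            'post_type' => 'products',
            'tax_query' => array(
                array(
                    'taxonomy' => 'product_category',
                    'field'    => 'term_id',
                    'terms'    => $child->term_id,
                ),
            ),
        ) );

        while ( $query->have_posts() ) {
            $query->the_post();
            echo '<li><a href="' . esc_url( get_permalink() ) . '">' . esc_html( get_the_title() ) . '</a></li>';
        }
        wp_reset_postdata();

        echo '</ul>';
    }
}
?>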
doc_3796
It is Visual Studio 2010. In one solution, it says .NET 4.0 and above supports this; where should I check the version information? Thanks

A: You're trying to write a Windows Phone application, so you're not using .NET 4. It's not clear which version of Windows Phone you're using, but I don't think any of them support System.Drawing. You need to find the API docs for the version of Windows Phone you're developing for, and refer to those docs instead. For example, here's the Windows Phone 8 API reference.
doc_3797
The automation client disconnected. Cannot continue running tests.

Using this command, running in a cypress/browsers:node12.6.0-chrome75 container:

cypress run --browser=chrome

A: This seems to occur when running out of shm space. By default, Docker creates a container with a /dev/shm shared memory space of 64MB. This is typically too small for Chrome and could cause Chrome to crash. I have found two options to resolve this:

* *Disable usage of /dev/shm:

// cypress/plugins/index.js
module.exports = (on, config) => {
  // ref: https://docs.cypress.io/api/plugins/browser-launch-api.html#Usage
  on('before:browser:launch', (browser = {}, args) => {
    if (browser.name === 'chrome') {
      args.push('--disable-dev-shm-usage')
      return args
    }
    return args
  })
}

*Increase the size of /dev/shm in the container:

Run the container with docker run --shm-size=1gb (or whatever size you want)
doc_3798
print_r(ltrim($individual_file["uri"], "public://"));

Result -: Stock_000000527255XSmall.jpg

Why the missing i? But when my filename starts with si, I get si in the result. Why does ltrim behave differently?

$individual_file["uri"] = "public://siStock_000000527255XSmall.jpg";
print_r(ltrim($individual_file["uri"], "public://"));

Result -: siStock_000000527255XSmall.jpg

A: It's because charlist is literally a list of single characters to remove from the left side of the string, and i is listed in public://. Any character that falls in this list will be removed, no matter the order.

Ref: http://php.net/manual/en/function.ltrim.php

In fact this:

$individual_file["uri"] = "public://iStock_000000527255XSmall.jpg";
print_r(ltrim($individual_file["uri"], "publc://"));

would output:

ic://iStock_000000527255XSmall.jpg

Another example by changing the order:

$individual_file["uri"] = "public://iStock_000000527255XSmall.jpg";
print_r(ltrim($individual_file["uri"], "bcilpu:/"));

would output:

Stock_000000527255XSmall.jpg
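If the goal is to strip the literal prefix rather than a set of characters, a small sketch of two common alternatives (the variable is the one from the question):

<?php
$uri = "public://iStock_000000527255XSmall.jpg";

// Option 1: substr once you know the string starts with the prefix.
if (strpos($uri, "public://") === 0) {
    $name = substr($uri, strlen("public://"));
}

// Option 2: a regex anchored at the start of the string.
$name = preg_replace('#^public://#', '', $uri);

print_r($name); // iStock_000000527255XSmall.jpg
?>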
doc_3799
[["BLAHBLAH\Desktop","BLAHBLAH\Documents","BLAHBLAH\Vids"],["BLAHBLAH\Pics","BLAHBLAH\Folder","BLAHBLAH\Music"]] And I wanted an output that would look like [["Desktop","Documents","Vids"],["Pics","Folder","Music"]] How would I go about doing so? This is in Python. I know you would have to use rfind with the backslashes but I'm having trouble iterating through the nested lists to maintain that nested list structure A: If your filenames are in myList, this should do it, and platform-independently too (different OSes use different folder separators, but the os.path module takes care of that for you). import os [[os.path.basename(x) for x in sublist] for sublist in myList] A: lis=[["BLAHBLAH\Desktop","BLAHBLAH\Documents","BLAHBLAH\Vids"],["BLAHBLAH\Pics","BLAHBLAH\Folder","BLAHBLAH\Music"]] def stripp(x): return x.strip('BLAHBLAH\\') lis=[list(map(stripp,x)) for x in lis] print(lis) output: [['Desktop', 'Documents', 'Vids'], ['Pics', 'Folder', 'Music']] A: You should use list comprehensions: NestedList = [["BLAHBLAH\Desktop","BLAHBLAH\Documents","BLAHBLAH\Vids"],["BLAHBLAH\Pics","BLAHBLAH\Folder","BLAHBLAH\Music"]] output = [[os.path.basename(path) for path in li] for li in NestedList] A: Something like this? from unittest import TestCase import re def foo(l): result = [] for i in l: if isinstance(i, list): result.append(foo(i)) else: result.append(re.sub('.*\\\\', '', i)) return result class FooTest(TestCase): def test_foo(self): arg = ['DOC\\Desktop', 'BLAH\\FOO', ['BLAH\\MUSIC', 'BLABLA\\TEST']] expected = ['Desktop', 'FOO', ['MUSIC', 'TEST']] actual = foo(arg) self.assertEqual(expected, actual) A: The number of answers is just great. They all work in different contexts. I am just adding this to the list: outer = [["BLAHBLAH\Desktop","BLAHBLAH\Documents","BLAHBLAH\Vids"], ["BLAHBLAH\Pics","BLAHBLAH\Folder","BLAHBLAH\Music"]] purged = [ [ item[ item.find("\\")+1: ] for item in inner ] for inner in outer ] Kudos (and +1) to * *@Junuxx who was first with the filename solution, *to @Ashwini Chaudary who got a more general solution if these are not filenames, and *to @mfusennegger who, I think, is making a joke. A: I dont have access to a computer with python atm, but the following should work: List=[["BLAHBLAH\Desktop","BLAHBLAH\Documents","BLAHBLAH\Vids"],["BLAHBLAH\Pics","BLAHBLAH\Folder","BLAHBLAH\Music"]] final=[] for varv in List: x=varv for sub_val in x: final.append(sub_val[sub_val.find("/"):])