Q: How to modify non-configurable, non-writable properties in JavaScript? I'm writing a simple EventEmitter in ES5. The objective is to ensure that all properties on EventEmitter instances are non-writable and non-configurable. After 6 hours of racking my brain I still can't figure out how to increase the listenerCount, for example, if the configurable descriptor is set to false. Here's an example of what I have: var eventEmitter = function(){ var listeners = listeners || 0; var events = events || {}; Object.defineProperties(this, { listeners: { value : 0, configurable: false, writable: false }, events: { value: {}, configurable : false, writable: false } }); return this; }; eventEmitter.prototype.on = function(ev, cb) { if (typeof ev !== 'string') throw new TypeError("Event should be type string", "index.js", 6); if (typeof cb !== 'function' || cb === null || cb === undefined) throw new TypeError("callback should be type function", "index.js", 7); if (this.events[ev]){ this.events[ev].push(cb); } else { this.events[ev] = [cb]; } this.listeners ++; return this; }; A: I would recommend the use of an IIFE (immediately invoked function expression): var coolObj=(function(){ var public={}; var nonpublic={}; nonpublic.a=0; public.getA=function(){nonpublic.a++;return nonpublic.a;}; return public; })(); Now you can do: coolObj.getA();//1 coolObj.getA();//2 coolObj.a;//undefined coolObj.nonpublic;//undefined coolObj.nonpublic.a;//undefined I know this is not the answer you've expected, but I think it's the easiest way of doing something like that. A: You can use a proxy which requires a key in order to define properties: function createObject() { var key = {configurable: true}; return [new Proxy({}, { defineProperty(target, prop, desc) { if (desc.value === key) { return Reflect.defineProperty(target, prop, key); } } }), key]; } function func() { var [obj, key] = createObject(); key.value = 0; Reflect.defineProperty(obj, "value", {value: key}); key.value = function() { key.value = obj.value + 1; Reflect.defineProperty(obj, "value", {value: key}); }; Reflect.defineProperty(obj, "increase", {value: key}); return obj; } var obj = func(); console.log(obj.value); // 0 try { obj.value = 123; } catch(err) {} try { Object.defineProperty(obj, "value", {value: 123}); } catch(err) {} console.log(obj.value); // 0 obj.increase(); console.log(obj.value); // 1
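A minimal sketch of the closure-based idea from the first answer, applied to the original EventEmitter: keep the mutable counter in the closure and expose only non-writable, non-configurable members on the instance (the listenerCount accessor name is my own illustrative choice, not from the question):

// Keep mutable state private in the closure; the public surface stays locked down.
function EventEmitter() {
  var listeners = 0;   // private, mutable
  var events = {};     // private, mutable

  Object.defineProperties(this, {
    listenerCount: {
      get: function () { return listeners; },   // read-only view of the counter
      configurable: false
    },
    on: {
      value: function (ev, cb) {
        if (typeof ev !== 'string') throw new TypeError('Event should be type string');
        if (typeof cb !== 'function') throw new TypeError('callback should be type function');
        (events[ev] = events[ev] || []).push(cb);
        listeners++;   // mutates the closure variable, not a frozen property
        return this;
      },
      writable: false,
      configurable: false
    }
  });
}

var e = new EventEmitter();
e.on('ready', function () {});
console.log(e.listenerCount); // 1
e.listenerCount = 99;         // ignored (throws in strict mode)
console.log(e.listenerCount); // still 1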
{ "language": "en", "url": "https://stackoverflow.com/questions/41069927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Image hiding under button android The ImageView is hiding under the button. What changes can I make so that the ImageView appears above the button? The ViewPager also has bottom padding so that the button can be accommodated properly. The image is showing on the other parts but not above the button. <?xml version="1.0" encoding="utf-8"?> <androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" tools:context=".McqActivity" android:id="@+id/fragment" > <RelativeLayout android:layout_width="wrap_content" android:layout_height="wrap_content" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent"> <ImageView android:id="@+id/starFirst" android:layout_width="50dp" android:layout_height="50dp" android:src="@drawable/ic_baseline_star_24" /> <com.google.android.material.button.MaterialButton android:id="@+id/right" android:layout_width="match_parent" android:layout_height="wrap_content" /> </RelativeLayout> <androidx.viewpager2.widget.ViewPager2 android:id="@+id/questions_view_frag" android:layout_width="match_parent" android:layout_height="0dp" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintHorizontal_bias="0.0" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" app:layout_constraintVertical_bias="0.0" android:orientation="horizontal" android:paddingBottom="20dp" android:paddingTop="50dp" > </androidx.viewpager2.widget.ViewPager2> </androidx.constraintlayout.widget.ConstraintLayout> A: layout_constraintBottom_toBottomOf and the other layout_constraint... attributes won't work inside a RelativeLayout; they are designed to work with ConstraintLayout as the direct parent. If you want to align two Views next to/below/above each other inside a RelativeLayout you have to use other attributes, e.g. android:layout_below="@+id/starFirst" android:layout_above="@+id/starFirst" android:layout_toRightOf="@id/starFirst" android:layout_toLeftOf="@id/starFirst" Note that every attribute which starts with layout_ is meant to be read by the direct parent, not by the View on which it is set; every ViewGroup has its own set of such attributes. Edit: it turned out that this is an elevation issue (Z axis), so the useful attributes are android:translationZ="100dp" android:elevation="100dp"
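A sketch of the elevation fix from the edit above, applied to the ImageView in the question's layout (the 8dp value is an arbitrary illustration; it just needs to exceed the button's elevation):

<!-- Raise the ImageView above the MaterialButton on the Z axis -->
<ImageView
    android:id="@+id/starFirst"
    android:layout_width="50dp"
    android:layout_height="50dp"
    android:src="@drawable/ic_baseline_star_24"
    android:elevation="8dp"
    android:translationZ="8dp" />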
{ "language": "en", "url": "https://stackoverflow.com/questions/70017985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Handling multiple exceptions I have written a class which loads configuration objects of my application and keeps track of them so that I can easily write out changes or reload the whole configuration at once with a single method call. However, each configuration object might potentially throw an exception when doing IO, yet I do not want those errors to cancel the overall process in order to give the other objects still a chance to reload/write. Therefore I collect all exceptions which are thrown while iterating over the objects and store them in a super-exception, which is thrown after the loop, since each exception must still be handled and someone has to be notified of what exactly went wrong. However, that approach looks a bit odd to me. Someone out there with a cleaner solution? Here is some code of the mentioned class: public synchronized void store() throws MultipleCauseException { MultipleCauseException me = new MultipleCauseException("unable to store some resources"); for(Resource resource : this.resources.values()) { try { resource.store(); } catch(StoreException e) { me.addCause(e); } } if(me.hasCauses()) throw me; } A: If you want to keep the results of the operations, which it seems you do as you purposely carry on, then throwing an exception is the wrong thing to do. Generally you should aim not to disturb anything if you throw an exception. What I suggest is passing the exceptions, or data derived from them, to an error handling callback as you go along. public interface StoreExceptionHandler { void handle(StoreException exc); } public synchronized void store(StoreExceptionHandler excHandler) { for (Resource resource : this.resources.values()) { try { resource.store(); } catch (StoreException exc) { excHandler.handle(exc); } } /* ... return normally ... */ ] A: There are guiding principles in designing what and when exceptions should be thrown, and the two relevant ones for this scenario are: * *Throw exceptions appropriate to the abstraction (i.e. the exception translation paradigm) *Throw exceptions early if possible The way you translate StoreException to MultipleCauseException seems reasonable to me, although lumping different types of exception into one may not be the best idea. Unfortunately Java doesn't support generic Throwables, so perhaps the only alternative is to create a separate MultipleStoreException subclass instead. With regards to throwing exceptions as early as possible (which you're NOT doing), I will say that it's okay to bend the rule in certain cases. I feel like the danger of delaying a throw is when exceptional situations nest into a chain reaction unnecessarily. Whenever possible, you want to avoid this and localize the exception to the smallest scope possible. In your case, if it makes sense to conceptually think of storing of resources as multiple independent tasks, then it may be okay to "batch process" the exception the way you did. In other situations where the tasks has more complicated interdependency relationship, however, lumping it all together will make the task of analyzing the exceptions harder. In a more abstract sense, in graph theory terms, I think it's okay to merge a node with multiple childless children into one. It's probably not okay to merge a whole big subtree, or even worse, a cyclic graph, into one node.
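As an alternative sketch to the custom MultipleCauseException, Java 7+ lets one exception carry the others as suppressed exceptions; this assumes StoreException has a plain String constructor (the message text is illustrative):

// Aggregate per-resource failures on a single exception via addSuppressed.
public synchronized void store() throws StoreException {
    StoreException aggregate = null;
    for (Resource resource : this.resources.values()) {
        try {
            resource.store();
        } catch (StoreException e) {
            if (aggregate == null) {
                aggregate = new StoreException("unable to store some resources");
            }
            aggregate.addSuppressed(e);   // keep every individual cause
        }
    }
    if (aggregate != null) {
        throw aggregate;                  // callers can inspect getSuppressed()
    }
}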
{ "language": "en", "url": "https://stackoverflow.com/questions/2444580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is the meaning of Duration in Amazon RDS Backup window What does Duration specify? Does it mean that the backup will start between 01:00 and 01:30 and keep running until it has completed? Or does it have a different meaning? A: The duration window indicates the time in which the backup will start. It can start anywhere within the time specified and could last longer than the window.
{ "language": "en", "url": "https://stackoverflow.com/questions/58445170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Uncontrolled input of type text to be controlled warning I'm trying to create a multi step registration form using React and Redux. The main component is as follows : import React, {PropTypes} from 'react'; import {connect} from 'react-redux'; import {bindActionCreators} from 'redux'; import * as actionCreators from '../../actions/actionCreators'; import countries from '../../data/countries'; import RegistrationFormStepOne from './registrationFormStepOne'; import RegistrationFormStepTwo from './registrationFormStepTwo'; import RegistrationFormStepThree from './registrationFormStepThree'; import RegistrationFormStepFour from './registrationFormStepFour'; class RegistrationPage extends React.Component { constructor(props) { super(props); this.state = { user: Object.assign({}, this.props.userData), fileNames: {}, selectedFile: {}, icons: { idCard: 'upload', statuten: 'upload', blankLetterhead: 'upload', companyPhoto: 'upload' }, step: 1, errors: {} }; this.setUser = this.setUser.bind(this); this.onButtonClick = this.onButtonClick.bind(this); this.onButtonPreviousClick = this.onButtonPreviousClick.bind(this); this.changeCheckboxState = this.changeCheckboxState.bind(this); this.onFileChange = this.onFileChange.bind(this); this.routerWillLeave = this.routerWillLeave.bind(this); } componentDidMount() { this.context.router.setRouteLeaveHook(this.props.route, this.routerWillLeave); } routerWillLeave(nextLocation) { if (this.state.step > 1) { this.setState({step: this.state.step - 1}); return false; } } getCountries(){ return countries; } setUser(event) { const field = event.target.name; const value = event.target.value; let user = this.state.user; user[field] = value; this.setState({user: user}); } validation(){ const user = this.state.user; const languageReg = this.props.currentLanguage.default.registrationPage; let formIsValid = true; let errors = {}; if(!user.companyName){ formIsValid = false; errors.companyName = languageReg.companyNameEmpty; } if(!user.btwNumber){ formIsValid = false; errors.btwNumber = languageReg.btwNumberEmpty; } if(!user.address){ formIsValid = false; errors.address = languageReg.addressEmpty; } if(!user.country){ formIsValid = false; errors.country = languageReg.countryEmpty; } if(!user.zipcode){ formIsValid = false; errors.zipcode = languageReg.zipcodeEmpty; } if(!user.place){ formIsValid = false; errors.place = languageReg.placeEmpty; } if(!user.firstName){ formIsValid = false; errors.firstName = languageReg.firstnameEmpty; } this.setState({errors: errors}); return formIsValid; } onFileChange(name, event) { event.preventDefault(); let file = event.target.value; let filename = file.split('\\').pop(); //We get only the name of the file let filenameWithoutExtension = filename.replace(/\.[^/.]+$/, ""); //We get the name of the file without extension let user = this.state.user; let fileNames = this.state.fileNames; let selectedFile = this.state.selectedFile; let icons = this.state.icons; switch (name.btnName) { case "idCard" : fileNames[name.btnName] = filenameWithoutExtension; //Check if file is selected if(file){ selectedFile[name.btnName] = "fileSelected"; user["idCardFile"] = true; icons["idCard"] = "check"; }else{ selectedFile[name.btnName] = ""; user["idCardFile"] = false; icons["idCard"] = "upload"; } break; case "statuten" : fileNames[name.btnName] = filenameWithoutExtension; //Check if file is selected if(file){ selectedFile[name.btnName] = "fileSelected"; user["statutenFile"] = true; icons["statuten"] = "check"; }else{ selectedFile[name.btnName] = ""; 
user["statutenFile"] = false; icons["statuten"] = "upload"; } break; case "blankLetterhead" : fileNames[name.btnName] = filenameWithoutExtension; //Check if file is selected if(file){ selectedFile[name.btnName] = "fileSelected"; user["blankLetterheadFile"] = true; icons["blankLetterhead"] = "check"; }else{ selectedFile[name.btnName] = ""; user["blankLetterheadFile"] = false; icons["blankLetterhead"] = "upload"; } break; default: fileNames[name.btnName] = filenameWithoutExtension; //Check if file is selected if(file){ selectedFile[name.btnName] = "fileSelected"; user["companyPhotoFile"] = true; icons["companyPhoto"] = "check"; }else{ selectedFile[name.btnName] = ""; user["companyPhotoFile"] = false; icons["companyPhoto"] = "upload"; } } this.setState({user: user, fileNames: fileNames, selectedFile: selectedFile, icons: icons}); } changeCheckboxState(event) { let chcName = event.target.name; let user = this.state.user; switch (chcName) { case "chcEmailNotificationsYes": user["emailNotifications"] = event.target.checked; break; case "chcEmailNotificationsNo": user["emailNotifications"] = !event.target.checked; break; case "chcTerms": if(typeof this.state.user.terms === "undefined"){ user["terms"] = false; }else{ user["terms"] = !this.state.user.terms; } break; case "chcSmsYes": user["smsNotifications"] = event.target.checked; break; default: user["smsNotifications"] = !event.target.checked; } this.setState({user: user}); this.props.actions.userRegistration(this.state.user); } onButtonClick(name, event) { event.preventDefault(); this.props.actions.userRegistration(this.state.user); switch (name) { case "stepFourConfirmation": this.setState({step: 1}); break; case "stepTwoNext": this.setState({step: 3}); break; case "stepThreeFinish": this.setState({step: 4}); break; default: if(this.validation()) { this.setState({step: 2}); } } } onButtonPreviousClick(){ this.setState({step: this.state.step - 1}); } render() { const languageReg = this.props.currentLanguage.default.registrationPage; console.log(this.state.user); let formStep = ''; let step = this.state.step; switch (step) { case 1: formStep = (<RegistrationFormStepOne user={this.props.userData} onChange={this.setUser} onButtonClick={this.onButtonClick} countries={this.getCountries(countries)} errors={this.state.errors} step={step}/>); break; case 2: formStep = (<RegistrationFormStepTwo user={this.props.userData} onChange={this.setUser} onButtonClick={this.onButtonClick} onButtonPreviousClick={this.onButtonPreviousClick} errors={this.state.errors}/>); break; case 3: formStep = (<RegistrationFormStepThree user={this.props.userData} onFileChange={this.onFileChange} onButtonClick={this.onButtonClick} onButtonPreviousClick={this.onButtonPreviousClick} errors={this.state.errors} fileNames={this.state.fileNames} icons={this.state.icons} fileChosen={this.state.selectedFile}/>); break; default: formStep = (<RegistrationFormStepFour user={this.props.userData} onChange={this.setUser} onChangeCheckboxState={this.changeCheckboxState} onButtonClick={this.onButtonClick} onButtonPreviousClick={this.onButtonPreviousClick} errors={this.state.errors}/>); } return ( <div className="sidebar-menu-container" id="sidebar-menu-container"> <div className="sidebar-menu-push"> <div className="sidebar-menu-overlay"></div> <div className="sidebar-menu-inner"> <div className="contact-form"> <div className="container"> <div className="row"> <div className="col-md-10 col-md-offset-1 col-md-offset-right-1"> {React.cloneElement(formStep, {currentLanguage: languageReg})} </div> 
</div> </div> </div> </div> </div> </div> ); } } RegistrationPage.contextTypes = { router: PropTypes.object }; function mapStateToProps(state, ownProps) { return { userData: state.userRegistrationReducer }; } function mapDispatchToProps(dispatch) { return { actions: bindActionCreators(actionCreators, dispatch) }; } export default connect(mapStateToProps, mapDispatchToProps)(RegistrationPage); The first step component is as follows import React from 'react'; import Button from '../../common/formElements/button'; import RegistrationFormHeader from './registrationFormHeader'; import TextInput from '../../common/formElements/textInput'; import SelectInput from '../../common/formElements/selectInput'; const RegistrationFormStepOne = ({user, onChange, onButtonClick, errors, currentLanguage, countries}) => { const language = currentLanguage; return ( <div className="contact_form"> <form role="form" action="" method="post" id="contact_form"> <div className="row"> <RegistrationFormHeader activeTab={0} currentLanguage={language}/> <div className="hideOnBigScreens descBox"> <div className="headerTitle">{language.businessInfoConfig}</div> <div className="titleDesc">{language.businessBoxDesc}</div> </div> <div className="col-lg-12"> <h6 className="registrationFormDesc col-lg-10 col-lg-offset-1 col-lg-offset-right-2 col-xs-12"> {language.businessDesc} </h6> <div className="clearfix"></div> <div className="col-sm-6"> <TextInput type="text" name="companyName" label={language.companyNameLabel} labelClass="control-label" placeholder={language.companyNameLabel} className="templateInput" id="company" onChange={onChange} value={user.companyName} errors={errors.companyName} /> </div> <div className="col-sm-6"> <TextInput type="text" name="btwNumber" label={language.vatNumberLabel} placeholder={language.vatNumberLabel} className="templateInput" id="btwNumber" onChange={onChange} value={user.btwNumber} errors={errors.btwNumber} /> </div> <div className="col-sm-12" style={{marginBottom: 25}}> <TextInput type="text" name="address" label={language.addressLabel} placeholder={language.address1Placeholder} className="templateInput" id="address" onChange={onChange} value={user.address} errors={errors.address} /> </div> <div className="col-sm-12" style={{marginBottom: 25}}> <TextInput type="text" name="address1" placeholder={language.address2Placeholder} className="templateInput" id="address" onChange={onChange} value={user.address1} errors="" /> </div> <div className="col-sm-12"> <TextInput type="text" name="address2" placeholder={language.address3Placeholder} className="templateInput" id="address" onChange={onChange} value={user.address2} errors="" /> </div> <div className="col-sm-3"> <SelectInput name="country" label={language.selectCountryLabel} onChange={onChange} options={countries} className="templateInput selectField" defaultOption={language.selectCountry} value={user.country} errors={errors.country} /> </div> <div className="col-sm-3"> <TextInput type="text" name="zipcode" label={language.zipcodeLabel} placeholder={language.zipcodeLabel} className="templateInput" id="zipcode" onChange={onChange} value={user.zipcode} errors={errors.zipcode} /> </div> <div className="col-sm-6"> <TextInput type="text" name="place" label={language.placeLabel} placeholder={language.placeLabel} className="templateInput" id="place" onChange={onChange} value={user.place} errors={errors.place} /> </div> </div> <div className="clearfix"></div> <div className="col-lg-12" style={{marginLeft: 15, marginTop: 30}}> <Button 
onClick={onButtonClick.bind(this)} name="stepOneNext" value={language.btnNext} icon="arrow-circle-right" style={{margin: '0 auto 60px'}}/> </div> </div> </form> </div> ); }; export default RegistrationFormStepOne; I try to add some simple validation and I've added validation function in my main component and then I check on button click if the returned value true or false is. If it's true, than I set step state to a appropriate value. And it works if I validate only the form fields of the first step, but when I try to also validate one or more form fields of the next step (now I'm trying to validate also the first field of the second step) if(!user.firstName){ formIsValid = false; errors.firstName = languageReg.firstnameEmpty; } I get than Warning: TextInput is changing an uncontrolled input of type text to be controlled. Input elements should not switch from uncontrolled to controlled (or vice versa). Decide between using a controlled or uncontrolled input element for the lifetime of the component. Without the validation function works everything perfect. Any advice? EDIT import React, {propTypes} from 'react'; import _ from 'lodash'; const TextInput = ({errors, style, name, labelClass, label, className, placeholder, id, value, onChange, type}) => { let wrapperClass = "form-group"; if (errors) { wrapperClass += " " + "inputHasError"; } return ( <div className={wrapperClass} style={style}> <label htmlFor={name} className={labelClass}>{label}</label> <input type={type} className={className} placeholder={placeholder} name={name} id={id} value={value} style={{}} onChange={onChange} /> <div className="errorBox">{errors}</div> </div> ); }; TextInput.propTypes = { name: React.PropTypes.string.isRequired, label: React.PropTypes.string, onChange: React.PropTypes.func.isRequired, type: React.PropTypes.string.isRequired, id: React.PropTypes.string, style: React.PropTypes.object, placeholder: React.PropTypes.string, className: React.PropTypes.string, labelClass: React.PropTypes.string, value: React.PropTypes.string, errors: React.PropTypes.string }; export default TextInput; This is second step component : import React from 'react'; import Button from '../../common/formElements/button'; import RegistrationFormHeader from './registrationFormHeader'; import TextInput from '../../common/formElements/textInput'; const RegistrationFormStepTwo = ({user, onChange, onButtonClick, onButtonPreviousClick, errors, currentLanguage}) => { const language = currentLanguage; return ( <div className="contact_form"> <form role="form" action="" method="post" id="contact_form"> <div className="row"> <RegistrationFormHeader activeTab={1} currentLanguage={language}/> <div className="hideOnBigScreens descBox"> <div className="headerTitle">{language.personalInfoConfig}</div> <div className="titleDesc">{language.personalBoxDesc}</div> </div> <div className="col-lg-12"> <h6 className="registrationFormDesc col-lg-10 col-lg-offset-1 col-lg-offset-right-2 col-xs-12"> {language.personalDesc} </h6> <div className="col-lg-6 col-md-6 col-sm-6 col-xs-12"> <TextInput type="text" name="firstName" label={language.firsnameLabel} placeholder={language.firsnameLabel} className="templateInput" id="name" onChange={onChange} value={user.firstName} errors={errors.firstName} /> </div> <div className="col-lg-6 col-md-6 col-sm-6 col-xs-12"> <TextInput type="text" name="lastName" label={language.lastnameLabel} placeholder={language.lastnameLabel} className="templateInput" id="name" onChange={onChange} value={user.lastName} errors={errors.lastName} 
/> </div> <div className="col-lg-6 col-md-6 col-sm-6 col-xs-12"> <TextInput type="text" name="phone" label={language.phoneLabel} placeholder={language.phoneLabel} className="templateInput" id="phone" onChange={onChange} value={user.phone} errors={errors.phone} /> </div> <div className="col-lg-6 col-md-6 col-sm-6 col-xs-12"> <TextInput type="text" name="mobilePhone" label={language.mobileLabel} placeholder={language.mobileLabel} className="templateInput" id="phone" style={{}} onChange={onChange} value={user.mobilePhone} errors={errors.mobilePhone} /> </div> <div className="clearfix"></div> <div className="col-lg-12 col-md-12 col-sm-12 col-xs-12"> <TextInput type="text" name="email" id="email" label={language.emailLabel} placeholder={language.emailLabel} className="templateInput" style={{}} onChange={onChange} value={user.email} errors={errors.email} /> </div> <div className="col-lg-12 col-md-12 col-sm-12 col-xs-12"> <TextInput type="text" name="userName" label={language.usernameLabel} placeholder={language.usernameLabel} className="templateInput" id="name" onChange={onChange} value={user.userName} errors={errors.userName} /> </div> <div className="col-lg-6 col-md-6 col-sm-6 col-xs-12"> <TextInput type="password" name="password" label={language.passwordLabel} placeholder={language.passwordLabel} className="templateInput" id="password" onChange={onChange} value={user.password} errors={errors.password} /> </div> <div className="col-lg-6 col-md-6 col-sm-6 col-xs-12"> <TextInput type="password" name="confirmPassword" label={language.passwordConfirmLabel} placeholder={language.passwordConfirmLabel} className="templateInput" id="password" onChange={onChange} value={user.confirmPassword} errors={errors.confirmPassword} /> </div> </div> <div className="clearfix"></div> <div className="col-lg-6 col-xs-6" style={{marginTop: 30}}> <Button onClick={onButtonPreviousClick} name="btnPrevious" value={language.btnPrevious} icon="arrow-circle-left" style={{marginRight: 10, float: 'right'}}/> </div> <div className="col-lg-6 col-xs-6" style={{marginTop: 30}}> <Button onClick={onButtonClick} name="stepTwoNext" value={language.btnNext} icon="arrow-circle-right" style={{marginLeft: 10, float: 'left'}}/> </div> </div> </form> </div> ); }; export default RegistrationFormStepTwo; A: This is why the warning exists: When the value is specified as undefined, React has no way of knowing if you intended to render a component with an empty value or if you intended for the component to be uncontrolled. It is a source of bugs. You could do a null/undefined check, before passing the value to the input. a source A: @Kokovin Vladislav is right. To put this in code, you can do this in all your input values: <TextInput // your other code value={user.firstName || ''} /> That is, if you don't find the value of first name, then give it an empty value.
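A small sketch of the defaulting approach from the last answer, applied inside the TextInput wrapper itself so every field stays controlled even before the user object has a value (illustrative only, not the poster's final component):

// Coalesce undefined/null to '' so the input never switches from uncontrolled to controlled.
const TextInput = ({value, onChange, name, type = 'text', ...rest}) => (
  <input
    type={type}
    name={name}
    value={value != null ? value : ''}  // undefined -> '' keeps it controlled
    onChange={onChange}
    {...rest}
  />
);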
{ "language": "en", "url": "https://stackoverflow.com/questions/38014397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Interleaving the rows of two different SQL tables, group by one row My SQL query is like this: $view = mysql_query ("SELECT domain,count(distinct session_id) as views FROM `statistik` left join statistik_strippeddomains on statistik_strippeddomains.id = statistik.strippeddomain WHERE `angebote_id` = '".(int)$_GET['id']."' and strippeddomain!=1 group by domain having count (distinct session_id) > ".(int)($film_daten['angebote_views']/100)." order count(distinct session_id$vladd) desc limit 25"); How can I write this as a CodeIgniter model? I appreciate any help. A: Try this: $this->db->select('statistik.domain, COUNT(DISTINCT session_id) AS views', FALSE); $this->db->from('statistik'); $this->db->join('statistik_strippeddomains', 'statistik_strippeddomains.id = statistik.strippeddomain', 'left'); $this->db->where('angebote_id', (int)$_GET['id']); $this->db->where('strippeddomain !=', 1); $this->db->group_by('domain'); $this->db->having('views >', (int)($film_daten['angebote_views']/100)); $this->db->order_by('views', 'desc'); $this->db->limit(25); $query = $this->db->get(); Comment if you have any questions.
{ "language": "en", "url": "https://stackoverflow.com/questions/39852573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Query conditions to insert data from a form What I'm trying to do is: If the age input in my form = 28, 30, 25 or 21 then I want to auto insert value 8 in the column (VE), else keep it empty. Is this the right way to do that? if($form_data->action == 'Insert') { $age=array(28, 30, 25, 21); $age_str=implode("','", $age); if($form_data->age == $age_str){ $query="INSERT INTO tbl (VE) VALUE ('8') WHERE id= '".$form_data->id."' "; $statement = $connect->prepare($query); $statement->execute(); } $data = array( ':date' => $date, ':first_name' => $first_name, ':last_name' => $last_name, ':age' => $age ); $query = " INSERT INTO tbl (date, first_name, last_name, age) VALUES (:date, :first_name, :last_name, :age) "; $statement = $connect->prepare($query); if($statement->execute($data)) { $message = 'Data Inserted'; } } Also, how do I insert the new row with the row id from the other form data going into tbl? A: Use PHP's in_array instead of trying to compare against an imploded string. To get the id of the row you insert with the form data, you can read the last inserted id from the connection after executing the prepared statement. if ($form_data->action == 'Insert') { // assuming $age, $date, $first_name, $last_name // already declared prior to this block $data = array( ':date' => $date, ':first_name' => $first_name, ':last_name' => $last_name, ':age' => $age ); $query = " INSERT INTO tbl (date, first_name, last_name, age) VALUES (:date, :first_name, :last_name, :age) "; $statement = $connect->prepare($query); if ($statement->execute($data)) { $message = 'Data Inserted'; // $id is the last inserted id for (tbl) $id = $connect->lastInsertID(); // NOW you can insert your child row in the other table $ages_to_insert = array(28, 30, 25, 21); // in_array uses your array...so you don't need // if($form_data->age == $age_str){ if (in_array($form_data->age, $ages_to_insert)) { $query="UPDATE tbl SET VE = '8' WHERE id= '".$id."'"; $statement2 = $connect->prepare($query); $statement2->execute(); } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/58903757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Accessing array value is null Hello I have decoded a json string that I sent to my server and Im trying to get the values from him. My problem is that I cant get the values from the inner arrays. This is my code: <?php $post = file_get_contents('php://input'); $arrayBig = json_decode($post, true); foreach ($arrayBig as $array) { $exercise = $array['exercise']; $response["exercise"] = $exercise; $response["array"] = $array; echo json_encode($response); } ?> When I get the answer from my $response I get this values: {"exercise":null,"array":[{"exercise":"foo","reps":"foo"}]} Why is $array['exercise'] null if I can see that is not null in the array Thanks. A: Because of the [{...}] you are getting an array in an array when you decode your array key. So: $exercise = $array['exercise']; Should be: $exercise = $array[0]['exercise']; See the example here. A: From looking at the result of $response['array'], it looks like $array is actually this [['exercise' => 'foo', 'reps' => 'foo']] that is, an associative array nested within a numeric one. You should probably do some value checking before blindly assigning values but in the interest of brevity... $exercise = $array[0]['exercise'];
{ "language": "en", "url": "https://stackoverflow.com/questions/21893968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Composer update shows this error: VirtualAlloc() failed: [0x00000008] Composer worked fine yesterday, but today I tried to install: composer require --prefer-dist "himiklab/yii2-recaptcha-widget" "*" While running the composer update command it shows me this error: VirtualAlloc() failed: [0x00000008] VirtualAlloc() failed: [0x00000008] PHP Fatal error: Out of memory (allocated 956301312) (tried to allocate 201326600 bytes) in phar://C:/ProgramData/ComposerSetup/bin/composer.phar/src/Composer/DependencyResolver/RuleSet.php on line 84 Fatal error: Out of memory (allocated 956301312) (tried to allocate 201326600 bytes) in phar://C:/ProgramData/ComposerSetup/bin/composer.phar/src/Composer/DependencyResolver/RuleSet.php on line 84 I tried running composer update on my other projects and it worked fine. After some research I increased memory_limit to 4096M (and also -1) in the php.ini file. Then I tried to increase the virtual memory in Computer->Properties, but it still shows the error. I tried running the command composer update -vvv --profile; the result is in the attached image (Composer error). Any help would be greatly appreciated. A: You are probably using 32-bit PHP. This version cannot allocate enough memory for Composer even if you change the memory_limit to -1 (unlimited). Please use 64-bit PHP with Composer to get rid of these memory problems.
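A quick way (my suggestion, not from the answer) to confirm whether the PHP that runs Composer is a 32-bit build, using the fact that PHP_INT_SIZE is 4 on 32-bit builds and 8 on 64-bit builds:

php -r "echo PHP_INT_SIZE, PHP_EOL;"
REM prints 4 on a 32-bit PHP build, 8 on a 64-bit build
php -i | findstr /i "architecture"
REM on Windows builds this typically reports x86 (32-bit) or x64 (64-bit)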
{ "language": "en", "url": "https://stackoverflow.com/questions/49994946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I get the original values in an Update SQL Trigger I'm not very familiar with triggers, so thank you for your patience. I have a database table with four columns for user text input and four date columns showing when the user text input was last changed. What I want the trigger to do is to compare the original and new values of the user text input columns and, if they are different, update the date column with getdate(). I don't know how to do this. The code I wrote can't get the pre-update value of the field, so it can't be compared to the post-update value. Does anyone know how to do it? (Normally I would do this in a stored procedure. However, this database table can also be directly edited by an Access database and we can't convert those changes to use the stored procedure. This only leaves us with using a trigger.) A: In SQL Server there are two special tables available in the trigger called inserted and deleted. They have the same structure as the table on which the trigger is implemented. inserted has the new versions, deleted the old.
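A sketch of an AFTER UPDATE trigger built on those two tables; the table and column names (MyTable, Id, UserText1, LastChanged1) are illustrative, not from the question:

-- Stamp LastChanged1 only when UserText1 actually changed.
CREATE TRIGGER trg_MyTable_Update
ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE t
    SET    LastChanged1 = GETDATE()
    FROM   dbo.MyTable t
    JOIN   inserted i ON i.Id = t.Id
    JOIN   deleted  d ON d.Id = t.Id
    WHERE  ISNULL(i.UserText1, '') <> ISNULL(d.UserText1, '');
    -- repeat the same pattern for the other three text/date column pairs
END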
{ "language": "en", "url": "https://stackoverflow.com/questions/10453001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Create executable program of addition in C on Linux I am new to Linux, so sorry for asking a very basic question. On Windows I have a Main.cpp file with the code for the addition of two numbers. Visual Studio gives me a .exe, but how do I do this on Linux? My Linux machine has the gcc compiler and no IDE. What do I write in the Makefile and how do I run it? Main.cpp has code like #include <stdio.h> #include <conio.h> // Static library file included //#include "Add.h" int main() { int a,b,c; a = 10; b = 20; c= a+b; //Add function in static lib (.a in case of linux) //c= Add(a,b); printf("Addition is :%d",c); return 0; } After that I want to use the Add function which is in the static library. How do I use it with the above program, removing the commented code? A: For C++ code, the command is usually something like: g++ Main.cpp -o FileNameToWriteTo Alternatively, if you just run g++ Main.cpp it will output to a default file called a.out. Either way, you can then run whichever file you created by doing: ./FileNameToWriteTo or ./a.out See this for more details: http://pages.cs.wisc.edu/~beechung/ref/gcc-intro.html
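Since the question also asks what goes in the Makefile, here is a minimal sketch that builds the program and links it against a static library; the names libadd.a, Add.cpp and Add.h are assumptions based on the commented-out code (recipe lines must start with a tab):

# Build main and link it against the static library libadd.a
CXX      = g++
CXXFLAGS = -Wall

main: Main.cpp libadd.a
	$(CXX) $(CXXFLAGS) Main.cpp -L. -ladd -o main

libadd.a: Add.o
	ar rcs libadd.a Add.o

Add.o: Add.cpp Add.h
	$(CXX) $(CXXFLAGS) -c Add.cpp

clean:
	rm -f main libadd.a Add.o

Run it with: make && ./main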
{ "language": "en", "url": "https://stackoverflow.com/questions/39793206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: QProgressBar updates as function progress How to initializa the operation of QProgressBar, I already declare her maximum, minimum, range and values. I want to assimilate the progress of QProgressBar with the "sleep_for" function. Current code: void MainPrograma::on_pushCorre_clicked() { QPlainTextEdit *printNaTela = ui->plainTextEdit; printNaTela->moveCursor(QTextCursor::End); printNaTela->insertPlainText("corrida iniciada\n"); QProgressBar *progresso = ui->progressBar; progresso->setMaximum(100); progresso->setMinimum(0); progresso->setRange(0, 100); progresso->setValue(0); progresso->show(); WORD wEndereco = 53606; WORD wValor = 01; WORD ifValor = 0; EscreveVariavel(wEndereco, wValor); //How to assimilate QProgressBar to this function: std::this_thread::sleep_for(std::chrono::milliseconds(15000)); //StackOverFlow help me please EscreveVariavel(wEndereco, ifValor); A: use a QTimer and in the slot update the value of the progressbar MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow) { ui->setupUi(this); t = new QTimer(this); t->setSingleShot(false); c = 0; connect(t, &QTimer::timeout, [this]() { c++; if (c==100) { c=0; } qDebug() << "T..."; ui->progressBar->setValue(c); }); } MainWindow::~MainWindow() { delete ui; } void MainWindow::on_pushButton_clicked() { t->start(100); } A: I'm not sure about your intentions with such sleep: are you simulating long wait? do you have feedback about progress during such process? Is it a blocking task (as in the example) or it will be asynchronous? As a direct answer (fixed waiting time, blocking) I think it is enough to make a loop with smaller sleeps, like: EscreveVariavel(wEndereco, wValor); for (int ii = 0; ii < 100; ++ii) { progresso->setValue(ii); qApp->processEvents(); // necessary to update the UI std::this_thread::sleep_for(std::chrono::milliseconds(150)); } EscreveVariavel(wEndereco, ifValor); Note that you may end waiting a bit more time due to thread scheduling and UI refresh. For an async task you should pass the progress bar to be updated, or some kind of callback that does such update. Keep in mind that UI can only be refreshed from main thread.
{ "language": "en", "url": "https://stackoverflow.com/questions/64987457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Preallocation and Vectorization Speedup I am trying to improve the speed of script I am trying to run. Here is the code: (my machine = 4 core win 7) clear y; n=100; x=linspace(0,1,n); % no y pre-allocation using zeros start_time=tic; for k=1:n, y(k) = (1-(3/5)*x(k)+(3/20)*x(k)^2 -(x(k)^3/60)) / (1+(2/5)*x(k)-(1/20)*x(k)^2); end elapsed_time1 = toc(start_time); fprintf('Computational time for serialized solution: %f\n',elapsed_time1); Above code gives 0.013654 elapsed time. On the other hand, I was tried to use pre-allocation by adding y = zeros(1,n); in the above code where the comment is but the running time is similar around ~0.01. Any ideas why? I was told it would improve by a factor of 2. Am I missing something? Lastly is there any type of vectorization in Matlab that will allow me to forget about the for loop in the above code? Thanks, A: In your code: try with n=10000 and you'll see more of a difference (a factor of almost 10 on my machine). These things related with allocation are most noticeable when the size of your variable is large. In that case it's more difficult for Matlab to dynamically allocate memory for that variable. To reduce the number of operations: do it vectorized, and reuse intermediate results to avoid powers: y = (1 + x.*(-3/5 + x.*(3/20 - x/60))) ./ (1 + x.*(2/5 - x/20)); Benchmarking: With n=100: Parag's / venergiac's solution: >> tic for count = 1:100 y=(1-(3/5)*x+(3/20)*x.^2 -(x.^3/60))./(1+(2/5)*x-(1/20)*x.^2); end toc Elapsed time is 0.010769 seconds. My solution: >> tic for count = 1:100 y = (1 + x.*(-3/5 + x.*(3/20 - x/60))) ./ (1 + x.*(2/5 - x/20)); end toc Elapsed time is 0.006186 seconds. A: You don't need a for loop. Replace the for loop with the following and MATLAB will handle it. y=(1-(3/5)*x+(3/20)*x.^2 -(x.^3/60))./(1+(2/5)*x-(1/20)*x.^2); This may give a computational advantage when vectors become larger in size. Smaller size is the reason why you cannot see the effect of pre-allocation. Read this page for additional tips on how to improve the performance. Edit: I observed that at larger sizes, n>=10^6, I am getting a constant performance improvement when I try the following: x=0:1/n:1; instead of using linspace. At n=10^7, I gain 0.05 seconds (0.03 vs 0.08) by NOT using linspace. A: try operation element per element (.*, .^) clear y; n=50000; x=linspace(0,1,n); % no y pre-allocation using zeros start_time=tic; for k=1:n, y(k) = (1-(3/5)*x(k)+(3/20)*x(k)^2 -(x(k)^3/60)) / (1+(2/5)*x(k)-(1/20)*x(k)^2); end elapsed_time1 = toc(start_time); fprintf('Computational time for serialized solution: %f\n',elapsed_time1); start_time=tic; y = (1-(3/5)*x+(3/20)*x.^2 -(x.^3/60)) / (1+(2/5)*x-(1/20)*x.^2); elapsed_time1 = toc(start_time); fprintf('Computational time for product solution: %f\n',elapsed_time1); my data Computational time for serialized solution: 2.578290 Computational time for serialized solution: 0.010060
{ "language": "en", "url": "https://stackoverflow.com/questions/21564052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Scala and Akka HTTP: Request inside a request & issue with threads I am new to learning Scala, Akka Streams and Akka HTTP, so apologies beforehand if the question is too basic. I want to do an HTTP request inside an HTTP request, just like in the following piece of code: implicit val system = ActorSystem("ActorSystem") implicit val materializer = ActorMaterializer import system.dispatcher val requestHandler: Flow[HttpRequest, HttpResponse, _] = Flow[HttpRequest].map { case HttpRequest(HttpMethods.GET, Uri.Path("/api"), _, _, _) => val responseFuture = Http().singleRequest(HttpRequest(uri = "http://www.google.com")) responseFuture.onComplete { case Success(response) => response.discardEntityBytes() println(s"The request was successful") case Failure(ex) => println(s"The request failed with: $ex") } //Await.result(responseFuture, 10 seconds) println("Reached HttpResponse") HttpResponse( StatusCodes.OK ) } Http().bindAndHandle(requestHandler, "localhost", 8080) But in the above case the result looks like this, meaning that Reached HttpResponse is reached first before completing the request: Reached HttpResponse The request was successful I tried using Await.result(responseFuture, 10 seconds) (currently commented out) but it made no difference. What am I missing here? Any help will be greatly appreciated! Many thanks in advance! A: map is a function that takes request and produces a response: HttpRequest => HttpResponse The challenge is that response is a type of Future. Therefore, you need a function that deals with it. The function that takes HttpRequest and returns Future of HttpResponse. HttpRequest => Future[HttpResponse] And voila, mapAsync is exactly what you need: val requestHandler: Flow[HttpRequest, HttpResponse, _] = Flow[HttpRequest].mapAsync(2) { case HttpRequest(HttpMethods.GET, Uri.Path("/api"), _, _, _) => Http().singleRequest(HttpRequest(uri = "http://www.google.com")).map (resp => { resp.discardEntityBytes() println(s"The request was successful") HttpResponse(StatusCodes.OK) }) }
{ "language": "en", "url": "https://stackoverflow.com/questions/61038711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Dealing with four digits: memcache sorts 1000 before 150 from least to greatest In app engine I retrieve a list of items stored in memcache: items = memcache.get("ITEMS") and sort them by amount and price: items.sort(key = lambda x:(x.price, x.amount)) Which works most of the time, when the amount is three digits. However, when I have 2 items with 150 and 1000 amounts for the same price, the entry with 1000 goes before other one. How can I fix this? A: fixed it: items.sort(key = lambda x:((float)(x.price), (int)(x.amount)))
{ "language": "en", "url": "https://stackoverflow.com/questions/21960862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Does CodeIgniter 3.1.7 support HMVC? I tried but had no luck. Following the very first answer of "How to implement HMVC in codeigniter 3.0?", I tried all the steps with CodeIgniter 3.1.7 but had no luck. I am still getting a 404. $config['modules_locations'] = array( APPPATH.'modules/' => '../modules/', ); Then I tried putting the above code in application/config/config.php but I still get a 404.
{ "language": "en", "url": "https://stackoverflow.com/questions/48606954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Saving time - compiling C++ and plotting at the same time with gnuplot Hi, in order to save time when I execute the code I run this line: g++ name.cpp && ./a.out where name.cpp is the name of the file that contains the code. If I subsequently need to plot some data generated by the executable with gnuplot, is there a way to add it to the previous line instead of writing: gnuplot plot "name2.dat" ? A: You can: g++ name.cpp && ./a.out && gnuplot -e "plot 'name2.dat'; pause -1" gnuplot exits when you hit return (see help pause for more options). If you want to start an interactive gnuplot session there is a dirty way I implemented: g++ name.cpp && ./a.out && gnuplot -e "plot 'name2.dat'" - (pay attention to the final minus sign)
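A slightly cleaner alternative (my addition, not from the answer above) is gnuplot's persist flag, which keeps the plot window open after the command finishes:

g++ name.cpp && ./a.out && gnuplot --persist -e "plot 'name2.dat'"
# or the short form:
g++ name.cpp && ./a.out && gnuplot -p -e "plot 'name2.dat'"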
{ "language": "en", "url": "https://stackoverflow.com/questions/50157509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How to put a hyphen in column Total Cost Value in Power BI I have a table with a blank value, but I want a hyphen "-" to appear when the value is null. I am using an expression similar to this: var VLGROUP = (Expression…… RETURN IF( ISBLANK(VLGROUP), BLANK(), VLGROUP) Does someone know if this is possible? Thanks!! A: Try the option below - Sales for the Group = var sales = CALCULATE( SUM(Financialcostcenter[amount]), Financialcostcenter[partnercompany]= "BRE", Financialcostcenter[2 digits]=71, DATESYTD('Datas'[Date]) ) + CALCULATE( SUM(Financialcostcenter[amount]), Financialcostcenter[partnercompany]= "GRM", Financialcostcenter[2 digits]=71, DATESYTD('Datas'[Date]) ) RETURN IF(sales = BLANK(),"-", -(sales))
{ "language": "en", "url": "https://stackoverflow.com/questions/63953889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Constraints on an embedded subclass - Grails, GORM, Mongo I've been pulling my hair out with this issue for a couple of days. I have an embedded subclass with several specified constraints. My issue is that these constraints are never enforced, I'm using grails 2.3.11 and the mongodb plugin 3.0.2. Here is my setup (Simplified slightly). Media class class Media{ ObjectId id; String name; Film film; static mapWith = "mongo" static embedded = ["film"] } Film Class class Film{ ObjectId id; String name; static mapWith = "mongo" static belongsTo = [media : Media] static mapping = { lazy:false } static constraints = { name(nullable:false) //works as expected. Save fails if name is set to null } } ActionFilm Class class ActionFilm extends Film{ int score; String director; //These constraints are never enforeced. No matter what value I set the fields to the save is always successful static constraints = { score(min:50) director(nullable:true) } } Is this an issue with Mongo and Gorm? Is it possible to have contraints in bth a parent and subclass? Example code when saving public boolean saveMedia(){ ActionFilm film = new ActionFilm() film.setName("TRON"); film.setScore(2) film.setDirector("Ted") Media media = new Media() media.setName("myMedia") media.setFilm(film) media.save(flush:true, failOnError:false) //Saves successfully when it shouldn't as the score is below the minimum constrains } Edit I've played aroubd some more and the issue only persits when I'm saving the Media object with ActionFilm as an embedded object. If I save the ActionFilm object the validation is applied. ActionFilm film = new ActionFilm() film.setName("TRON"); film.setScore(2) film.setDirector("Ted") film.save(flush:true, failOnError:false) //Doesn't save as the diameter is wrong. Expected behaviour. So the constraints are applied as expected when I save the ActionFilm object but aren't applied if its an embedded object. A: I've solved my issue in case anyone else comes across this. It may not be the optimal solution but I haven't found an alternative. I've added a custom validator to the Media class that calls validate() on the embedded Film class and adds any errors that arise to the Media objects errors class Media{ ObjectId id; String name; Film film; static mapWith = "mongo" static embedded = ["film"] static constraints = { film(validator : {Film film, def obj, def errors -> boolean valid = film.validate() if(!valid){ film.errors.allErrors.each {FieldError error -> final String field = "film" final String code = "$error.code" errors.rejectValue(field,code,error.arguments,error.defaultMessage ) } } return valid } ) }
{ "language": "en", "url": "https://stackoverflow.com/questions/26994126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: MySQL -> PHP Array -> JSON need output in array plus object format I am trying to fetch data from MySQL and show it in JSON format. This is the partial PHP code: $sql = "SELECT item, cost, veg, spicy_level FROM food1"; $result = $conn->query($sql); while($row = $result->fetch_assoc()) { echo json_encode($row),"<br/>";} ?> I am getting output as {"item":"dosa","cost":"20","veg":"0","spicy_level":"1"} {"item":"idli","cost":"20","veg":"0","spicy_level":"2"} but I need it as food1:[ {"item":"dosa","cost":"20","veg":"0","spicy_level":"1"}, {"item":"idli","cost":"20","veg":"0","spicy_level":"2"} ] Can anyone please guide me? I think what I am getting is in object format and I need the output in array format, i.e. with [ & ]. I am very new to JSON and PHP. A: You can encapsulate the query results in an array and then print it: $sql = "SELECT item, cost, veg, spicy_level FROM food1"; $result = $conn->query($sql); $a = array(); while($row = $result->fetch_assoc()) { if (!isset($a['food1'])) $a['food1'] = array(); array_push($a['food1'], $row); } echo json_encode($a); ?> A: Your code should be : $sql = "SELECT item, cost, veg, spicy_level FROM food1"; $result = $conn->query($sql); $food['food1'] = array(); while($row = $result->fetch_assoc()) { $food['food1'][] = $row; } echo json_encode($food); A: Don't call json_encode each time through the loop. Put all the rows into an array, and then encode that. $food = array(); while ($row = $result->fetch_assoc()) { $food[] = $row; } echo json_encode(array('food1' => $food));
{ "language": "en", "url": "https://stackoverflow.com/questions/28872111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Firebase hosting randomly shows "Site Not Found" at custom domain We recently launched our firebase application at https://tnb-widgets.firebaseapp.com/ and https://thenextbid.com/ (the last one being our custom domain). It all works smoothly except for some seemingly random moments in which it shows a page stating "Site Not Found". This already happened multiple times and after a couple of minutes the site seems to be back again. The last time this happened was at 2:37AM GMT-5 and the last time I deployed a new release to this same firebase hosting project was at 3:45PM the day before. This release also contained 80 files in total, so it cannot possibly be "an empty directory". Our firebase.json file looks like this: { "firestore": { "rules": "firestore.rules", "indexes": "firestore.indexes.json" }, "hosting": { "public": "build", "ignore": [ "firebase.json", "**/.*", "**/node_modules/**" ], "rewrites": [ { "source": "/api/**", "function": "app" }, { "source": "**", "destination": "/index.html" } ] }, "storage": { "rules": "storage.rules" } } There's no service workers registered. The "build" folder contains 80 files and most importantly it contains the "index.html" at its root. Does anyone have similar issues? I would appreciate any idea to solve this! Thanks.
{ "language": "en", "url": "https://stackoverflow.com/questions/52877497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Reduce Time for Checking Out Code in Visual Studio Online I'm trying out VSO and it's taking over 2 minutes to sync with a GitHub repository. It appears that it's checking out the whole thing on every build. I made sure that the "clean" box is unchecked but it had no effect. Any ideas on how to get it to cache the source, or is this even possible in VSO? A: Each build in VSO uses a new VM that is spun up just for your build. Short of hosting your own build server connected to your VSO, I don't think it can be avoided. Unless there are ways to speed up the process of downloading the code from a git repo, I think you're stuck.
{ "language": "en", "url": "https://stackoverflow.com/questions/31390786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: How to skip a line when reading from a file I am reading stuff from a file and this is the format : c stands for circle and the double is the radius, r for rectangle and the double is width and height respectively and t for triangle and the double represents side length: c 12 c 2 r 3 4 c 2.4 t 2.9 3 c // wrong format t 2.9 10 r // wrong format I run this code: ifstream infile(names); while(infile >> names) { if(names.at(0) == 'c') { double r; infile >> r; cout << "radius = " << r << endl; } else if(names.at(0) == 'r') { double w; double h; infile >> w; infile >> h; cout << "width = " << w << ", height = " << h << endl; } else if(names.at(0) == 't') { double s; infile >> s; cout << "side = " << s << endl; } else { continue; } } infile.close() And this is the output: radius = 12 radius = 2 width = 3, height = 4 radius = 2.4 side = 2.9 radius = 0 I was wondering how I can skip the wrong format line. I have tried using geline but still no luck EDIT: radius, height, width and side have to be > 0 A: The biggest potential problems you are running up against is the fact that you assume a line is valid if it begins with a c, t or r without first validating the remainder of the line matches the format for a circle, triangle or rectangle. While not fatal to this data set, what happens if one of the lines was 'cat' or 'turtle'? By failing to validate all parts of the line fit the "mold" so to speak, you risk attempting to output values of r, h & w or s that were not read from the file. A simple conditional check of the read to catch the potential failbit or badbit will let you validate you have read what you think you read. The remainder is basically semantics of whether you use the niceties of C++ like a vector of struct for rectangles and whether you use a string instead of char*, etc. However, there are certain benefits of using a string to read/validate the remainder of each line (or you could check the stream state and use .clear() and .ignore()) Putting those pieces together, you can do something like the following. Note, there are many, many different approaches you can take, this is just one approach, #include <iostream> #include <fstream> #include <sstream> #include <string> #include <vector> using namespace std; typedef struct { /* simple typedef for vect of rectangles */ int width, height; } rect_t; int main (int argc, char **argv) { vector<double> cir; /* vector of double for circle radius */ vector<double> tri; /* vector of double for triangle side */ vector<rect_t> rect; /* vector of rect_t for rectangles */ string line; /* string to use a line buffer */ if (argc < 2) { /* validate at least one argument given */ cerr << "error: insufficient input.\n" "usage: " << argv[0] << " filename\n"; return 1; } ifstream f (argv[1]); /* open file given by first argument */ if (!f.is_open()) { /* validate file open for reading */ cerr << "error: file open failed '" << argv[1] << "'.\n"; return 1; } while (getline (f, line)) { /* read each line into 'line' */ string shape; /* string for shape */ istringstream s (line); /* stringstream to parse line */ if (s >> shape) { /* if shape read */ if (shape == "c") { /* is it a "c"? 
*/ double r; /* radius */ string rest; /* string to read rest of line */ if (s >> r && !getline (s, rest)) /* radius & nothing else */ cir.push_back(r); /* add radius to cir vector */ else /* invalid line for circle, handle error */ cerr << "error: invalid radius or unexpected chars.\n"; } else if (shape == "t") { double l; /* side length */ string rest; /* string to read rest of line */ if (s >> l && !getline (s, rest)) /* length & nothing else */ tri.push_back(l); /* add length to tri vector */ else /* invalid line for triangle, handle error */ cerr << "error: invalid triangle or unexpected chars.\n"; } else if (shape == "r") { /* is it a rect? */ rect_t tmp; /* tmp rect_t */ if (s >> tmp.width && s >> tmp.height) /* tmp & nohtin else */ rect.push_back(tmp); /* add to rect vector */ else /* invalid line for rect, handle error */ cerr << "error: invalid width & height.\n"; } else /* line neither cir or rect, handle error */ cerr << "error: unrecognized shape '" << shape << "'.\n"; } } cout << "\nthe circles are:\n"; /* output valid circles */ for (auto& i : cir) cout << " c: " << i << "\n"; cout << "\nthe triangles are:\n"; /* output valid triangles */ for (auto& i : tri) cout << " t: " << i << "\n"; cout << "\nthe rectangles are:\n"; /* output valid rectangles */ for (auto& i : rect) cout << " r: " << i.width << " x " << i.height << "\n"; } By storing values for your circles, triangles and rectangles independent of each other, you then have the ability to handle each type of shape as its own collection, e.g. Example Use/Output $ ./bin/read_shapes dat/shapes.txt error: unrecognized shape '3'. error: unrecognized shape '10'. the circles are: c: 12 c: 2 c: 2.4 the triangles are: t: 2.9 t: 2.9 the rectangles are: r: 3 x 4 Look things over and let me know if you have further questions. The main takeaway is to insure you validate down to the point you can insure what you have read is either a round-peg to fit in the circle hole, a square-peg to fit in a square hole, etc.. A: The only thing I added was a getline where I put a comment at the "else" of the loop. while(infile >> names) { if(names.at(0) == 'c') { double r; infile >> r; cout << "radius = " << r << endl; } else if(names.at(0) == 'r') { double w; double h; infile >> w; infile >> h; cout << "width = " << w << ", height = " << h << endl; } else if(names.at(0) == 't') { double s; infile >> s; cout << "side = " << s << endl; } else { // discard of the rest of the line using getline() getline(infile, names); //cout << "discard: " << names << endl; } } Output: radius = 12 radius = 2 width = 3, height = 4 radius = 2.4 side = 2.9 side = 2.9
{ "language": "en", "url": "https://stackoverflow.com/questions/49227626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to filter post by join condition? I have a table called wpps_posts which have this structure: ID | post_title | post_type 1 foo zoacres-property 2 foo2 zoacres-property 3 foo3 post I would like to return all the posts with type zoacres-property and also I want filter them by price. Each price is stored inside the table wp_postmeta: meta_id | post_id | meta_key | meta_value 100 2 price 5000 100 1 price 0 How can I order all the posts by price ASC? I'm stuck with the following query: SELECT * FROM wpps_posts p INNER JOIN wpps_posts wp ON wp.ID = p.ID WHERE p.post_type = 'zoacres-property' ORDER BY wp.meta?? EXPECTED RESULT: ID | post_title | post_type 1 foo zoacres-property 2 foo2 zoacres-propertY A: SELECT * FROM wpps_posts p INNER JOIN wp_postmeta wp ON wp.post_ID = p.ID AND wp.meta_key='price' WHERE p.post_type = 'zoacres-property' ORDER BY wp.meta_value asc A: You could do something like this, depends what other type of meta type records you have. SELECT * FROM wpps_posts LEFT JOIN wp_postmeta ON wp_postmeta.post_id = wpps_posts.ID AND wp_postmeta.meta_key = 'price' WHERE wpps_posts.post_type = 'zoacres-property' ORDER BY wp_postmeta.meta_value
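A side note on the join approach: with an INNER JOIN, any property that has no price row disappears from the result, and because meta_value is stored as text it sorts lexically ("1000" before "500"). A sketch that keeps such rows and orders numerically, using the same table and column names as above:

SELECT p.ID, p.post_title, wp.meta_value AS price
FROM wpps_posts p
LEFT JOIN wp_postmeta wp
       ON wp.post_id = p.ID
      AND wp.meta_key = 'price'
WHERE p.post_type = 'zoacres-property'
ORDER BY CAST(wp.meta_value AS DECIMAL(12,2)) ASC;  -- cast so prices sort as numbers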
{ "language": "en", "url": "https://stackoverflow.com/questions/63576155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Convert the existing nested dictionary output in string to a list to iterate over it I have a rest api which provides a list of key value pair's and we need to fetch all the id's from this json output file. Contents of the json file { "count": 6, "results": [ { "key": "roles", "id": "1586230" }, { "key": "roles", "id": "1586951" }, { "key": "roles", "id": "1586932" }, ], "roles": { "1586230": { "name": "Systems Engineer", "deleted_at": null, "created_at": "2022-04-22T03:22:24-07:00", "updated_at": "2022-04-22T03:22:24-07:00", "id": "1586230" }, "1586951": { "name": "Engineer- Software", "deleted_at": null, "created_at": "2022-05-05T01:51:29-07:00", "updated_at": "2022-05-05T01:51:29-07:00", "id": "1586951" }, "1586932": { "name": "Engineer- SW", "deleted_at": null, "created_at": "2022-05-05T01:38:37-07:00", "updated_at": "2022-05-05T01:38:37-07:00", "id": "1586932" }, }, "meta": { "count": 6, "page_count": 5, "page_number": 1, "page_size": 20 } } The rest call saves the contents to a file called p1234.json Opened the file in python: with open ('p1234.json') as file: data2 = json.load(file) for ids in data2['results']: res= ids['id'] print(res) Similarly with open ('p1234.json') as file: data2 = json.load(file) for role in data2['roles']: res= roles['name'] print(res) fails with errors. How to iterate over a nested array do I can only get the values of names listed in roles array roles --> 1586230 --> name --> System Engineer Thank you A: You have to loop over the items of the dictionary. for key, value in data2['roles'].items(): res= value['name'] print(res) A: There is nothing wrong with your code, I run it and I didn't get any error. The problem that I see though is your Json file, some commas shouldn't be there: { "count": 6, "results": [ { "key": "roles", "id": "1586230" }, { "key": "roles", "id": "1586951" }, { "key": "roles", "id": "1586932" } \\ here ], "roles": { "1586230": { "name": "Systems Engineer", "deleted_at": null, "created_at": "2022-04-22T03:22:24-07:00", "updated_at": "2022-04-22T03:22:24-07:00", "id": "1586230" }, "1586951": { "name": "Engineer- Software", "deleted_at": null, "created_at": "2022-05-05T01:51:29-07:00", "updated_at": "2022-05-05T01:51:29-07:00", "id": "1586951" }, "1586932": { "name": "Engineer- SW", "deleted_at": null, "created_at": "2022-05-05T01:38:37-07:00", "updated_at": "2022-05-05T01:38:37-07:00", "id": "1586932" } \\ here }, "meta": { "count": 6, "page_count": 5, "page_number": 1, "page_size": 20 } after that any parsing function will do the job.
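For the narrower goal stated at the end (roles --> id --> name), a short sketch that walks the results list and looks each id up in the roles object, using the file name from the question:

import json

with open('p1234.json') as fh:
    data2 = json.load(fh)

for entry in data2['results']:
    role_id = entry['id']
    role = data2['roles'].get(role_id, {})   # {} if the id has no matching role entry
    print(role_id, '-->', role.get('name'))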
{ "language": "en", "url": "https://stackoverflow.com/questions/72434671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: how to store Image profile(jpg file) and PDF documents in AWS DynamoDB? I am migrating my Spring MVC services to the AWS API Gateway using a Python Lambda with DynamoDB. I have an endpoint where I can store or retrieve people's profile images and also reports, which are PDF files. Can you please suggest the best practice for storing images and PDF files in AWS? Your help is really appreciated!! A: Keep in mind that DynamoDB has a 400KB limit on each item. I would recommend using S3 for images and PDF documents. It also allows you to set up a CDN much more easily than something like DynamoDB. You can always store the S3 link in an item in DynamoDB if you need to keep data related to the file. A: AWS DynamoDB limits row size to a maximum of 400KB, so it is not advisable to store the binary content of an image/PDF document in a column directly. Instead, you should store the image/PDF in S3 and keep the link in a column in DynamoDB. If you were using Java, you could leverage the S3Link abstraction, which takes care of storing the content in S3 and maintaining the link in a DynamoDB column.
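A minimal sketch of the pattern both answers describe, written for a Python Lambda with boto3 (the bucket, table and key names here are made up for illustration): upload the binary to S3, then keep only the key/URL in DynamoDB.

import boto3

s3 = boto3.client('s3')
table = boto3.resource('dynamodb').Table('people')    # assumed table name

bucket = 'my-profile-assets'                          # assumed bucket name
key = 'profiles/123/avatar.jpg'

# store the binary object in S3 ...
s3.upload_file('/tmp/avatar.jpg', bucket, key)

# ... and store only the reference alongside the person's other attributes
table.put_item(Item={
    'person_id': '123',
    'image_key': key,
    'image_url': f'https://{bucket}.s3.amazonaws.com/{key}',
})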
{ "language": "en", "url": "https://stackoverflow.com/questions/47903547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Split a column after specific characters I have a field of data in a MySQL db. For example quot_number ==================== UMAC/ARC/161299/801 UMAC/LAK/151542/1051 UMAC/LAK/150958/00050 I am expecting output as below: 801 1051 00050 In other words, the characters after the last '/' have to be returned by my SQL query. Is there any way to achieve this? I tried something like this, but did not get the expected result: LEFT(quotation.quot_number, 16) as quot_number4 right(quot_number,((CHAR_LENGTH(quot_number))-(InStr(quot_number,',')))) as quot_number5 A: Use the substring_index function. select substring_index(quot_number, '/', -1) from yourtable
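Applied to the sample data, the full query could look like this (the quotation table name is taken from the attempt in the question):

SELECT quot_number,
       SUBSTRING_INDEX(quot_number, '/', -1) AS quot_suffix
FROM quotation;
-- UMAC/ARC/161299/801    -> 801
-- UMAC/LAK/151542/1051   -> 1051
-- UMAC/LAK/150958/00050  -> 00050

SUBSTRING_INDEX(str, '/', -1) returns everything after the last '/', so leading zeros such as 00050 are preserved because the result stays a string.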
{ "language": "en", "url": "https://stackoverflow.com/questions/41219251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: HTTPS connection using PEM Certificate I'm trying to POST HTTPS requests using a PEM certificate like following: import httplib CERT_FILE = '/path/certif.pem' conn = httplib.HTTPSConnection('10.10.10.10','443', cert_file =CERT_FILE) conn.request("POST", "/") response = conn.getresponse() print response.status, response.reason conn.close() I have the following error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.6/httplib.py", line 914, in request self._send_request(method, url, body, headers) File "/usr/lib/python2.6/httplib.py", line 951, in _send_request self.endheaders() File "/usr/lib/python2.6/httplib.py", line 908, in endheaders self._send_output() File "/usr/lib/python2.6/httplib.py", line 780, in _send_output self.send(msg) File "/usr/lib/python2.6/httplib.py", line 739, in send self.connect() File "/usr/lib/python2.6/httplib.py", line 1116, in connect self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) File "/usr/lib/python2.6/ssl.py", line 338, in wrap_socket suppress_ragged_eofs=suppress_ragged_eofs) File "/usr/lib/python2.6/ssl.py", line 118, in __init__ cert_reqs, ssl_version, ca_certs) ssl.SSLError: [Errno 336265225] _ssl.c:339: error:140B0009:SSL routines:**SSL_CTX_use_PrivateKey_file**:PEM lib When I remove the cert_file from httplib, I've the following response: 200 ok When I add the Authentication header (like advised by MattH) with empty post payload, it works also. However, when I put the good request with the Path, the Body and the Header, like following (I simplified them...) body = '<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">blablabla</S:Envelope>' URLprov = "/syncaxis2/services/XXXsyncService" auth_header = 'Basic %s' % (":".join(["xxx","xxxxx"]).encode('Base64').strip('\r\n')) conn.request("POST",URLprov,body,{'Authenticate':auth_header}) I have 401 Unauthorized response ! As you can see, first, I'm asked to provide the PrivateKey ! why did I need the PrivateKey if I'm a client ? then, when I remove the PrivateKey and the certificate, and put the Path/Body/headers I have 401 Unauthorized error with the message WWW-Authenticate: Basic realm="SYNCNB Server Realm". Could any one explain this issue? Is there another way to send HTTPS request using a certificate in Python? A: It sounds like you need something similar to an answer I have provided before to perform simple client certificate authentication. Here is the code for convenience modified slightly for your question: import httplib import urllib2 PEM_FILE = '/path/certif.pem' # Renamed from PEM_FILE to avoid confusion CLIENT_CERT_FILE = '/path/clientcert.p12' # This is your client cert! # HTTPS Client Auth solution for urllib2, inspired by # http://bugs.python.org/issue3466 # and improved by David Norton of Three Pillar Software. In this # implementation, we use properties passed in rather than static module # fields. 
class HTTPSClientAuthHandler(urllib2.HTTPSHandler): def __init__(self, key, cert): urllib2.HTTPSHandler.__init__(self) self.key = key self.cert = cert def https_open(self, req): #Rather than pass in a reference to a connection class, we pass in # a reference to a function which, for all intents and purposes, # will behave as a constructor return self.do_open(self.getConnection, req) def getConnection(self, host): return httplib.HTTPSConnection(host, key_file=self.key, cert_file=self.cert) cert_handler = HTTPSClientAuthHandler(PEM_FILE, CLIENT_CERT_FILE) opener = urllib2.build_opener(cert_handler) urllib2.install_opener(opener) f = urllib2.urlopen("https://10.10.10.10") print f.code A: See http://docs.python.org/library/httplib.html httplib.HTTPSConnection does not do any verification of the server’s certificate. The option to include your private certificate is when the server is doing certificate based authentication of clients. I.e. the server is checking the client has a certificate signed by a CA that it trusts and is allowed to access it's resources. If you don't specify the cert optional argument, you should be able to connect to the HTTPS server, but not validate the server certificate. Update Following your comment that you've tried basic auth, it looks like the server still wants you to authenticate using basic auth. Either your credentials are invalid (have you independently verified them?) or your Authenticate header isn't formatted correctly. Modifying your example code to include a basic auth header and an empty post payload: import httplib conn = httplib.HTTPSConnection('10.10.10.10','443') auth_header = 'Basic %s' % (":".join(["myusername","mypassword"]).encode('Base64').strip('\r\n')) conn.request("POST", "/","",{'Authorization':auth_header}) response = conn.getresponse() print response.status, response.reason conn.close() A: What you're doing is trying to connect to a Web service that requires authentication based on client certificate. Are you sure you have a PEM file and not a PKCS#12 file? 
A PEM file looks like this (yes, I know I included a private key...this is just a dummy that I generated for this example): -----BEGIN RSA PRIVATE KEY----- MIICXQIBAAKBgQDDOKpQZexZtGMqb7F1OMwdcFpcQ/pqtCoOVCGIAUxT3uP0hOw8 CZNjLT2LoG4Tdl7Cl6t66SNzMVyUeFUrk5rkfnCJ+W9RIPkht3mv5A8yespeH27x FjGVbyQ/3DvDOp9Hc2AOPbYDUMRmVa1amawxwqAFPBp9UZ3/vfU8nxwExwIDAQAB AoGBAMCvt3svfr9zysViBTf8XYtZD/ctqYeUWEZYR9hj36CQyVLZuAnyMaWcS7j7 GmrfVNygs0LXxoO2Xvi0ZOxj/mZ6EcZd8n37LxTo0GcWvAE4JjPr7I4MR2OvGYa/ 1696e82xwEnUdpyBv9z3ebleowQ1UWP88iq40oZYukUeignRAkEA9c7MABi5OJUq hf9gwm/IBie63wHQbB2wVgB3UuCYEa4Zd5zcvJIKz7NfhsZKKcZJ6CBVxwUd84aQ Aue2DRwYQwJBAMtQ5yBA8howP2FDqcl9sovYR0jw7Etb9mwsRNzJwQRYYnqCC5yS nOaNn8uHKzBcjvkNiSOEZFGKhKtSrlc9qy0CQQDfNMzMHac7uUAm85JynTyeUj9/ t88CDieMwNmZuXZ9P4HCuv86gMcueex5nt/DdVqxXYNmuL/M3lkxOiV3XBavAkAA xow7KURDKU/0lQd+x0X5FpgfBRxBpVYpT3nrxbFAzP2DLh/RNxX2IzAq3JcjlhbN iGmvgv/G99pNtQEJQCj5AkAJcOvGM8+Qhg2xM0yXK0M79gxgPh2KEjppwhUmKEv9 o9agBLWNU3EH9a6oOfsZZcapvUbWIw+OCx5MlxSFDamg -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIDfjCCAuegAwIBAgIJAOYJ/e6lsjrUMA0GCSqGSIb3DQEBBQUAMIGHMQswCQYD VQQGEwJVUzELMAkGA1UECBMCRkwxDjAMBgNVBAcTBVRhbXBhMRQwEgYDVQQKEwtG b29iYXIgSW5jLjEQMA4GA1UECxMHTnV0IEh1dDEXMBUGA1UEAxMOd3d3LmZvb2Jh ci5jb20xGjAYBgkqhkiG9w0BCQEWC2Zvb0BiYXIuY29tMB4XDTExMDUwNTE0MDk0 N1oXDTEyMDUwNDE0MDk0N1owgYcxCzAJBgNVBAYTAlVTMQswCQYDVQQIEwJGTDEO MAwGA1UEBxMFVGFtcGExFDASBgNVBAoTC0Zvb2JhciBJbmMuMRAwDgYDVQQLEwdO dXQgSHV0MRcwFQYDVQQDEw53d3cuZm9vYmFyLmNvbTEaMBgGCSqGSIb3DQEJARYL Zm9vQGJhci5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMM4qlBl7Fm0 YypvsXU4zB1wWlxD+mq0Kg5UIYgBTFPe4/SE7DwJk2MtPYugbhN2XsKXq3rpI3Mx XJR4VSuTmuR+cIn5b1Eg+SG3ea/kDzJ6yl4fbvEWMZVvJD/cO8M6n0dzYA49tgNQ xGZVrVqZrDHCoAU8Gn1Rnf+99TyfHATHAgMBAAGjge8wgewwHQYDVR0OBBYEFHZ+ CPLqn8jlT9Fmq7wy/kDSN8STMIG8BgNVHSMEgbQwgbGAFHZ+CPLqn8jlT9Fmq7wy /kDSN8SToYGNpIGKMIGHMQswCQYDVQQGEwJVUzELMAkGA1UECBMCRkwxDjAMBgNV BAcTBVRhbXBhMRQwEgYDVQQKEwtGb29iYXIgSW5jLjEQMA4GA1UECxMHTnV0IEh1 dDEXMBUGA1UEAxMOd3d3LmZvb2Jhci5jb20xGjAYBgkqhkiG9w0BCQEWC2Zvb0Bi YXIuY29tggkA5gn97qWyOtQwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOB gQAv13ewjgrIsm3Yo8tyqTUHCr/lLekWcucClaDgcHlCAH+WU8+fGY8cyLrFFRdk 4U5sD+P313Adg4VDyoocTO6enA9Vf1Ar5XMZ3l6k5yARjZNIbGO50IZfC/iblIZD UpR2T7J/ggfq830ACfpOQF/+7K+LgFLekJ5dIRuD1KKyFg== -----END CERTIFICATE----- Read this question.
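Two follow-up notes. First, the header used in the question is Authenticate, while the standard request header for Basic credentials is Authorization; that alone would explain the 401 even with valid credentials. Second, if third-party packages are an option, the requests library expresses a client certificate plus Basic auth much more compactly. A sketch only — the paths and credentials below are placeholders based on the question:

import requests

body = '<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">blablabla</S:Envelope>'

resp = requests.post(
    'https://10.10.10.10/syncaxis2/services/XXXsyncService',
    data=body,
    cert=('/path/client_cert.pem', '/path/client_key.pem'),  # client certificate and its private key
    auth=('xxx', 'xxxxx'),                                    # sent as an Authorization: Basic header
    verify=False,                                             # no server-cert validation, like httplib
)
print(resp.status_code, resp.reason)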
{ "language": "en", "url": "https://stackoverflow.com/questions/5896380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: TFS configure variables on Release definition instantiation I have a release definition set up with several tasks. When a developer wants to create a release from this definition, I'd like to give them the option of selecting which features they'd like to release (turn tasks on/off). Ideally this would be via the Create Release dialog using a variable or similar. Can this be done? Or is the only way to achieve this to create a draft release and enable/disable the tasks on each environment? I believe this is prone to error (toggle a task on one environment but forget to on another), and it is not an option anyway because the administrator has locked editing of definitions (to prevent incorrect setup of production releases). I understand I can create separate release definitions to cover the options, but that seems like a lot of duplication. A: Unfortunately, this is not supported in TFS currently. The workarounds are just as you mentioned above: disable and enable those steps, or use a draft release. This is a user voice item about your request that you could vote for: https://visualstudio.uservoice.com/forums/330519-team-services/suggestions/19165690-select-steps-when-create-release
{ "language": "en", "url": "https://stackoverflow.com/questions/43777906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: AWS S3 Internal Server Error: Tyring to upload pdf file on after another I'm trying to generate PDF file using FPDF and PHP and later upload to AWS-s3 and generate the url. When I executed the below code in my local-machine Using XAMPP it's generating files and uploading it to S3. But When I deployed in AWS server as an API it's uploading only first file and for other its giving 500 server error Below is my code that I used in Local Machine require '../fpdf/fpdf.php'; require 'start.php'; use Aws\Exception\AwsException; $location = "bangalore"; $db = getConnection(); $query = "SELECT first_name FROM customer_info where location = '$location'"; $execute = $db->query($query); $result = $execute->fetchAll(PDO::FETCH_ASSOC); for($i = 0; $i < $len; $i++) { $pdf = new FPDF(); $pdf->AddPage(); $pdf->SetFont('Arial', 'B', 14); $txt = "Legal Document of ".$result[$i]['first_name']; $pdf->Cell(180, 0, $txt, 0, 1, 'C'); $pdf->Line(5, 20, 200, 20); $docname = $result[$i]['first_name'] . ".pdf"; $filepath = "../file/{$docname}"; $pdf->Output($filepath, 'F'); //s3 client try { $res = $S3->putObject([ 'Bucket' => $config['S3']['bucket'], 'Key' => "PATH_TO_THE_DOCUMENT/{$docname}", 'Body' => fopen($filepath, 'rb'), 'ACL' => 'public-read' ]); var_dump($res["ObjectURL"]); } catch (S3Exception $e) { echo $e->getMessage() . "\n"; } } Output: Array ( [0] => Array ( [first_name] => Mohan ) [1] => Array ( [first_name] => Prem ) [2] => Array ( [first_name] => vikash ) [3] => Array ( [first_name] => kaushik ) ) string(70) "https://streetsmartb2.s3.amazonaws.com/PATH_TO_THE FILE/Mohan.pdf" string(72) "https://streetsmartb2.s3.amazonaws.com/PATH_TO_THE FILE/Prem%20.pdf" API CODE //pdf generation another api $app->post('/pdfmail', function() use($app){ //post parameters $location = $app->request->post('loc'); $id = $app->request->post('id'); $db = getConnection(); $query = "SELECT * FROM customer_info where location = '$location'"; $execute = $db->query($query); $result = $execute->fetchAll(PDO::FETCH_ASSOC); $len = sizeof($result); //request array $request = array(); if($result != Null) { for($i = 0; $i < $len; $i++) { $pdf = new FPDF(); $pdf->AddPage(); $pdf->SetFont('Arial', 'B', 14); $txt = "Document of Mr." . $result[$i]['first_name']; $pdf->Cell(180, 0, $txt, 0, 1, 'C'); $pdf->Line(5, 20, 200, 20); $docname = $result[$i]['first_name'] . ".pdf"; var_dump($docname); $filepath = "../file/{$docname}"; var_dump($filepath); $pdf->Output($filepath, 'F'); //s3 client require '../aws/aws-autoloader.php'; $config = require('config.php'); //create s3 instance $S3 = S3Client::factory([ 'version' => 'latest', 'region' => 'REGION', 'credentials' => array( 'key' => $config['S3']['key'], 'secret' => $config['S3']['secret'] ) ]); try { $res = $S3->putObject([ 'Bucket' => $config['S3']['bucket'], 'Key' => "PATH_TO_FILE{$docname}", 'Body' => fopen($filepath, 'rb'), 'ACL' => 'public-read' ]); var_dump($res["ObjectURL"]); } catch (S3Exception $e) { echo $e->getMessage() . "\n"; } } } OUTPUT WHEN TESTED IN POSTMAN string(10) "vikash.pdf" string(18) "../file/vikash.pdf" string(71) "https://streetsmartb2.s3.amazonaws.com/PATH_TO_FILE/vikash.pdf" string(13) "pradeepan.pdf" string(21) "../file/pradeepan.pdf" After this I'm getting internal server error. A: Instead of using require_once I've use require...So as the code traversed it was creating AWS Class again and again. 
Code //s3 client require '../aws/aws-autoloader.php'; //use require_once $config = require('config.php'); //create s3 instance $S3 = S3Client::factory([ 'version' => 'latest', 'region' => 'REGION', 'credentials' => array( 'key' => $config['S3']['key'], 'secret' => $config['S3']['secret'] ) ]);
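A slightly fuller sketch of the same fix: include the autoloader with require_once and construct the client once, before the loop, then reuse it for every upload (configuration keys as in the question; S3Client still needs its use Aws\S3\S3Client; import at the top of the file):

require_once '../aws/aws-autoloader.php';   // loaded once, no matter how often this code path runs
$config = require 'config.php';

$S3 = S3Client::factory([
    'version'     => 'latest',
    'region'      => 'REGION',
    'credentials' => [
        'key'    => $config['S3']['key'],
        'secret' => $config['S3']['secret'],
    ],
]);

foreach ($result as $row) {
    // build the PDF for $row here, then call $S3->putObject([...]) exactly as before
}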
{ "language": "en", "url": "https://stackoverflow.com/questions/48921630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Auto-generate @version value in javadoc For the @version tag in javadoc, I use the same value as in BuildConfig.VERSION_NAME. I would like to inject that value, instead of changing every file for each release. I tried: * @version {@value BuildConfig#VERSION_NAME} and * @version @versionName (and add -tag versionName:a:"2.2.2") but none of these works. I could run sed just before the doc gets generated, but I would rather prefer something 'officially' supported. Any ideas how to solve this? A: For the second form, you can put your custom tag at the beginning of a javadoc line. /** * This is a class of Foo<br/> * * @version * * @configVersion. */ Then use command javadoc -version -tag configVersion.:a:"2.2.2" to generate your javadoc, the custom tag should be handled in this way. Note the last dot(.) character in custom tag name, as the command javadoc suggests Note: Custom tags that could override future standard tags: @configVersion. To avoid potential overrides, use at least one period character (.) in custom tag names.
{ "language": "en", "url": "https://stackoverflow.com/questions/58002547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to retrieve data from MongoDb Atlas and display in an ejs file using mongoose and Nodejs Thanks for the help in advance. I am trying to retrieve data from my database- myFirstDatabase and collection named as 'shipment' in mongondb. This is a nested schema but I am only interested in the parent data for now. I have this code which retrieves data to the console log. But how can I display or access the data in my orders.ejs file? Shipment.find({}, function (err, data) { if (err) throw err console.log(data) }) MongoDB connected... [ { _id: new ObjectId("61353311261da54811ee0ca5"), name: 'Micky Mouse', phone: '5557770000', email: 'g@gmail.com', address: { address: '10 Merrybrook Drive', city: 'Herndon', state: 'Virginia', zip: '21171', country: 'United States', _id: new ObjectId("61353311261da54811ee0ca6") }, items: { car: 'Honda Pilot 2018', boxItem: '3 uHaul boxes', furniture: 'None', electronics: '1 50" Samsung TV', bags: '2 black suites cases', _id: new ObjectId("61353311261da54811ee0ca7") }, date: 2021-09-05T21:13:53.484Z, __v: 0 } ] This is the ejs file, a table I am trying to populate the data i get from my mongodb <div class="mt-5 m-auto> <h3 class="mt-5">This is the order table</h3> <%- include ("./partials/messages"); %> <div class="col-sm"> <table class="table table-striped table-hover"> <thead> <tr> <th>#</th> <th>Customer</th> <th>Address</th> <th>City</th> <th>State</th> <th>Zip</th> <th>Phone</th> <th>Status</th> </tr> </thead> <tbody> <tr class="success"> <td>1</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> </tbody> </table> </div> </div> A: server.js const express = require('express') const mongoose = require('mongoose') const Shipment= require('./models/shipment') const app = express() mongoose.connect('mongodb://localhost/myFirstDatabase ', { useNewUrlParser: true, useUnifiedTopology: true }) app.set('view engine', 'ejs') app.use(express.urlencoded({ extended:false })) app.get('/', async (req,res) => { const data= await Shipment.find() res.render('index', { data: data}) }) app.listen(process.env.PORT || 5000); index.ejs below is part of your ejs file <table class="table table-striped table-hover"> <thead> <tr> <th>#</th> <th>Customer</th> <th>Address</th> <th>City</th> <th>State</th> <th>Zip</th> <th>Phone</th> <th>Status</th> </tr> </thead> <tbody> <% data.forEach((e, index)=> { %> <tr> <td><%= index %></td> <td><%= e.Customer %></td> <td><%= e.Address %></td> <td><%= e.City %></td> <td><%= e.State %></td> <td><%= e.Zip %></td> <td><%= e.Phone %></td> <td><%= e.Status %></td> </tr> <% }) %> </tbody> </table>
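One caveat about the template above: the shipment documents in the question expose name, phone and a nested address object, not Customer/Address/City properties, so those cells would render empty. A corrected row mapping the actual schema (the Status column has no counterpart in the schema, so the placeholder below is an assumption):

<% data.forEach((e, index) => { %>
  <tr>
    <td><%= index + 1 %></td>
    <td><%= e.name %></td>
    <td><%= e.address.address %></td>
    <td><%= e.address.city %></td>
    <td><%= e.address.state %></td>
    <td><%= e.address.zip %></td>
    <td><%= e.phone %></td>
    <td><%= e.status || 'pending' %></td> <!-- assumed field, not in the schema -->
  </tr>
<% }) %>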
{ "language": "en", "url": "https://stackoverflow.com/questions/69067410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: node.js app running using forever inaccessible after a while I have a node.js server and a Java client communicating using socket.io. I use this API https://github.com/Gottox/socket.io-java-client for the Java client. I am using the forever module to run my server. Everything works well, but after some time my server becomes inaccessible and I need to restart it. Also, most of the time I need to update/edit my node.js server file in order to get the server running again after the restart. It's been two weeks already and I am still restarting my server repeatedly :(. Has anyone run into the same problem? Any solution or advice would be appreciated. Thanks
{ "language": "en", "url": "https://stackoverflow.com/questions/17628274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: No 'Access-Control-Allow-Origin' header is present on the requested resource. Laravel 5.4 with cors package Hi I was following this tutorial regarding Laravel and VueJs communication. https://www.youtube.com/watch?v=5hOMkFMxY90&list=PL3ZhWMazGi9IommUd5zQmjyNeF7s1sP7Y&index=8 I have done exactly like it was said in the tutorial. It uses a CORS package https://github.com/barryvdh/laravel-cors/ I have added the service provider middlewares everything as it was told in the tutorial but it just doesnt seem to work. I have tried it in Laravel 5.4 and Laravel 5.3 as well. This is my RouetServiceProvider: namespace App\Providers; use Illuminate\Support\Facades\Route; use Illuminate\Foundation\Support\Providers\RouteServiceProvider as ServiceProvider; class RouteServiceProvider extends ServiceProvider { /** * This namespace is applied to your controller routes. * * In addition, it is set as the URL generator's root namespace. * * @var string */ protected $namespace = 'App\Http\Controllers'; /** * Define your route model bindings, pattern filters, etc. * * @return void */ public function boot() { // parent::boot(); } /** * Define the routes for the application. * * @return void */ public function map() { $this->mapApiRoutes(); $this->mapWebRoutes(); // } /** * Define the "web" routes for the application. * * These routes all receive session state, CSRF protection, etc. * * @return void */ protected function mapWebRoutes() { Route::group([ 'middleware' => 'web', 'namespace' => $this->namespace, ], function ($router) { require base_path('routes/web.php'); }); } /** * Define the "api" routes for the application. * * These routes are typically stateless. * * @return void */ protected function mapApiRoutes() { Route::group([ 'middleware' => ['api' , 'cors'], 'namespace' => $this->namespace, 'prefix' => 'api', ], function ($router) { require base_path('routes/api.php'); }); } } This is my middleware code in kernel protected $middleware = [ \Barryvdh\Cors\HandleCors::class, \Illuminate\Foundation\Http\Middleware\CheckForMaintenanceMode::class, \Illuminate\Foundation\Http\Middleware\ValidatePostSize::class, \App\Http\Middleware\TrimStrings::class, \Illuminate\Foundation\Http\Middleware\ConvertEmptyStringsToNull::class, ]; I have added its service provider too. I have seen all the solutions here on stackoverflow but none of them seems to work. I do not need theoretical answer but a practical solution Thanks
{ "language": "en", "url": "https://stackoverflow.com/questions/43503718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Swift: How to not load AppDelegate during Tests I have an OS X application which on startup loads some data from a server and pushes notifications to the NSUserNotificationCenter. Now I have the problem that this also happens during my unit tests. I found no way yet to prevent this. Of course I could stub the HTTP loads. But in some cases I want to test the loading and then the notifications get sent anyway. What I'm trying to do is to make the test runs not load the AppDelegate but a fake one that I'm only using for tests. I found several examples [1] on how to do that with UIApplicationMain, where you can pass a specific AppDelegate class name. The same is not possible with NSApplicationMain [2]. What I've tried is the following: Removed @NSApplicationMain from AppDelegate.swift, then added a main.swift with the following content: class FakeAppDelegate: NSObject, NSApplicationDelegate { } NSApplication.sharedApplication() NSApp.delegate = FakeAppDelegate() NSApplicationMain(Process.argc, Process.unsafeArgv) This code runs before tests but has no effect at all. I might have to say: My AppDelegate is almost empty. To handle the MainMenu.xib stuff I made a separate view controller which does the actual loading and notification stuff in awakeFromNib. [1] http://www.mokacoding.com/blog/prevent-unit-tests-from-loading-app-delegate-in-swift/ [2] https://developer.apple.com/library/mac/documentation/Cocoa/Reference/ApplicationKit/Miscellaneous/AppKit_Functions/#//apple_ref/c/func/NSApplicationMain A: Just an update on the previous accept answer, this is my main.swift: private func isTestRun() -> Bool { return NSClassFromString("XCTestCase") != nil } if isTestRun() { // This skips setting up the app delegate NSApplication.shared.run() } else { // For some magical reason, the AppDelegate is setup when // initialized this way _ = NSApplicationMain(CommandLine.argc, CommandLine.unsafeArgv) } A bit more compact! I'm using Swift 4.1 and XCode 9.4.1 A: After days of trying and failing I found an answer on the Apple forums: The problem was that my main.swift file was initializing my AppDelegate before NSApplication had been initialized. The Apple documentation makes it clear that lots of other Cocoa classes rely on NSApplication to be up and running when they are initialized. Apparently, NSObject and NSWindow are two of them. So my final and working code in main.swift looks like this: private func isTestRun() -> Bool { return NSClassFromString("XCTest") != nil } private func runApplication( application: NSApplication = NSApplication.sharedApplication(), delegate: NSObject.Type? = nil, bundle: NSBundle? = nil, nibName: String = "MainMenu") { var topLevelObjects: NSArray? // Actual initialization of the delegate is deferred until here: application.delegate = delegate?.init() as? NSApplicationDelegate guard bundle != nil else { application.run() return } if bundle!.loadNibNamed(nibName, owner: application, topLevelObjects: &topLevelObjects ) { application.run() } else { print("An error was encountered while starting the application.") } } if isTestRun() { let mockDelegateClass = NSClassFromString("MockAppDelegate") as? NSObject.Type runApplication(delegate: mockDelegateClass) } else { runApplication(delegate: AppDelegate.self, bundle: NSBundle.mainBundle()) } So the actual problem before was that the Nib was being loaded during tests. This solution prevents this. It just loads the application with a mocked application delegate whenever it detects a test run (By looking for the XCTest class). 
I'm sure I will have to tweak this a bit more. Especially when a start with UI Testing. But for the moment it works.
{ "language": "en", "url": "https://stackoverflow.com/questions/39116318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to use std::transform with a lambda function that takes additional parameters In C++ 11 (or higher) can I use std::transform and a lambda function to transform a vector that also takes other parameters? For example, how do I pass param to the lambda function below? std::vector<double> a{ 10.0, 11.0, 12.0 }; std::vector<double> b{ 20.0, 30.0, 40.0 }; std::vector<double> c; double param = 1.5; //The desired function is c = (a-b)/param transform(a.begin(), a.end(), b.begin(), std::back_inserter(c), [](double x1, double x2) {return(x1 - x2)/param; }); std::transform wants a function with two input parameters. Do I need to use std::bind? A: You just need to capture param in your capture list: transform(a.begin(), a.end(), b.begin(), std::back_inserter(c), [param](double x1, double x2) {return(x1 - x2)/param; }); Capturing it by reference also works - and would be correct if param was a big class. But for a double param is fine. A: This is what the lambda capture is for. You need to specify & or = or param in the capture block ([]) of the lambda. std::vector<double> a{ 10.0, 11.0, 12.0 }; std::vector<double> b{ 20.0, 30.0, 40.0 }; std::vector<double> c; double param = 1.5; //The desired function is c = (a-b)/param transform(a.begin(), a.end(), b.begin(), std::back_inserter(c), [=](double x1, double x2) {return(x1 - x2)/param; }); // ^ capture all external variables used in the lambda by value In the above code we just capture by value since copying a double and having a reference is pretty much the same thing performance wise and we don't need reference semantics.
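For completeness, a small self-contained program with the capture in place, using the values from the question:

#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

int main() {
    std::vector<double> a{10.0, 11.0, 12.0};
    std::vector<double> b{20.0, 30.0, 40.0};
    std::vector<double> c;
    double param = 1.5;

    // param is captured by value; the two element parameters are still supplied by std::transform
    std::transform(a.begin(), a.end(), b.begin(), std::back_inserter(c),
                   [param](double x1, double x2) { return (x1 - x2) / param; });

    for (double v : c) std::cout << v << ' ';   // -6.66667 -12.6667 -18.6667
}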
{ "language": "en", "url": "https://stackoverflow.com/questions/53011875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I add an intranet-only endpoint to my IIS hosted WCF service? I have a WCF service hosted in IIS that uses basicHttpBindings. I'm adding a new method to the ServiceContract that will be called from a console app to perform an administrative task. I got to thinking, wouldn't it be nice if I gave this method its own endpoint. Then I thought, what if that endpoint wasn't even publicly accessible? It would be much better if only a computer on our LAN could access it. It might even be cool if only an AD administrator was authorized to use it, but I don't want to get too elaborate. So I added a new ServiceContract interface that includes my new method. How can I restrict it to LAN access only? Do I need a NetTcpBinding? Networking is not my strong suit and I'm a little confused, conceptually, on how a TCP endpoint could be hosted from IIS. Additionally, when hosting multiple endpoints, does each have to have its own address or can they be at the same address? A: I am going to answer some of your questions: * *There is no binding that by itself limits access to the LAN, though you can use Windows authentication so that only users from your network can use the service. *NetTcpBinding is just a TCP connection, and you can host it in IIS. Of course, check this link for more information: hosting nettcp on IIS *You can have one base address for multiple endpoints, for example: https://localhost:8080/calculator.svc net.tcp://localhost:8080/calculator.svc
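On the last point, a hedged configuration sketch (service and contract names are placeholders, not from the question): two endpoints can share one base address as long as their relative addresses differ, and the LAN-only restriction is then usually applied to the admin address in IIS (for example with the IP Address and Domain Restrictions feature) rather than in the binding itself.

<system.serviceModel>
  <services>
    <service name="MyApp.MyService">
      <!-- public endpoint at the base address -->
      <endpoint address="" binding="basicHttpBinding" contract="MyApp.IMyService" />
      <!-- administrative endpoint, same base address, different relative address -->
      <endpoint address="admin" binding="basicHttpBinding" contract="MyApp.IAdminService" />
    </service>
  </services>
</system.serviceModel>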
{ "language": "en", "url": "https://stackoverflow.com/questions/29483797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: A Quickselect C Algorithm faster than C Qsort I have tried to implement a C QuickSelect algorithm as described in this post (3 way quicksort (C implementation)). However, all I get are performances 5 to 10 times less than the default qsort (even with an initial shuffling). I tried to dig into the original qsort source code as provide here (https://github.com/lattera/glibc/blob/master/stdlib/qsort.c), but it's too complex. Does anybody have a simpler, and better algorithm? Any idea is welcomed. Thanks, NB: my original problem is to try to get the Kth smallest values of an array to the first Kth indices. So I planned to call quickselect K times EDIT 1: Here is the Cython Code as copied and adapted from the link above cdef void qswap(void* a, void* b, const size_t size) nogil: cdef char temp[size]# C99, use malloc otherwise #char serves as the type for "generic" byte arrays memcpy(temp, b, size) memcpy(b, a, size) memcpy(a, temp, size) cdef void qshuffle(void* base, size_t num, size_t size) nogil: #implementation of Fisher cdef int i, j, tmp# create local variables to hold values for shuffle for i in range(num - 1, 0, -1): # for loop to shuffle j = c_rand() % (i + 1)#randomise j for shuffle with Fisher Yates qswap(base + i*size, base + j*size, size) cdef void partition3(void* base, size_t *low, size_t *high, size_t size, QComparator compar) nogil: # Modified median-of-three and pivot selection. cdef void *ptr = base cdef size_t lt = low[0] cdef size_t gt = high[0] # lt is the pivot cdef size_t i = lt + 1# (+1 !) we don't compare pivot with itself cdef int c = 0 while (i <= gt): c = compar(ptr + i * size, ptr + lt * size) if (c < 0):# base[i] < base[lt] => swap(i++,lt++) qswap(ptr + lt * size, ptr + i * size, size) i += 1 lt += 1 elif (c > 0):#base[i] > base[gt] => swap(i, gt--) qswap(ptr + i * size, ptr + gt* size, size) gt -= 1 else:#base[i] == base[gt] i += 1 #base := [<<<<<lt=====gt>>>>>>] low[0] = lt high[0] = gt cdef void qselectk3(void* base, size_t lo, size_t hi, size_t size, size_t k, QComparator compar) nogil: cdef size_t low = lo cdef size_t high = hi partition3(base, &low, &high, size, compar) if ((k - 1) < low): #k lies in the less-than-pivot partition high = low - 1 low = lo elif ((k - 1) >= low and (k - 1) <= high): #k lies in the equals-to-pivot partition qswap(base, base + size*low, size) return else: # k > high => k lies in the greater-than-pivot partition low = high + 1 high = hi qselectk3(base, low, high, size, k, compar) """ A selection algorithm to find the nth smallest elements in an unordered list. these elements ARE placed at the nth positions of the input array """ cdef void qselect(void* base, size_t num, size_t size, size_t n, QComparator compar) nogil: cdef int k qshuffle(base, num, size) for k in range(n): qselectk3(base + size*k, 0, num - k - 1, size, 1, compar) I use python timeit to get the performance of both method pyselect(with N=50) and pysort. Like this def testPySelect(): A = np.random.randint(16, size=(10000), dtype=np.int32) pyselect(A, 50) timeit.timeit(testPySelect, number=1) def testPySort(): A = np.random.randint(16, size=(10000), dtype=np.int32) pysort(A) timeit.timeit(testPySort, number=1) A: The answer by @chqrlie is the good and final answer, yet to complete the post, I am posting the Cython version along with the benchmarking results. In short, the proposed solution is 2 times faster than qsort on long vectors! 
cdef void qswap2(void *aptr, void *bptr, size_t size) nogil: cdef uint8_t* ac = <uint8_t*>aptr cdef uint8_t* bc = <uint8_t*>bptr cdef uint8_t t while (size > 0): t = ac[0]; ac[0] = bc[0]; bc[0] = t; ac += 1; bc += 1; size -= 1 cdef struct qselect2_stack: uint8_t *base uint8_t *last cdef void qselect2(void *base, size_t nmemb, size_t size, size_t k, QComparator compar) nogil: cdef qselect2_stack stack[64] cdef qselect2_stack *sp = &stack[0] cdef uint8_t *lb cdef uint8_t*ub cdef uint8_t *p cdef uint8_t *i cdef uint8_t *j cdef uint8_t *top if (nmemb < 2 or size <= 0): return top = <uint8_t *>base if(k < nmemb): top += k*size else: top += nmemb*size sp.base = <uint8_t *>base sp.last = <uint8_t *>base + (nmemb - 1) * size sp += 1 cdef size_t offset while (sp > stack): sp -= 1 lb = sp.base ub = sp.last while (lb < ub and lb < top): #select middle element as pivot and exchange with 1st element offset = (ub - lb) >> 1 p = lb + offset - offset % size qswap2(lb, p, size) #partition into two segments i = lb + size j = ub while 1: while (i < j and compar(lb, i) > 0): i += size while (j >= i and compar(j, lb) > 0): j -= size if (i >= j): break qswap2(i, j, size) i += size j -= size # move pivot where it belongs qswap2(lb, j, size) # keep processing smallest segment, and stack largest if (j - lb <= ub - j): sp.base = j + size sp.last = ub sp += 1 ub = j - size else: sp.base = lb sp.last = j - size sp += 1 lb = j + size cdef int int_comp(void* a, void* b) nogil: cdef int ai = (<int*>a)[0] cdef int bi = (<int*>b)[0] return (ai > bi ) - (ai < bi) def pyselect2(numpy.ndarray[int, ndim=1, mode="c"] na, int n): cdef int* a = <int*>&na[0] qselect2(a, len(na), sizeof(int), n, int_comp) Here are the benchmark results (1,000 tests): #of elements K #qsort (s) #qselect2 (s) 1,000 50 0.1261 0.0895 1,000 100 0.1261 0.0910 10,000 50 0.8113 0.4157 10,000 100 0.8113 0.4367 10,000 1,000 0.8113 0.4746 100,000 100 7.5428 3.8259 100,000 1,000 7,5428 3.8325 100,000 10,000 7,5428 4.5727 For those who are curious, this piece of code is a jewel in the field of surface reconstruction using neural networks. Thanks again to @chqrlie, your code is unique on The Web. A: Here is a quick implementation for your purpose: qsort_select is a simple implementation of qsort with automatic pruning of unnecessary ranges. Without && lb < top, it behaves like the regular qsort except for pathological cases where more advanced versions have better heuristics. This extra test prevents complete sorting of ranges that are outside the target 0 .. (k-1). The function selects the k smallest values and sorts them, the rest of the array has the remaining values in an undefinite order. #include <stdio.h> #include <stdint.h> static void exchange_bytes(uint8_t *ac, uint8_t *bc, size_t size) { while (size-- > 0) { uint8_t t = *ac; *ac++ = *bc; *bc++ = t; } } /* select and sort the k smallest elements from an array */ void qsort_select(void *base, size_t nmemb, size_t size, int (*compar)(const void *a, const void *b), size_t k) { struct { uint8_t *base, *last; } stack[64], *sp = stack; uint8_t *lb, *ub, *p, *i, *j, *top; if (nmemb < 2 || size <= 0) return; top = (uint8_t *)base + (k < nmemb ? 
k : nmemb) * size; sp->base = (uint8_t *)base; sp->last = (uint8_t *)base + (nmemb - 1) * size; sp++; while (sp > stack) { --sp; lb = sp->base; ub = sp->last; while (lb < ub && lb < top) { /* select middle element as pivot and exchange with 1st element */ size_t offset = (ub - lb) >> 1; p = lb + offset - offset % size; exchange_bytes(lb, p, size); /* partition into two segments */ for (i = lb + size, j = ub;; i += size, j -= size) { while (i < j && compar(lb, i) > 0) i += size; while (j >= i && compar(j, lb) > 0) j -= size; if (i >= j) break; exchange_bytes(i, j, size); } /* move pivot where it belongs */ exchange_bytes(lb, j, size); /* keep processing smallest segment, and stack largest */ if (j - lb <= ub - j) { sp->base = j + size; sp->last = ub; sp++; ub = j - size; } else { sp->base = lb; sp->last = j - size; sp++; lb = j + size; } } } } int int_cmp(const void *a, const void *b) { int aa = *(const int *)a; int bb = *(const int *)b; return (aa > bb) - (aa < bb); } #define ARRAY_SIZE 50000 int array[ARRAY_SIZE]; int main(void) { int i; for (i = 0; i < ARRAY_SIZE; i++) { array[i] = ARRAY_SIZE - i; } qsort_select(array, ARRAY_SIZE, sizeof(*array), int_cmp, 50); for (i = 0; i < 50; i++) { printf("%d%c", array[i], i + 1 == 50 ? '\n' : ','); } return 0; }
{ "language": "en", "url": "https://stackoverflow.com/questions/52016431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: Get values with no repeated data I have a query like this: SELECT P.LegacyKey ,D.DesignNumber FROM tbl1 AS [SO] GROUP BY D.DesignNumber,P.LegacyKey ORDER BY LegacyKey it returning values like: +-----------+--------------+ | LegacyKey | DesignNumber | +-----------+--------------+ | 17134 | 1 | | 17134 | 2 | | 18017 | 7 | +-----------+--------------+ That I want to do is to find duplicate LegacyKeys and get only values who legacyKey is exist one time, so I use HAVING COUNT: SELECT P.LegacyKey ,D.DesignNumber , COUNT([P].[LegacyKey]) FROM tbl1 AS [SO] GROUP BY D.DesignNumber,P.LegacyKey HAVING COUNT([P].[LegacyKey]) = 1 ORDER BY LegacyKey But this is returning bad data, because it is returning LegacyKey = 17134 again and desire result is to get values where LegacyKey exists one time. So desire result should be only 18017 | 7 What am I doing wrong? A: You can simply do: SELECT P.LegacyKey, MAX(D.DesignNumber) as DesignNumber FROM tbl1 AS [SO] GROUP BY P.LegacyKey HAVING COUNT(DISTINCT D.DesignNumber) = 1; ORDER BY LegacyKey; No subquery is necessary. A: You need something like this: select t2.LegacyKey, t2.DesignNumber from ( select t.LegacyKey from tbl1 t group by t.LegacyKey having count(t.LegacyKey ) = 1 )x join tbl1 t2 on x.LegacyKey = t2.LegacyKey or select t2.LegacyKey, t2.DesignNumber from tbl1 t2 where t2.LegacyKey in ( select t.LegacyKey from tbl1 t group by t.LegacyKey having count(t.LegacyKey ) = 1 ) A: You could try this NB - This is untested SELECT * FROM ( SELECT P.LegacyKey AS LegacyKey, D.DesignNumber AS DesignNumber, COUNT([P].[LegacyKey]) AS cnt FROM tbl1 AS [SO] GROUP BY D.DesignNumber,P.LegacyKey HAVING COUNT([P].[LegacyKey]) = 1 ) a WHERE COUNT() OVER (PARTITION BY LegacyKey) = 1
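The windowed variant in the last answer needs an expression inside COUNT, and the filter has to be applied outside the window function. A corrected sketch, keeping the aliases used in the question:

SELECT LegacyKey, DesignNumber
FROM (
    SELECT P.LegacyKey,
           D.DesignNumber,
           COUNT(*) OVER (PARTITION BY P.LegacyKey) AS key_rows
    FROM tbl1 AS [SO]
    GROUP BY D.DesignNumber, P.LegacyKey
) x
WHERE key_rows = 1
ORDER BY LegacyKey;

Here the window count runs over the grouped rows, so key_rows is the number of distinct DesignNumber values per LegacyKey, and only keys that appear exactly once survive the outer filter.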
{ "language": "en", "url": "https://stackoverflow.com/questions/54993000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: @IBDesignable doesn't work in "old" project I have UIView subclass, for example, this: @IBDesignable class GradientView: UIView { @IBInspectable var firstColor: UIColor = UIColor.red @IBInspectable var secondColor: UIColor = UIColor.green @IBInspectable var vertical: Bool = true override func awakeFromNib() { super.awakeFromNib() applyGradient() } func applyGradient() { let colors = [firstColor.cgColor, secondColor.cgColor] let layer = CAGradientLayer() layer.colors = colors layer.frame = self.bounds layer.startPoint = CGPoint(x: 0, y: 0) layer.endPoint = vertical ? CGPoint(x: 0, y: 1) : CGPoint(x: 1, y: 0) self.layer.addSublayer(layer) } override func prepareForInterfaceBuilder() { super.prepareForInterfaceBuilder() applyGradient() } } It successfully renders in Interface Builder for a new project, but it doesn't work for my "old" project. Does anyone know why it happens?
{ "language": "en", "url": "https://stackoverflow.com/questions/45217918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Passing object messages in Azure Queue Storage I'm trying to find a way to pass objects to the Azure Queue. I couldn't find a way to do this. As I've seen I can pass string or byte array, which is not very comfortable for passing objects. Is there anyway to pass custom objects to the Queue? Thanks! A: You can use the following classes as example: [Serializable] public abstract class BaseMessage { public byte[] ToBinary() { BinaryFormatter bf = new BinaryFormatter(); byte[] output = null; using (MemoryStream ms = new MemoryStream()) { ms.Position = 0; bf.Serialize(ms, this); output = ms.GetBuffer(); } return output; } public static T FromMessage<T>(CloudQueueMessage m) { byte[] buffer = m.AsBytes; T returnValue = default(T); using (MemoryStream ms = new MemoryStream(buffer)) { ms.Position = 0; BinaryFormatter bf = new BinaryFormatter(); returnValue = (T)bf.Deserialize(ms); } return returnValue; } } Then a StdQueue (a Queue that is strongly typed): public class StdQueue<T> where T : BaseMessage, new() { protected CloudQueue queue; public StdQueue(CloudQueue queue) { this.queue = queue; } public void AddMessage(T message) { CloudQueueMessage msg = new CloudQueueMessage(message.ToBinary()); queue.AddMessage(msg); } public void DeleteMessage(CloudQueueMessage msg) { queue.DeleteMessage(msg); } public CloudQueueMessage GetMessage() { return queue.GetMessage(TimeSpan.FromSeconds(120)); } } Then, all you have to do is to inherit the BaseMessage: [Serializable] public class ParseTaskMessage : BaseMessage { public Guid TaskId { get; set; } public string BlobReferenceString { get; set; } public DateTime TimeRequested { get; set; } } And make a queue that works with that message: CloudStorageAccount acc; if (!CloudStorageAccount.TryParse(connectionString, out acc)) { throw new ArgumentOutOfRangeException("connectionString", "Invalid connection string was introduced!"); } CloudQueueClient clnt = acc.CreateCloudQueueClient(); CloudQueue queue = clnt.GetQueueReference(processQueue); queue.CreateIfNotExist(); this._queue = new StdQueue<ParseTaskMessage>(queue); Hope this helps! A: Extension method that uses Newtonsoft.Json and async public static async Task AddMessageAsJsonAsync<T>(this CloudQueue cloudQueue, T objectToAdd) { var messageAsJson = JsonConvert.SerializeObject(objectToAdd); var cloudQueueMessage = new CloudQueueMessage(messageAsJson); await cloudQueue.AddMessageAsync(cloudQueueMessage); } A: I like this generalization approach but I don't like having to put Serialize attribute on all the classes I might want to put in a message and derived them from a base (I might already have a base class too) so I used... 
using System; using System.Text; using Microsoft.WindowsAzure.Storage.Queue; using Newtonsoft.Json; namespace Example.Queue { public static class CloudQueueMessageExtensions { public static CloudQueueMessage Serialize(Object o) { var stringBuilder = new StringBuilder(); stringBuilder.Append(o.GetType().FullName); stringBuilder.Append(':'); stringBuilder.Append(JsonConvert.SerializeObject(o)); return new CloudQueueMessage(stringBuilder.ToString()); } public static T Deserialize<T>(this CloudQueueMessage m) { int indexOf = m.AsString.IndexOf(':'); if (indexOf <= 0) throw new Exception(string.Format("Cannot deserialize into object of type {0}", typeof (T).FullName)); string typeName = m.AsString.Substring(0, indexOf); string json = m.AsString.Substring(indexOf + 1); if (typeName != typeof (T).FullName) { throw new Exception(string.Format("Cannot deserialize object of type {0} into one of type {1}", typeName, typeof (T).FullName)); } return JsonConvert.DeserializeObject<T>(json); } } } e.g. var myobject = new MyObject(); _queue.AddMessage( CloudQueueMessageExtensions.Serialize(myobject)); var myobject = _queue.GetMessage().Deserialize<MyObject>(); A: In case the storage queue is used with WebJob or Azure function (quite common scenario) then the current Azure SDK allows to use POCO object directly. See examples here: * *https://learn.microsoft.com/en-us/sandbox/functions-recipes/queue-storage *https://github.com/Azure/azure-webjobs-sdk/wiki/Queues#trigger Note: The SDK will automatically use Newtonsoft.Json for serialization/deserialization under the hood. A: I liked @Akodo_Shado's approach to serialize with Newtonsoft.Json. I updated it for Azure.Storage.Queues and also added a "Retrieve and Delete" method that deserializes the object from the queue. public static class CloudQueueExtensions { public static async Task AddMessageAsJsonAsync<T>(this QueueClient queueClient, T objectToAdd) where T : class { string messageAsJson = JsonConvert.SerializeObject(objectToAdd); BinaryData cloudQueueMessage = new BinaryData(messageAsJson); await queueClient.SendMessageAsync(cloudQueueMessage); } public static async Task<T> RetreiveAndDeleteMessageAsObjectAsync<T>(this QueueClient queueClient) where T : class { QueueMessage[] retrievedMessage = await queueClient.ReceiveMessagesAsync(1); if (retrievedMessage.Length == 0) return null; string theMessage = retrievedMessage[0].MessageText; T instanceOfT = JsonConvert.DeserializeObject<T>(theMessage); await queueClient.DeleteMessageAsync(retrievedMessage[0].MessageId, retrievedMessage[0].PopReceipt); return instanceOfT; } } The RetreiveAndDeleteMessageAsObjectAsync is designed to process 1 message at time, but you could obviously rewrite to deserialize the full array of messages and return a ICollection<T> or similar. A: That is not right way to do it. queues are not ment for storing object. you need to put object in blob or table (serialized). I believe queue messgae body has 64kb size limit with sdk1.5 and 8kb wih lower versions. Messgae body is ment to transfer crutial data for workera that pick it up only.
{ "language": "en", "url": "https://stackoverflow.com/questions/8550702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: How to add FTP site in IIS 7 in Windows Vista Home premium edition How to add the FTP server in IIS 7 using Windows vista Home Premium Edition? A: Please check if FTP 7.5 can be used on your Windows Vista machine, http://www.iis.net/expand/FTP If not, FileZilla is a free alternative, http://filezilla-project.org/download.php?type=server
{ "language": "en", "url": "https://stackoverflow.com/questions/2524250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: NLog not creating a log file inside AWS ec2 Linux I have a .net core application in AWS ec2 linux and it does NOT create a log file. The application on AWS ec2 linux is published with Deployment Mode: Self-contained and Target Runtime: linux-x64. I tried it on Windows and it does create a log file, but somehow it's not working in AWS ec2 linux. Here's what I did. * *Open ubuntu machine terminal (CLI) and go to the project directory *Provide execute permissions: chmod 777 ./appname *Execute application *./appname Note: It displays the Hello World. public static void Main(string[] args) { try { using (var logContext = new LogContextHelper(null, "MBP", "MBP", new JObject() { { "ASD", "asd" }, { "ZXC", "asd" }, { "QWE", "asd" }, { "POI", "" } }, setAsContextLog: true)) { } } catch (Exception e) { Console.WriteLine("Error: {0}", e); } //sample s = new sample("consuna") { objectinitial = "objuna"}; Console.WriteLine("Hello World!"); Console.ReadLine(); } } Here is the NLog.config <?xml version="1.0" encoding="utf-8" ?> <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.nlog-project.org/schemas/NLog.xsd NLog.xsd" autoReload="true" throwExceptions="false" internalLogLevel="Off" internalLogFile="c:\temp\nlog-internal.log"> <variable name="myvar" value="myvalue"/> <targets> <target xsi:type="File" name="fileMBP" fileName="/home/ec2-user/mbpTESTING.log" layout="Logger=&quot;MBP API&quot;, TimeStamp=&quot;${event-context:item=datetime}&quot;, TimeStampMel=&quot;${event-context:item=datetimeMel}&quot;, Message=&quot;${message}&quot;, Context=&quot;${basedir}&quot;, Command=&quot;${event-context:item=command}&quot;, ${event-context:item=properties}" archiveFileName="/home/ec2-user/MBP.{#}.txt" archiveEvery="Day" archiveNumbering="DateAndSequence" maxArchiveFiles="7" concurrentWrites="true" keepFileOpen="false"/> </targets> <rules> <logger name="MBP" minlevel="Trace" writeTo="fileMBP"/> </rules> </nlog> I expect that it will create a log file with logs in it, just like on Windows.
{ "language": "en", "url": "https://stackoverflow.com/questions/57800735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: EditText length with "exceptions" I have an EditText and I'd like to limit input to four digits (in the range -9999 to 9999). The problem is that if I set android:maxLength="4" in XML I can type e.g. "9999", but it doesn't accept "-9999" because of the length (5 characters). Is there any way to solve this (programmatically or otherwise)? Thanks
{ "language": "en", "url": "https://stackoverflow.com/questions/36794789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Understanding hashCode(), equals() and toString() in Java I am beginner in Java and although I have a look at several questions and answers in SO, I am not sure if I completely understand the usage of hashCode(), equals() and toString() in Java. I encountered the following code in a project and want to understand the following issues: 1. Is it meaningful to define these methods and call via super like return super.hashCode()? 2. Where should we defines these three methods? For each entity? Or any other place in Java? 3. Could you please explain me the philosophy of defining these 3 (or maybe similar ones that I have not seen before) in Java? @Entity @SequenceGenerator(name = "product_gen", sequenceName = "product_id_seq") @Table @Getter @Setter @ToString(callSuper = true) @NoArgsConstructor public class Product extends BaseEntity { @Id @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "product_gen") private long id; @Override public int hashCode() { return super.hashCode(); } @Override public boolean equals(Object other) { return super.equals(other); } } A: * *toString() method returns the String representation of an Object. The default implementation of toString() for an object returns the HashCode value of the Object. We'll come to what HashCode is. Overriding the toString() is straightforward and helps us print the content of the Object. @ToString annotation from Lombok does that for us. It Prints the class name and each field in the class with its value. The @ToString annotation also takes in configuration keys for various behaviours. Read here. callSuper just denotes that an extended class needs to call the toString() from its parent. hashCode() for an object returns an integer value, generated by a hashing algorithm. This can be used to identify whether two objects have similar Hash values which eventually help identifying whether two variables are pointing to the same instance of the Object. If two objects are equal from the .equals() method, they must share the same HashCode *They are supposed to be defined on the entity, if at all you need to override them. *Each of the three have their own purposes. equals() and hashCode() are majorly used to identify whether two objects are the same/equal. Whereas, toString() is used to Serialise the object to make it more readable in logs. From Effective Java: You must override hashCode() in every class that overrides equals(). Failure to do so will result in a violation of the general contract for Object.hashCode(), which will prevent your class from functioning properly in conjunction with all hash-based collections, including HashMap, HashSet, and HashTable.
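A compact, standalone illustration of the contract described above (deliberately not the JPA entity from the question, where generated ids add extra subtleties): two logically equal objects must agree on both equals() and hashCode(), otherwise hash-based collections cannot find them.

import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

class Sku {
    private final long id;

    Sku(long id) { this.id = id; }

    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof Sku)) return false;
        return id == ((Sku) other).id;     // equality based on the business key
    }

    @Override
    public int hashCode() {
        return Objects.hash(id);           // built from the same field(s) as equals()
    }

    @Override
    public String toString() {
        return "Sku{id=" + id + "}";       // readable form for logs instead of the default hash
    }

    public static void main(String[] args) {
        Set<Sku> set = new HashSet<>();
        set.add(new Sku(42));
        System.out.println(set.contains(new Sku(42))); // true only because both methods agree
    }
}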
{ "language": "en", "url": "https://stackoverflow.com/questions/69024813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to reference a variable from one activity to another I'm trying to get a variable string and integer from Main2Activity.java to MainActivity.java But the problem is that I don't want to use the: startActivity(intent); For it to work. I just want the information to be passed so I can use it in my current activity. Is there any way to do this? What am I missing. This is how my MainActivity looks like: btn.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { TextView textView = (TextView)findViewById(R.id.textView7); Intent intent = getIntent(); String A = intent.getStringExtra("Apples"); textView.setText(A); } }); And my Main2Activty: Intent intent = new Intent(Main2Activity.this, MainActivity.class); intent.putExtra("Apples", "Red"); Thanks for helping. Please only comment if you know what you're talking about. A: There is an other way, you can define a Class DataHolder and static variable for sharing variable between Activity Example class DataHolder { public static String appleColor = ""; } Then you can use like this: Intent intent = new Intent(Main2Activity.this, MainActivity.class); DataHolder.appleColor = "RED"; Then btn.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { TextView textView = (TextView)findViewById(R.id.textView7); Intent intent = getIntent(); textView.setText(DataHolder.appleColor); } });
{ "language": "en", "url": "https://stackoverflow.com/questions/48392826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Only grep img tags that contain a keyword, but not img tags that don't? Using grep/regex, I am trying to pull img tags out of a file. I only want img tags that contain 'photobucket' in the source, and I do not want img tags that do not contain photobucket. Want: <img src="/photobucket/img21.png"> Do Not Want: <img src="/imgs/test.jpg"> <img src="/imgs/thiswillgetpulledtoo.jpg"><p>We like photobucket</p> What I have tried: (<img.*?photobucket.*?>) This did not work, because it pulled the second example in "Do Not Want", as there was a 'photobucket' and then a closing bracket. How can I only check for 'photobucket' up until the first closing bracket, and if photobucket is not contained, ignore it and move on? 'photobucket' may be in different locations within the string. A: Just add a negation of > sign: (<img[^>]*?photobucket.*?>) https://regex101.com/r/tZ9lI9/2 A: grep -o '<img[^>]*src="[^"]*photobucket[^>]*>' infile -o returns only the matches. Split up: <img # Start with <img [^>]* # Zero or more of "not >" src=" # start of src attribute [^"]* # Zero or more or "not quotes" photobucket # Match photobucket [^>]* # Zero or more of "not >" > # Closing angle bracket For the input file <img src="/imgs/test.jpg"> <img src="/imgs/thiswillgetpulledtoo.jpg"><p>We like photobucket</p> <img src="/photobucket/img21.png"> <img alt="photobucket" src="/something/img21.png"> <img alt="something" src="/photobucket/img21.png"> <img src="/photobucket/img21.png" alt="something"> <img src="/something/img21.png" alt="photobucket"> this returns $ grep -o '<img[^>]*src="[^"]*photobucket[^>]*>' infile <img src="/photobucket/img21.png"> <img alt="something" src="/photobucket/img21.png"> <img src="/photobucket/img21.png" alt="something"> The non-greedy .*? works only with the -P option (Perl regexes). A: Try the following: <img[^>]*?photobucket[^>]*?> This way the regex can't got past the '>' A: Try with this pattern: <img.*src=\"[/a-zA-Z0-9_]+photobucket[/a-zA-Z0-9_]+\.\w+\".*> I´m not sure the characters admited by the name folders, but you just need add in the ranges "[]" before and after the "photobucket".
{ "language": "en", "url": "https://stackoverflow.com/questions/34882511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Icon color of html5 input date picker How can I change the icon color of the native HTML5 date input (<input type="date" />)? Is it possible to change the calendar icon color to HEX: #018bee or RGB: (1, 139, 238)? I saw a post saying that it was possible using filters, but I was unsuccessful. Codepen Example: https://codepen.io/gerisd/pen/VwPzqMy HTML: <input type="date" id="deadline" name="deadline" value="2021-01-01" required> CSS: #deadline{ width: 100%; min-width: 100px; border: none; height: 100%; background: none; outline: none; padding-inline: 5px; resize: none; border-right: 2px solid #dde2f1; cursor: pointer; color: #9fa3b1; text-align: center; } input[type="date"]::-webkit-calendar-picker-indicator { cursor: pointer; border-radius: 4px; margin-right: 2px; filter: invert(0.8) sepia(100%) saturate(10000%) hue-rotate(240deg); } A: Have you tried using the webkit pseudo-element? I found a similar question; maybe try this code from it: ::-webkit-calendar-picker-indicator { filter: invert(1); }
{ "language": "en", "url": "https://stackoverflow.com/questions/66974856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I display data in a detail component from a list component? I have a problem displaying data from one component in another component. I put the data which I fetch from the server into prova and I display the list of all works, but I can't transfer the data that is in "prova" to another component to make the details visible through their own id. This is my API call service: import { Injectable } from '@angular/core'; import {HttpClient,HttpClientModule,HttpResponse} from '@angular/common/http'; import {Observable} from "rxjs"; import {map} from "rxjs/operators"; import {results} from "./results" @Injectable({ providedIn: 'root' }) export class GetApiService { constructor( private http:HttpClient, ) { } apiCall():Observable<results[]> { return this.http.get('https://www.themuse.com/api/public/jobs?category=Engineering&page=10') .pipe(map ( (response:any) => { const data = response.results; console.log (data) return data ; } ) ); } } This is my results interface: export interface results { categories : any; company: string; contents:string; id : number; levels:string; locations:string; model_type: string; name: string; refs: string; short_name: string; type:string; } This is my component with the list of works: import {results} from "../results"; import {GetApiService} from '../get-api.service'; import {switchMap} from "rxjs/operators"; import { Observable } from 'rxjs'; import { ActivatedRoute } from '@angular/router'; @Component({ selector: 'app-work-list', templateUrl: './work-list.component.html', styleUrls: ['./work-list.component.css'] }) export class WorkListComponent implements OnInit { prova: results[]=[] ; item: any; selectedId: any ; constructor( private api :GetApiService, private route: ActivatedRoute, ) { } ngOnInit(){ this.api.apiCall() .subscribe ( (data:results[]) => { this.prova=data; console.log (data); }); } } and the respective HTML connected by id: <div *ngFor="let item of prova " [class.selected]="item.id === selectedId"> <a [routerLink]="['/details',item.id]"> <h1> {{ item.name }}</h1> </a> <h1> {{ item.id }}</h1> <h1> {{ item.categories[0].name }}</h1> <h1> {{ item.name }}</h1> </div> This is the details component, where I can't display the selected work with its own details: import { ActivatedRoute } from '@angular/router'; import {results} from '../results'; import {GetApiService} from '../get-api.service'; import {switchMap} from "rxjs/operators"; @Component({ selector: 'app-work-details', templateUrl: './work-details.component.html', styleUrls: ['./work-details.component.css'] }) export class WorkDetailsComponent implements OnInit { @Input() item: { categories: any; company: string; contents: string; id: number; levels: string; locations: string; model_type: string; name: string; refs: string; short_name: string; type: string; } | undefined; @Input () prova: results[]=[]; selectedId:string | undefined; constructor( private route: ActivatedRoute, private api: GetApiService, ) { } ngOnInit():void { this.route.paramMap.subscribe (params => { this.item=this.prova[+params.get('selectedId')]; **// it should be something like this, but prova is empty.** }) ; } } A: It looks like you're mixing two different mechanisms? One is a parent -> child component relationship where you have your WorkDetailsComponent with an @Input() for prova, but at the same time, it looks like the component is its own page given your <a [routerLink]="['/details',item.id]"> and the usage of this.route.paramMap.subscribe.... Fairly certain you can't have it both ways.
You either go parent -> child component wherein you pass in the relevant details using the @Input()s: <div *ngIf="selectedItem"> <app-work-details [prova]="prova" [selectedItem]="selectedItem"></app-work-details> </div> OR you go with the separate page route which can be done one of two ways: * *Use the service as a shared service; have it remember the state (prova) so that when the details page loads, it can request the data for the relevant id. *Pass the additional data through the route params For 1, it would look something like: private prova: results[]; // Save request response to this as well public getItem(id: number): results { return this.prova.find(x => x.id === id); } And then when you load your details page: ngOnInit():void { this.route.paramMap.subscribe (params => { this.selectedId=+params.get('selectedId'); this.item = this.service.getItem(this.selectedId); }); } For 2, it involves routing with additional data, something like this: <a [routerLink]="['/details',item.id]" [state]="{ data: {prova}}">..</a> This article shows the various ways of getting data between components in better detail.
{ "language": "en", "url": "https://stackoverflow.com/questions/65146369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Shifting elements of an array to the right I am aware that there are solutions for shifting arrays out there. However no solution works for me. The code should achieve the following: The method shift(int[] array, int places) takes in an array, shifts the elements places - times to the right and replaces the "leftover" elements with "0". So far I have: public static int[] shiftWithDrop(int[] array, int places) { if (places == 0 || array == null) { return null; } for (int i = array.length-places-1; i >= 0; i-- ) { array[i+places] = array[i]; array[i] = 0; } return array; } This code does only somehow work, but it does not return the desired result. What am I missing? A: There are several issues in this code: * *It returns null when places == 0 -- without shift, the original array needs to be returned *In the given loop implementation the major part of the array may be skipped and instead of replacing the first places elements with 0, actually a few elements in the beginning of the array are set to 0. Also it is better to change the signature of the method to set places before the vararg array. So to address these issues, the following solution is offered: public static int[] shiftWithDrop(int places, int ... array) { if(array == null || places <= 0) { return array; } for (int i = array.length; i-- > 0;) { array[i] = i < places ? 0 : array[i - places]; } return array; } Tests: System.out.println(Arrays.toString(shiftWithDrop(1, 1, 2, 3, 4, 5))); System.out.println(Arrays.toString(shiftWithDrop(2, new int[]{1, 2, 3, 4, 5}))); System.out.println(Arrays.toString(shiftWithDrop(3, 1, 2, 3, 4, 5))); System.out.println(Arrays.toString(shiftWithDrop(7, 1, 2, 3, 4, 5))); Output: [0, 1, 2, 3, 4] [0, 0, 1, 2, 3] [0, 0, 0, 1, 2] [0, 0, 0, 0, 0]
{ "language": "en", "url": "https://stackoverflow.com/questions/70135652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Write Json Webhook to Cloud Firestore with Cloud Functions. Cloud Function Failed to Deploy. Function failed on loading user code I have a Webhook that delivers a complex JSON payload to my Cloud Function URL, and writes that JSON to collections & documents within my Cloud Firestore. I believe the Node.JS Runtime on Google Cloud Functions uses the Express Middleware HTTP framework. I have a WooCommerce Webhook that wants me to send a JSON to a URL, I believe this is a POST http request. I've used Webhook.site to test the Webhook, and it is displaying the correct JSON payload. Developers have suggested I use cloud functions to receive the JSON, parse the JSON and write it to the Cloud Firestore. // cloud-function-name = wooCommerceWebhook exports.wooCommerceWebhook = functions.https.onRequest(async (req, res) => { const payload = req.body; // Write to Firestore - People Collection await admin.firestore().collection("people").doc().set({ people_EmailWork: payload.billing.email, }); // Write to Firestore - Volociti Collection await admin.firestore().collection("volociti").doc("fJHb1VBhzTbYmgilgTSh").collection("orders").doc("yzTBXvGja5KBZOEPKPtJ").collection("orders marketplace orders").doc().set({ ordersintuit_CustomerIPAddress: payload.customer_ip_address, }); // Write to Firestore - Companies Collection await admin.firestore().collection("companies").doc().set({ company_AddressMainStreet: payload.billing.address_1, }); return res.status(200).end(); }); I have the logs to my cloud function's failure to deploy if that is helpful. Function cannot be initialized. Error: function terminated. Recommended action: inspect logs for termination reason. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging My package.json: { "name": "sample-http", "version": "0.0.1" } A: You need to correctly define the dependency with the Firebase Admin SDK for Node.js, and initialize it, as shown below. You also need to change the way you declare the function: exports.wooCommerceWebhook = async (req, res) => {...} instead of exports.wooCommerceWebhook = functions.https.onRequest(async (req, res) => {...});. The one you used is for Cloud Functions deployed through the CLI. package.json { "name": "sample-http", "version": "0.0.1", "dependencies": { "firebase-admin": "^9.4.2" } } index.js const admin = require('firebase-admin') admin.initializeApp(); exports.wooCommerceWebhook = async (req, res) => { // SEE COMMENT BELOW const payload = req.body; // Write to Firestore - People Collection await admin.firestore().collection("people").doc().set({ people_EmailWork: payload.billing.email, }); // Write to Firestore - Volociti Collection await admin.firestore().collection("volociti").doc("fJHb1VBhzTbYmgilgTSh").collection("orders").doc("yzTBXvGja5KBZOEPKPtJ").collection("orders marketplace orders").doc().set({ ordersintuit_CustomerIPAddress: payload.customer_ip_address, }); // Write to Firestore - Companies Collection await admin.firestore().collection("companies").doc().set({ company_AddressMainStreet: payload.billing.address_1, }); return res.status(200).end(); };
{ "language": "en", "url": "https://stackoverflow.com/questions/69398084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Server-side redirect that transfers to another server From this question, I concluded that Response.Redirect simply sends a message (HTTP 302) down to the browser, while Server.Transfer happens without the browser knowing anything: the browser requests a page, but the server returns the content of another. So it doesn't send the request to another server. Problem: I have two servers, both running the IIS web server. Server 1 receives API calls from clients; the clients have different databases, some on server 1, others on server 2. If the DB of the client is on server 2, then its API call should be redirected to server 2. Note that you can tell from the URL which server an API call should be routed to. What do I want? I want a server-side redirect method capable of redirecting to another server, in order to redirect the API calls. (And if there's any better way of doing what I'm asking for, like a proxy or some software that big companies use, please let me know.)
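No answer is recorded above, so the following is only a hedged sketch of the proxy idea the question mentions: an ASP.NET Web API DelegatingHandler that forwards matching calls to the second server with HttpClient. The host name and the URL rule are placeholders, the handler would still need to be registered in the Web API configuration, and requests with bodies or special headers may need extra care; IIS-level options such as ARR/URL Rewrite are an alternative worth evaluating.

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class ForwardingHandler : DelegatingHandler
{
    private static readonly HttpClient client = new HttpClient();

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Placeholder rule: decide from the URL whether this client's DB lives on server 2.
        if (request.RequestUri.AbsolutePath.Contains("/server2/"))
        {
            // Rebuild the URI against the other server and forward the call as-is.
            var builder = new UriBuilder(request.RequestUri) { Host = "server2.internal" };
            request.RequestUri = builder.Uri;
            request.Headers.Host = builder.Uri.Authority;
            return await client.SendAsync(request, cancellationToken);
        }
        return await base.SendAsync(request, cancellationToken);
    }
}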
{ "language": "en", "url": "https://stackoverflow.com/questions/34425625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to get name of windows service from inside the service itself I have a bunch of win services written in .NET that use same exact executable with different configs. All services write to the same log file. However since I use the same .exe the service doesn't know its own service name to put in the log file. Is there a way my service can programatically retrieve its own name? A: Insight can be gained by looking at how Microsoft does this for the SQL Server service. In the Services control panel, we see: Service name: MSSQLServer Path to executable: "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\sqlservr.exe" -sMSSQLSERVER Notice that the name of the service is included as a command line argument. This is how it is made available to the service at run time. With some work, we can accomplish the same thing in .NET. Basic steps: * *Have the installer take the service name as an installer parameter. *Make API calls to set the command line for the service to include the service name. *Modify the Main method to examine the command line and set the ServiceBase.ServiceName property. The Main method is typically in a file called Program.cs. Install/uninstall commands To install the service (can omit /Name to use DEFAULT_SERVICE_NAME): installutil.exe /Name=YourServiceName YourService.exe To uninstall the service (/Name is never required since it is stored in the stateSaver): installutil.exe /u YourService.exe Installer code sample: using System; using System.Collections; using System.Configuration.Install; using System.ComponentModel; using System.Runtime.InteropServices; using System.ServiceProcess; namespace TestService { [RunInstaller(true)] public class ProjectInstaller : Installer { private const string DEFAULT_SERVICE_NAME = "TestService"; private const string DISPLAY_BASE_NAME = "Test Service"; private ServiceProcessInstaller _ServiceProcessInstaller; private ServiceInstaller _ServiceInstaller; public ProjectInstaller() { _ServiceProcessInstaller = new ServiceProcessInstaller(); _ServiceInstaller = new ServiceInstaller(); _ServiceProcessInstaller.Account = ServiceAccount.LocalService; _ServiceProcessInstaller.Password = null; _ServiceProcessInstaller.Username = null; this.Installers.AddRange(new System.Configuration.Install.Installer[] { _ServiceProcessInstaller, _ServiceInstaller}); } public override void Install(IDictionary stateSaver) { if (this.Context != null && this.Context.Parameters.ContainsKey("Name")) stateSaver["Name"] = this.Context.Parameters["Name"]; else stateSaver["Name"] = DEFAULT_SERVICE_NAME; ConfigureInstaller(stateSaver); base.Install(stateSaver); IntPtr hScm = OpenSCManager(null, null, SC_MANAGER_ALL_ACCESS); if (hScm == IntPtr.Zero) throw new Win32Exception(); try { IntPtr hSvc = OpenService(hScm, this._ServiceInstaller.ServiceName, SERVICE_ALL_ACCESS); if (hSvc == IntPtr.Zero) throw new Win32Exception(); try { QUERY_SERVICE_CONFIG oldConfig; uint bytesAllocated = 8192; // Per documentation, 8K is max size. 
IntPtr ptr = Marshal.AllocHGlobal((int)bytesAllocated); try { uint bytesNeeded; if (!QueryServiceConfig(hSvc, ptr, bytesAllocated, out bytesNeeded)) { throw new Win32Exception(); } oldConfig = (QUERY_SERVICE_CONFIG)Marshal.PtrToStructure(ptr, typeof(QUERY_SERVICE_CONFIG)); } finally { Marshal.FreeHGlobal(ptr); } string newBinaryPathAndParameters = oldConfig.lpBinaryPathName + " /s:" + (string)stateSaver["Name"]; if (!ChangeServiceConfig(hSvc, SERVICE_NO_CHANGE, SERVICE_NO_CHANGE, SERVICE_NO_CHANGE, newBinaryPathAndParameters, null, IntPtr.Zero, null, null, null, null)) throw new Win32Exception(); } finally { if (!CloseServiceHandle(hSvc)) throw new Win32Exception(); } } finally { if (!CloseServiceHandle(hScm)) throw new Win32Exception(); } } public override void Rollback(IDictionary savedState) { ConfigureInstaller(savedState); base.Rollback(savedState); } public override void Uninstall(IDictionary savedState) { ConfigureInstaller(savedState); base.Uninstall(savedState); } private void ConfigureInstaller(IDictionary savedState) { _ServiceInstaller.ServiceName = (string)savedState["Name"]; _ServiceInstaller.DisplayName = DISPLAY_BASE_NAME + " (" + _ServiceInstaller.ServiceName + ")"; } [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] private static extern IntPtr OpenSCManager( string lpMachineName, string lpDatabaseName, uint dwDesiredAccess); [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] private static extern IntPtr OpenService( IntPtr hSCManager, string lpServiceName, uint dwDesiredAccess); [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)] private struct QUERY_SERVICE_CONFIG { public uint dwServiceType; public uint dwStartType; public uint dwErrorControl; public string lpBinaryPathName; public string lpLoadOrderGroup; public uint dwTagId; public string lpDependencies; public string lpServiceStartName; public string lpDisplayName; } [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] [return: MarshalAs(UnmanagedType.Bool)] private static extern bool QueryServiceConfig( IntPtr hService, IntPtr lpServiceConfig, uint cbBufSize, out uint pcbBytesNeeded); [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] [return: MarshalAs(UnmanagedType.Bool)] private static extern bool ChangeServiceConfig( IntPtr hService, uint dwServiceType, uint dwStartType, uint dwErrorControl, string lpBinaryPathName, string lpLoadOrderGroup, IntPtr lpdwTagId, string lpDependencies, string lpServiceStartName, string lpPassword, string lpDisplayName); [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] [return: MarshalAs(UnmanagedType.Bool)] private static extern bool CloseServiceHandle( IntPtr hSCObject); private const uint SERVICE_NO_CHANGE = 0xffffffffu; private const uint SC_MANAGER_ALL_ACCESS = 0xf003fu; private const uint SERVICE_ALL_ACCESS = 0xf01ffu; } } Main code sample: using System; using System.ServiceProcess; namespace TestService { class Program { static void Main(string[] args) { string serviceName = null; foreach (string s in args) { if (s.StartsWith("/s:", StringComparison.OrdinalIgnoreCase)) { serviceName = s.Substring("/s:".Length); } } if (serviceName == null) throw new InvalidOperationException("Service name not specified on command line."); // Substitute the name of your class that inherits from ServiceBase. 
TestServiceImplementation impl = new TestServiceImplementation(); impl.ServiceName = serviceName; ServiceBase.Run(impl); } } class TestServiceImplementation : ServiceBase { protected override void OnStart(string[] args) { // Your service implementation here. } } } A: I use this function in VB Private Function GetServiceName() As String Try Dim processId = Process.GetCurrentProcess().Id Dim query = "SELECT * FROM Win32_Service where ProcessId = " & processId.ToString Dim searcher As New Management.ManagementObjectSearcher(query) Dim share As Management.ManagementObject For Each share In searcher.Get() Return share("Name").ToString() Next share Catch ex As Exception Dim a = 0 End Try Return "DefaultServiceName" End Function
{ "language": "en", "url": "https://stackoverflow.com/questions/773678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Problems with quad in sympy Can someone explain why: from sympy.mpmath import quad x, y = symbols('x y') f, g = symbols('f g', cls=Function) f = x g = x+1 u_1 = lambda x: f + g quad(u_1,[-1,1]) gives an error while from sympy.mpmath import quad x, y = symbols('x y') f, g = symbols('f g', cls=Function) f = x g = x+1 u_1 = lambda x: x + x+1 quad(u_1,[-1,1]) works fine? How can I make the first version work correctly? A: lambda x: f + g This is a function that takes in x and returns the sum of two values that do not depend on x. Whatever values f and g were bound to before, they keep those values. lambda x: x + x + 1 This is a function that computes x + x + 1 from its input value x, so it does depend on the input. In Python, unlike mathematics, when you evaluate the series of commands a = 1 b = a a = 2 the value of b is still 1.
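A sketch of one way to make the first version behave, assuming the same sympy/mpmath combination as in the question: turn the symbolic sum into a numeric callable with lambdify before handing it to quad (on newer sympy versions the quad import would come from the standalone mpmath package instead).

from sympy import symbols, lambdify
from sympy.mpmath import quad  # assumed old sympy; newer versions: from mpmath import quad

x = symbols('x')
f = x
g = x + 1

u_1 = lambdify(x, f + g)   # numeric function equivalent to x -> 2*x + 1
print(quad(u_1, [-1, 1]))  # ~2.0, the integral of 2*x + 1 over [-1, 1]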
{ "language": "en", "url": "https://stackoverflow.com/questions/22326181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to wrap ConcurrentDictionary in BlockingCollection? I try to implement a ConcurrentDictionary by wrapping it in a BlockingCollection but did not seem to be successful. I understand that one variable declarations work with BlockingCollection such as ConcurrentBag<T>, ConcurrentQueue<T>, etc. So, to create a ConcurrentBag wrapped in a BlockingCollection I would declare and instantiate like this: BlockingCollection<int> bag = new BlockingCollection<int>(new ConcurrentBag<int>()); But how to do it for ConcurrentDictionary? I need the blocking functionality of the BlockingCollection on both the producer and consumer side. A: Maybe you need a concurrent dictionary of blockingCollection ConcurrentDictionary<int, BlockingCollection<string>> mailBoxes = new ConcurrentDictionary<int, BlockingCollection<string>>(); int maxBoxes = 5; CancellationTokenSource cancelationTokenSource = new CancellationTokenSource(); CancellationToken cancelationToken = cancelationTokenSource.Token; Random rnd = new Random(); // Producer Task.Factory.StartNew(() => { while (true) { int index = rnd.Next(0, maxBoxes); // put the letter in the mailbox 'index' var box = mailBoxes.GetOrAdd(index, new BlockingCollection<string>()); box.Add("some message " + index, cancelationToken); Console.WriteLine("Produced a letter to put in box " + index); // Wait simulating a heavy production item. Thread.Sleep(1000); } }); // Consumer 1 Task.Factory.StartNew(() => { while (true) { int index = rnd.Next(0, maxBoxes); // get the letter in the mailbox 'index' var box = mailBoxes.GetOrAdd(index, new BlockingCollection<string>()); var message = box.Take(cancelationToken); Console.WriteLine("Consumed 1: " + message); // consume a item cost less than produce it: Thread.Sleep(50); } }); // Consumer 2 Task.Factory.StartNew(() => { while (true) { int index = rnd.Next(0, maxBoxes); // get the letter in the mailbox 'index' var box = mailBoxes.GetOrAdd(index, new BlockingCollection<string>()); var message = box.Take(cancelationToken); Console.WriteLine("Consumed 2: " + message); // consume a item cost less than produce it: Thread.Sleep(50); } }); Console.ReadLine(); cancelationTokenSource.Cancel(); By this way, a consumer which is expecting something in the mailbox 5, will wait until the productor puts a letter in the mailbox 5. A: You'll need to write your own adapter class - something like: public class ConcurrentDictionaryWrapper<TKey,TValue> : IProducerConsumerCollection<KeyValuePair<TKey,TValue>> { private ConcurrentDictionary<TKey, TValue> dictionary; public IEnumerator<KeyValuePair<TKey, TValue>> GetEnumerator() { return dictionary.GetEnumerator(); } IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); } public void CopyTo(Array array, int index) { throw new NotImplementedException(); } public int Count { get { return dictionary.Count; } } public object SyncRoot { get { return this; } } public bool IsSynchronized { get { return true; } } public void CopyTo(KeyValuePair<TKey, TValue>[] array, int index) { throw new NotImplementedException(); } public bool TryAdd(KeyValuePair<TKey, TValue> item) { return dictionary.TryAdd(item.Key, item.Value); } public bool TryTake(out KeyValuePair<TKey, TValue> item) { item = dictionary.FirstOrDefault(); TValue value; return dictionary.TryRemove(item.Key, out value); } public KeyValuePair<TKey, TValue>[] ToArray() { throw new NotImplementedException(); } } A: Here is an implementation of a IProducerConsumerCollection<T> collection which is backed by a ConcurrentDictionary<TKey, TValue>. 
The T of the collection is of type KeyValuePair<TKey, TValue>. It is very similar to Nick Jones's implementation, with some improvements: public class ConcurrentDictionaryProducerConsumer<TKey, TValue> : IProducerConsumerCollection<KeyValuePair<TKey, TValue>> { private readonly ConcurrentDictionary<TKey, TValue> _dictionary; private readonly ThreadLocal<IEnumerator<KeyValuePair<TKey, TValue>>> _enumerator; public ConcurrentDictionaryProducerConsumer( IEqualityComparer<TKey> comparer = default) { _dictionary = new(comparer); _enumerator = new(() => _dictionary.GetEnumerator()); } public bool TryAdd(KeyValuePair<TKey, TValue> entry) { if (!_dictionary.TryAdd(entry.Key, entry.Value)) throw new DuplicateKeyException(); return true; } public bool TryTake(out KeyValuePair<TKey, TValue> entry) { // Get a cached enumerator that is used only by the current thread. IEnumerator<KeyValuePair<TKey, TValue>> enumerator = _enumerator.Value; while (true) { enumerator.Reset(); if (!enumerator.MoveNext()) throw new InvalidOperationException(); entry = enumerator.Current; if (!_dictionary.TryRemove(entry)) continue; return true; } } public int Count => _dictionary.Count; public bool IsSynchronized => false; public object SyncRoot => throw new NotSupportedException(); public KeyValuePair<TKey, TValue>[] ToArray() => _dictionary.ToArray(); public IEnumerator<KeyValuePair<TKey, TValue>> GetEnumerator() => _dictionary.GetEnumerator(); IEnumerator IEnumerable.GetEnumerator() => GetEnumerator(); public void CopyTo(KeyValuePair<TKey, TValue>[] array, int index) => throw new NotSupportedException(); public void CopyTo(Array array, int index) => throw new NotSupportedException(); } public class DuplicateKeyException : InvalidOperationException { } Usage example: BlockingCollection<KeyValuePair<string, Item>> collection = new(new ConcurrentDictionaryProducerConsumer<string, Item>()); //... try { collection.Add(KeyValuePair.Create(key, item)); } catch (DuplicateKeyException) { Console.WriteLine($"The {key} was rejected."); } The collection.TryTake method removes a practically random key from the ConcurrentDictionary, which is unlikely to be a desirable behavior. Also the performance is not great, and the memory allocations are significant. For these reasons I don't recommend enthusiastically to use the above implementation. I would suggest instead to take a look at the ConcurrentQueueNoDuplicates<T> that I have posted here, which has a proper queue behavior. Caution: Calling collection.TryAdd(item); is not having the expected behavior of returning false if the key exists. Any attempt to add a duplicate key results invariably in a DuplicateKeyException. For an explanation look at the aforementioned other post.
{ "language": "en", "url": "https://stackoverflow.com/questions/10736209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Shaping json object rows[ {0: c:[{0: {v:'2013'}, 1: {v: 'apple'},2: {v: '200'}}]}, {1: c:[{0: {v:'2014'}, 1: {v: 'apple'},2: {v: '1000'}}]}, {2: c:[{0: {v:'2013'}, 1: {v: 'orange'},2: {v: '200'}}]}, {3: c:[{0: {v:'2014'}, 1: {v: 'orange'},2: {v: '1000'}}]} ] I am trying to reshape it into something like this: [apple: {2013: '200', 2014: '1000'}, orange: {2013: '200', 2014: '1000'}] OR [ apple: { 2013: {year: '2013', amount: '200'}, 2014: {year: '2014', amount: '1000'} }, orange: { 2013: {year: '2013', amount: '200'}, 2014: {year: '2014', amount: '1000'} }] OR apple: [{year:2013, amount:200},{year:2014,amount:1000}] I have tried playing with lodash's .map, .uniq, .reduce and .zipObject but I am still unable to figure it out. A: I used the following lodash functions: get the names of the fruits using _.map; build uniqueFruitNames by taking the unique names with _.uniq and then sorting them; build data2013 and data2014 by using _.remove to pull out the fruits of a particular year and sorting each of them; use _.zipObject to zip uniqueFruitNames with data2013 and uniqueFruitNames with data2014; then _.merge the two zipped objects. var dataSeries = _.map(mergedData, function(asset,key) { return { name: key, data: [fruit[0].amount, fruit[1].amount] } });
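A hedged sketch of the whole transformation with lodash, assuming the rows have already been flattened into plain objects first; the variable names and the simplified input shape below are assumptions for illustration, not part of the original data.

var _ = require('lodash');

// Simplified, already-flattened version of the rows in the question.
var rows = [
  { year: '2013', fruit: 'apple',  amount: '200'  },
  { year: '2014', fruit: 'apple',  amount: '1000' },
  { year: '2013', fruit: 'orange', amount: '200'  },
  { year: '2014', fruit: 'orange', amount: '1000' }
];

// Third target shape: { apple: [{year, amount}, ...], orange: [...] }
var byFruit = _.mapValues(_.groupBy(rows, 'fruit'), function (items) {
  return _.map(items, function (item) {
    return { year: item.year, amount: item.amount };
  });
});

// Rough equivalent of the answer's final step: one series object per fruit.
var dataSeries = _.map(byFruit, function (entries, name) {
  return { name: name, data: _.map(entries, 'amount') };
});

console.log(byFruit, dataSeries);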
{ "language": "en", "url": "https://stackoverflow.com/questions/29170350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: In python, how do I print a table using double for loops? Here is my python code, from fractions import gcd print "| 2 3 4 5 6 7 8 9 10 11 12 13 14 15" print "-----------------------------------" xlist = range(2,16) ylist = range(2,51) for b in ylist: print b, " | " for a in xlist: print gcd(a,b) I'm having trouble printing a table that will display on the top row 2-15 and on the left column the values 2-50. With a gcd table for each value from each row and each column. Here is a sample of what I'm getting | 2 3 4 5 6 7 8 9 10 11 12 13 14 15 2 | 2 1 2 A: You can have it much more concise with list comprehension: from fractions import gcd print(" | 2 3 4 5 6 7 8 9 10 11 12 13 14 15") print("-----------------------------------------------") xlist = range(2,16) ylist = range(2,51) print("\n".join(" ".join(["%2d | " % b] + [("%2d" % gcd(a, b)) for a in xlist]) for b in ylist)) Output: | 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ----------------------------------------------- 2 | 2 1 2 1 2 1 2 1 2 1 2 1 2 1 3 | 1 3 1 1 3 1 1 3 1 1 3 1 1 3 4 | 2 1 4 1 2 1 4 1 2 1 4 1 2 1 5 | 1 1 1 5 1 1 1 1 5 1 1 1 1 5 6 | 2 3 2 1 6 1 2 3 2 1 6 1 2 3 7 | 1 1 1 1 1 7 1 1 1 1 1 1 7 1 8 | 2 1 4 1 2 1 8 1 2 1 4 1 2 1 9 | 1 3 1 1 3 1 1 9 1 1 3 1 1 3 10 | 2 1 2 5 2 1 2 1 10 1 2 1 2 5 11 | 1 1 1 1 1 1 1 1 1 11 1 1 1 1 12 | 2 3 4 1 6 1 4 3 2 1 12 1 2 3 13 | 1 1 1 1 1 1 1 1 1 1 1 13 1 1 14 | 2 1 2 1 2 7 2 1 2 1 2 1 14 1 15 | 1 3 1 5 3 1 1 3 5 1 3 1 1 15 16 | 2 1 4 1 2 1 8 1 2 1 4 1 2 1 17 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1 18 | 2 3 2 1 6 1 2 9 2 1 6 1 2 3 19 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1 20 | 2 1 4 5 2 1 4 1 10 1 4 1 2 5 21 | 1 3 1 1 3 7 1 3 1 1 3 1 7 3 22 | 2 1 2 1 2 1 2 1 2 11 2 1 2 1 23 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1 24 | 2 3 4 1 6 1 8 3 2 1 12 1 2 3 25 | 1 1 1 5 1 1 1 1 5 1 1 1 1 5 26 | 2 1 2 1 2 1 2 1 2 1 2 13 2 1 27 | 1 3 1 1 3 1 1 9 1 1 3 1 1 3 28 | 2 1 4 1 2 7 4 1 2 1 4 1 14 1 29 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1 30 | 2 3 2 5 6 1 2 3 10 1 6 1 2 15 31 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1 32 | 2 1 4 1 2 1 8 1 2 1 4 1 2 1 33 | 1 3 1 1 3 1 1 3 1 11 3 1 1 3 34 | 2 1 2 1 2 1 2 1 2 1 2 1 2 1 35 | 1 1 1 5 1 7 1 1 5 1 1 1 7 5 36 | 2 3 4 1 6 1 4 9 2 1 12 1 2 3 37 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1 38 | 2 1 2 1 2 1 2 1 2 1 2 1 2 1 39 | 1 3 1 1 3 1 1 3 1 1 3 13 1 3 40 | 2 1 4 5 2 1 8 1 10 1 4 1 2 5 41 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1 42 | 2 3 2 1 6 7 2 3 2 1 6 1 14 3 43 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1 44 | 2 1 4 1 2 1 4 1 2 11 4 1 2 1 45 | 1 3 1 5 3 1 1 9 5 1 3 1 1 15 46 | 2 1 2 1 2 1 2 1 2 1 2 1 2 1 47 | 1 1 1 1 1 1 1 1 1 1 1 1 1 1 48 | 2 3 4 1 6 1 8 3 2 1 12 1 2 3 49 | 1 1 1 1 1 7 1 1 1 1 1 1 7 1 50 | 2 1 2 5 2 1 2 1 10 1 2 1 2 5 This works in Python2 and Python3. If you want zeros at the beginning of each one-digit number, replace each occurence of %2d with %02d. You probably shouldn't print the header like that, but do it more like this: from fractions import gcd xlist = range(2, 16) ylist = range(2, 51) string = " | " + " ".join(("%2d" % x) for x in xlist) print(string) print("-" * len(string)) print("\n".join(" ".join(["%2d | " % b] + [("%2d" % gcd(a, b)) for a in xlist]) for b in ylist)) This way, even if you change xlist or ylist, the table will still look good. A: Your problem is that the python print statement adds a newline by itself. 
One solution to this is to build up your own string to output piece by piece and use only one print statement per line of the table, like such: from fractions import gcd print "| 2 3 4 5 6 7 8 9 10 11 12 13 14 15" print "-----------------------------------" xlist = range(2,16) ylist = range(2,51) for b in ylist: output=str(b)+" | " #For each number in ylist, make a new string with this number for a in xlist: output=output+str(gcd(a,b))+" " #Append to this for each number in xlist print output #Print the string you've built up Example output, by the way: | 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ----------------------------------- 2 | 2 1 2 1 2 1 2 1 2 1 2 1 2 1 3 | 1 3 1 1 3 1 1 3 1 1 3 1 1 3 4 | 2 1 4 1 2 1 4 1 2 1 4 1 2 1 5 | 1 1 1 5 1 1 1 1 5 1 1 1 1 5 6 | 2 3 2 1 6 1 2 3 2 1 6 1 2 3 7 | 1 1 1 1 1 7 1 1 1 1 1 1 7 1 8 | 2 1 4 1 2 1 8 1 2 1 4 1 2 1 9 | 1 3 1 1 3 1 1 9 1 1 3 1 1 3 A: You can specify what kind of character end the line using the end parameter in print. from fractions import gcd print("| 2 3 4 5 6 7 8 9 10 11 12 13 14 15") print("-----------------------------------") xlist = range(2,16) ylist = range(2,51) for b in ylist: print(b + " | ",end="") for a in xlist: print(gcd(a,b),end="") print("")#Newline If you are using python 2.x, you need to add from __future__ import print_function to the top for this to work.
{ "language": "en", "url": "https://stackoverflow.com/questions/35260965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Mongo/Monk count of objects in collection I'm 2 hours into Mongo/Monk with node.js and I want to count how many objects I have inserted into a collection but I can't see any docs on how to do this with Monk. Using the below doesn't seem to return what I expect var mongo = require('mongodb'); var monk = require('monk'); var db = monk('localhost:27017/mydb'); var collection = db.get('tweets'); collection.count() Any ideas? A: You need to pass a query and a callback. .count() is asynchronous and will just return a promise, not the actual document count value. collection.count({}, function (error, count) { console.log(error, count); }); A: Or you can use co-monk and take the rest of the morning off: var monk = require('monk'); var wrap = require('co-monk'); var db = monk('localhost/test'); var users = wrap(db.get('users')); var numberOfUsers = yield users.count({}); Of course that requires that you stop doing callbacks... :)
{ "language": "en", "url": "https://stackoverflow.com/questions/22972121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: matching div heights with jQuery trying to match div heights using jQuery, and it seems I can only get it to match the smallest height, but I want it to match the tallest column. from what I can tell the code should find tallest column? so I am very confused, here is what I am using function matchColHeights(col1, col2) { var col1Height = $(col1).height(); var col2Height = $(col2).height(); if (col1Height < col2Height) { $(col1).height(col2Height); } else { $(col2).height(col1Height); } } $(document).ready(function() { matchColHeights('#leftPanel', '#rightPanel'); }); trying to do it here: http://www.tigerstudiodesign.com/blog/ A: This should be able to set more than one column to maxheight. Just specify the selectors just like you would if you wanted to select all your elements with jQuery. function matchColHeights(selector){ var maxHeight=0; $(selector).each(function(){ var height = $(this).height(); if (height > maxHeight){ maxHeight = height; } }); $(selector).height(maxHeight); }; $(document).ready(function() { matchColHeights('#leftPanel, #rightPanel, #middlePanel'); }); A: one line alternative $(".column").height(Math.max($(col1).height(), $(col2).height())); Check out this fiddle: http://jsfiddle.net/c4urself/dESx6/ It seems to work fine for me? javascript function matchColHeights(col1, col2) { var col1Height = $(col1).height(); console.log(col1Height); var col2Height = $(col2).height(); console.log(col2Height); if (col1Height < col2Height) { $(col1).height(col2Height); } else { $(col2).height(col1Height); } } $(document).ready(function() { matchColHeights('#leftPanel', '#rightPanel'); }); css .column { width: 48%; float: left; border: 1px solid red; } html <div class="column" id="leftPanel">Lorem ipsum...</div> <div class="column" id="rightPanel"></div> A: When I load your site in Chrome, #leftPanel has a height of 1155px and #rightPanel has a height of 1037px. The height of #rightPanel is then set, by your matchColHeights method, to 1155px. However, if I allow the page to load, then use the Chrome Developer Tools console to remove the style attribute that sets an explicit height on #rightPanel, its height becomes 1473px. So, your code is correctly setting the shorter of the two columns to the height of the taller at the time the code runs. But subsequent formatting of the page would have made the other column taller. A: The best tut on this subject could be found here: http://css-tricks.com/equal-height-blocks-in-rows/
{ "language": "en", "url": "https://stackoverflow.com/questions/8595657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Zebra printer ignores the command I have got Zebra GC420d. Using zebra 0.0.3a, this is an example of my issue: label = """ ^XA ^FO10,10 ^A0,40,40 ^FD Hello World ^FS ^XZ """ from zebra import zebra z = zebra('Zebra_GC420d') z.output(label) The printer ignores the command and prints the contents of the variable "label". How can I fix it? A: It sounds like the printer is not configured to understand ZPL. Look at this article to see how to change the printer from line-print mode (where it simply prints the data it receives) to ZPL mode (where it understands ZPL commands). Command not being understood by Zebra iMZ320 Basically, you may need to send this command: ! U1 setvar "device.languages" "zpl" Notice that you need to include a newline character (or carriage return) at the end of this command. A: zebra 0.0.3a is for EPL2, Not for ZPL2 !!!! See the site : https://pypi.python.org/pypi/zebra/
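If the printer does turn out to be in line-print mode, one option is to send the language-switch command through the same wrapper before printing the ZPL label. This is a sketch only: the exact command string, the trailing line ending, and whether the zebra wrapper passes raw data through unchanged are all assumptions to check against the printer's documentation.

from zebra import zebra

z = zebra('Zebra_GC420d')

# Assumed to be passed straight through to the printer as raw data.
z.output('! U1 setvar "device.languages" "zpl"\r\n')

# Once the setting is active (a power cycle may be needed), ZPL should be interpreted.
z.output("^XA ^FO10,10 ^A0,40,40 ^FD Hello World ^FS ^XZ")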
{ "language": "en", "url": "https://stackoverflow.com/questions/19790053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Node.js WebSocket server is only accepting one client from my local machine I have a really simple Node.js server that uses 'ws' for WebSockets, but it's only accepting one client in what I believe is a multi-client server. Here's literally the program I'm using to test it right now. Short and simple, but isn't working. const WebSocket = require('ws'); const fs = require('fs'); const https = require('https'); let server = new https.createServer({ key: fs.readFileSync("./ssl/private.key"), cert: fs.readFileSync("./ssl/certificate.crt"), ca: fs.readFileSync("./ssl/ca_bundle.crt") }).listen(443); let wss = new WebSocket.Server({ noServer: true }); server.on("upgrade", (request, socket, head) => { wss.handleUpgrade(request, socket, head, (ws) => { ws.onopen = (event) => { console.log("A client has connected"); }; ws.onclose = (event) => { console.log("A client has disconnected"); } }); }); Both clients are running the same code in Google Chrome (Also tested with firefox) Code: <!DOCTYPE html> <html> <head> </head> <body> <script> var ws = new WebSocket("wss://example.com/ws/"); ws.onopen = function(e){ console.log("Open: ", e); } ws.onmessage = function(e){ console.log("Message: ", e); } ws.onerror = function(e){ console.log("Error: ", e); } ws.onclose = function(e){ console.log("Close: ", e); } </script> </body> </html> One client will log the open connection, every other client will log a timeout, error, and close event. Thanks in advance
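No answer is recorded above, so purely for comparison, here is the common multi-client pattern from the ws documentation, sketched with the question's certificate paths as placeholders. Note that inside handleUpgrade the socket is already open, so an onopen handler assigned there never fires; the usual approach is to let the server raise connection events, either via new WebSocket.Server({ server }) or by calling wss.emit('connection', ws, request) from the handleUpgrade callback.

const fs = require('fs');
const https = require('https');
const WebSocket = require('ws');

const server = https.createServer({
  key: fs.readFileSync('./ssl/private.key'),
  cert: fs.readFileSync('./ssl/certificate.crt'),
  ca: fs.readFileSync('./ssl/ca_bundle.crt')
});

// Let ws attach its own upgrade handling; each client that completes the
// handshake arrives here as a separate 'connection' event.
const wss = new WebSocket.Server({ server });

wss.on('connection', (ws) => {
  console.log('A client has connected');
  ws.on('close', () => console.log('A client has disconnected'));
});

server.listen(443);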
{ "language": "en", "url": "https://stackoverflow.com/questions/62316993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
